[ { "msg_contents": "The query plan appends sequential scans on the tables in the partition \n(9 tables, ~4 million rows) and then hash joins that with a 14 row \ntable. The join condition is the primary key of each table in the \npartition (and would be the primary key of the parent if that was \nsupported).\nIt would be much faster if it did an index scan on each of the child \ntables and merged the results.\n\nI can achieve this manually by rewriting the query as a union between \nqueries against each of the child tables. Is there a better way? (I'm \nusing PostGreSQL 8.4 with PostGIS 1.4).\n\nRegards,\nMark Thornton\n\nThe query:\n\nselect rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"MasterRoadLinks\" on \n\"RoadLinkInformation\".roadLinkID=\"MasterRoadLinks\".featureID\n\nTable definitions\n\ncreate temporary table LinkIds (featureid bigint not null)\n\ncreate table \"RoadLinkInformation\" (\n rriid bigint not null primary key,\n roadlinkid bigint not null,\n point geometry,\n bound geometry\n)\n\ncreate table \"MasterRoadLinks\" (\n featureId bigint not null,\n centreLine geometry not null,\n... other columns omitted for clarity\n )\n\nAll \"RoadLinks/*\" tables are children with the same structure and \nfeatureId as the primary key.\n\nThe LinkIds table is the result of a previous query and contains just 14 \nrows (in this example).\n\nRunning the query against a view constructed as a union of the child \ntables results in essentially the same query plan.\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=16097.75..191987.73 rows=33266 width=683) (actual \ntime=3003.302..5387.541 rows=14 loops=1)\n Hash Cond: (public.\"MasterRoadLinks\".featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Append (cost=0.00..159902.54 rows=4130254 width=583) (actual \ntime=2.357..4056.404 rows=4129424 loops=1)\n -> Seq Scan on \"MasterRoadLinks\" (cost=0.00..18.30 rows=830 \nwidth=40) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on \"RoadLinks/A Road\" \"MasterRoadLinks\" \n(cost=0.00..22531.32 rows=378732 width=519) (actual time=2.352..268.170 \nrows=378732 loops=1)\n -> Seq Scan on \"RoadLinks/B Road\" \"MasterRoadLinks\" \n(cost=0.00..6684.19 rows=182819 width=587) (actual time=0.008..114.671 \nrows=182819 loops=1)\n -> Seq Scan on \"RoadLinks/Alley\" \"MasterRoadLinks\" \n(cost=0.00..2973.31 rows=100731 width=353) (actual time=0.008..59.283 \nrows=100731 loops=1)\n -> Seq Scan on \"RoadLinks/Local Street\" \"MasterRoadLinks\" \n(cost=0.00..67255.79 rows=2063279 width=450) (actual \ntime=0.048..1281.454 rows=2063279 loops=1)\n -> Seq Scan on \"RoadLinks/Minor Road\" \"MasterRoadLinks\" \n(cost=0.00..30733.42 rows=722942 width=784) (actual time=0.047..517.770 \nrows=722942 loops=1)\n -> Seq Scan on \"RoadLinks/Motorway\" \"MasterRoadLinks\" \n(cost=0.00..683.03 rows=15403 width=820) (actual time=0.005..10.809 \nrows=15403 loops=1)\n -> Seq Scan on \"RoadLinks/Pedestrianised Street\" \n\"MasterRoadLinks\" (cost=0.00..92.93 rows=2993 width=399) (actual \ntime=0.008..1.971 rows=2993 loops=1)\n -> Seq Scan on \"RoadLinks/Private Road - Publicly Accessible\" \n\"MasterRoadLinks\" (cost=0.00..1187.79 rows=30579 width=662) (actual \ntime=0.006..21.177 rows=30579 loops=1)\n -> Seq Scan on \"RoadLinks/Private Road - 
Restricted Access\" \n\"MasterRoadLinks\" (cost=0.00..27742.46 rows=631946 width=855) (actual \ntime=0.047..459.302 rows=631946 loops=1)\n -> Hash (cost=16071.00..16071.00 rows=2140 width=116) (actual \ntime=0.205..0.205 rows=14 loops=1)\n -> Nested Loop (cost=0.00..16071.00 rows=2140 width=116) \n(actual time=0.045..0.183 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.006..0.012 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..7.48 rows=1 width=116) (actual \ntime=0.008..0.009 rows=1 loops=14)\n Index Cond: (\"RoadLinkInformation\".rriid = \nlinkids.featureid)\n Total runtime: 5387.734 ms\n(19 rows)\n\n", "msg_date": "Fri, 04 Mar 2011 11:40:55 +0000", "msg_from": "Mark Thornton <[email protected]>", "msg_from_op": true, "msg_subject": "Slow join on partitioned table" }, { "msg_contents": "On Fri, Mar 4, 2011 at 6:40 AM, Mark Thornton <[email protected]> wrote:\n> The query plan appends sequential scans on the tables in the partition (9\n> tables, ~4 million rows) and then hash joins that with a 14 row table. The\n> join condition is the primary key of each table in the partition (and would\n> be the primary key of the parent if that was supported).\n> It would be much faster if it did an index scan on each of the child tables\n> and merged the results.\n>\n> I can achieve this manually by rewriting the query as a union between\n> queries against each of the child tables. Is there a better way? (I'm using\n> PostGreSQL 8.4 with PostGIS 1.4).\n\nCan you post the EXPLAIN ANALYZE output of the other formulation of the query?\n\n>               ->  Seq Scan on linkids  (cost=0.00..31.40 rows=2140 width=8)\n> (actual time=0.006..0.012 rows=14 loops=1)\n\nThat seems quite surprising. There are only 14 rows in the table but\nPG thinks 2140? Do you have autovacuum turned on? Has this table\nbeen analyzed recently?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 4 Mar 2011 11:07:57 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join on partitioned table" }, { "msg_contents": "On 04/03/2011 16:07, Robert Haas wrote:\n> On Fri, Mar 4, 2011 at 6:40 AM, Mark Thornton<[email protected]> wrote:\n>> I can achieve this manually by rewriting the query as a union between\n>> queries against each of the child tables. Is there a better way? (I'm using\n>> PostGreSQL 8.4 with PostGIS 1.4).\n> Can you post the EXPLAIN ANALYZE output of the other formulation of the query?\n\nSee below (at bottom)\n>\n> That seems quite surprising. There are only 14 rows in the table but\n> PG thinks 2140? Do you have autovacuum turned on? Has this table\n> been analyzed recently?\n>\nIt is a temporary table and thus I hadn't thought to analyze it. How \nshould such tables be treated? Should I analyze it immediately after \ncreation (i.e. when it is empty), after filling it or ... ? The expected \nusage is such that the temporary table will have less than 100 or so rows.\n\nHowever I now find that if I do analyze it I get the better result (plan \nimmediatley below). Curiously this result only applies to the inherited \n(child table) formulation and not to the apparently equivalent query \nover a view of unions. 
The view of unions is the approach used with SQL \nServer 2008 .\n\nThanks for your help,\nMark Thornton\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..1442.57 rows=218 width=683) (actual \ntime=0.193..1.287 rows=14 loops=1)\n Join Filter: (\"RoadLinkInformation\".roadlinkid = links.featureid)\n -> Nested Loop (cost=0.00..118.19 rows=14 width=116) (actual \ntime=0.044..0.200 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..1.14 rows=14 width=8) \n(actual time=0.011..0.020 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..8.35 rows=1 width=116) (actual \ntime=0.009..0.010 rows=1 loops=14)\n Index Cond: (\"RoadLinkInformation\".rriid = \nlinkids.featureid)\n -> Append (cost=0.00..84.03 rows=839 width=46) (actual \ntime=0.051..0.061 rows=1 loops=14)\n -> Seq Scan on \"MasterRoadLinks\" links (cost=0.00..18.30 \nrows=830 width=40) (actual time=0.000..0.000 rows=0 loops=14)\n -> Index Scan using \"RoadLinks/A Road_pkey\" on \"RoadLinks/A \nRoad\" links (cost=0.00..7.36 rows=1 width=519) (actual \ntime=0.007..0.007 rows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/B Road_pkey\" on \"RoadLinks/B \nRoad\" links (cost=0.00..7.26 rows=1 width=587) (actual \ntime=0.006..0.006 rows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/Alley_pkey\" on \n\"RoadLinks/Alley\" links (cost=0.00..7.24 rows=1 width=353) (actual \ntime=0.005..0.005 rows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/Local Street_pkey\" on \n\"RoadLinks/Local Street\" links (cost=0.00..7.67 rows=1 width=450) \n(actual time=0.008..0.008 rows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/Minor Road_pkey\" on \n\"RoadLinks/Minor Road\" links (cost=0.00..7.37 rows=1 width=784) (actual \ntime=0.007..0.007 rows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/Motorway_pkey\" on \n\"RoadLinks/Motorway\" links (cost=0.00..7.18 rows=1 width=820) (actual \ntime=0.005..0.005 rows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/Pedestrianised Street_pkey\" on \n\"RoadLinks/Pedestrianised Street\" links (cost=0.00..7.08 rows=1 \nwidth=399) (actual time=0.004..0.004 rows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/Private Road - Publicly \nAccessible_pkey\" on \"RoadLinks/Private Road - Publicly Accessible\" \nlinks (cost=0.00..7.23 rows=1 width=662) (actual time=0.005..0.005 \nrows=0 loops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n -> Index Scan using \"RoadLinks/Private Road - Restricted \nAccess_pkey\" on \"RoadLinks/Private Road - Restricted Access\" links \n(cost=0.00..7.35 rows=1 width=855) (actual time=0.008..0.009 rows=1 \nloops=14)\n Index Cond: (links.featureid = \n\"RoadLinkInformation\".roadlinkid)\n Total runtime: 1.518 ms\n(27 rows)\n\n\nQuery plan with alternative query:\n\n QUERY 
PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=0.00..168893.14 rows=11237 width=723) (actual \ntime=0.934..234.609 rows=14 loops=1)\n -> Nested Loop (cost=0.00..22222.78 rows=2140 width=619) (actual \ntime=0.291..0.291 rows=0 loops=1)\n -> Nested Loop (cost=0.00..13811.00 rows=2140 width=116) \n(actual time=0.049..0.181 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.013..0.018 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.43 rows=1 width=116) (actual \ntime=0.008..0.009 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/A Road_pkey\" on \"RoadLinks/A \nRoad\" links (cost=0.00..3.91 rows=1 width=519) (actual \ntime=0.007..0.007 rows=0 loops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n -> Nested Loop (cost=0.00..14729.95 rows=1472 width=687) (actual \ntime=0.231..0.231 rows=0 loops=1)\n -> Nested Loop (cost=0.00..13895.00 rows=2140 width=116) \n(actual time=0.013..0.134 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.003..0.011 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.47 rows=1 width=116) (actual \ntime=0.006..0.007 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/B Road_pkey\" on \"RoadLinks/B \nRoad\" links (cost=0.00..0.37 rows=1 width=587) (actual \ntime=0.006..0.006 rows=0 loops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n -> Nested Loop (cost=0.00..14604.60 rows=811 width=453) (actual \ntime=0.215..0.215 rows=0 loops=1)\n -> Nested Loop (cost=0.00..13895.00 rows=2140 width=116) \n(actual time=0.011..0.127 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.003..0.009 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.47 rows=1 width=116) (actual \ntime=0.006..0.007 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/Alley_pkey\" on \n\"RoadLinks/Alley\" links (cost=0.00..0.32 rows=1 width=353) (actual \ntime=0.005..0.005 rows=0 loops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n -> Nested Loop (cost=0.00..28899.43 rows=2140 width=550) (actual \ntime=0.194..0.323 rows=4 loops=1)\n -> Nested Loop (cost=0.00..14767.00 rows=2140 width=116) \n(actual time=0.011..0.136 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.003..0.010 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.87 rows=1 width=116) (actual \ntime=0.006..0.007 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/Local Street_pkey\" on \n\"RoadLinks/Local Street\" links (cost=0.00..6.59 rows=1 width=450) \n(actual time=0.007..0.007 rows=0 loops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n -> Nested Loop 
(cost=0.00..23803.64 rows=2140 width=884) (actual \ntime=0.228..0.228 rows=0 loops=1)\n -> Nested Loop (cost=0.00..13939.00 rows=2140 width=116) \n(actual time=0.012..0.126 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.004..0.010 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.49 rows=1 width=116) (actual \ntime=0.006..0.006 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/Minor Road_pkey\" on \n\"RoadLinks/Minor Road\" links (cost=0.00..4.59 rows=1 width=784) (actual \ntime=0.006..0.006 rows=0 loops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n -> Nested Loop (cost=0.00..14517.88 rows=124 width=920) (actual \ntime=0.390..0.390 rows=0 loops=1)\n -> Nested Loop (cost=0.00..13895.00 rows=2140 width=116) \n(actual time=0.011..0.142 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.003..0.012 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.47 rows=1 width=116) (actual \ntime=0.006..0.007 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/Motorway_pkey\" on \n\"RoadLinks/Motorway\" links (cost=0.00..0.28 rows=1 width=820) (actual \ntime=0.005..0.005 rows=0 loops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n -> Hash Join (cost=12418.00..12457.78 rows=24 width=499) (actual \ntime=232.495..232.495 rows=0 loops=1)\n Hash Cond: (pg_temp_1.linkids.featureid = \npublic.\"RoadLinkInformation\".rriid)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 width=8) \n(actual time=0.007..0.012 rows=14 loops=1)\n -> Hash (cost=12377.61..12377.61 rows=3231 width=499) \n(actual time=232.421..232.421 rows=1125 loops=1)\n -> Hash Join (cost=130.34..12377.61 rows=3231 \nwidth=499) (actual time=11.572..230.975 rows=1125 loops=1)\n Hash Cond: \n(public.\"RoadLinkInformation\".roadlinkid = links.featureid)\n -> Seq Scan on \"RoadLinkInformation\" \n(cost=0.00..10422.28 rows=286828 width=116) (actual time=4.306..81.587 \nrows=286828 loops=1)\n -> Hash (cost=92.93..92.93 rows=2993 width=399) \n(actual time=7.029..7.029 rows=2993 loops=1)\n -> Seq Scan on \"RoadLinks/Pedestrianised \nStreet\" links (cost=0.00..92.93 rows=2993 width=399) (actual \ntime=0.014..3.100 rows=2993 loops=1)\n -> Nested Loop (cost=0.00..14535.03 rows=246 width=762) (actual \ntime=0.168..0.168 rows=0 loops=1)\n -> Nested Loop (cost=0.00..13895.00 rows=2140 width=116) \n(actual time=0.031..0.113 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.005..0.007 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.47 rows=1 width=116) (actual \ntime=0.005..0.006 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/Private Road - Publicly \nAccessible_pkey\" on \"RoadLinks/Private Road - Publicly Accessible\" \nlinks (cost=0.00..0.29 rows=1 width=662) (actual time=0.003..0.003 \nrows=0 loops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n -> Nested Loop (cost=0.00..23009.69 rows=2140 width=955) (actual \ntime=0.058..0.256 rows=10 loops=1)\n -> Nested Loop 
(cost=0.00..13871.00 rows=2140 width=116) \n(actual time=0.009..0.090 rows=14 loops=1)\n -> Seq Scan on linkids (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.003..0.006 rows=14 loops=1)\n -> Index Scan using \"RoadLinkInformation_pkey\" on \n\"RoadLinkInformation\" (cost=0.00..6.45 rows=1 width=116) (actual \ntime=0.004..0.004 rows=1 loops=14)\n Index Cond: (public.\"RoadLinkInformation\".rriid = \npg_temp_1.linkids.featureid)\n -> Index Scan using \"RoadLinks/Private Road - Restricted \nAccess_pkey\" on \"RoadLinks/Private Road - Restricted Access\" links \n(cost=0.00..4.25 rows=1 width=855) (actual time=0.005..0.005 rows=1 \nloops=14)\n Index Cond: (links.featureid = \npublic.\"RoadLinkInformation\".roadlinkid)\n Total runtime: 235.501 ms\n(67 rows)\n\nThe alternative query\n\nselect rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/A Road\" as Links on \n\"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/B Road\" as Links on \n\"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/Alley\" as Links on \n\"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/Local Street\" as Links on \n\"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/Minor Road\" as Links on \n\"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/Motorway\" as Links on \n\"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/Pedestrianised Street\" as Links on \n\"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/Private Road - Publicly Accessible\" as \nLinks on \"RoadLinkInformation\".roadLinkID=Links.featureID\n union all\n select rriID,ST_AsBinary(centreLine),ST_AsBinary(point)\n from \"RoadLinkInformation\" join LinkIds on \n\"RoadLinkInformation\".rriID=LinkIds.featureid\n join \"RoadLinks/Private Road - Restricted Access\" as \nLinks on \"RoadLinkInformation\".roadLinkID=Links.featureID\n", "msg_date": "Fri, 04 Mar 2011 16:47:23 +0000", "msg_from": "Mark Thornton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow join on partitioned table" }, { "msg_contents": "On 04/03/2011 16:07, Robert Haas wrote:\n> That seems quite surprising. There are only 14 rows in the table but\n> PG thinks 2140? 
Do you have autovacuum turned on? Has this table\n> been analyzed recently?\n>\nI think autovacuum is enabled, but as a temporary table LinkIds has only \nexisted for a very short time (at least in my current tests).\n\nMark\n\n\n", "msg_date": "Fri, 04 Mar 2011 17:00:27 +0000", "msg_from": "Mark Thornton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow join on partitioned table" }, { "msg_contents": "On Fri, Mar 4, 2011 at 12:00 PM, Mark Thornton <[email protected]> wrote:\n> On 04/03/2011 16:07, Robert Haas wrote:\n>>\n>> That seems quite surprising. There are only 14 rows in the table but\n>> PG thinks 2140?  Do you have autovacuum turned on?  Has this table\n>> been analyzed recently?\n>>\n> I think autovacuum is enabled, but as a temporary table LinkIds has only\n> existed for a very short time (at least in my current tests).\n\nAutovacuum doesn't work on temporary tables, so time of existence\ndoesn't matter in that case. The best approach is to load the data,\nthen analyze, then query it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 4 Mar 2011 13:06:25 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join on partitioned table" }, { "msg_contents": "On Fri, Mar 4, 2011 at 8:47 AM, Mark Thornton <[email protected]> wrote:\n> It is a temporary table and thus I hadn't thought to analyze it. How should\n> such tables be treated? Should I analyze it immediately after creation (i.e.\n> when it is empty), after filling it or ... ? The expected usage is such that\n> the temporary table will have less than 100 or so rows.\n\nWhen in doubt, analyze.\n\nIf you're engaging in OLAP, or some other workload pattern that\ninvolves writing many small batches, then analyzing all the time is\nbad. If you're engaged in any workload that writes rarely - a bulk\ninsert here, a create-table-as-select there, etc - always analyze.\n\nThe cost of analyzing when you shouldn't is low and O(1) per analysis,\nand the cost of not analyzing when you should have can easily be\nO(n^2) or worse w/r/t data size.\n\nThe cost of analyzing is especially low on a temp table only owned by\nyour current session, because no one else will be disturbed by the\ntable lock you acquire if you context-switch out before it's done.\n\n-Conor\n", "msg_date": "Wed, 9 Mar 2011 14:42:28 -0800", "msg_from": "Conor Walsh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow join on partitioned table" } ]
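The advice in the thread above comes down to analyzing the temporary table after it is filled, so the planner sees the real 14 rows instead of the default 2140-row estimate and chooses per-partition index scans. A minimal sketch of that workflow, reusing the table and column names from the thread (the step that fills LinkIds is application-specific and only hinted at here):

    create temporary table LinkIds (featureid bigint not null);
    -- ... populate LinkIds from the earlier query (14 rows in this example) ...
    analyze LinkIds;  -- temp tables are never touched by autovacuum, so analyze by hand
    select rriID, ST_AsBinary(centreLine), ST_AsBinary(point)
      from "RoadLinkInformation"
      join LinkIds on "RoadLinkInformation".rriID = LinkIds.featureid
      join "MasterRoadLinks" on "RoadLinkInformation".roadLinkID = "MasterRoadLinks".featureID;
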
[ { "msg_contents": "This is not a performance bug -- my query takes a reasonably long\namount of time, but I would like to see if I can get this calculation\nany faster in my setup.\n\nI have a table:\nvolume_id serial primary key\nswitchport_id integer not null\nin_octets bigint not null\nout_octets bigint not null\ninsert_timestamp timestamp default now()\nwith indexes on volume_id, switchport_id, insert_timestamp.\n\nThat is partitioned into about 3000 tables by the switchport_id (FK to\na lookup table), each table has about 30 000 rows currently (a row is\ninserted every 5 minutes into each table).\nI have select queries that filter based on switchport_id and\ntimestamp. Constraint exclusion is used with the switchport_id to get\nthe right table and the insert_timestamp has an index on it (on each\ntable).\n\nAny time the volume tables are queried it is to calculate the deltas\nbetween each in_octets and out_octets from the previous row (ordered\nby timestamp). The delta is used because the external system, where\nthe data is retrieved from, will roll over the value sometimes. I\nhave a function to do this calcuation:\n\n\ncreate or replace function traffic.get_delta_table(p_switchport_id integer,\n p_start_date date, p_end_date date)\nreturns table( volume_id integer,\n insert_timestamp timestamp,\n out_delta bigint,\n out_rate bigint,\n out_rate_order bigint,\n in_delta bigint,\n in_rate bigint,\n in_rate_order bigint)\nas $$\ndeclare\nbegin\n -- we need to force pgsql to make a new plan for each query so it can\n -- use constraint exclusions on switchport id to determine the\npartition table to scan\n return query execute 'select\n t.volume_id,\n t.insert_timestamp,\n t.out_delta,\n t.out_delta * 8 / t.time_difference as out_rate,\n row_number() over (order by t.out_delta * 8 /\nt.time_difference) as out_rate_order,\n t.in_delta,\n t.in_delta * 8 / t.time_difference as in_rate,\n row_number() over(order by t.in_delta * 8 / t.time_difference)\nas in_rate_order\n from\n (select\n n.volume_id,\n n.insert_timestamp,\n extract(epoch from (n.insert_timestamp -\nlag(n.insert_timestamp,1,n.insert_timestamp) over(order by\nn.insert_timestamp)))::integer as time_difference,\n case\n when n.out_octets < lag(out_octets,1,n.out_octets)\nover(order by n.insert_timestamp)\n then n.out_octets\n else n.out_octets - lag(out_octets,1,n.out_octets)\nover(order by n.insert_timestamp)\n end as out_delta,\n case\n when n.in_octets < lag(in_octets,1,n.in_octets) over(order\nby n.insert_timestamp)\n then n.in_octets\n else n.in_octets - lag(in_octets,1,n.in_octets) over(order\nby n.insert_timestamp)\n end as in_delta\n from volume as n\n where n.insert_timestamp between $1 and $2\n and n.switchport_id = '||p_switchport_id||'\n and in_octets != 0\n and out_octets != 0\n ) as t\n where time_difference is not null and time_difference != 0' using\np_start_date, p_end_date;\n\nend; $$ language plpgsql;\n\nThe query inside the function's plan:\n WindowAgg (cost=2269.62..2445.35 rows=6390 width=32) (actual\ntime=7526.526..7531.855 rows=6622 loops=1)\n -> Sort (cost=2269.62..2285.60 rows=6390 width=32) (actual\ntime=7526.497..7527.924 rows=6622 loops=1)\n Sort Key: (((t.in_delta * 8) / t.time_difference))\n Sort Method: external sort Disk: 432kB\n -> WindowAgg (cost=1753.90..1865.72 rows=6390 width=32)\n(actual time=2613.593..2618.727 rows=6622 loops=1)\n -> Sort (cost=1753.90..1769.87 rows=6390 width=32)\n(actual time=2613.566..2614.550 rows=6622 loops=1)\n Sort Key: (((t.out_delta * 8) / t.time_difference))\n 
Sort Method: quicksort Memory: 710kB\n -> Subquery Scan on t (cost=978.89..1350.00\nrows=6390 width=32) (actual time=2582.254..2606.708 rows=6622 loops=1)\n Filter: ((t.time_difference IS NOT NULL)\nAND (t.time_difference <> 0))\n -> WindowAgg (cost=978.89..1269.32\nrows=6454 width=28) (actual time=2582.243..2596.546 rows=6623 loops=1)\n -> Sort (cost=978.89..995.03\nrows=6454 width=28) (actual time=2582.120..2583.172 rows=6623 loops=1)\n Sort Key: n.insert_timestamp\n Sort Method: quicksort Memory: 710kB\n -> Result (cost=8.87..570.49\nrows=6454 width=28) (actual time=1036.720..2576.755 rows=6623 loops=1)\n -> Append\n(cost=8.87..570.49 rows=6454 width=28) (actual time=1036.718..2574.719\nrows=6623 loops=1)\n -> Bitmap Heap\nScan on volume n (cost=8.87..12.90 rows=1 width=28) (actual\ntime=0.055..0.055 rows=0 loops=1)\n Recheck Cond:\n((switchport_id = 114) AND (insert_timestamp >= '2011-02-01\n00:00:00'::timestamp without time zone) AND (insert_timestamp <=\n'2011-03-02 00:00:00'::timestamp without time zone))\n Filter:\n((in_octets <> 0) AND (out_octets <> 0))\n -> BitmapAnd\n (cost=8.87..8.87 rows=1 width=0) (actual time=0.053..0.053 rows=0\nloops=1)\n ->\nBitmap Index Scan on volume_parent_switchport_id_idx (cost=0.00..4.30\nrows=7 width=0) (actual time=0.045..0.045 rows=0 loops=1)\n\nIndex Cond: (switchport_id = 114)\n ->\nBitmap Index Scan on volume_parent_insert_timestamp_idx\n(cost=0.00..4.32 rows=7 width=0) (never executed)\n\nIndex Cond: ((insert_timestamp >= '2011-02-01 00:00:00'::timestamp\nwithout time zone) AND (insert_timestamp <= '2011-03-02\n00:00:00'::timestamp without time zone))\n -> Bitmap Heap\nScan on volume_114 n (cost=142.40..557.59 rows=6453 width=28) (actual\ntime=1036.662..2573.116 rows=6623 loops=1)\n Recheck Cond:\n((insert_timestamp >= '2011-02-01 00:00:00'::timestamp without time\nzone) AND (insert_timestamp <= '2011-03-02 00:00:00'::timestamp\nwithout time zone))\n Filter:\n((in_octets <> 0) AND (out_octets <> 0) AND (switchport_id = 114))\n -> Bitmap\nIndex Scan on volume_114_insert_timestamp_idx (cost=0.00..140.78\nrows=6453 width=0) (actual time=981.034..981.034 rows=6623 loops=1)\n Index\nCond: ((insert_timestamp >= '2011-02-01 00:00:00'::timestamp without\ntime zone) AND (insert_timestamp <= '2011-03-02 00:00:00'::timestamp\nwithout time zone))\n Total runtime: 7567.261 ms\n\nThis ends up taking anywhere from 300ms to 7000ms (usually its around\n300-400ms) and returns about 8000 rows.\n\nTo get the 95th percentile its a couple simple selects after putting\nthe result set from the above function into a temporary table:\n create temporary table deltas on commit drop as\n select * from get_delta_table(p_switchport_id, p_start_date,\np_end_date);\n\n select round(count(volume_id) * 0.95) into v_95th_row from deltas;\n select in_rate into v_record.in_95th from deltas where\nin_rate_order = v_95th_row;\n select out_rate into v_record.out_95th from deltas where\nout_rate_order = v_95th_row;\n select sum(in_delta), sum(out_delta) into v_record.in_total,\nv_record.out_total from deltas;\n\nUnfortunately using a temporary table means that I cannot run this\nquery on the read-only slave, but I can't see a way around using one.\nThe master has 3000 inserts running against it every 5 minutes --\nwhich used to be every 1 minute but the space and time for calculating\n5x the current number of rows was not worth it.\n\nMy server has 1GB of memory is running Red Hat and the only daemon is Postgres:\neffective cache size is 768MB\nshared buffers are 256MB\nwork_mem is 
2MB (I changed it when the explain analyze was showing\n1.5MB used for an on disk sort)\nmax locks per transaction is 3000 (I changed it when I started getting\nthe error that suggests I change the max locks per transaction)\n\n\nAny ideas on speeding this up?\n\nThanks,\n\nLandreville\n", "msg_date": "Fri, 4 Mar 2011 10:18:01 -0500", "msg_from": "Landreville <[email protected]>", "msg_from_op": true, "msg_subject": "Calculating 95th percentiles" }, { "msg_contents": "\n> Any time the volume tables are queried it is to calculate the deltas\n> between each in_octets and out_octets from the previous row (ordered\n> by timestamp). The delta is used because the external system, where\n> the data is retrieved from, will roll over the value sometimes. I\n> have a function to do this calcuation:\n\nWould it be possible to do this when inserting and store the deltas \ndirectly ?\n", "msg_date": "Sat, 05 Mar 2011 08:54:27 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculating 95th percentiles" }, { "msg_contents": "On Fri, Mar 4, 2011 at 4:18 PM, Landreville\n<[email protected]> wrote:\n\n>    create temporary table deltas on commit drop as\n>        select * from get_delta_table(p_switchport_id, p_start_date,\n> p_end_date);\n>\n>    select round(count(volume_id) * 0.95) into v_95th_row from deltas;\n>    select in_rate into v_record.in_95th from deltas where\n> in_rate_order = v_95th_row;\n>    select out_rate into v_record.out_95th from deltas where\n> out_rate_order = v_95th_row;\n>    select sum(in_delta), sum(out_delta) into v_record.in_total,\n> v_record.out_total from deltas;\n>\n> Unfortunately using a temporary table means that I cannot run this\n> query on the read-only slave, but I can't see a way around using one.\n\nIs this fast enough on a slave:\n\n\nwith deltas as (select * from get_delta_table(...)),\np95 as(select round(count(volume_id) * 0.95) as p95v from deltas)\nselect\n(select in_rate from deltas, p95 where\nin_rate_order = p95v),\n(select out_rate from deltas, p95 where\nout_rate_order = p95v)\netc..\n\n?\n\nGreetings\nMarcin\n", "msg_date": "Sun, 6 Mar 2011 01:34:44 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculating 95th percentiles" }, { "msg_contents": "On Sat, Mar 5, 2011 at 7:34 PM, marcin mank <[email protected]> wrote:\n> Is this fast enough on a slave:\n>\n>\n> with deltas as (select * from get_delta_table(...)),\n> p95 as(select round(count(volume_id) * 0.95) as p95v from deltas)\n> select\n> (select in_rate from deltas, p95 where\n> in_rate_order = p95v),\n> (select out_rate from deltas, p95 where\n> out_rate_order = p95v)\n> etc..\n> Greetings\n> Marcin\n>\n\nI really didn't know you could use a with statement on a read-only\ndatabase -- I don't think I even knew the with statement existed in\nPostgres (is it documented somewhere?). I will try this out.\n\nI am also looking at Pierre's suggestion of calculating the delta\nvalue on insert. To do this I am going to update all the rows\ncurrently in the partitioned tables. 
Does anyone know if this will\nstill use constraint exclusion in the correlated subquery or will it\nscan every partitioned table for each updated row?:\n\nupdate volume\n set in_delta = in_octets - vprev.in_octets,\n out_delta = out_octets - vprev.out_octets\nfrom volume vprev\nwhere vprev.insert_timestamp =\n(select max(insert_timestamp) from volume v\n where v.switch_port_id = volume.switchport_id\n and v.insert_timestamp < volume.insert_timestamp);\n\nI suppose I can check with an analyze before I execute it (I still\nhave to alter the table to add the delta columns).\n\nThanks,\n\nLandreville\n", "msg_date": "Thu, 31 Mar 2011 13:30:51 -0400", "msg_from": "Landreville <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Calculating 95th percentiles" } ]
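Marcin's suggestion above can be spelled out as a single read-only statement, which is what makes it usable on the hot-standby slave where a temporary table is not an option. A sketch under the thread's assumptions (it relies on the get_delta_table() function defined earlier; the p_* names are the function's parameter names and would be literal values or bind parameters in a real call):

    with deltas as (
        select * from get_delta_table(p_switchport_id, p_start_date, p_end_date)
    ), p95 as (
        select round(count(volume_id) * 0.95) as p95v from deltas
    )
    select
        (select in_rate  from deltas, p95 where in_rate_order  = p95v) as in_95th,
        (select out_rate from deltas, p95 where out_rate_order = p95v) as out_95th,
        (select sum(in_delta)  from deltas)                            as in_total,
        (select sum(out_delta) from deltas)                            as out_total;
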
[ { "msg_contents": "Hi Guys,\n\nI'm in the process of setting up some new hardware and am just doing some basic disk performance testing with bonnie++ to start with.\n\nI'm seeing a massive difference on the random seeks test, with CFQ not performing very well as far as I can see. The thing is I didn't see this sort of massive divide when doing tests with our current hardware.\n\nCurrent hardware: 2x4core E5420 @2.5Ghz/ 32GB RAM/ Adaptec 5805Z w' 512Mb/ Raid 10/ 8 15k 3.5 Disks\nNew hardware: 4x8core X7550 @2.0Ghz/ 128GB RAM/ H700 w' 1GB/ Raid 10/ 12 15.2k 2.5 Disks\n\nAdmittedly, my testing on our current hardware was on 2.6.26 and on the new hardware it's on 2.6.32 - I think I'm going to have to check the current hardware on the older kernel too.\n\nI'm wondering (and this may be a can of worms) what peoples opinions are on these schedulers? I'm going to have to do some real world testing myself with postgresql too, but initially was thinking of switching from our current CFQ back to deadline.\n\nAny opinions would be appreciated.\n\nRegardless, here are some sample results from the new hardware:\n\nCFQ:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input- --Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nWay5ax 258376M 666 99 434709 96 225498 35 2840 69 952115 76 556.2 3\nLatency 12344us 619ms 522ms 255ms 425ms 529ms\nVersion 1.96 ------Sequential Create------ --------Random Create--------\nWay5ax -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 28808 41 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\nLatency 6170us 594us 633us 7619us 20us 36us\n1.96,1.96,Way5ax,1,1299173113,258376M,,666,99,434709,96,225498,35,2840,69,952115,76,556.2,3,16,,,,,28808,41,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,12344us,619ms,522ms,255ms,425ms,529ms,6170us,594us,633us,7619us,20us,36us\n\ndeadline:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input- --Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nWay5ax 258376M 696 99 449914 96 287010 47 2952 69 989527 78 2304 19\nLatency 11939us 856ms 570ms 174ms 228ms 24744us\nVersion 1.96 ------Sequential Create------ --------Random Create--------\nWay5ax -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 31338 45 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\nLatency 5605us 605us 627us 6590us 19us 38us\n1.96,1.96,Way5ax,1,1299237441,258376M,,696,99,449914,96,287010,47,2952,69,989527,78,2304,19,16,,,,,31338,45,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11939us,856ms,570ms,174ms,228ms,24744us,5605us,605us,627us,6590us,19us,38us\n\nno-op:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input- --Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nWay5ax 258376M 706 99 451578 95 303351 49 4104 96 1003688 78 2294 19\nLatency 11538us 530ms 1460ms 12141us 350ms 22969us\nVersion 1.96 ------Sequential Create------ --------Random Create--------\nWay5ax -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 31137 44 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\nLatency 5918us 
597us 627us 5039us 17us 36us\n1.96,1.96,Way5ax,1,1299245225,258376M,,706,99,451578,95,303351,49,4104,96,1003688,78,2294,19,16,,,,,31137,44,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11538us,530ms,1460ms,12141us,350ms,22969us,5918us,597us,627us,5039us,17us,36us\n\n--\nGlyn\n\n\n \n", "msg_date": "Fri, 4 Mar 2011 17:34:39 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Linux I/O schedulers - CFQ & random seeks" }, { "msg_contents": "On 03/04/11 10:34, Glyn Astill wrote:\n > I'm wondering (and this may be a can of worms) what peoples opinions \nare on these schedulers?\n\nWhen testing our new DB box just last month, we saw a big improvement in \nbonnie++ random I/O rates when using the noop scheduler instead of cfq \n(or any other). We've got RAID 10/12 on a 3ware card w/ battery-backed \ncache; 7200rpm drives. Our file system is XFS with \nnoatime,nobarrier,logbufs=8,logbsize=256k. How much is \"big?\" I can't \nfind my notes for it, but I recall that the difference was large enough \nto surprise us. We're running with noop in production right now. No \ncomplaints.\n", "msg_date": "Fri, 04 Mar 2011 11:03:36 -0700", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux I/O schedulers - CFQ & random seeks" }, { "msg_contents": "On 3/4/11 11:03 AM, Wayne Conrad wrote:\n> On 03/04/11 10:34, Glyn Astill wrote:\n> > I'm wondering (and this may be a can of worms) what peoples opinions \n> are on these schedulers?\n>\n> When testing our new DB box just last month, we saw a big improvement \n> in bonnie++ random I/O rates when using the noop scheduler instead of \n> cfq (or any other). We've got RAID 10/12 on a 3ware card w/ \n> battery-backed cache; 7200rpm drives. Our file system is XFS with \n> noatime,nobarrier,logbufs=8,logbsize=256k. How much is \"big?\" I \n> can't find my notes for it, but I recall that the difference was large \n> enough to surprise us. We're running with noop in production right \n> now. No complaints.\n>\nJust another anecdote, I found that the deadline scheduler performed the \nbest for me. I don't have the benchmarks anymore but deadline vs cfq \nwas dramatically faster for my tests. I posted this to the list years \nago and others announced similar experiences. Noop was a close 2nd to \ndeadline.\n\nXFS (noatime,nodiratime,nobarrier,logbufs=8)\n391GB db cluster directory\nBBU Caching RAID10 12-disk SAS\n128GB RAM\nConstant insert stream\nOLAP-ish query patterns\nHeavy random I/O\n\n", "msg_date": "Fri, 04 Mar 2011 11:39:56 -0700", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux I/O schedulers - CFQ & random seeks" }, { "msg_contents": "On Fri, Mar 4, 2011 at 11:39 AM, Dan Harris <[email protected]> wrote:\n> Just another anecdote, I found that the deadline scheduler performed the\n> best for me.  I don't have the benchmarks anymore but deadline vs cfq was\n> dramatically faster for my tests.  I posted this to the list years ago and\n> others announced similar experiences.  Noop was a close 2nd to deadline.\n\nThis reflects the results I get with a battery backed caching RAID\ncontroller as well, both Areca and LSI. Noop seemed to scale a little\nbit better for me than deadline with larger loads, but they were\npretty much within a few % of each other either way. 
CFQ was also\nmuch slower for us.\n", "msg_date": "Fri, 4 Mar 2011 12:02:30 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux I/O schedulers - CFQ & random seeks" }, { "msg_contents": "Dan Harris <[email protected]> wrote:\n \n> Just another anecdote, I found that the deadline scheduler\n> performed the best for me. I don't have the benchmarks anymore\n> but deadline vs cfq was dramatically faster for my tests. I\n> posted this to the list years ago and others announced similar\n> experiences. Noop was a close 2nd to deadline.\n \nThat was our experience when we benchmarked a few years ago. Some\nmore recent benchmarks seem to have shown improvements in cfq, but\nwe haven't had enough of a problem with our current setup to make it\nseem worth the effort of running another set of benchmarks on that.\n \n-Kevin\n", "msg_date": "Fri, 04 Mar 2011 13:07:00 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux I/O schedulers - CFQ & random seeks" }, { "msg_contents": "On Fri, Mar 4, 2011 at 10:34 AM, Glyn Astill <[email protected]> wrote:\n> I'm wondering (and this may be a can of worms) what peoples opinions are on these schedulers?  I'm going to have to do some real world testing myself with postgresql too, but initially was thinking of switching from our current CFQ back to deadline.\n\nIt was a few years ago now, but I went through a similar round of\ntesting, and thought CFQ was fine, until I deployed the box. It fell\non its face, hard. I can't find a reference offhand, but I remember\nreading somewhere that CFQ is optimized for more desktop type\nworkloads, and that in its efforts to ensure fair IO access for all\nprocesses, it can actively interfere with high-concurrency workloads\nlike you'd expect to see on a DB server -- especially one as big as\nyour specs indicate. Then again, it's been a few years, so the\nscheduler may have improved significantly in that span.\n\nMy standard approach since has just been to use no-op. We've shelled\nout enough money for a RAID controller, if not a SAN, so it seems\nsilly to me not to defer to the hardware, and let it do its job. With\nbig caches, command queueing, and direct knowledge of how the data is\nlaid out on the spindles, I'm hard-pressed to imagine a scenario where\nthe kernel is going to be able to do a better job of IO prioritization\nthan the controller.\n\nI'd absolutely recommend testing with pg, so you can get a feel for\nhow it behaves under real-world workloads. The critical thing there\nis that your testing needs to create workloads that are in the\nneighborhood of what you'll see in production. In my case, the final\nround of testing included something like 15-20% of the user-base for\nthe app the db served, and everything seemed fine. Once we opened the\nflood-gates, and all the users were hitting the new db, though,\nnothing worked for anyone. 
Minute-plus page-loads across the board,\nwhen people weren't simply timing out.\n\nAs always, YMMV, the plural of anecdote isn't data, &c.\n\nrls\n\n-- \n:wq\n", "msg_date": "Fri, 4 Mar 2011 12:09:34 -0700", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux I/O schedulers - CFQ & random seeks" }, { "msg_contents": "On Sat, Mar 5, 2011 at 6:09 AM, Rosser Schwarz <[email protected]> wrote:\n> On Fri, Mar 4, 2011 at 10:34 AM, Glyn Astill <[email protected]> wrote:\n>> I'm wondering (and this may be a can of worms) what peoples opinions are on these schedulers?  I'm going to have to do some real world testing myself with postgresql too, but initially was thinking of switching from our current CFQ back to deadline.\n>\n> It was a few years ago now, but I went through a similar round of\n> testing, and thought CFQ was fine, until I deployed the box.  It fell\n> on its face, hard.  I can't find a reference offhand, but I remember\n> reading somewhere that CFQ is optimized for more desktop type\n> workloads, and that in its efforts to ensure fair IO access for all\n> processes, it can actively interfere with high-concurrency workloads\n> like you'd expect to see on a DB server -- especially one as big as\n> your specs indicate.  Then again, it's been a few years, so the\n> scheduler may have improved significantly in that span.\n>\n> My standard approach since has just been to use no-op.  We've shelled\n> out enough money for a RAID controller, if not a SAN, so it seems\n> silly to me not to defer to the hardware, and let it do its job.  With\n> big caches, command queueing, and direct knowledge of how the data is\n> laid out on the spindles, I'm hard-pressed to imagine a scenario where\n> the kernel is going to be able to do a better job of IO prioritization\n> than the controller.\n>\n> I'd absolutely recommend testing with pg, so you can get a feel for\n> how it behaves under real-world workloads.  The critical thing there\n> is that your testing needs to create workloads that are in the\n> neighborhood of what you'll see in production.  In my case, the final\n> round of testing included something like 15-20% of the user-base for\n> the app the db served, and everything seemed fine.  Once we opened the\n> flood-gates, and all the users were hitting the new db, though,\n> nothing worked for anyone.  Minute-plus page-loads across the board,\n> when people weren't simply timing out.\n>\n> As always, YMMV, the plural of anecdote isn't data, &c.\n>\n> rls\n>\n> --\n> :wq\n\nI have a somewhat similar story. :)\n\nWe recently upgraded to RHEL 6 (2.6.32 + patches) from RHEL 5.6.\n\nOur machines are:\n\n24 core (4x6) X5670 2.93GHz\n144Gb of RAM\n2 x RAID 1 SAS - WAL (on a 5405Z)\n8 x RAID10 SAS - Data (on a 5805Z)\n\nWe decided to test CFQ again (after using the deadline scheduler) and\nit looked good in normal file system testing and what not.\n\nOnce we ramped up production traffic on the machines, PostgreSQL\npretty much died under the load and could never get to a steady state.\nI think this had something to do with the PG backends not having\nenough I/O bandwidth (due to CFQ) to put data into cache fast enough.\nThis went on for an hour before we decided to switch back to deadline.\nThe system was back to normal working order (with 5-6x the I/O\nthroughput of CFQ) in about 3 minutes, after which I/O wait was down\nto 0-1%.\n\nWe run a (typical?) 
OLTP workload for a web app and see something like\n2000 to 5000 req/s against PG.\n\nNot sure if this helps in the OP's situation, but I guess it's one of\nthose things you need to test with a production workload to find out.\n:)\n\nRegards,\nOmar\n", "msg_date": "Sun, 6 Mar 2011 11:38:08 +1100", "msg_from": "Omar Kilani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux I/O schedulers - CFQ & random seeks" }, { "msg_contents": " Hello,\n\n> Once we ramped up production traffic on the machines, PostgreSQL\n> pretty much died under the load and could never get to a steady state.\n> I think this had something to do with the PG backends not having\n> enough I/O bandwidth (due to CFQ) to put data into cache fast enough.\n> This went on for an hour before we decided to switch back to deadline.\n> The system was back to normal working order (with 5-6x the I/O\n> throughput of CFQ) in about 3 minutes, after which I/O wait was down\n> to 0-1%.\n>\n> We run a (typical?) OLTP workload for a web app and see something like\n> 2000 to 5000 req/s against PG.\n>\n> Not sure if this helps in the OP's situation, but I guess it's one of\n> those things you need to test with a production workload to find out.\n> :)\n\n Me too. :) I tried switching schedulers on busy Oracle server and\ndeadline gave +~30% in our case (against CFQ). DB was on HP EVA\nstorage. Not 5-6 fold increase but still \"free\" +30% is pretty nice.\nCentOS 5.5.\n\n Regards,\n\n Mindaugas\n", "msg_date": "Tue, 8 Mar 2011 20:27:52 +0200", "msg_from": "Mindaugas Riauba <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux I/O schedulers - CFQ & random seeks" } ]
[ { "msg_contents": "Sorry for not responding directly to your question and for changing\nthe subject ... ;-)\n\nOn 4 March 2011 18:18, Landreville <[email protected]> wrote:\n> That is partitioned into about 3000 tables by the switchport_id (FK to\n> a lookup table), each table has about 30 000 rows currently (a row is\n> inserted every 5 minutes into each table).\n\nDoes such partitioning really make sense? My impression is that the\nbiggest benefit with table partitioning is to keep old \"inactive\" data\nout of the caches. If so, then it doesn't seem to make much sense to\nsplit a table into 3000 active partitions ... unless, maybe, almost\nall queries goes towards a specific partitioning.\n\nAccording to http://www.postgresql.org/docs/current/interactive/ddl-partitioning.html\n...\n\n\"Query performance can be improved dramatically in certain situations,\nparticularly when most of the heavily accessed rows of the table are\nin a single partition or a small number of partitions. The\npartitioning substitutes for leading columns of indexes, reducing\nindex size and making it more likely that the heavily-used parts of\nthe indexes fit in memory.\"\n\n\"All constraints on all partitions of the master table are examined\nduring constraint exclusion, so large numbers of partitions are likely\nto increase query planning time considerably. Partitioning using these\ntechniques will work well with up to perhaps a hundred partitions;\ndon't try to use many thousands of partitions.\"\n\nWe have started an archiving project internally in our company since\nour database is outgrowing the available memory, I'm advocating that\nwe should look into table partitioning before we do the archiving,\nthough it seems to be out of the scope of the project group looking\ninto the archiving. I'm not sure if I should continue nagging about\nit or forget about it ;-)\n", "msg_date": "Sat, 5 Mar 2011 12:37:20 +0300", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Table partitioning" }, { "msg_contents": "On 05/03/2011 09:37, Tobias Brox wrote:\n> Sorry for not responding directly to your question and for changing\n> the subject ... ;-)\n>\n> On 4 March 2011 18:18, Landreville<[email protected]> wrote:\n>> That is partitioned into about 3000 tables by the switchport_id (FK to\n>> a lookup table), each table has about 30 000 rows currently (a row is\n>> inserted every 5 minutes into each table).\n> Does such partitioning really make sense? My impression is that the\n> biggest benefit with table partitioning is to keep old \"inactive\" data\n> out of the caches. If so, then it doesn't seem to make much sense to\n> split a table into 3000 active partitions ... unless, maybe, almost\n> all queries goes towards a specific partitioning.\nIf your partitions a loosely time based and you don't want to discard \nold data, then surely the number of partitions will grow without limit. \nYou could have partitions for say the last 12 months plus a single \npartition for 'ancient history', but then you have to transfer the \ncontent of the oldest month to ancient each month and change the \nconstraint on 'ancient'.\n\nMark\n\n", "msg_date": "Sat, 05 Mar 2011 09:57:48 +0000", "msg_from": "Mark Thornton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table partitioning" }, { "msg_contents": "On 05/03/2011 09:37, Tobias Brox wrote:\n> Sorry for not responding directly to your question and for changing\n> the subject ... 
;-)\n>\n> On 4 March 2011 18:18, Landreville<[email protected]> wrote:\n>> That is partitioned into about 3000 tables by the switchport_id (FK to\n>> a lookup table), each table has about 30 000 rows currently (a row is\n>> inserted every 5 minutes into each table).\n> Does such partitioning really make sense? My impression is that the\n> biggest benefit with table partitioning is to keep old \"inactive\" data\n> out of the caches. If so, then it doesn't seem to make much sense to\n> split a table into 3000 active partitions ... unless, maybe, almost\n> all queries goes towards a specific partitioning.\nIf your partitions a loosely time based and you don't want to discard \nold data, then surely the number of partitions will grow without limit. \nYou could have partitions for say the last 12 months plus a single \npartition for 'ancient history', but then you have to transfer the \ncontent of the oldest month to ancient each month and change the \nconstraint on 'ancient'.\n\nMark\n\n", "msg_date": "Sat, 05 Mar 2011 09:59:17 +0000", "msg_from": "Mark Thornton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table partitioning" }, { "msg_contents": "On 5 March 2011 12:59, Mark Thornton <[email protected]> wrote:\n> If your partitions a loosely time based and you don't want to discard old\n> data, then surely the number of partitions will grow without limit.\n\nTrue, but is it relevant? With monthly table partitioning it takes\nhundreds of years before having \"thousands of partitions\".\n", "msg_date": "Sat, 5 Mar 2011 13:09:55 +0300", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table partitioning" } ]
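The monthly scheme sketched in the thread (recent months as their own partitions plus a single "ancient history" partition) maps onto the 8.4-era inheritance approach described in the documentation quoted above. An illustrative example with made-up table and column names, not taken from the thread:

    create table measurements (
        id        bigint    not null,
        logged_at timestamp not null,
        value     bigint    not null
    );
    create table measurements_2011_03 (
        check (logged_at >= date '2011-03-01' and logged_at < date '2011-04-01')
    ) inherits (measurements);
    create table measurements_ancient (
        check (logged_at < date '2011-03-01')
    ) inherits (measurements);
    -- with constraint_exclusion at its 8.4 default of 'partition', a query
    -- filtered on logged_at only scans partitions whose CHECK constraints
    -- can possibly match
    select count(*)
      from measurements
     where logged_at >= date '2011-03-01' and logged_at < date '2011-03-08';

Rolling the oldest month into measurements_ancient each month then means moving its rows and widening the ancient partition's CHECK constraint, which is the maintenance cost Mark points out.
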
[ { "msg_contents": "Hi,\n\nI am running Postgresql 8.4.7 with Postgis 2.0 (for raster support).\nServer is mainly 1 user for spatial data processing. This involves queries\nthat can take hours.\n\nThis is running on a ubuntu 10.10 Server with Core2Duo 6600 @ 2.4 GHZ, 6 GB\nRAM.\n\nMy postgresql.conf:\n# - Memory -\nshared_buffers = 1024MB # min 128kB\n # (change requires restart)\ntemp_buffers = 256MB # min 800kB\n#max_prepared_transactions = 0 # zero disables the feature\n # (change requires restart)\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n# It is not advisable to set max_prepared_transactions nonzero unless you\n# actively intend to use prepared transactions.\nwork_mem = 1024MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nmax_stack_depth = 7MB # min 100kB\nwal_buffers = 8MB\neffective_cache_size = 3072MB\n\nEverything else is default.\n\nMy Pgbench results:\n/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nduration: 60 s\nnumber of transactions actually processed: 7004\ntps = 116.728199 (including connections establishing)\ntps = 116.733012 (excluding connections establishing)\n\n\nMy question is if these are acceptable results, or if someone can recommend\nsettings which will improve my servers performance.\n\nAndreas\n\nHi,I am running Postgresql 8.4.7 with Postgis 2.0 (for raster support).Server is mainly 1 user for spatial data processing. This involves queries that can take hours.This is running on a ubuntu 10.10 Server with Core2Duo 6600 @ 2.4 GHZ, 6 GB RAM.\nMy postgresql.conf:# - Memory -shared_buffers = 1024MB                 # min 128kB                                        # (change requires restart)temp_buffers = 256MB                    # min 800kB\n#max_prepared_transactions = 0          # zero disables the feature                                        # (change requires restart)# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).# It is not advisable to set max_prepared_transactions nonzero unless you# actively intend to use prepared transactions.work_mem = 1024MB                               # min 64kB\nmaintenance_work_mem = 256MB            # min 1MBmax_stack_depth = 7MB                   # min 100kBwal_buffers = 8MB  effective_cache_size = 3072MBEverything else is default.My Pgbench results:\n/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1starting vacuum...end.transaction type: TPC-B (sort of)scaling factor: 1query mode: simplenumber of clients: 1duration: 60 snumber of transactions actually processed: 7004\ntps = 116.728199 (including connections establishing)tps = 116.733012 (excluding connections establishing)My question is if these are acceptable results, or if someone can recommend settings which will improve my servers performance.\nAndreas", "msg_date": "Mon, 7 Mar 2011 14:45:03 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issues" }, { "msg_contents": "On Mon, Mar 07, 2011 at 02:45:03PM +0100, Andreas For? Tollefsen wrote:\n> Hi,\n> \n> I am running Postgresql 8.4.7 with Postgis 2.0 (for raster support).\n> Server is mainly 1 user for spatial data processing. 
This involves queries\n> that can take hours.\n> \n> This is running on a ubuntu 10.10 Server with Core2Duo 6600 @ 2.4 GHZ, 6 GB\n> RAM.\n> \n> My postgresql.conf:\n> # - Memory -\n> shared_buffers = 1024MB # min 128kB\n> # (change requires restart)\n> temp_buffers = 256MB # min 800kB\n> #max_prepared_transactions = 0 # zero disables the feature\n> # (change requires restart)\n> # Note: Increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> # It is not advisable to set max_prepared_transactions nonzero unless you\n> # actively intend to use prepared transactions.\n> work_mem = 1024MB # min 64kB\n> maintenance_work_mem = 256MB # min 1MB\n> max_stack_depth = 7MB # min 100kB\n> wal_buffers = 8MB\n> effective_cache_size = 3072MB\n> \n> Everything else is default.\n> \n> My Pgbench results:\n> /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> duration: 60 s\n> number of transactions actually processed: 7004\n> tps = 116.728199 (including connections establishing)\n> tps = 116.733012 (excluding connections establishing)\n> \n> \n> My question is if these are acceptable results, or if someone can recommend\n> settings which will improve my servers performance.\n> \n> Andreas\n\nYour results are I/O limited. Depending upon your requirements,\nyou may be able to turn off synchronous_commit which can help.\nYour actual workload may be able to use batching to help as well.\nYour work_mem looks pretty darn high for a 6GB system.\n\nCheers,\nKen\n", "msg_date": "Mon, 7 Mar 2011 08:01:01 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues" }, { "msg_contents": "Thanks, Ken.\n\nIt seems like the tip to turn off synchronous_commit did the trick:\n\n/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nduration: 60 s\nnumber of transactions actually processed: 86048\ntps = 1434.123199 (including connections establishing)\ntps = 1434.183362 (excluding connections establishing)\n\nIs this acceptable compared to others when considering my setup?\n\nCheers,\nAndreas\n\n2011/3/7 Kenneth Marshall <[email protected]>\n\n> On Mon, Mar 07, 2011 at 02:45:03PM +0100, Andreas For? Tollefsen wrote:\n> > Hi,\n> >\n> > I am running Postgresql 8.4.7 with Postgis 2.0 (for raster support).\n> > Server is mainly 1 user for spatial data processing. 
This involves\n> queries\n> > that can take hours.\n> >\n> > This is running on a ubuntu 10.10 Server with Core2Duo 6600 @ 2.4 GHZ, 6\n> GB\n> > RAM.\n> >\n> > My postgresql.conf:\n> > # - Memory -\n> > shared_buffers = 1024MB # min 128kB\n> > # (change requires restart)\n> > temp_buffers = 256MB # min 800kB\n> > #max_prepared_transactions = 0 # zero disables the feature\n> > # (change requires restart)\n> > # Note: Increasing max_prepared_transactions costs ~600 bytes of shared\n> > memory\n> > # per transaction slot, plus lock space (see max_locks_per_transaction).\n> > # It is not advisable to set max_prepared_transactions nonzero unless you\n> > # actively intend to use prepared transactions.\n> > work_mem = 1024MB # min 64kB\n> > maintenance_work_mem = 256MB # min 1MB\n> > max_stack_depth = 7MB # min 100kB\n> > wal_buffers = 8MB\n> > effective_cache_size = 3072MB\n> >\n> > Everything else is default.\n> >\n> > My Pgbench results:\n> > /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n> > starting vacuum...end.\n> > transaction type: TPC-B (sort of)\n> > scaling factor: 1\n> > query mode: simple\n> > number of clients: 1\n> > duration: 60 s\n> > number of transactions actually processed: 7004\n> > tps = 116.728199 (including connections establishing)\n> > tps = 116.733012 (excluding connections establishing)\n> >\n> >\n> > My question is if these are acceptable results, or if someone can\n> recommend\n> > settings which will improve my servers performance.\n> >\n> > Andreas\n>\n> Your results are I/O limited. Depending upon your requirements,\n> you may be able to turn off synchronous_commit which can help.\n> Your actual workload may be able to use batching to help as well.\n> Your work_mem looks pretty darn high for a 6GB system.\n>\n> Cheers,\n> Ken\n>\n\nThanks, Ken.It seems like the tip to turn off synchronous_commit did the trick:/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1starting vacuum...end.transaction type: TPC-B (sort of)\nscaling factor: 1query mode: simplenumber of clients: 1duration: 60 snumber of transactions actually processed: 86048tps = 1434.123199 (including connections establishing)\ntps = 1434.183362 (excluding connections establishing)Is this acceptable compared to others when considering my setup?Cheers, Andreas\n2011/3/7 Kenneth Marshall <[email protected]>\nOn Mon, Mar 07, 2011 at 02:45:03PM +0100, Andreas For? Tollefsen wrote:\n> Hi,\n>\n> I am running Postgresql 8.4.7 with Postgis 2.0 (for raster support).\n> Server is mainly 1 user for spatial data processing. 
This involves queries\n> that can take hours.\n>\n> This is running on a ubuntu 10.10 Server with Core2Duo 6600 @ 2.4 GHZ, 6 GB\n> RAM.\n>\n> My postgresql.conf:\n> # - Memory -\n> shared_buffers = 1024MB                 # min 128kB\n>                                         # (change requires restart)\n> temp_buffers = 256MB                    # min 800kB\n> #max_prepared_transactions = 0          # zero disables the feature\n>                                         # (change requires restart)\n> # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> # It is not advisable to set max_prepared_transactions nonzero unless you\n> # actively intend to use prepared transactions.\n> work_mem = 1024MB                               # min 64kB\n> maintenance_work_mem = 256MB            # min 1MB\n> max_stack_depth = 7MB                   # min 100kB\n> wal_buffers = 8MB\n> effective_cache_size = 3072MB\n>\n> Everything else is default.\n>\n> My Pgbench results:\n> /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> duration: 60 s\n> number of transactions actually processed: 7004\n> tps = 116.728199 (including connections establishing)\n> tps = 116.733012 (excluding connections establishing)\n>\n>\n> My question is if these are acceptable results, or if someone can recommend\n> settings which will improve my servers performance.\n>\n> Andreas\n\nYour results are I/O limited. Depending upon your requirements,\nyou may be able to turn off synchronous_commit which can help.\nYour actual workload may be able to use batching to help as well.\nYour work_mem looks pretty darn high for a 6GB system.\n\nCheers,\nKen", "msg_date": "Mon, 7 Mar 2011 15:17:05 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues" }, { "msg_contents": "On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas For? Tollefsen wrote:\n> Thanks, Ken.\n> \n> It seems like the tip to turn off synchronous_commit did the trick:\n> \n> /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> duration: 60 s\n> number of transactions actually processed: 86048\n> tps = 1434.123199 (including connections establishing)\n> tps = 1434.183362 (excluding connections establishing)\n> \n> Is this acceptable compared to others when considering my setup?\n> \n> Cheers,\n> Andreas\n> \n\n\nThese are typical results for synchronous_commit off. The caveat\nis you must be able to handle loosing transactions if you have a\ndatabase crash, but your database is still intact. This differs\nfrom turning fsync off in which a crash means you would need to\nrestore from a backup.\n\nCheers,\nKen\n", "msg_date": "Mon, 7 Mar 2011 08:22:16 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues" }, { "msg_contents": "Ok. Cheers. I will do some more testing on my heavy PostGIS queries which\noften takes hours to complete.\n\nThanks.\nAndreas\n\n2011/3/7 Kenneth Marshall <[email protected]>\n\n> On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas For? 
Tollefsen wrote:\n> > Thanks, Ken.\n> >\n> > It seems like the tip to turn off synchronous_commit did the trick:\n> >\n> > /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n> > starting vacuum...end.\n> > transaction type: TPC-B (sort of)\n> > scaling factor: 1\n> > query mode: simple\n> > number of clients: 1\n> > duration: 60 s\n> > number of transactions actually processed: 86048\n> > tps = 1434.123199 (including connections establishing)\n> > tps = 1434.183362 (excluding connections establishing)\n> >\n> > Is this acceptable compared to others when considering my setup?\n> >\n> > Cheers,\n> > Andreas\n> >\n>\n>\n> These are typical results for synchronous_commit off. The caveat\n> is you must be able to handle loosing transactions if you have a\n> database crash, but your database is still intact. This differs\n> from turning fsync off in which a crash means you would need to\n> restore from a backup.\n>\n> Cheers,\n> Ken\n>\n\nOk. Cheers. I will do some more testing on my heavy PostGIS queries which often takes hours to complete.Thanks.Andreas2011/3/7 Kenneth Marshall <[email protected]>\nOn Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas For? Tollefsen wrote:\n> Thanks, Ken.\n>\n> It seems like the tip to turn off synchronous_commit did the trick:\n>\n> /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> duration: 60 s\n> number of transactions actually processed: 86048\n> tps = 1434.123199 (including connections establishing)\n> tps = 1434.183362 (excluding connections establishing)\n>\n> Is this acceptable compared to others when considering my setup?\n>\n> Cheers,\n> Andreas\n>\n\n\nThese are typical results for synchronous_commit off. The caveat\nis you must be able to handle loosing transactions if you have a\ndatabase crash, but your database is still intact. This differs\nfrom turning fsync off in which a crash means you would need to\nrestore from a backup.\n\nCheers,\nKen", "msg_date": "Mon, 7 Mar 2011 15:29:40 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues" }, { "msg_contents": "On Mon, 7 Mar 2011, Andreas For? Tollefsen wrote:\n\n> Ok. Cheers. I will do some more testing on my heavy PostGIS queries which\n> often takes hours to complete.\n\nI'd like to see hours long queries :) EXPLAIN ANALYZE\n\n>\n> Thanks.\n> Andreas\n>\n> 2011/3/7 Kenneth Marshall <[email protected]>\n>\n>> On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas For? Tollefsen wrote:\n>>> Thanks, Ken.\n>>>\n>>> It seems like the tip to turn off synchronous_commit did the trick:\n>>>\n>>> /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n>>> starting vacuum...end.\n>>> transaction type: TPC-B (sort of)\n>>> scaling factor: 1\n>>> query mode: simple\n>>> number of clients: 1\n>>> duration: 60 s\n>>> number of transactions actually processed: 86048\n>>> tps = 1434.123199 (including connections establishing)\n>>> tps = 1434.183362 (excluding connections establishing)\n>>>\n>>> Is this acceptable compared to others when considering my setup?\n>>>\n>>> Cheers,\n>>> Andreas\n>>>\n>>\n>>\n>> These are typical results for synchronous_commit off. The caveat\n>> is you must be able to handle loosing transactions if you have a\n>> database crash, but your database is still intact. 
This differs\n>> from turning fsync off in which a crash means you would need to\n>> restore from a backup.\n>>\n>> Cheers,\n>> Ken\n>>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Mon, 7 Mar 2011 18:34:35 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues" }, { "msg_contents": "The synchronous_commit off increased the TPS, but not the speed of the below\nquery.\n\nOleg:\nThis is a query i am working on now. It creates an intersection of two\ngeometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the\nother is the country geometries of all countries in the world for a certain\nyear.\n\npriogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\nST_Intersection(pri\nogrid_land.cell, cshapeswdate.geom) FROM priogrid_land, cshapeswdate WHERE\nST_In\ntersects(priogrid_land.cell, cshapeswdate.geom);\n QUERY\nPLAN\n\n--------------------------------------------------------------------------------\n------------------------------------------------------------------\n Nested Loop (cost=0.00..12644.85 rows=43351 width=87704) (actual\ntime=1.815..7\n074973.711 rows=130331 loops=1)\n Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242 width=87248)\n(actual\n time=0.007..0.570 rows=242 loops=1)\n -> Index Scan using idx_priogrid_land_cell on priogrid_land\n (cost=0.00..7.1\n5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n Total runtime: 7075188.549 ms\n(6 rows)\n\n2011/3/7 Oleg Bartunov <[email protected]>\n\n> On Mon, 7 Mar 2011, Andreas For? Tollefsen wrote:\n>\n> Ok. Cheers. I will do some more testing on my heavy PostGIS queries which\n>> often takes hours to complete.\n>>\n>\n> I'd like to see hours long queries :) EXPLAIN ANALYZE\n>\n>\n>\n>> Thanks.\n>> Andreas\n>>\n>> 2011/3/7 Kenneth Marshall <[email protected]>\n>>\n>> On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas For? Tollefsen wrote:\n>>>\n>>>> Thanks, Ken.\n>>>>\n>>>> It seems like the tip to turn off synchronous_commit did the trick:\n>>>>\n>>>> /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\n>>>> starting vacuum...end.\n>>>> transaction type: TPC-B (sort of)\n>>>> scaling factor: 1\n>>>> query mode: simple\n>>>> number of clients: 1\n>>>> duration: 60 s\n>>>> number of transactions actually processed: 86048\n>>>> tps = 1434.123199 (including connections establishing)\n>>>> tps = 1434.183362 (excluding connections establishing)\n>>>>\n>>>> Is this acceptable compared to others when considering my setup?\n>>>>\n>>>> Cheers,\n>>>> Andreas\n>>>>\n>>>>\n>>>\n>>> These are typical results for synchronous_commit off. The caveat\n>>> is you must be able to handle loosing transactions if you have a\n>>> database crash, but your database is still intact. 
This differs\n>>> from turning fsync off in which a crash means you would need to\n>>> restore from a backup.\n>>>\n>>> Cheers,\n>>> Ken\n>>>\n>>>\n>>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n\nThe synchronous_commit off increased the TPS, but not the speed of the below query.Oleg:This is a query i am working on now. It creates an intersection of two geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the other is the country geometries of all countries in the world for a certain year.\npriogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode, ST_Intersection(priogrid_land.cell, cshapeswdate.geom) FROM priogrid_land, cshapeswdate WHERE ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n                                                                    QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..12644.85 rows=43351 width=87704) (actual time=1.815..7074973.711 rows=130331 loops=1)   Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)   ->  Seq Scan on cshapeswdate  (cost=0.00..14.42 rows=242 width=87248) (actual\n time=0.007..0.570 rows=242 loops=1)   ->  Index Scan using idx_priogrid_land_cell on priogrid_land  (cost=0.00..7.15 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n         Index Cond: (priogrid_land.cell && cshapeswdate.geom) Total runtime: 7075188.549 ms(6 rows)2011/3/7 Oleg Bartunov <[email protected]>\nOn Mon, 7 Mar 2011, Andreas For? Tollefsen wrote:\n\n\nOk. Cheers. I will do some more testing on my heavy PostGIS queries which\noften takes hours to complete.\n\n\nI'd like to see hours long queries :) EXPLAIN ANALYZE\n\n\n\nThanks.\nAndreas\n\n2011/3/7 Kenneth Marshall <[email protected]>\n\n\nOn Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas For? Tollefsen wrote:\n\nThanks, Ken.\n\nIt seems like the tip to turn off synchronous_commit did the trick:\n\n/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nduration: 60 s\nnumber of transactions actually processed: 86048\ntps = 1434.123199 (including connections establishing)\ntps = 1434.183362 (excluding connections establishing)\n\nIs this acceptable compared to others when considering my setup?\n\nCheers,\nAndreas\n\n\n\n\nThese are typical results for synchronous_commit off. The caveat\nis you must be able to handle loosing transactions if you have a\ndatabase crash, but your database is still intact. 
This differs\nfrom turning fsync off in which a crash means you would need to\nrestore from a backup.\n\nCheers,\nKen\n\n\n\n\n\n        Regards,\n                Oleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83", "msg_date": "Mon, 7 Mar 2011 22:49:48 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues" }, { "msg_contents": "On Mon, Mar 07, 2011 at 10:49:48PM +0100, Andreas For Tollefsen wrote:\n- The synchronous_commit off increased the TPS, but not the speed of the below\n- query.\n- \n- Oleg:\n- This is a query i am working on now. It creates an intersection of two\n- geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the\n- other is the country geometries of all countries in the world for a certain\n- year.\n- \n- priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n- ST_Intersection(pri\n- ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land, cshapeswdate WHERE\n- ST_In\n- tersects(priogrid_land.cell, cshapeswdate.geom);\n- QUERY\n- PLAN\n- \n- --------------------------------------------------------------------------------\n- ------------------------------------------------------------------\n- Nested Loop (cost=0.00..12644.85 rows=43351 width=87704) (actual\n- time=1.815..7\n- 074973.711 rows=130331 loops=1)\n- Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n- -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242 width=87248)\n- (actual\n- time=0.007..0.570 rows=242 loops=1)\n- -> Index Scan using idx_priogrid_land_cell on priogrid_land\n- (cost=0.00..7.1\n- 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n- Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n- Total runtime: 7075188.549 ms\n- (6 rows)\n\nYour estimated and actuals are way off, have you analyzed those tables?\n\nDave\n", "msg_date": "Mon, 7 Mar 2011 15:29:01 -0800", "msg_from": "David Kerr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues" }, { "msg_contents": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]> writes:\n> This is a query i am working on now. It creates an intersection of two\n> geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the\n> other is the country geometries of all countries in the world for a certain\n> year.\n\nHm, are you sure your data is right? 
Because the actual rowcounts imply\nthat each country intersects about half of the grid cells, which doesn't\nseem right.\n\n> priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n> ST_Intersection(pri\n> ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land, cshapeswdate WHERE\n> ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n> QUERY\n> PLAN\n\n> --------------------------------------------------------------------------------\n> ------------------------------------------------------------------\n> Nested Loop (cost=0.00..12644.85 rows=43351 width=87704) (actual\n> time=1.815..7\n> 074973.711 rows=130331 loops=1)\n> Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n> -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242 width=87248)\n> (actual\n> time=0.007..0.570 rows=242 loops=1)\n> -> Index Scan using idx_priogrid_land_cell on priogrid_land\n> (cost=0.00..7.1\n> 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n> Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n> Total runtime: 7075188.549 ms\n> (6 rows)\n\nAFAICT, all of the runtime is going into calculating the ST_Intersects\nand/or ST_Intersection functions. The two scans are only accounting for\nperhaps 5.5 seconds, and the join infrastructure isn't going to be\nterribly expensive, so it's got to be those functions. Not knowing much\nabout PostGIS, I don't know if the functions themselves can be expected\nto be really slow. If it's not them, it could be the cost of fetching\ntheir arguments --- in particular, I bet the country outlines are very\nlarge objects and are toasted out-of-line. There's been some past\ndiscussion of automatically avoiding repeated detoastings in scenarios\nlike the above, but nothing's gotten to the point of acceptance yet.\nPossibly you could do something to force detoasting in a subquery.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2011 19:38:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues " }, { "msg_contents": "Hi. Thanks for the comments. My data is right, and the result is exactly\nwhat i want, but as you say i think what causes the query to be slow is the\nST_Intersection which creates the intersection between the vector grid\n(fishnet) and the country polygons.\nI will check with the postgis user list if they have any idea on how to\nspeed up this query.\n\nBest,\nAndreas\n\n2011/3/8 Tom Lane <[email protected]>\n\n> =?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]> writes:\n> > This is a query i am working on now. It creates an intersection of two\n> > geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while\n> the\n> > other is the country geometries of all countries in the world for a\n> certain\n> > year.\n>\n> Hm, are you sure your data is right? 
Because the actual rowcounts imply\n> that each country intersects about half of the grid cells, which doesn't\n> seem right.\n>\n> > priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n> > ST_Intersection(pri\n> > ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land, cshapeswdate\n> WHERE\n> > ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n> > QUERY\n> > PLAN\n>\n> >\n> --------------------------------------------------------------------------------\n> > ------------------------------------------------------------------\n> > Nested Loop (cost=0.00..12644.85 rows=43351 width=87704) (actual\n> > time=1.815..7\n> > 074973.711 rows=130331 loops=1)\n> > Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n> > -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242 width=87248)\n> > (actual\n> > time=0.007..0.570 rows=242 loops=1)\n> > -> Index Scan using idx_priogrid_land_cell on priogrid_land\n> > (cost=0.00..7.1\n> > 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n> > Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n> > Total runtime: 7075188.549 ms\n> > (6 rows)\n>\n> AFAICT, all of the runtime is going into calculating the ST_Intersects\n> and/or ST_Intersection functions. The two scans are only accounting for\n> perhaps 5.5 seconds, and the join infrastructure isn't going to be\n> terribly expensive, so it's got to be those functions. Not knowing much\n> about PostGIS, I don't know if the functions themselves can be expected\n> to be really slow. If it's not them, it could be the cost of fetching\n> their arguments --- in particular, I bet the country outlines are very\n> large objects and are toasted out-of-line. There's been some past\n> discussion of automatically avoiding repeated detoastings in scenarios\n> like the above, but nothing's gotten to the point of acceptance yet.\n> Possibly you could do something to force detoasting in a subquery.\n>\n> regards, tom lane\n>\n\nHi. Thanks for the comments. My data is right, and the result is exactly what i want, but as you say i think what causes the query to be slow is the ST_Intersection which creates the intersection between the vector grid (fishnet) and the country polygons.\nI will check with the postgis user list if they have any idea on how to speed up this query.Best,Andreas2011/3/8 Tom Lane <[email protected]>\n=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]> writes:\n\n> This is a query i am working on now. It creates an intersection of two\n> geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the\n> other is the country geometries of all countries in the world for a certain\n> year.\n\nHm, are you sure your data is right?  
Because the actual rowcounts imply\nthat each country intersects about half of the grid cells, which doesn't\nseem right.\n\n> priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n> ST_Intersection(pri\n> ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land, cshapeswdate WHERE\n> ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n>                                                                     QUERY\n> PLAN\n\n> --------------------------------------------------------------------------------\n> ------------------------------------------------------------------\n>  Nested Loop  (cost=0.00..12644.85 rows=43351 width=87704) (actual\n> time=1.815..7\n> 074973.711 rows=130331 loops=1)\n>    Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n>    ->  Seq Scan on cshapeswdate  (cost=0.00..14.42 rows=242 width=87248)\n> (actual\n>  time=0.007..0.570 rows=242 loops=1)\n>    ->  Index Scan using idx_priogrid_land_cell on priogrid_land\n>  (cost=0.00..7.1\n> 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n>          Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n>  Total runtime: 7075188.549 ms\n> (6 rows)\n\nAFAICT, all of the runtime is going into calculating the ST_Intersects\nand/or ST_Intersection functions.  The two scans are only accounting for\nperhaps 5.5 seconds, and the join infrastructure isn't going to be\nterribly expensive, so it's got to be those functions.  Not knowing much\nabout PostGIS, I don't know if the functions themselves can be expected\nto be really slow.  If it's not them, it could be the cost of fetching\ntheir arguments --- in particular, I bet the country outlines are very\nlarge objects and are toasted out-of-line.  There's been some past\ndiscussion of automatically avoiding repeated detoastings in scenarios\nlike the above, but nothing's gotten to the point of acceptance yet.\nPossibly you could do something to force detoasting in a subquery.\n\n                        regards, tom lane", "msg_date": "Tue, 8 Mar 2011 09:42:13 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues" }, { "msg_contents": "I have seen really complex geometries cause problems. If you have \nthousands of points, when 10 would do, try ST_Simplify and see if it \ndoesnt speed things up.\n\n-Andy\n\n\nOn 3/8/2011 2:42 AM, Andreas For� Tollefsen wrote:\n> Hi. Thanks for the comments. My data is right, and the result is exactly\n> what i want, but as you say i think what causes the query to be slow is\n> the ST_Intersection which creates the intersection between the vector\n> grid (fishnet) and the country polygons.\n> I will check with the postgis user list if they have any idea on how to\n> speed up this query.\n>\n> Best,\n> Andreas\n>\n> 2011/3/8 Tom Lane <[email protected] <mailto:[email protected]>>\n>\n> =?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]\n> <mailto:[email protected]>> writes:\n> > This is a query i am working on now. It creates an intersection\n> of two\n> > geometries. One is a grid of 0.5 x 0.5 decimal degree sized\n> cells, while the\n> > other is the country geometries of all countries in the world for\n> a certain\n> > year.\n>\n> Hm, are you sure your data is right? 
Because the actual rowcounts imply\n> that each country intersects about half of the grid cells, which doesn't\n> seem right.\n>\n> > priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n> > ST_Intersection(pri\n> > ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land,\n> cshapeswdate WHERE\n> > ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n> >\n> QUERY\n> > PLAN\n>\n> >\n> --------------------------------------------------------------------------------\n> > ------------------------------------------------------------------\n> > Nested Loop (cost=0.00..12644.85 rows=43351 width=87704) (actual\n> > time=1.815..7\n> > 074973.711 rows=130331 loops=1)\n> > Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n> > -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242\n> width=87248)\n> > (actual\n> > time=0.007..0.570 rows=242 loops=1)\n> > -> Index Scan using idx_priogrid_land_cell on priogrid_land\n> > (cost=0.00..7.1\n> > 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n> > Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n> > Total runtime: 7075188.549 ms\n> > (6 rows)\n>\n> AFAICT, all of the runtime is going into calculating the ST_Intersects\n> and/or ST_Intersection functions. The two scans are only accounting for\n> perhaps 5.5 seconds, and the join infrastructure isn't going to be\n> terribly expensive, so it's got to be those functions. Not knowing much\n> about PostGIS, I don't know if the functions themselves can be expected\n> to be really slow. If it's not them, it could be the cost of fetching\n> their arguments --- in particular, I bet the country outlines are very\n> large objects and are toasted out-of-line. There's been some past\n> discussion of automatically avoiding repeated detoastings in scenarios\n> like the above, but nothing's gotten to the point of acceptance yet.\n> Possibly you could do something to force detoasting in a subquery.\n>\n> regards, tom lane\n>\n>\n\n", "msg_date": "Tue, 08 Mar 2011 09:21:37 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues" }, { "msg_contents": "Andy. Thanks. That is a great tips. I tried it but i get the error:\nNOTICE: ptarray_simplify returned a <2 pts array.\n\nQuery:\nSELECT ST_Intersection(priogrid_land.cell,\nST_Simplify(cshapeswdate.geom,0.1)) AS geom,\npriogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate, enddate,\ncapname, caplong, caplat, col, row, xcoord, ycoord\nFROM priogrid_land, cshapeswdate WHERE ST_Intersects(priogrid_land.cell,\nST_Simplify(cshapeswdate.geom,0.1)) AND cshapeswdate.gwsyear <=1946 AND\ncshapeswdate.gweyear >=1946 AND cshapeswdate.startdate <= '1946/1/1';\n\n\n2011/3/8 Andy Colson <[email protected]>\n\n> I have seen really complex geometries cause problems. If you have\n> thousands of points, when 10 would do, try ST_Simplify and see if it doesnt\n> speed things up.\n>\n> -Andy\n>\n>\n>\n> On 3/8/2011 2:42 AM, Andreas Forř Tollefsen wrote:\n>\n>> Hi. Thanks for the comments. 
My data is right, and the result is exactly\n>> what i want, but as you say i think what causes the query to be slow is\n>> the ST_Intersection which creates the intersection between the vector\n>> grid (fishnet) and the country polygons.\n>> I will check with the postgis user list if they have any idea on how to\n>> speed up this query.\n>>\n>> Best,\n>> Andreas\n>>\n>> 2011/3/8 Tom Lane <[email protected] <mailto:[email protected]>>\n>>\n>>\n>> =?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]\n>> <mailto:[email protected]>> writes:\n>> > This is a query i am working on now. It creates an intersection\n>> of two\n>> > geometries. One is a grid of 0.5 x 0.5 decimal degree sized\n>> cells, while the\n>> > other is the country geometries of all countries in the world for\n>> a certain\n>> > year.\n>>\n>> Hm, are you sure your data is right? Because the actual rowcounts\n>> imply\n>> that each country intersects about half of the grid cells, which\n>> doesn't\n>> seem right.\n>>\n>> > priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n>> > ST_Intersection(pri\n>> > ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land,\n>> cshapeswdate WHERE\n>> > ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n>> >\n>> QUERY\n>> > PLAN\n>>\n>> >\n>>\n>> --------------------------------------------------------------------------------\n>> > ------------------------------------------------------------------\n>> > Nested Loop (cost=0.00..12644.85 rows=43351 width=87704) (actual\n>> > time=1.815..7\n>> > 074973.711 rows=130331 loops=1)\n>> > Join Filter: _st_intersects(priogrid_land.cell,\n>> cshapeswdate.geom)\n>> > -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242\n>> width=87248)\n>> > (actual\n>> > time=0.007..0.570 rows=242 loops=1)\n>> > -> Index Scan using idx_priogrid_land_cell on priogrid_land\n>> > (cost=0.00..7.1\n>> > 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n>> > Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n>> > Total runtime: 7075188.549 ms\n>> > (6 rows)\n>>\n>> AFAICT, all of the runtime is going into calculating the ST_Intersects\n>> and/or ST_Intersection functions. The two scans are only accounting\n>> for\n>> perhaps 5.5 seconds, and the join infrastructure isn't going to be\n>> terribly expensive, so it's got to be those functions. Not knowing\n>> much\n>> about PostGIS, I don't know if the functions themselves can be expected\n>> to be really slow. If it's not them, it could be the cost of fetching\n>> their arguments --- in particular, I bet the country outlines are very\n>> large objects and are toasted out-of-line. There's been some past\n>> discussion of automatically avoiding repeated detoastings in scenarios\n>> like the above, but nothing's gotten to the point of acceptance yet.\n>> Possibly you could do something to force detoasting in a subquery.\n>>\n>> regards, tom lane\n>>\n>>\n>>\n>\n\nAndy. Thanks. That is a great tips. 
I tried it but i get the error:NOTICE: ptarray_simplify returned a <2 pts array.Query:SELECT ST_Intersection(priogrid_land.cell, ST_Simplify(cshapeswdate.geom,0.1)) AS geom, \npriogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate, enddate, capname, caplong, caplat, col, row, xcoord, ycoord FROM priogrid_land, cshapeswdate WHERE ST_Intersects(priogrid_land.cell, ST_Simplify(cshapeswdate.geom,0.1)) AND cshapeswdate.gwsyear <=1946 AND cshapeswdate.gweyear >=1946 AND cshapeswdate.startdate <= '1946/1/1';\n2011/3/8 Andy Colson <[email protected]>\nI have seen really complex geometries cause problems.  If you have thousands of points, when 10 would do, try ST_Simplify and see if it doesnt speed things up.\n\n-Andy\n\n\nOn 3/8/2011 2:42 AM, Andreas Forř Tollefsen wrote:\n\nHi. Thanks for the comments. My data is right, and the result is exactly\nwhat i want, but as you say i think what causes the query to be slow is\nthe ST_Intersection which creates the intersection between the vector\ngrid (fishnet) and the country polygons.\nI will check with the postgis user list if they have any idea on how to\nspeed up this query.\n\nBest,\nAndreas\n\n2011/3/8 Tom Lane <[email protected] <mailto:[email protected]>>\n\n    =?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]\n    <mailto:[email protected]>> writes:\n     > This is a query i am working on now. It creates an intersection\n    of two\n     > geometries. One is a grid of 0.5 x 0.5 decimal degree sized\n    cells, while the\n     > other is the country geometries of all countries in the world for\n    a certain\n     > year.\n\n    Hm, are you sure your data is right?  Because the actual rowcounts imply\n    that each country intersects about half of the grid cells, which doesn't\n    seem right.\n\n     > priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n     > ST_Intersection(pri\n     > ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land,\n    cshapeswdate WHERE\n     > ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n     >\n       QUERY\n     > PLAN\n\n     >\n    --------------------------------------------------------------------------------\n     > ------------------------------------------------------------------\n     >  Nested Loop  (cost=0.00..12644.85 rows=43351 width=87704) (actual\n     > time=1.815..7\n     > 074973.711 rows=130331 loops=1)\n     >    Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n     >    ->  Seq Scan on cshapeswdate  (cost=0.00..14.42 rows=242\n    width=87248)\n     > (actual\n     >  time=0.007..0.570 rows=242 loops=1)\n     >    ->  Index Scan using idx_priogrid_land_cell on priogrid_land\n     >  (cost=0.00..7.1\n     > 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n     >          Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n     >  Total runtime: 7075188.549 ms\n     > (6 rows)\n\n    AFAICT, all of the runtime is going into calculating the ST_Intersects\n    and/or ST_Intersection functions.  The two scans are only accounting for\n    perhaps 5.5 seconds, and the join infrastructure isn't going to be\n    terribly expensive, so it's got to be those functions.  Not knowing much\n    about PostGIS, I don't know if the functions themselves can be expected\n    to be really slow.  If it's not them, it could be the cost of fetching\n    their arguments --- in particular, I bet the country outlines are very\n    large objects and are toasted out-of-line.  
There's been some past\n    discussion of automatically avoiding repeated detoastings in scenarios\n    like the above, but nothing's gotten to the point of acceptance yet.\n    Possibly you could do something to force detoasting in a subquery.\n\n                            regards, tom lane", "msg_date": "Tue, 8 Mar 2011 17:58:58 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues" }, { "msg_contents": "Forgot to mention that the query terminates the connection because of a\ncrash of server process.\n\n2011/3/8 Andreas Forø Tollefsen <[email protected]>\n\n> Andy. Thanks. That is a great tips. I tried it but i get the error:\n> NOTICE: ptarray_simplify returned a <2 pts array.\n>\n> Query:\n> SELECT ST_Intersection(priogrid_land.cell,\n> ST_Simplify(cshapeswdate.geom,0.1)) AS geom,\n> priogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate, enddate,\n> capname, caplong, caplat, col, row, xcoord, ycoord\n> FROM priogrid_land, cshapeswdate WHERE ST_Intersects(priogrid_land.cell,\n> ST_Simplify(cshapeswdate.geom,0.1)) AND cshapeswdate.gwsyear <=1946 AND\n> cshapeswdate.gweyear >=1946 AND cshapeswdate.startdate <= '1946/1/1';\n>\n>\n> 2011/3/8 Andy Colson <[email protected]>\n>\n> I have seen really complex geometries cause problems. If you have\n>> thousands of points, when 10 would do, try ST_Simplify and see if it doesnt\n>> speed things up.\n>>\n>> -Andy\n>>\n>>\n>>\n>> On 3/8/2011 2:42 AM, Andreas Forř Tollefsen wrote:\n>>\n>>> Hi. Thanks for the comments. My data is right, and the result is exactly\n>>> what i want, but as you say i think what causes the query to be slow is\n>>> the ST_Intersection which creates the intersection between the vector\n>>> grid (fishnet) and the country polygons.\n>>> I will check with the postgis user list if they have any idea on how to\n>>> speed up this query.\n>>>\n>>> Best,\n>>> Andreas\n>>>\n>>> 2011/3/8 Tom Lane <[email protected] <mailto:[email protected]>>\n>>>\n>>>\n>>> =?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]\n>>> <mailto:[email protected]>> writes:\n>>> > This is a query i am working on now. It creates an intersection\n>>> of two\n>>> > geometries. One is a grid of 0.5 x 0.5 decimal degree sized\n>>> cells, while the\n>>> > other is the country geometries of all countries in the world for\n>>> a certain\n>>> > year.\n>>>\n>>> Hm, are you sure your data is right? 
Because the actual rowcounts\n>>> imply\n>>> that each country intersects about half of the grid cells, which\n>>> doesn't\n>>> seem right.\n>>>\n>>> > priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n>>> > ST_Intersection(pri\n>>> > ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land,\n>>> cshapeswdate WHERE\n>>> > ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n>>> >\n>>> QUERY\n>>> > PLAN\n>>>\n>>> >\n>>>\n>>> --------------------------------------------------------------------------------\n>>> > ------------------------------------------------------------------\n>>> > Nested Loop (cost=0.00..12644.85 rows=43351 width=87704) (actual\n>>> > time=1.815..7\n>>> > 074973.711 rows=130331 loops=1)\n>>> > Join Filter: _st_intersects(priogrid_land.cell,\n>>> cshapeswdate.geom)\n>>> > -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242\n>>> width=87248)\n>>> > (actual\n>>> > time=0.007..0.570 rows=242 loops=1)\n>>> > -> Index Scan using idx_priogrid_land_cell on priogrid_land\n>>> > (cost=0.00..7.1\n>>> > 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n>>> > Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n>>> > Total runtime: 7075188.549 ms\n>>> > (6 rows)\n>>>\n>>> AFAICT, all of the runtime is going into calculating the ST_Intersects\n>>> and/or ST_Intersection functions. The two scans are only accounting\n>>> for\n>>> perhaps 5.5 seconds, and the join infrastructure isn't going to be\n>>> terribly expensive, so it's got to be those functions. Not knowing\n>>> much\n>>> about PostGIS, I don't know if the functions themselves can be\n>>> expected\n>>> to be really slow. If it's not them, it could be the cost of fetching\n>>> their arguments --- in particular, I bet the country outlines are very\n>>> large objects and are toasted out-of-line. There's been some past\n>>> discussion of automatically avoiding repeated detoastings in scenarios\n>>> like the above, but nothing's gotten to the point of acceptance yet.\n>>> Possibly you could do something to force detoasting in a subquery.\n>>>\n>>> regards, tom lane\n>>>\n>>>\n>>>\n>>\n>\n\nForgot to mention that the query terminates the connection because of a crash of server process.2011/3/8 Andreas Forø Tollefsen <[email protected]>\nAndy. Thanks. That is a great tips. I tried it but i get the error:NOTICE: ptarray_simplify returned a <2 pts array.\nQuery:SELECT ST_Intersection(priogrid_land.cell, ST_Simplify(cshapeswdate.geom,0.1)) AS geom, \npriogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate, enddate, capname, caplong, caplat, col, row, xcoord, ycoord FROM priogrid_land, cshapeswdate WHERE ST_Intersects(priogrid_land.cell, ST_Simplify(cshapeswdate.geom,0.1)) AND cshapeswdate.gwsyear <=1946 AND cshapeswdate.gweyear >=1946 AND cshapeswdate.startdate <= '1946/1/1';\n2011/3/8 Andy Colson <[email protected]>\n\nI have seen really complex geometries cause problems.  If you have thousands of points, when 10 would do, try ST_Simplify and see if it doesnt speed things up.\n\n-Andy\n\n\nOn 3/8/2011 2:42 AM, Andreas Forř Tollefsen wrote:\n\nHi. Thanks for the comments. 
My data is right, and the result is exactly\nwhat i want, but as you say i think what causes the query to be slow is\nthe ST_Intersection which creates the intersection between the vector\ngrid (fishnet) and the country polygons.\nI will check with the postgis user list if they have any idea on how to\nspeed up this query.\n\nBest,\nAndreas\n\n2011/3/8 Tom Lane <[email protected] <mailto:[email protected]>>\n\n    =?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]\n    <mailto:[email protected]>> writes:\n     > This is a query i am working on now. It creates an intersection\n    of two\n     > geometries. One is a grid of 0.5 x 0.5 decimal degree sized\n    cells, while the\n     > other is the country geometries of all countries in the world for\n    a certain\n     > year.\n\n    Hm, are you sure your data is right?  Because the actual rowcounts imply\n    that each country intersects about half of the grid cells, which doesn't\n    seem right.\n\n     > priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n     > ST_Intersection(pri\n     > ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land,\n    cshapeswdate WHERE\n     > ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n     >\n       QUERY\n     > PLAN\n\n     >\n    --------------------------------------------------------------------------------\n     > ------------------------------------------------------------------\n     >  Nested Loop  (cost=0.00..12644.85 rows=43351 width=87704) (actual\n     > time=1.815..7\n     > 074973.711 rows=130331 loops=1)\n     >    Join Filter: _st_intersects(priogrid_land.cell, cshapeswdate.geom)\n     >    ->  Seq Scan on cshapeswdate  (cost=0.00..14.42 rows=242\n    width=87248)\n     > (actual\n     >  time=0.007..0.570 rows=242 loops=1)\n     >    ->  Index Scan using idx_priogrid_land_cell on priogrid_land\n     >  (cost=0.00..7.1\n     > 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n     >          Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n     >  Total runtime: 7075188.549 ms\n     > (6 rows)\n\n    AFAICT, all of the runtime is going into calculating the ST_Intersects\n    and/or ST_Intersection functions.  The two scans are only accounting for\n    perhaps 5.5 seconds, and the join infrastructure isn't going to be\n    terribly expensive, so it's got to be those functions.  Not knowing much\n    about PostGIS, I don't know if the functions themselves can be expected\n    to be really slow.  If it's not them, it could be the cost of fetching\n    their arguments --- in particular, I bet the country outlines are very\n    large objects and are toasted out-of-line.  There's been some past\n    discussion of automatically avoiding repeated detoastings in scenarios\n    like the above, but nothing's gotten to the point of acceptance yet.\n    Possibly you could do something to force detoasting in a subquery.\n\n                            regards, tom lane", "msg_date": "Tue, 8 Mar 2011 18:00:15 +0100", "msg_from": "=?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues" }, { "msg_contents": "On 3/8/2011 10:58 AM, Andreas Forø Tollefsen wrote:\n> Andy. Thanks. That is a great tips. 
I tried it but i get the error:\n> NOTICE: ptarray_simplify returned a <2 pts array.\n>\n> Query:\n> SELECT ST_Intersection(priogrid_land.cell,\n> ST_Simplify(cshapeswdate.geom,0.1)) AS geom,\n> priogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate,\n> enddate, capname, caplong, caplat, col, row, xcoord, ycoord\n> FROM priogrid_land, cshapeswdate WHERE ST_Intersects(priogrid_land.cell,\n> ST_Simplify(cshapeswdate.geom,0.1)) AND cshapeswdate.gwsyear <=1946 AND\n> cshapeswdate.gweyear >=1946 AND cshapeswdate.startdate <= '1946/1/1';\n>\n>\n> 2011/3/8 Andy Colson <[email protected] <mailto:[email protected]>>\n>\n> I have seen really complex geometries cause problems. If you have\n> thousands of points, when 10 would do, try ST_Simplify and see if it\n> doesnt speed things up.\n>\n> -Andy\n>\n>\n>\n> On 3/8/2011 2:42 AM, Andreas Forř Tollefsen wrote:\n>\n> Hi. Thanks for the comments. My data is right, and the result is\n> exactly\n> what i want, but as you say i think what causes the query to be\n> slow is\n> the ST_Intersection which creates the intersection between the\n> vector\n> grid (fishnet) and the country polygons.\n> I will check with the postgis user list if they have any idea on\n> how to\n> speed up this query.\n>\n> Best,\n> Andreas\n>\n> 2011/3/8 Tom Lane <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>>\n>\n>\n> =?ISO-8859-1?Q?Andreas_For=F8_Tollefsen?=\n> <[email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>> writes:\n> > This is a query i am working on now. It creates an intersection\n> of two\n> > geometries. One is a grid of 0.5 x 0.5 decimal degree sized\n> cells, while the\n> > other is the country geometries of all countries in the world for\n> a certain\n> > year.\n>\n> Hm, are you sure your data is right? Because the actual\n> rowcounts imply\n> that each country intersects about half of the grid cells,\n> which doesn't\n> seem right.\n>\n> > priogrid=# EXPLAIN ANALYZE SELECT priogrid_land.gid, gwcode,\n> > ST_Intersection(pri\n> > ogrid_land.cell, cshapeswdate.geom) FROM priogrid_land,\n> cshapeswdate WHERE\n> > ST_Intersects(priogrid_land.cell, cshapeswdate.geom);\n> >\n> QUERY\n> > PLAN\n>\n> >\n>\n> --------------------------------------------------------------------------------\n> >\n> ------------------------------------------------------------------\n> > Nested Loop (cost=0.00..12644.85 rows=43351 width=87704)\n> (actual\n> > time=1.815..7\n> > 074973.711 rows=130331 loops=1)\n> > Join Filter: _st_intersects(priogrid_land.cell,\n> cshapeswdate.geom)\n> > -> Seq Scan on cshapeswdate (cost=0.00..14.42 rows=242\n> width=87248)\n> > (actual\n> > time=0.007..0.570 rows=242 loops=1)\n> > -> Index Scan using idx_priogrid_land_cell on priogrid_land\n> > (cost=0.00..7.1\n> > 5 rows=1 width=456) (actual time=0.069..5.604 rows=978 loops=242)\n> > Index Cond: (priogrid_land.cell && cshapeswdate.geom)\n> > Total runtime: 7075188.549 ms\n> > (6 rows)\n>\n> AFAICT, all of the runtime is going into calculating the\n> ST_Intersects\n> and/or ST_Intersection functions. The two scans are only\n> accounting for\n> perhaps 5.5 seconds, and the join infrastructure isn't going\n> to be\n> terribly expensive, so it's got to be those functions. Not\n> knowing much\n> about PostGIS, I don't know if the functions themselves can\n> be expected\n> to be really slow. 
If it's not them, it could be the cost\n> of fetching\n> their arguments --- in particular, I bet the country\n> outlines are very\n> large objects and are toasted out-of-line. There's been\n> some past\n> discussion of automatically avoiding repeated detoastings in\n> scenarios\n> like the above, but nothing's gotten to the point of\n> acceptance yet.\n> Possibly you could do something to force detoasting in a\n> subquery.\n>\n> regards, tom lane\n>\n>\n>\n>\n\n\n\new... thats not good. Seems like it simplified it down to a single \npoint? (not 100% sure that's what the error means, just a guess)\n\nTry getting some info about it:\n\nselect\n ST_Npoints(geom) As before,\n ST_NPoints(ST_Simplify(geom,0.1)) as after\nfrom cshapeswdate\n\n\nAlso try things like ST_IsSimple ST_IsValid. I seem to recall sometimes \nneeding ST_Points or st_NumPoints instead of ST_Npoints.\n\n-Andy\n", "msg_date": "Tue, 08 Mar 2011 13:13:38 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues" } ]
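A small diagnostic sketch along the lines Andy suggests, against the same cshapeswdate table from the thread; ST_NPoints, ST_IsValid and ST_SimplifyPreserveTopology are stock PostGIS functions, but the 0.1 tolerance and the geom_simple column name are only illustrative assumptions:

-- how heavy are the country outlines, and are any of them invalid?
SELECT gwcode,
       ST_NPoints(geom)                                   AS points_before,
       ST_NPoints(ST_SimplifyPreserveTopology(geom, 0.1)) AS points_after,
       ST_IsValid(geom)                                   AS is_valid
FROM cshapeswdate
ORDER BY points_before DESC
LIMIT 20;

-- optionally materialise a simplified copy once, so the spatial join does not
-- re-simplify (and re-detoast) each country outline for every grid cell;
-- ST_SimplifyPreserveTopology, unlike ST_Simplify, does not collapse rings
-- below the minimum number of points
ALTER TABLE cshapeswdate ADD COLUMN geom_simple geometry;  -- PostGIS 2.0; older versions would use AddGeometryColumn
UPDATE cshapeswdate
   SET geom_simple = ST_SimplifyPreserveTopology(geom, 0.1);
CREATE INDEX cshapeswdate_geom_simple_idx
    ON cshapeswdate USING gist (geom_simple);
ANALYZE cshapeswdate;

Whether the simplified outlines are acceptable depends on how much positional error the 0.5 x 0.5 degree grid analysis can tolerate.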
[ { "msg_contents": "Originally, I posted to -general but I found some time to write some\nsamples, and realized it's probably more of a performance question.\n\nThe original post is here:\nhttp://archives.postgresql.org/pgsql-general/2011-03/msg00198.php\n\nI was hoping that somebody could help me understand the differences\nbetween three plans.\nAll of the plans are updating a table using a second table, and should\nbe logically equivalent.\nTwo of the plans use joins, and one uses an exists subquery.\nOne of the plans uses row constructors and IS NOT DISTINCT FROM. It is\nthis plan which has really awful performance.\nClearly it is due to the nested loop, but why would the planner choose\nthat approach?\n\nI also don't understand why in the 'exists' plan the planner thinks\nthe index scan will provide 1019978 rows, when there are only 1000000,\nbut that is a lesser issue.\n\nHere is a sample SQL file which demonstrates the issues and includes\nall three variations.\n\nbegin;\ncreate temporary table t7 (\n i BIGINT NOT NULL,\n k BIGINT\n);\n\ncreate temporary table t8 (\n i BIGINT NOT NULL,\n j INT\n);\n\nCREATE FUNCTION populate_t8()\nRETURNS VOID\nLANGUAGE SQL\nAS\n$$\ntruncate t8;\ninsert into t8\nSELECT i, 1 from t7\nORDER BY i LIMIT 10000;\n\ninsert into t8\nSELECT i, 2 from t7\nWHERE i > 10000\nORDER BY i LIMIT 10000;\n\nSELECT i, 3 from t7\nWHERE i > 20000\nORDER BY i LIMIT 20000;\n\nanalyze t8;\n$$;\n\nINSERT INTO t7\nselect x, x + 10 from generate_series(1,1000000) as x ;\nanalyze t7;\n\nselect populate_t8();\n\nexplain analyze verbose\nupdate\n t7\nSET\n k = 1\nFROM\n t8\nWHERE\n t7.i = t8.i\n AND\n (\n t8.j = 2\n OR\n t8.j = 1\n );\n\nselect populate_t8();\n\nexplain analyze verbose\nupdate\n t7\nSET\n k = 1\nWHERE\n EXISTS (\n SELECT 1 FROM t8\n WHERE t8.i = t7.i\n AND\n (\n t8.j = 2\n OR\n t8.j = 1\n )\n );\n\nselect populate_t8();\n\nexplain\nupdate\n t7\nSET\n k = 1\nFROM\n t8\nWHERE\n ROW(t7.i) IS NOT DISTINCT FROM ROW(t8.i)\n AND\n (\n t8.j = 2\n OR\n t8.j = 1\n );\n\nexplain analyze verbose\nupdate\n t7\nSET\n k = 1\nFROM\n t8\nWHERE\n ROW(t7.i) IS NOT DISTINCT FROM ROW(t8.i)\n AND\n (\n t8.j = 2\n OR\n t8.j = 1\n );\n\nrollback;\n\n\n\n\n\n-- \nJon\n", "msg_date": "Mon, 7 Mar 2011 13:07:35 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "plan variations: join vs. exists vs. row comparison" }, { "msg_contents": "Jon Nelson <[email protected]> writes:\n> I was hoping that somebody could help me understand the differences\n> between three plans.\n> All of the plans are updating a table using a second table, and should\n> be logically equivalent.\n> Two of the plans use joins, and one uses an exists subquery.\n> One of the plans uses row constructors and IS NOT DISTINCT FROM. It is\n> this plan which has really awful performance.\n> Clearly it is due to the nested loop, but why would the planner choose\n> that approach?\n\nIS NOT DISTINCT FROM pretty much disables all optimizations: it can't be\nan indexqual, merge join qual, or hash join qual. So it's not\nsurprising that you get a sucky plan for it. Possibly somebody will\nwork on improving that someday.\n\nAs for your other questions, what PG version are you using? Because I\ndo get pretty much the same plan (modulo a plain join versus a semijoin)\nfor the first two queries, when using 9.0 or later. And the results of\nANALYZE are only approximate, so you shouldn't be surprised at all if a\nrowcount estimate is off by a couple percent. 
Most of the time, you\nshould be happy if it's within a factor of 2 of reality.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Mar 2011 15:00:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan variations: join vs. exists vs. row comparison " }, { "msg_contents": "On Mon, Mar 7, 2011 at 1:07 PM, Jon Nelson <[email protected]> wrote:\n> Originally, I posted to -general but I found some time to write some\n> samples, and realized it's probably more of a performance question.\n>\n> The original post is here:\n> http://archives.postgresql.org/pgsql-general/2011-03/msg00198.php\n>\n> I was hoping that somebody could help me understand the differences\n> between three plans.\n> All of the plans are updating a table using a second table, and should\n> be logically equivalent.\n> Two of the plans use joins, and one uses an exists subquery.\n> One of the plans uses row constructors and IS NOT DISTINCT FROM. It is\n> this plan which has really awful performance.\n\nThe problem is really coming from SQL: it requires row wise\ncomparisons to be of all fields in left to right order and the fact\nthat you can't match NULL to NULL with =.\n\nIf you have a table with a,b,c, (1,1,NULL) is not distinct from (1,2,3) becomes:\nFilter: ((NOT (a IS DISTINCT FROM 1)) AND (NOT (b IS DISTINCT FROM 1))\nAND (NOT (c IS DISTINCT FROM NULL::integer)))\n\nAt present postgresql does not have the facilities to turn that into\nan index lookup. SQL doesn't allow the way you'd want to write this\nthe way you'd really like to:\n\nselect * from v where (a,b,c) = (1,1,NULL);\n\nbecause the comparison can't be applied from a row to another row but\nonly between the member fields. You can cheat the system, but only if\nyou reserve a special index for that purpose:\n\ncreate table v(a int, b int, c int);\ncreate index on v(v);\nselect * from v where v = (1,1, NULL) will match as 'is not distinct\nfrom' does, using the index. This is because composite type\ncomparison (as opposed to its fields) follows a different code path.\nConfused yet? You can also use the above trick with a type if you are\nnot comparing all fields of 'v':\n\ncreate type foo(a int, b int);\ncreate index on v(((a,b)::foo));\nselect * from v where (a,b)::foo = (1,1);\n\nwill get you field subset comparison with index. Note if you do the\nabove, the index can only match on the entire composite, not\nparticular fields...\n\nmerlin\n", "msg_date": "Mon, 7 Mar 2011 14:05:42 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan variations: join vs. exists vs. row comparison" }, { "msg_contents": "On Mon, Mar 7, 2011 at 2:00 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> I was hoping that somebody could help me understand the differences\n>> between three plans.\n>> All of the plans are updating a table using a second table, and should\n>> be logically equivalent.\n>> Two of the plans use joins, and one uses an exists subquery.\n>> One of the plans uses row constructors and IS NOT DISTINCT FROM. It is\n>> this plan which has really awful performance.\n>> Clearly it is due to the nested loop, but why would the planner choose\n>> that approach?\n>\n> IS NOT DISTINCT FROM pretty much disables all optimizations: it can't be\n> an indexqual, merge join qual, or hash join qual.  So it's not\n> surprising that you get a sucky plan for it.  
Possibly somebody will\n> work on improving that someday.\n>\n> As for your other questions, what PG version are you using?  Because I\n> do get pretty much the same plan (modulo a plain join versus a semijoin)\n> for the first two queries, when using 9.0 or later.  And the results of\n> ANALYZE are only approximate, so you shouldn't be surprised at all if a\n> rowcount estimate is off by a couple percent.  Most of the time, you\n> should be happy if it's within a factor of 2 of reality.\n\nSorry - I had stated in the original post that I was using 8.4.5 on 64\nbit openSUSE and CentOS 5.5, and had forgotten to carry that\ninformation over into the second post.\n\nWhat is the difference between a plain join and a semi join?\n\n-- \nJon\n", "msg_date": "Mon, 7 Mar 2011 14:06:56 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plan variations: join vs. exists vs. row comparison" }, { "msg_contents": "Jon Nelson <[email protected]> wrote:\n \n> What is the difference between a plain join and a semi join?\n \nAs soon as a semi join finds a match it stops looking for more.\n \n-Kevin\n", "msg_date": "Mon, 07 Mar 2011 14:24:36 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan variations: join vs. exists vs. row\n\t comparison" } ]
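A hedged sketch of the workaround implied by this thread, written against Jon's own t7/t8 sample tables (it is not a query any of the posters ran): because both i columns are declared NOT NULL in the sample script, the row-wise IS NOT DISTINCT FROM predicate cannot behave differently from plain equality, so the update can be rewritten into a form the planner is able to use as a hash or merge join qual.

explain analyze
update t7
set    k = 1
from   t8
where  t7.i = t8.i          -- equivalent to ROW(t7.i) IS NOT DISTINCT FROM ROW(t8.i)
                            -- here, because neither column allows NULLs
  and  t8.j in (1, 2);      -- same filter as "t8.j = 2 OR t8.j = 1"

If the columns could actually hold NULLs this rewrite would no longer be equivalent, and something along the lines of Merlin's composite-comparison trick, with an index reserved for it, would be needed instead.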
[ { "msg_contents": "Dear all,\n\nCan anyone Please guide me with some suggestions on how to tune the \nbelow query as I needed to perform the below query as faster as i can.\n\nI have 3 tables on which the query runs:\n\npdc_uima=# select \npg_size_pretty(pg_total_relation_size('page_content_demo'));\n pg_size_pretty\n----------------\n 1260 MB\npdc_uima=# select \npg_size_pretty(pg_total_relation_size('metadata_demo')); \n pg_size_pretty\n----------------\n 339 MB\npdc_uima=# select \npg_size_pretty(pg_total_relation_size('loc_context_demo'));\n pg_size_pretty\n----------------\n 345 MB\n\n\nMy Query is :\n\nexplain analyze select \nm.doc_category,p.heading,l.lat,l.lon,p.crawled_page_url,p.category,p.dt_stamp,p.crawled_page_id,p.content \nfrom loc_context_demo l,page_content_demo p,metadata_demo m where \nl.source_id=p.crawled_page_id and m.doc_id=l.source_id and \nst_within(l.geom,GeomFromText('POLYGON((19.548124415111626 \n73.21900819489186,19.548124415111626 73.21900819489186,19.55011196668719 \n73.21994746420259,19.552097947014058 73.22087843652453,19.55408236353752 \n73.2218011513938,19.588219714571828 75.1654223522423,19.599133094249137 \n76.46053245473952,19.57365361244478 79.69902443272414,19.68652202327923 \n82.74135922990342,19.56446013085233 85.15028561045767,19.551174510964337 \n85.37052962767306,19.553500408319763 85.37198146688313,19.55582660405639 \n85.37341757236464,19.55815307123746 85.37483800206365,19.56047978332553 \n85.37624281337641,19.562806714176496 85.37763206315,19.565133838033702 \n85.37900580768307,19.567461129522137 85.38036410272655,19.56978856364264 \n85.3817070034843,19.572116115766228 85.38303456461405,19.56649262333915 \n85.15194545531163,18.773772341648947 84.46107113406764,17.95738291093396 \n84.21223929994393,16.939045429366846 \n83.74699366402301,15.915601954028702 \n83.28824222570091,14.692125537681664 \n82.40657922201932,13.869583501048409 81.75586112437654,13.23910975048389 \n81.53550253438608,12.607561680274236 \n81.31596402018643,11.960089890060914 81.3105660302366,11.961002716398268 \n81.3118121189388,11.102247999047648 81.09276935832209,10.230582572954035 \n81.08704044732613,9.364677626102125 80.87125821859627,8.484379037020355 \n80.65888115596269,7.5953685679122565 80.44798762937165,6.678959105840814 \n80.44990760581172,5.756074889890018 80.24361993771154,5.756819343429733 \n80.2442993962505,5.757563827399336 80.24498070122854,5.758308340445826 \n80.24566385572928,4.83232192901788 80.03636862497382,4.832964922142748 \n80.0371046690356,4.833608089257533 80.0378393944808,4.834251429338765 \n80.038572803232,4.834894941366702 80.03930489720865,4.835538624325311 \n80.04003567832711,5.575253995307823 78.3586811224377,5.82022779480326 \n77.52223682832437,6.9742086723828365 \n76.89564878408815,7.6455592543043425 76.26930608306816,8.761889779304363 \n75.43381068367601,10.059251343658966 74.3840274150521,11.136283050704487 \n73.75034557867339,12.187315498051541 \n72.89986083146191,13.242658350472773 \n72.46589681727389,14.721187899066917 \n72.23365448169334,16.384503005199107 \n71.77586874336029,17.834343858181125 \n71.52762561326514,18.868652843809762 \n71.49887565337562,19.487812049094533 \n71.48086802014905,19.489698327426513 71.48186192551053,19.89987693684175 \n71.46838407646581,20.310716259621934 71.454517020832,20.312680952069726 \n71.45872696349684,20.314637217119998 71.46296731473512,20.31658488533959 \n71.46723821288163,20.318523784696943 71.47153979566505,20.53302678388929 \n71.88565153869924,20.767109171722186 
\n72.75373018504017,20.791013365997372 73.62713545368305,20.79185810562998 \n73.6280821559539,20.79269895778539 73.62902276312589,20.793535942149113 \n73.6299573226539,20.79436907831312 73.63088588154903,20.795198385776008 \n73.6318084863835,20.796023883943136 73.63272518329538,20.796845592126836 \n73.6336360179933,20.79766352954653 73.63454103576112,20.798477715328943 \n73.63544028146251,20.799288168508316 73.6363337995455,20.80009490802656 \n73.63722163404697,20.800897952733482 \n73.63810382859708,19.980139052593813 74.07773531285727,19.98131962229422 \n74.0780344216337,19.982501271580563 74.078336024665,19.983684009372077 \n74.07864013150498,19.98486784461094 74.07894675180037,19.98605278626243 \n74.07925589529141,19.987238843315097 \n74.07956757181258,19.988426024780967 \n74.07988179129316,19.548124415111626 73.21900819489186))',4326)) and \nm.doc_category='Naxalism'order by p.dt_stamp desc;\n\nToday in the morning , I am shocked to see the result below :\n\n Sort (cost=129344.37..129354.40 rows=4013 width=1418) (actual \ntime=21377.760..21378.441 rows=4485 loops=1)\n Sort Key: p.dt_stamp\n Sort Method: quicksort Memory: 7161kB\n -> Nested Loop (cost=44490.85..129104.18 rows=4013 width=1418) \n(actual time=267.729..21353.703 rows=4485 loops=1)\n -> Hash Join (cost=44490.85..95466.11 rows=3637 width=73) \n(actual time=255.849..915.092 rows=4129 loops=1)\n Hash Cond: (l.source_id = m.doc_id)\n -> Seq Scan on loc_context_demo l (cost=0.00..47083.94 \nrows=16404 width=18) (actual time=0.065..628.255 rows=17072 loops=1)\n Filter: ((geom && \n'0103000020E6100000010000005C000000270BB5E1518C334075A7F23A044E5240270BB5E1518C334075A7F23A044E52404A0F4A23D48C3340A66\n5879E134E5240379D824A568D3340ED504FDF224E52401DAF7E57D88D3340E6DC74FD314E5240CEF23491959633401E3AA24796CA524057C055C96099334095F61D5D791D5340BEAE90F6DA923340\nEE69F9D0BCEC5340D2F745E8BFAF334078C1FB6D72AF544019E8897580903340687E89479E4955400AFBD2C5198D33408343E6C1B6575540580EE833B28D3340CBBE5A8BCE5755403FABFEA64A8E3\n340B443D112E65755407920A31EE38E3340540A8858FD575540CB73639A7B8F3340A1B3BC5C14585540765BCF19149033409E49AC1F2B585540BC37789CAC9033408B3F93A1415855407E0CF12145\n9133401B72ADE257585540DE7ACEA9DD913340A12736E36D58554002BBA63376923340511068A383585540BAAA1AA905913340B7556E79B94955407BEEB5F115C63240CF7C8030821D5540EC35E40\nB17F53140D90B2554950D5540C196004865F03040135383BECEEF5440F86981C7C9D42F408C2D858F72D2544069234A475E622D409773DB64059A5440C749740C3ABD2B40F6605607607054404FF7\nDC976C7A2A4041076CAC456254404B29165312372940755A27C1385454405010EEE690EB274058C75750E05354406F984C8C08EC2740AC55D1BAF453544070FB87D9593426401D04E4EEEF455440F\n0BA43EB0E7624407AAC18129245544012629B06B7BA224080CFD4B1C2375440A5BD758700F82040DE33DE1B2B2A54407CFF404CA8611E407C4A4ED4AB1C54407CA14B0E41B71A40B64B4549CB1C54\n40229EF57E38061740FA471478970F5440280B64A6FB0617400555EF99A20F54403E40DDCFBE07174060FB88C3AD0F5440D4FE49FB81081740A88AE4F4B80F5440708023334C5413403CB711DD530\n25440EE45ADC1F45413405B0243EC5F0254402A2BE45B9D551340E972ECF56B025440E19AB60146561340601910FA77025440341013B3EE5613405D05B0F8830254409216E86F97571340A445CEF1\n8F02544035F622620F4D164000A4AAA1F496534001CD87CBE9471740E09A04546C61534098744DF596E51B401717474F523953401D78337C0D951E4010D9944F3C115340AD89CA6A16862140CFC2E\n28DC3DB5240C5842E31561E2440A31AB9E793985240BD8C5BE4C64526406D4676A905705240A55424D1E75F28402385E2519739524084C31EB73D7C2A401E60E240D11D5240D786518A3F712D40F3\n5BED31F40E52403C8BF8C96E62304026AE5FD5A7F15140A381208F97D531405E60389EC4E1514086BD630860DE324036012694EDDF5140AD741D40E
17C3340E00EA98AC6DE5140E9339DDE5C7D334\n057D066D3D6DE5140F428BE555EE63340D2983401FADD51406269CD198B4F3440B9FC8ECE16DD5140BC38DFDB0B503440CA8056C85BDD514085A28D108C503440E0F9A841A1DD514003F100B50B51\n344028F11A3BE7DD5140824556C68A513440364940B52DDE514052B27C71748834406197CA83AEF851402902454461C434405BB0871D3D30524058A819DA7FCA3440C6EEBDFC22685240D7C07A36B\n7CA34407CC17F7F32685240F442A351EECA34402D04B1E84168524022BBE72B25CB344026AB843851685240FB4E9CC55BCB3440B8302D6F606852406DBD141F92CB34408E96DC8C6F685240E45EA4\n38C8CB3440FE66C4917E68524082259E12FECB344054B6157E8D685240419D54AD33CC3440182401529C68524030EC190969CC344052DCB60DAB685240ADD23F269ECC3440CA9866B1B968524092A\nB1705D3CC344044A23F3DC8685240756CF2A507CD3440B8D170B1D6685240406C9864EAFA3340D89D889DF98452403FAD44C337FB334042381684FE84524089B70D3485FB33403B021A7503855240\nEAD919B7D2FB3340E43D9E70088552406A7C8F4C20FC3340EA46AD760D855240772095F46DFC3340AD92518712855240146151AFBBFC334075B095A21785524015F3EA7C09FD33409A4984C81C855\n240270BB5E1518C334075A7F23A044E5240'::geometry) AND _st_within(geom, \n'0103000020E6100000010000005C000000270BB5E1518C334075A7F23A044E5240270BB5E1518C334075A7F\n23A044E52404A0F4A23D48C3340A665879E134E5240379D824A568D3340ED504FDF224E52401DAF7E57D88D3340E6DC74FD314E5240CEF23491959633401E3AA24796CA524057C055C96099334095\nF61D5D791D5340BEAE90F6DA923340EE69F9D0BCEC5340D2F745E8BFAF334078C1FB6D72AF544019E8897580903340687E89479E4955400AFBD2C5198D33408343E6C1B6575540580EE833B28D334\n0CBBE5A8BCE5755403FABFEA64A8E3340B443D112E65755407920A31EE38E3340540A8858FD575540CB73639A7B8F3340A1B3BC5C14585540765BCF19149033409E49AC1F2B585540BC37789CAC90\n33408B3F93A1415855407E0CF121459133401B72ADE257585540DE7ACEA9DD913340A12736E36D58554002BBA63376923340511068A383585540BAAA1AA905913340B7556E79B94955407BEEB5F11\n5C63240CF7C8030821D5540EC35E40B17F53140D90B2554950D5540C196004865F03040135383BECEEF5440F86981C7C9D42F408C2D858F72D2544069234A475E622D409773DB64059A5440C74974\n0C3ABD2B40F6605607607054404FF7DC976C7A2A4041076CAC456254404B29165312372940755A27C1385454405010EEE690EB274058C75750E05354406F984C8C08EC2740AC55D1BAF453544070F\nB87D9593426401D04E4EEEF455440F0BA43EB0E7624407AAC18129245544012629B06B7BA224080CFD4B1C2375440A5BD758700F82040DE33DE1B2B2A54407CFF404CA8611E407C4A4ED4AB1C5440\n7CA14B0E41B71A40B64B4549CB1C5440229EF57E38061740FA471478970F5440280B64A6FB0617400555EF99A20F54403E40DDCFBE07174060FB88C3AD0F5440D4FE49FB81081740A88AE4F4B80F5\n440708023334C5413403CB711DD53025440EE45ADC1F45413405B0243EC5F0254402A2BE45B9D551340E972ECF56B025440E19AB60146561340601910FA77025440341013B3EE5613405D05B0F883\n0254409216E86F97571340A445CEF18F02544035F622620F4D164000A4AAA1F496534001CD87CBE9471740E09A04546C61534098744DF596E51B401717474F523953401D78337C0D951E4010D9944\nF3C115340AD89CA6A16862140CFC2E28DC3DB5240C5842E31561E2440A31AB9E793985240BD8C5BE4C64526406D4676A905705240A55424D1E75F28402385E2519739524084C31EB73D7C2A401E60\nE240D11D5240D786518A3F712D40F35BED31F40E52403C8BF8C96E62304026AE5FD5A7F15140A381208F97D531405E60389EC4E1514086BD630860DE324036012694EDDF5140AD741D40E17C3340E\n00EA98AC6DE5140E9339DDE5C7D334057D066D3D6DE5140F428BE555EE63340D2983401FADD51406269CD198B4F3440B9FC8ECE16DD5140BC38DFDB0B503440CA8056C85BDD514085A28D108C5034\n40E0F9A841A1DD514003F100B50B51344028F11A3BE7DD5140824556C68A513440364940B52DDE514052B27C71748834406197CA83AEF851402902454461C434405BB0871D3D30524058A819DA7FC\nA3440C6EEBDFC22685240D7C07A36B7CA34407CC17F7F32685240F442A351EECA34402D04B1E84168524022BBE72B25CB344026AB843851685240FB4E9CC55BCB3440B8302D6F606852406DBD141F\n92CB34408E96DC8
C6F685240E45EA438C8CB3440FE66C4917E68524082259E12FECB344054B6157E8D685240419D54AD33CC3440182401529C68524030EC190969CC344052DCB60DAB685240ADD23\nF269ECC3440CA9866B1B968524092AB1705D3CC344044A23F3DC8685240756CF2A507CD3440B8D170B1D6685240406C9864EAFA3340D89D889DF98452403FAD44C337FB334042381684FE84524089\nB70D3485FB33403B021A7503855240EAD919B7D2FB3340E43D9E70088552406A7C8F4C20FC3340EA46AD760D855240772095F46DFC3340AD92518712855240146151AFBBFC334075B095A21785524\n015F3EA7C09FD33409A4984C81C855240270BB5E1518C334075A7F23A044E5240'::geometry))\n -> Hash (cost=43457.32..43457.32 rows=82682 width=55) \n(actual time=255.707..255.707 rows=82443 loops=1)\n -> Seq Scan on metadata_demo m \n(cost=0.00..43457.32 rows=82682 width=55) (actual time=0.013..230.904 \nrows=82443 loops=1)\n Filter: (doc_category = 'Naxalism'::bpchar)\n -> Index Scan using idx_crawled_id on page_content_demo p \n(cost=0.00..9.24 rows=1 width=1353) (actual time=4.822..4.946 rows=1 \nloops=4129)\n Index Cond: (p.crawled_page_id = l.source_id)\n Total runtime: 21379.870 ms\n(14 rows)\n\n\nYesterday after some Performance tuning ( shared-buffers=1GB,effective \ncache-size=2Gb, work mem=64MB, maintenance_work_mem=256MB) and creating \nindexes as :\n\nCREATE INDEX idx1_source_id_l2\n ON l1 USING btree(source_id,lat,lon);\n\nCREATE INDEX idx_doc_id_m1\n ON m1 USING btree(doc_id,doc_category);\n\n CREATE INDEX idx_crawled_id_p1\n ON p1\n USING btree\n (crawled_page_id,heading,category,crawled_page_url);\n\nmy Total runtime := Total runtime: 704.383 ms\n\nAnd if run the same explain analyze command again ,Total runtime: 696.856 ms\n\nWhat is the reason that first time it takes so much time and I know \nsecond time , Postgres uses cache .\n\nIs it possible to make it run faster at the first time too. Please let \nme know.\n\n\n\n\nThanks & best Regards,\n\nAdarsh Sharma\n\n", "msg_date": "Tue, 08 Mar 2011 10:31:19 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "How to tune this query" }, { "msg_contents": "In query some are the repeatative information like below value repeating 3 times. \n\n\"19.548124415111626 73.21900819489186\"\n\nYou can create the spatial index on spatial data which will improve the performance of the query & off course ANALYZE after creating index. 
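A sketch of what that advice could look like in SQL, using the table and column names visible in the plan above; the index names themselves are invented for illustration and the actual benefit would need to be confirmed with EXPLAIN ANALYZE:

-- GiST index so the geom && bounding-box test and the _st_within() recheck
-- can use an index scan instead of the sequential scan shown in the plan:
CREATE INDEX idx_loc_context_demo_geom
    ON loc_context_demo USING gist (geom);

-- Plain btree index for the doc_category filter:
CREATE INDEX idx_metadata_demo_doc_category
    ON metadata_demo (doc_category);

-- Refresh planner statistics after building the indexes:
ANALYZE loc_context_demo;
ANALYZE metadata_demo;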
\n\n--\n\nThanks & Regards\nDhaval Jaiswal \n\n________________________________\n\nFrom: [email protected] on behalf of Adarsh Sharma\nSent: Tue 3/8/2011 10:31 AM\nTo: [email protected]\nCc: [email protected]\nSubject: [GENERAL] How to tune this query\n\n\n\nDear all,\n\nCan anyone Please guide me with some suggestions on how to tune the\nbelow query as I needed to perform the below query as faster as i can.\n\nI have 3 tables on which the query runs:\n\npdc_uima=# select\npg_size_pretty(pg_total_relation_size('page_content_demo'));\n pg_size_pretty\n----------------\n 1260 MB\npdc_uima=# select\npg_size_pretty(pg_total_relation_size('metadata_demo')); \n pg_size_pretty\n----------------\n 339 MB\npdc_uima=# select\npg_size_pretty(pg_total_relation_size('loc_context_demo'));\n pg_size_pretty\n----------------\n 345 MB\n\n\nMy Query is :\n\nexplain analyze select\nm.doc_category,p.heading,l.lat,l.lon,p.crawled_page_url,p.category,p.dt_stamp,p.crawled_page_id,p.content\nfrom loc_context_demo l,page_content_demo p,metadata_demo m where\nl.source_id=p.crawled_page_id and m.doc_id=l.source_id and\nst_within(l.geom,GeomFromText('POLYGON((19.548124415111626\n73.21900819489186,19.548124415111626 73.21900819489186,19.55011196668719\n73.21994746420259,19.552097947014058 73.22087843652453,19.55408236353752\n73.2218011513938,19.588219714571828 75.1654223522423,19.599133094249137\n76.46053245473952,19.57365361244478 79.69902443272414,19.68652202327923\n82.74135922990342,19.56446013085233 85.15028561045767,19.551174510964337\n85.37052962767306,19.553500408319763 85.37198146688313,19.55582660405639\n85.37341757236464,19.55815307123746 85.37483800206365,19.56047978332553\n85.37624281337641,19.562806714176496 85.37763206315,19.565133838033702\n85.37900580768307,19.567461129522137 85.38036410272655,19.56978856364264\n85.3817070034843,19.572116115766228 85.38303456461405,19.56649262333915\n85.15194545531163,18.773772341648947 84.46107113406764,17.95738291093396\n84.21223929994393,16.939045429366846\n83.74699366402301,15.915601954028702\n83.28824222570091,14.692125537681664\n82.40657922201932,13.869583501048409 81.75586112437654,13.23910975048389\n81.53550253438608,12.607561680274236\n81.31596402018643,11.960089890060914 81.3105660302366,11.961002716398268\n81.3118121189388,11.102247999047648 81.09276935832209,10.230582572954035\n81.08704044732613,9.364677626102125 80.87125821859627,8.484379037020355\n80.65888115596269,7.5953685679122565 80.44798762937165,6.678959105840814\n80.44990760581172,5.756074889890018 80.24361993771154,5.756819343429733\n80.2442993962505,5.757563827399336 80.24498070122854,5.758308340445826\n80.24566385572928,4.83232192901788 80.03636862497382,4.832964922142748\n80.0371046690356,4.833608089257533 80.0378393944808,4.834251429338765\n80.038572803232,4.834894941366702 80.03930489720865,4.835538624325311\n80.04003567832711,5.575253995307823 78.3586811224377,5.82022779480326\n77.52223682832437,6.9742086723828365\n76.89564878408815,7.6455592543043425 76.26930608306816,8.761889779304363\n75.43381068367601,10.059251343658966 74.3840274150521,11.136283050704487\n73.75034557867339,12.187315498051541\n72.89986083146191,13.242658350472773\n72.46589681727389,14.721187899066917\n72.23365448169334,16.384503005199107\n71.77586874336029,17.834343858181125\n71.52762561326514,18.868652843809762\n71.49887565337562,19.487812049094533\n71.48086802014905,19.489698327426513 71.48186192551053,19.89987693684175\n71.46838407646581,20.310716259621934 
71.454517020832,20.312680952069726\n71.45872696349684,20.314637217119998 71.46296731473512,20.31658488533959\n71.46723821288163,20.318523784696943 71.47153979566505,20.53302678388929\n71.88565153869924,20.767109171722186\n72.75373018504017,20.791013365997372 73.62713545368305,20.79185810562998\n73.6280821559539,20.79269895778539 73.62902276312589,20.793535942149113\n73.6299573226539,20.79436907831312 73.63088588154903,20.795198385776008\n73.6318084863835,20.796023883943136 73.63272518329538,20.796845592126836\n73.6336360179933,20.79766352954653 73.63454103576112,20.798477715328943\n73.63544028146251,20.799288168508316 73.6363337995455,20.80009490802656\n73.63722163404697,20.800897952733482\n73.63810382859708,19.980139052593813 74.07773531285727,19.98131962229422\n74.0780344216337,19.982501271580563 74.078336024665,19.983684009372077\n74.07864013150498,19.98486784461094 74.07894675180037,19.98605278626243\n74.07925589529141,19.987238843315097\n74.07956757181258,19.988426024780967\n74.07988179129316,19.548124415111626 73.21900819489186))',4326)) and\nm.doc_category='Naxalism'order by p.dt_stamp desc;\n\nToday in the morning , I am shocked to see the result below :\n\n Sort (cost=129344.37..129354.40 rows=4013 width=1418) (actual\ntime=21377.760..21378.441 rows=4485 loops=1)\n Sort Key: p.dt_stamp\n Sort Method: quicksort Memory: 7161kB\n -> Nested Loop (cost=44490.85..129104.18 rows=4013 width=1418)\n(actual time=267.729..21353.703 rows=4485 loops=1)\n -> Hash Join (cost=44490.85..95466.11 rows=3637 width=73)\n(actual time=255.849..915.092 rows=4129 loops=1)\n Hash Cond: (l.source_id = m.doc_id)\n -> Seq Scan on loc_context_demo l (cost=0.00..47083.94\nrows=16404 width=18) (actual time=0.065..628.255 rows=17072 loops=1)\n Filter: ((geom &&\n'0103000020E6100000010000005C000000270BB5E1518C334075A7F23A044E5240270BB5E1518C334075A7F23A044E52404A0F4A23D48C3340A66\n5879E134E5240379D824A568D3340ED504FDF224E52401DAF7E57D88D3340E6DC74FD314E5240CEF23491959633401E3AA24796CA524057C055C96099334095F61D5D791D5340BEAE90F6DA923340\nEE69F9D0BCEC5340D2F745E8BFAF334078C1FB6D72AF544019E8897580903340687E89479E4955400AFBD2C5198D33408343E6C1B6575540580EE833B28D3340CBBE5A8BCE5755403FABFEA64A8E3\n340B443D112E65755407920A31EE38E3340540A8858FD575540CB73639A7B8F3340A1B3BC5C14585540765BCF19149033409E49AC1F2B585540BC37789CAC9033408B3F93A1415855407E0CF12145\n9133401B72ADE257585540DE7ACEA9DD913340A12736E36D58554002BBA63376923340511068A383585540BAAA1AA905913340B7556E79B94955407BEEB5F115C63240CF7C8030821D5540EC35E40\nB17F53140D90B2554950D5540C196004865F03040135383BECEEF5440F86981C7C9D42F408C2D858F72D2544069234A475E622D409773DB64059A5440C749740C3ABD2B40F6605607607054404FF7\nDC976C7A2A4041076CAC456254404B29165312372940755A27C1385454405010EEE690EB274058C75750E05354406F984C8C08EC2740AC55D1BAF453544070FB87D9593426401D04E4EEEF455440F\n0BA43EB0E7624407AAC18129245544012629B06B7BA224080CFD4B1C2375440A5BD758700F82040DE33DE1B2B2A54407CFF404CA8611E407C4A4ED4AB1C54407CA14B0E41B71A40B64B4549CB1C54\n40229EF57E38061740FA471478970F5440280B64A6FB0617400555EF99A20F54403E40DDCFBE07174060FB88C3AD0F5440D4FE49FB81081740A88AE4F4B80F5440708023334C5413403CB711DD530\n25440EE45ADC1F45413405B0243EC5F0254402A2BE45B9D551340E972ECF56B025440E19AB60146561340601910FA77025440341013B3EE5613405D05B0F8830254409216E86F97571340A445CEF1\n8F02544035F622620F4D164000A4AAA1F496534001CD87CBE9471740E09A04546C61534098744DF596E51B401717474F523953401D78337C0D951E4010D9944F3C115340AD89CA6A16862140CFC2E\n28DC3DB5240C5842E31561E2440A31AB9E793985240BD8C5BE4C64526406D4676A905705240A
55424D1E75F28402385E2519739524084C31EB73D7C2A401E60E240D11D5240D786518A3F712D40F3\n5BED31F40E52403C8BF8C96E62304026AE5FD5A7F15140A381208F97D531405E60389EC4E1514086BD630860DE324036012694EDDF5140AD741D40E17C3340E00EA98AC6DE5140E9339DDE5C7D334\n057D066D3D6DE5140F428BE555EE63340D2983401FADD51406269CD198B4F3440B9FC8ECE16DD5140BC38DFDB0B503440CA8056C85BDD514085A28D108C503440E0F9A841A1DD514003F100B50B51\n344028F11A3BE7DD5140824556C68A513440364940B52DDE514052B27C71748834406197CA83AEF851402902454461C434405BB0871D3D30524058A819DA7FCA3440C6EEBDFC22685240D7C07A36B\n7CA34407CC17F7F32685240F442A351EECA34402D04B1E84168524022BBE72B25CB344026AB843851685240FB4E9CC55BCB3440B8302D6F606852406DBD141F92CB34408E96DC8C6F685240E45EA4\n38C8CB3440FE66C4917E68524082259E12FECB344054B6157E8D685240419D54AD33CC3440182401529C68524030EC190969CC344052DCB60DAB685240ADD23F269ECC3440CA9866B1B968524092A\nB1705D3CC344044A23F3DC8685240756CF2A507CD3440B8D170B1D6685240406C9864EAFA3340D89D889DF98452403FAD44C337FB334042381684FE84524089B70D3485FB33403B021A7503855240\nEAD919B7D2FB3340E43D9E70088552406A7C8F4C20FC3340EA46AD760D855240772095F46DFC3340AD92518712855240146151AFBBFC334075B095A21785524015F3EA7C09FD33409A4984C81C855\n240270BB5E1518C334075A7F23A044E5240'::geometry) AND _st_within(geom,\n'0103000020E6100000010000005C000000270BB5E1518C334075A7F23A044E5240270BB5E1518C334075A7F\n23A044E52404A0F4A23D48C3340A665879E134E5240379D824A568D3340ED504FDF224E52401DAF7E57D88D3340E6DC74FD314E5240CEF23491959633401E3AA24796CA524057C055C96099334095\nF61D5D791D5340BEAE90F6DA923340EE69F9D0BCEC5340D2F745E8BFAF334078C1FB6D72AF544019E8897580903340687E89479E4955400AFBD2C5198D33408343E6C1B6575540580EE833B28D334\n0CBBE5A8BCE5755403FABFEA64A8E3340B443D112E65755407920A31EE38E3340540A8858FD575540CB73639A7B8F3340A1B3BC5C14585540765BCF19149033409E49AC1F2B585540BC37789CAC90\n33408B3F93A1415855407E0CF121459133401B72ADE257585540DE7ACEA9DD913340A12736E36D58554002BBA63376923340511068A383585540BAAA1AA905913340B7556E79B94955407BEEB5F11\n5C63240CF7C8030821D5540EC35E40B17F53140D90B2554950D5540C196004865F03040135383BECEEF5440F86981C7C9D42F408C2D858F72D2544069234A475E622D409773DB64059A5440C74974\n0C3ABD2B40F6605607607054404FF7DC976C7A2A4041076CAC456254404B29165312372940755A27C1385454405010EEE690EB274058C75750E05354406F984C8C08EC2740AC55D1BAF453544070F\nB87D9593426401D04E4EEEF455440F0BA43EB0E7624407AAC18129245544012629B06B7BA224080CFD4B1C2375440A5BD758700F82040DE33DE1B2B2A54407CFF404CA8611E407C4A4ED4AB1C5440\n7CA14B0E41B71A40B64B4549CB1C5440229EF57E38061740FA471478970F5440280B64A6FB0617400555EF99A20F54403E40DDCFBE07174060FB88C3AD0F5440D4FE49FB81081740A88AE4F4B80F5\n440708023334C5413403CB711DD53025440EE45ADC1F45413405B0243EC5F0254402A2BE45B9D551340E972ECF56B025440E19AB60146561340601910FA77025440341013B3EE5613405D05B0F883\n0254409216E86F97571340A445CEF18F02544035F622620F4D164000A4AAA1F496534001CD87CBE9471740E09A04546C61534098744DF596E51B401717474F523953401D78337C0D951E4010D9944\nF3C115340AD89CA6A16862140CFC2E28DC3DB5240C5842E31561E2440A31AB9E793985240BD8C5BE4C64526406D4676A905705240A55424D1E75F28402385E2519739524084C31EB73D7C2A401E60\nE240D11D5240D786518A3F712D40F35BED31F40E52403C8BF8C96E62304026AE5FD5A7F15140A381208F97D531405E60389EC4E1514086BD630860DE324036012694EDDF5140AD741D40E17C3340E\n00EA98AC6DE5140E9339DDE5C7D334057D066D3D6DE5140F428BE555EE63340D2983401FADD51406269CD198B4F3440B9FC8ECE16DD5140BC38DFDB0B503440CA8056C85BDD514085A28D108C5034\n40E0F9A841A1DD514003F100B50B51344028F11A3BE7DD5140824556C68A513440364940B52DDE514052B27C71748834406197CA83AEF851402902454461C434405B
B0871D3D30524058A819DA7FC\nA3440C6EEBDFC22685240D7C07A36B7CA34407CC17F7F32685240F442A351EECA34402D04B1E84168524022BBE72B25CB344026AB843851685240FB4E9CC55BCB3440B8302D6F606852406DBD141F\n92CB34408E96DC8C6F685240E45EA438C8CB3440FE66C4917E68524082259E12FECB344054B6157E8D685240419D54AD33CC3440182401529C68524030EC190969CC344052DCB60DAB685240ADD23\nF269ECC3440CA9866B1B968524092AB1705D3CC344044A23F3DC8685240756CF2A507CD3440B8D170B1D6685240406C9864EAFA3340D89D889DF98452403FAD44C337FB334042381684FE84524089\nB70D3485FB33403B021A7503855240EAD919B7D2FB3340E43D9E70088552406A7C8F4C20FC3340EA46AD760D855240772095F46DFC3340AD92518712855240146151AFBBFC334075B095A21785524\n015F3EA7C09FD33409A4984C81C855240270BB5E1518C334075A7F23A044E5240'::geometry))\n -> Hash (cost=43457.32..43457.32 rows=82682 width=55)\n(actual time=255.707..255.707 rows=82443 loops=1)\n -> Seq Scan on metadata_demo m \n(cost=0.00..43457.32 rows=82682 width=55) (actual time=0.013..230.904\nrows=82443 loops=1)\n Filter: (doc_category = 'Naxalism'::bpchar)\n -> Index Scan using idx_crawled_id on page_content_demo p \n(cost=0.00..9.24 rows=1 width=1353) (actual time=4.822..4.946 rows=1\nloops=4129)\n Index Cond: (p.crawled_page_id = l.source_id)\n Total runtime: 21379.870 ms\n(14 rows)\n\n\nYesterday after some Performance tuning ( shared-buffers=1GB,effective\ncache-size=2Gb, work mem=64MB, maintenance_work_mem=256MB) and creating\nindexes as :\n\nCREATE INDEX idx1_source_id_l2\n ON l1 USING btree(source_id,lat,lon);\n\nCREATE INDEX idx_doc_id_m1\n ON m1 USING btree(doc_id,doc_category);\n\n CREATE INDEX idx_crawled_id_p1\n ON p1\n USING btree\n (crawled_page_id,heading,category,crawled_page_url);\n\nmy Total runtime := Total runtime: 704.383 ms\n\nAnd if run the same explain analyze command again ,Total runtime: 696.856 ms\n\nWhat is the reason that first time it takes so much time and I know\nsecond time , Postgres uses cache .\n\nIs it possible to make it run faster at the first time too. Please let\nme know.\n\n\n\n\nThanks & best Regards,\n\nAdarsh Sharma\n\n\n--\nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n\n\nThe information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. \nAny review, re-transmission, dissemination or other use of or taking of any action in reliance upon,this information by persons or entities other than the intended recipient is prohibited. \nIf you received this in error, please contact the sender and delete the material from your computer. \nMicroland takes all reasonable steps to ensure that its electronic communications are free from viruses. \nHowever, given Internet accessibility, the Company cannot accept liability for any virus introduced by this e-mail or any attachment and you are advised to use up-to-date virus checking software. \n\n[GENERAL] How to tune this query\n\n\n\n\n\nIn query some are the repeatative information like below value repeating 3 times. \n\"19.548124415111626 73.21900819489186\"\nYou can create the spatial index on spatial data which will improve the performance of the query & off course ANALYZE after creating index. 
", "msg_date": "Tue, 8 Mar 2011 11:54:59 +0530", "msg_from": "\"Jaiswal Dhaval Sudhirkumar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to tune this query" } ]
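A generic diagnostic that was not posted in the thread but bears directly on the "fast only the second time" question: asking EXPLAIN for buffer statistics (available from PostgreSQL 9.0 on) shows whether the first run is slow simply because the blocks had to come from disk. The query below is a trimmed version of Adarsh's join, with the polygon predicate left out for brevity:

-- Many "read" blocks on the first run and mostly "hit" blocks on the second
-- run would confirm the difference is just cold versus warm cache.
EXPLAIN (ANALYZE, BUFFERS)
SELECT m.doc_category, p.heading, l.lat, l.lon
FROM   loc_context_demo l
JOIN   page_content_demo p ON p.crawled_page_id = l.source_id
JOIN   metadata_demo     m ON m.doc_id = l.source_id
WHERE  m.doc_category = 'Naxalism';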
[ { "msg_contents": "Hello,\nI have a problem with table partitioning because i have a foreign key \napplied on the partionned table and it throw a constraint violation \nerror during inserts.\nI saw on the manual \n(http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html caveats \nsection) that it's a limitation due to postgrsql table inheritance \nselect queries performance are really bad without partitionning and i'm \nlooking for a workaround to this foreign key problem or another solution \nfor improve performance for larges tables.\n\nThanks in advance for your responses\n\n", "msg_date": "Tue, 08 Mar 2011 16:45:08 +0100", "msg_from": "Samba GUEYE <[email protected]>", "msg_from_op": true, "msg_subject": "Table partitioning problem" }, { "msg_contents": "On Mar 8, 2011, at 9:45 AM, Samba GUEYE wrote:\n> I have a problem with table partitioning because i have a foreign key applied on the partionned table and it throw a constraint violation error during inserts.\n> I saw on the manual (http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html caveats section) that it's a limitation due to postgrsql table inheritance select queries performance are really bad without partitionning and i'm looking for a workaround to this foreign key problem or another solution for improve performance for larges tables.\n\nActually, this sounds more like having a foreign key pointed at a parent table in an inheritance tree; which flat-out doesn't do what you'd want.\n\nCan you tell us what the foreign key constraint actually is, and what the inheritance setup for the tables in the FK is?\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Wed, 9 Mar 2011 16:01:58 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "Hi jim thanks for your answer,\n\nThe database model is some' like that :\nMeasure(Id, numbering,Date, crcCorrect, sensorId) and a SimpleMeasure \n(Id, doubleValue) and GenericMeasure (Id, BlobValue, numberOfElements)\nand in the UML model SimpleMeasure and GenericMeasure inherits from the \nMeasure class so in the database, the foreign key of SimpleMeasure and \nGenericMeasure points to the Measure Table which is partitionned by sensor.\n\nThe measure insertion is successful but problems raise up when inserting \nin the simpleMeasure table because it can't find the foreign key \ninserted the measure table and do not look at the partitionned tables\n\nERROR: insert or update on table \"simpleMeasure\" violates foreign key constraint \"fk_measure_id\"\nDETAIL: Key(measure_id)=(1) is not present in table Measure\n\n\nThe inheritance is just used to set the Postgre's partionning and the \nlimitation of the partitioning comes from here\n\nThe same problem is also related in the following post :\n\nhttp://archives.postgresql.org/pgsql-performance/2008-07/msg00224.php \nand this\nhttp://archives.postgresql.org/pgsql-admin/2007-09/msg00031.php\n\nBest Regards\n\n\nLe 09/03/2011 23:01, Jim Nasby a �crit :\n> On Mar 8, 2011, at 9:45 AM, Samba GUEYE wrote:\n>> I have a problem with table partitioning because i have a foreign key applied on the partionned table and it throw a constraint violation error during inserts.\n>> I saw on the manual (http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html caveats section) that it's a limitation due to postgrsql table inheritance select queries performance are really bad without 
partitionning and i'm looking for a workaround to this foreign key problem or another solution for improve performance for larges tables.\n> Actually, this sounds more like having a foreign key pointed at a parent table in an inheritance tree; which flat-out doesn't do what you'd want.\n>\n> Can you tell us what the foreign key constraint actually is, and what the inheritance setup for the tables in the FK is?\n> --\n> Jim C. Nasby, Database Architect [email protected]\n> 512.569.9461 (cell) http://jim.nasby.net\n>\n>\n\n", "msg_date": "Thu, 10 Mar 2011 09:25:06 +0100", "msg_from": "Samba GUEYE <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "On Thu, Mar 10, 2011 at 3:25 AM, Samba GUEYE <[email protected]> wrote:\n> The measure insertion is successful but problems raise up when inserting in\n> the simpleMeasure table because it can't find the foreign key inserted the\n> measure table and do not look at the partitionned tables\n\nYes, that's how it works.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 11 Mar 2011 13:31:44 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "Yeah but is there a workaround to force the root table to propagate the \nforeign key to the partitionned table\nbecause right now all foreign keys to partitionned table throws \nconstraints violation and it's a big problem for me\nLe 11/03/2011 19:31, Robert Haas a �crit :\n> On Thu, Mar 10, 2011 at 3:25 AM, Samba GUEYE<[email protected]> wrote:\n>> The measure insertion is successful but problems raise up when inserting in\n>> the simpleMeasure table because it can't find the foreign key inserted the\n>> measure table and do not look at the partitionned tables\n> Yes, that's how it works.\n>\n\n", "msg_date": "Mon, 14 Mar 2011 17:42:06 +0100", "msg_from": "Samba GUEYE <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "On Mon, Mar 14, 2011 at 12:42 PM, Samba GUEYE <[email protected]> wrote:\n> Yeah but is there a workaround to force the root table to propagate the\n> foreign key to the partitionned table\n> because right now all foreign keys to partitionned table throws constraints\n> violation and it's a big problem for me\n\nNo. Generally, table partitioning is not a good idea unless you are\ndealing with really large tables, and nearly all of your queries apply\nonly to a single partition. Most likely you are better off not using\ntable inheritance in the first place if you need this feature.\n\nIt would be nice if we had a way to do this for the rare cases where\nit would be useful, but we don't.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 14 Mar 2011 15:40:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "On Mon, Mar 14, 2011 at 12:40 PM, Robert Haas <[email protected]> wrote:\n> Generally, table partitioning is not a good idea unless you are\n> dealing with really large tables, and nearly all of your queries apply\n> only to a single partition.  
Most likely you are better off not using\n> table inheritance in the first place if you need this feature.\n\nI don't know if my tables count as 'large' or not, but I've gotten\nsome good mileage in the past out of time-based partitioning and\nsetting higher compression levels on old tables. Also the ability to\ndrop-and-reload a day is sometimes useful, but I grant that it would\nbe better to never need to do that.\n\n-C.\n", "msg_date": "Mon, 14 Mar 2011 13:22:20 -0700", "msg_from": "Conor Walsh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "Alright thanks all of you for your answers, but i've got 3 more questions :\n\n 1. Why \"... partitionning is not a good idea ...\" like you said\n Robert and Conor \"... I grant that it would be better to never\n need to do that\" ?\n 2. Is there another way or strategy to deal with very large tables\n (over 100 000 000 rows per year in one table) beyond indexing and\n partitionning?\n 3. If you had to quantify a limit of numbers of rows per table in a\n single postgresql database server what would you say?\n\nPS: i'm using postgresql since less than 2 month because i thought that \npartitioning was a possible solution that doesn't offer me Apache Derby \nfor my large table problem so if these questions sounds \"dummy\" for you \nthis is a postgresql novice talking to you.\n\nRegards\n\nLe 14/03/2011 20:40, Robert Haas a �crit :\n> On Mon, Mar 14, 2011 at 12:42 PM, Samba GUEYE<[email protected]> wrote:\n>> Yeah but is there a workaround to force the root table to propagate the\n>> foreign key to the partitionned table\n>> because right now all foreign keys to partitionned table throws constraints\n>> violation and it's a big problem for me\n> No. Generally, table partitioning is not a good idea unless you are\n> dealing with really large tables, and nearly all of your queries apply\n> only to a single partition. Most likely you are better off not using\n> table inheritance in the first place if you need this feature.\n>\n> It would be nice if we had a way to do this for the rare cases where\n> it would be useful, but we don't.\n>\n\n\n\n\n\n\n\n Alright thanks all of you for your answers, but i've got 3 more\n questions : \n\nWhy \"... partitionning is not a good idea ...\" like you said\n Robert and Conor \"... I grant that it would be better to never\n need to do that\" ?\nIs there another way or strategy to deal with very large\n tables (over 100 000 000 rows per year in one table)  beyond\n indexing and partitionning?\nIf you had to quantify a limit of numbers of rows per table in\n a single postgresql database server what would you say?\n\n\n PS: i'm using postgresql since less than 2 month because i thought\n that partitioning was a possible solution  that doesn't offer me\n Apache Derby for my large table problem so if these questions sounds\n \"dummy\" for you this is a postgresql novice talking to you.\n\n Regards\n\n Le 14/03/2011 20:40, Robert Haas a écrit :\n \nOn Mon, Mar 14, 2011 at 12:42 PM, Samba GUEYE <[email protected]> wrote:\n\n\nYeah but is there a workaround to force the root table to propagate the\nforeign key to the partitionned table\nbecause right now all foreign keys to partitionned table throws constraints\nviolation and it's a big problem for me\n\n\n\nNo. Generally, table partitioning is not a good idea unless you are\ndealing with really large tables, and nearly all of your queries apply\nonly to a single partition. 
Most likely you are better off not using\ntable inheritance in the first place if you need this feature.\n\nIt would be nice if we had a way to do this for the rare cases where\nit would be useful, but we don't.", "msg_date": "Tue, 15 Mar 2011 11:10:01 +0100", "msg_from": "Samba GUEYE <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "On 03/15/2011 05:10 AM, Samba GUEYE wrote:\n\n> 1. Why \"... partitionning is not a good idea ...\" like you said\n> Robert and Conor \"... I grant that it would be better to never need\n> to do that\" ?\n\nThere are a number of difficulties the planner has with partitioned \ntables. Only until very recently, MAX and MIN would scan every single \npartition, even if you performed the action on the constraint column. \nBasically quite a few queries will either not have an ideal execution \nplan, or act in unexpected manners unless you physically target the \nexact partition you want.\n\nEven though we have several tables over the 50-million rows, I'm \nreluctant to partition them because we have a very transaction-intensive \ndatabase, and can't risk the possible penalties.\n\n> 2. Is there another way or strategy to deal with very large tables\n> (over 100 000 000 rows per year in one table) beyond indexing and\n> partitionning?\n\nWhat you have is a very good candidate for partitioning... if you can \neffectively guarantee a column to partition the data on. If you're \ngetting 100M rows per year, I could easily see some kind of created_date \ncolumn and then having one partition per month.\n\nOne of the things we hate most about very large tables is the amount of \ntime necessary to vacuum or index them. CPU and disk IO can only go so \nfast, so eventually you encounter a point where it can take hours to \nindex a single column. If you let your table get too big, your \nmaintenance costs will be prohibitive, and partitioning may be required \nat that point.\n\nAs an example, we have a table that was over 100M rows and we have \nenough memory that the entire table was in system cache. Even so, \nrebuilding the indexes on that table required about an hour and ten \nminutes *per index*. We knew this would happen and ran the reindex in \nparallel, which we confirmed by watching five of our CPUs sit at 99% \nutilization for the whole interval.\n\nThat wouldn't have happened if the table were partitioned.\n\n> 3. If you had to quantify a limit of numbers of rows per table in a\n> single postgresql database server what would you say?\n\nI'd personally avoid having any tables over 10-million rows. We have \nquad Xeon E7450's, tons of ram, and even NVRAM PCI cards to reduce IO \ncontention, and still, large tables are a nuisance. Even the best CPU \nwill balk at processing 10-million rows quickly.\n\nAnd yes. Good queries and design will help. Always limiting result sets \nwill help. Efficient, highly selective indexes will help. But \nmaintenance grows linearly, despite our best efforts. The only way to \nsidestep that issue is to partition tables or rewrite your application \nto scale horizontally via data sharding or some other shared-nothing \ncluster with plProxy, GridSQL or PGPool.\n\nYou'll have this problem with any modern database. Big tables are a pain \nin everybody's asses.\n\nIt's too bad PostgreSQL can't assign one thread per data-file and merge \nthe results.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Tue, 15 Mar 2011 08:18:45 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table partitioning problem" }, { "msg_contents": "hi\n\nThanks again very much for these clear-cut answers\n\nI think i'll try to implement the partitionning despite all the \ndifficulties you raise about it in this thread\nbecause i can't find any viable solution right now for this situation. \nIt will constrain me to change the datamodel to workaround the \ninheritance foreigh key issue but i will at least test it because we \nhave limited resources and can't afford to have many servers or whatever \nto boost performances...\n\nBest Regards\n\n\nLe 15/03/2011 14:18, Shaun Thomas a �crit :\n> On 03/15/2011 05:10 AM, Samba GUEYE wrote:\n>\n>> 1. Why \"... partitionning is not a good idea ...\" like you said\n>> Robert and Conor \"... I grant that it would be better to never need\n>> to do that\" ?\n>\n> There are a number of difficulties the planner has with partitioned \n> tables. Only until very recently, MAX and MIN would scan every single \n> partition, even if you performed the action on the constraint column. \n> Basically quite a few queries will either not have an ideal execution \n> plan, or act in unexpected manners unless you physically target the \n> exact partition you want.\n>\n> Even though we have several tables over the 50-million rows, I'm \n> reluctant to partition them because we have a very \n> transaction-intensive database, and can't risk the possible penalties.\n>\n>> 2. Is there another way or strategy to deal with very large tables\n>> (over 100 000 000 rows per year in one table) beyond indexing and\n>> partitionning?\n>\n> What you have is a very good candidate for partitioning... if you can \n> effectively guarantee a column to partition the data on. If you're \n> getting 100M rows per year, I could easily see some kind of \n> created_date column and then having one partition per month.\n>\n> One of the things we hate most about very large tables is the amount \n> of time necessary to vacuum or index them. CPU and disk IO can only go \n> so fast, so eventually you encounter a point where it can take hours \n> to index a single column. If you let your table get too big, your \n> maintenance costs will be prohibitive, and partitioning may be \n> required at that point.\n>\n> As an example, we have a table that was over 100M rows and we have \n> enough memory that the entire table was in system cache. Even so, \n> rebuilding the indexes on that table required about an hour and ten \n> minutes *per index*. We knew this would happen and ran the reindex in \n> parallel, which we confirmed by watching five of our CPUs sit at 99% \n> utilization for the whole interval.\n>\n> That wouldn't have happened if the table were partitioned.\n>\n>> 3. If you had to quantify a limit of numbers of rows per table in a\n>> single postgresql database server what would you say?\n>\n> I'd personally avoid having any tables over 10-million rows. We have \n> quad Xeon E7450's, tons of ram, and even NVRAM PCI cards to reduce IO \n> contention, and still, large tables are a nuisance. Even the best CPU \n> will balk at processing 10-million rows quickly.\n>\n> And yes. Good queries and design will help. 
Always limiting result \n> sets will help. Efficient, highly selective indexes will help. But \n> maintenance grows linearly, despite our best efforts. The only way to \n> sidestep that issue is to partition tables or rewrite your application \n> to scale horizontally via data sharding or some other shared-nothing \n> cluster with plProxy, GridSQL or PGPool.\n>\n> You'll have this problem with any modern database. Big tables are a \n> pain in everybody's asses.\n>\n> It's too bad PostgreSQL can't assign one thread per data-file and \n> merge the results.\n>\n\n", "msg_date": "Tue, 15 Mar 2011 16:19:46 +0100", "msg_from": "Samba GUEYE <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table partitioning problem" } ]
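For readers who want to try the month-per-partition layout described in the thread above, here is a minimal sketch of the inheritance-based scheme available in PostgreSQL 8.4/9.0. All table, column, and trigger names are illustrative (none come from the posters' schemas), and the caveats raised in the thread, such as the lack of cross-partition primary and foreign keys and the weaker planner support, still apply.

-- Parent table; children hold the actual rows.
CREATE TABLE measurement (
    id           bigint NOT NULL,
    created_date date   NOT NULL,
    payload      text
);

-- One child per month, each with a CHECK constraint so that
-- constraint_exclusion can skip irrelevant partitions.
CREATE TABLE measurement_2011_03 (
    CHECK (created_date >= DATE '2011-03-01' AND created_date < DATE '2011-04-01')
) INHERITS (measurement);

CREATE TABLE measurement_2011_04 (
    CHECK (created_date >= DATE '2011-04-01' AND created_date < DATE '2011-05-01')
) INHERITS (measurement);

-- Redirect inserts on the parent into the matching child.
CREATE OR REPLACE FUNCTION measurement_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.created_date >= DATE '2011-03-01' AND NEW.created_date < DATE '2011-04-01' THEN
        INSERT INTO measurement_2011_03 VALUES (NEW.*);
    ELSIF NEW.created_date >= DATE '2011-04-01' AND NEW.created_date < DATE '2011-05-01' THEN
        INSERT INTO measurement_2011_04 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for created_date %', NEW.created_date;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurement_partition_insert
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();

Queries that filter on created_date then touch only the relevant children, provided constraint_exclusion is left at its default of 'partition'.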
[ { "msg_contents": "I'm setting up my first PostgreSQL server to replace an existing MySQL server. I've been reading Gregory Smith's book Postgres 9.0 High Performance and also Riggs/Krosing's PostgreSQL 9 Administration Cookbook. While both of these books are excellent, I am completely new to PostgreSQL and I cannot possibly read and understand every aspect of tuning in the short amount time before I have to have this server running.\n\nI started out by using the 11 step process for tuning a new dedicated server (page 145 in Gregory Smith's book) but I found I had more questions than I could get answered in the short amount of time I have. So, plan B is to use pgtune to get a ballpark configuration and then fine tune later as I learn more.\n\nI ran some performance tests where I imported my 11Gb database from our old MySQL server into PostgreSQL 9.0.3. In my testing I left the postgresql.conf at default values. The PostgreSQL test database completely blew away the old MySQL server in performance. Again, the postgresql.conf was never optimized so I feel I will be OK if I just get in the ballpark with tuning the postgresql.conf file.\n\nI'd like to run my plan by you guys to see if it seems sane and make sure I'm not leaving out anything major. \n\nI'll be running PostgreSQL 9.0.3 on a Solaris 10 64 bit (Sparc) box with 16G of RAM. The local file system is ZFS. The database file systems are UFS and are SAN mounted from VERY fast disks with battery backed write cache. I don't know anybody else who is running a mix of ZFS and UFS file systems, I cannot change this. ZFS has it's own file system cache so I'm concerned about the ramifications of having caches for both ZFS and UFS. The only database related files that are stored on the local ZFS file system are the PostgreSQL binaries and the system logs.\n\n From the extensive reading I've done, it seems generally accepted to set the UFS file system cache to use 50% of the system RAM. That leaves 8G left for PostgreSQL. Well, not really 8G, I've reserved 1G for system use which leaves me with 7G for PostgreSQL to use. I ran pgtune and specified 7G as the memory ( 7 * 1024 * 1024 = 7340032 ) and 300 connections. The resulting postgresql.conf is what I plan to use. \n\nAfter reading Gregory Smith's book, I've decided to put the database on one UFS file system, the WAL on a separate UFS file system (mounted with forcedirectio) and the archive logs on yet another UFS file system. I'll be on Solaris 10 so I've set wal_sync_method = fdatasync based on recommendations from other Solaris users. Did a lot of google searches on wal_sync_method and Solaris.\n\nThat's what I plan to go live with in a few days. Since my test server with default configs already blows away the old database server, I think I can get away with this strategy. Time is not on my side.\n\nI originally installed the 32 bit PostgreSQL binaries but later switched to 64 bit binaries. I've read the 32 bit version is faster and uses less memory than the 64 bit version. At this point I'm assuming I need the 64 bit binaries in order to take full advantage the the 7G of RAM I have allocated to PostgreSQL. If I am wrong here please let me know.\n\nThis has been a lot of information to cram down in the short amount of time I've had to deal with this project. I'm going to have to go back and read the PostgreSQL 9.0 High Performance book two or three more times and really dig in to the details but for now I'm going to cheat and use pgtune as described above. 
Thank you in advance for any advice or additional tips you may be able to provide. \n\nRick\n\n\n\n\n\n\n \n\n I'm setting up my first PostgreSQL server to replace an existing MySQL server.  I've been reading Gregory Smith's book Postgres 9.0 High Performance and also Riggs/Krosing's PostgreSQL 9 Administration Cookbook.  While both of these books are excellent, I am completely new to PostgreSQL and I cannot possibly read and understand every aspect of tuning in the short amount time before I have to have this server running.\n\nI started out by using the 11 step process for tuning a new dedicated server (page 145 in Gregory Smith's book) but I found I had more questions than I could get answered in the short amount of time I have.  So, plan B is to use pgtune to get a ballpark configuration and then fine tune later as I learn more.\n\nI ran some performance tests where I imported my 11Gb database from our old MySQL server into PostgreSQL 9.0.3.  In my testing I left the postgresql.conf at default values.  The PostgreSQL test database completely blew away the old MySQL server in performance.  Again, the postgresql.conf was never optimized so I feel I will be OK if I just get in the ballpark with tuning the postgresql.conf file.\n\nI'd like to run my plan by you guys to see if it seems sane and make sure I'm not leaving out anything major. \n\nI'll be running PostgreSQL 9.0.3 on a Solaris 10 64 bit (Sparc) box with 16G of RAM.    The local file system is ZFS.  The database file systems are UFS and are SAN mounted from VERY fast disks with battery backed write cache.  I don't know anybody else who is running a mix of ZFS and UFS file systems,  I cannot change this.  ZFS has it's own file system cache so I'm concerned about the ramifications of having caches for both ZFS and UFS. The only database related files that are stored on the local ZFS file system are the PostgreSQL binaries and the system logs.\n\n From the extensive reading I've done, it seems generally accepted to set the UFS file system cache to use 50% of the system RAM.  That leaves 8G left for PostgreSQL.  Well, not really 8G,  I've reserved 1G for system use which leaves me with 7G for PostgreSQL to use.  I ran pgtune and specified 7G as the memory ( 7 * 1024 * 1024 = 7340032 ) and 300 connections.  The resulting postgresql.conf is what I plan to use.   \n\nAfter reading Gregory Smith's book, I've decided to put the database on one UFS file system, the WAL on a separate UFS file system (mounted with forcedirectio) and the archive logs on yet another UFS file system.  I'll be on Solaris 10 so I've set wal_sync_method = fdatasync based on recommendations from other Solaris users.  Did a lot of google searches on wal_sync_method and Solaris.\n\nThat's what I plan to go live with in a few days.  Since my test server with default configs already blows away the old database server, I think I can get away with this strategy.  Time is not on my side.\n\nI originally installed the 32 bit PostgreSQL binaries but later switched to 64 bit binaries.  I've read the 32 bit version is faster and uses less memory than the 64 bit version.  At this point I'm assuming I need the 64 bit binaries in order to take full advantage the the 7G of RAM I have allocated to PostgreSQL.  If I am wrong here please let me know.\n\nThis has been a lot of information to cram down in the short amount of time I've had to deal with this project.  
I'm going to have to go back and read the PostgreSQL 9.0 High Performance book two or three more times and really dig in to the details but for now I'm going to cheat and use pgtune as described above.  Thank you in advance for any advice or additional tips you may be able to provide.  \n\nRick", "msg_date": "Thu, 10 Mar 2011 04:12:18 -0500", "msg_from": "runner <[email protected]>", "msg_from_op": true, "msg_subject": "Basic performance tuning on dedicated server" }, { "msg_contents": "On Thu, Mar 10, 2011 at 3:12 AM, runner <[email protected]> wrote:\n>\n> I'm setting up my first PostgreSQL server to replace an existing MySQL\n> server.  I've been reading Gregory Smith's book Postgres 9.0 High\n> Performance and also Riggs/Krosing's PostgreSQL 9 Administration Cookbook.\n> While both of these books are excellent, I am completely new to PostgreSQL\n> and I cannot possibly read and understand every aspect of tuning in the\n> short amount time before I have to have this server running.\n>\n> I started out by using the 11 step process for tuning a new dedicated server\n> (page 145 in Gregory Smith's book) but I found I had more questions than I\n> could get answered in the short amount of time I have.  So, plan B is to use\n> pgtune to get a ballpark configuration and then fine tune later as I learn\n> more.\n>\n> I ran some performance tests where I imported my 11Gb database from our old\n> MySQL server into PostgreSQL 9.0.3.  In my testing I left the\n> postgresql.conf at default values.  The PostgreSQL test database completely\n> blew away the old MySQL server in performance.  Again, the postgresql.conf\n> was never optimized so I feel I will be OK if I just get in the ballpark\n> with tuning the postgresql.conf file.\n>\n> I'd like to run my plan by you guys to see if it seems sane and make sure\n> I'm not leaving out anything major.\n>\n> I'll be running PostgreSQL 9.0.3 on a Solaris 10 64 bit (Sparc) box with 16G\n> of RAM.    The local file system is ZFS.  The database file systems are UFS\n> and are SAN mounted from VERY fast disks with battery backed write cache.  I\n> don't know anybody else who is running a mix of ZFS and UFS file systems,  I\n> cannot change this.  ZFS has it's own file system cache so I'm concerned\n> about the ramifications of having caches for both ZFS and UFS. The only\n> database related files that are stored on the local ZFS file system are the\n> PostgreSQL binaries and the system logs.\n>\n> From the extensive reading I've done, it seems generally accepted to set the\n> UFS file system cache to use 50% of the system RAM.  That leaves 8G left for\n> PostgreSQL.  Well, not really 8G,  I've reserved 1G for system use which\n> leaves me with 7G for PostgreSQL to use.  I ran pgtune and specified 7G as\n> the memory ( 7 * 1024 * 1024 = 7340032 ) and 300 connections.  The resulting\n> postgresql.conf is what I plan to use.\n>\n> After reading Gregory Smith's book, I've decided to put the database on one\n> UFS file system, the WAL on a separate UFS file system (mounted with\n> forcedirectio) and the archive logs on yet another UFS file system.  I'll be\n> on Solaris 10 so I've set wal_sync_method = fdatasync based on\n> recommendations from other Solaris users.  Did a lot of google searches on\n> wal_sync_method and Solaris.\n>\n> That's what I plan to go live with in a few days.  Since my test server with\n> default configs already blows away the old database server, I think I can\n> get away with this strategy.  
Time is not on my side.\n>\n> I originally installed the 32 bit PostgreSQL binaries but later switched to\n> 64 bit binaries.  I've read the 32 bit version is faster and uses less\n> memory than the 64 bit version.  At this point I'm assuming I need the 64\n> bit binaries in order to take full advantage the the 7G of RAM I have\n> allocated to PostgreSQL.  If I am wrong here please let me know.\n>\n> This has been a lot of information to cram down in the short amount of time\n> I've had to deal with this project.  I'm going to have to go back and read\n> the PostgreSQL 9.0 High Performance book two or three more times and really\n> dig in to the details but for now I'm going to cheat and use pgtune as\n> described above.  Thank you in advance for any advice or additional tips you\n> may be able to provide.\n\n\ncongratulations!\n\npostgres memory tuning is a complicated topic but most it tends to be\nvery subtle in its effects or will apply to specific situations, like\ndealing with i/o storms during checkpooints. The only settings that\noften need to be immediately cranked out of the box are\nmaintenance_work_mem and (much more carefully) work_mem.\n\nRegardless how shared buffers is set, ALL of your server's memory goes\nto postgres less what the o/s keeps for itself and other applications.\n You do not allocate memory to postgres -- you only suggest how it\nmight be used. I stopped obsessing how it was set years ago. In\nfact, on linux for example dealing with background dirty page flushing\nvia the o/s (because stock settings can cause i/o storms) is a bigger\ndeal than shared_buffers by about an order of magnitude imnsho.\n\nThe non memory related settings of postgresql.conf, the planner\nsettings (join/from collapse limit, random_page_cost, etc), i/o\nsettings (fsync, wal_sync_method etc) are typically much more\nimportant for performance than how memory is set up.\n\nThe reason postgres is showing up mysql is almost certainly due to the\nquery planner and/or (if you were using myisam) reaping the benefits\nof mvcc. My knowledge of mysql stops at the 5.0/5.1 era, but I can\ntell you postgres is a much more sophisticated platform in very many\nlevels, and you will be happy you made the switch!\n\nmerlin\n", "msg_date": "Thu, 10 Mar 2011 12:02:15 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic performance tuning on dedicated server" } ]
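As a rough illustration of the kind of starting point pgtune produces for a box like the one described above (16 GB of RAM with roughly 7 GB earmarked for PostgreSQL, 300 connections), the settings below are ballpark values only. They are not the poster's actual generated file, which never appears in the thread, and per the advice above the memory knobs usually matter less than the planner and WAL settings.

# postgresql.conf sketch -- illustrative values, to be tuned on the real workload
max_connections              = 300
shared_buffers               = 1750MB     # roughly a quarter of the RAM given to PostgreSQL
effective_cache_size         = 5GB        # planner hint: PG buffers plus filesystem cache
work_mem                     = 8MB        # per sort/hash per backend; keep modest with 300 connections
maintenance_work_mem         = 512MB      # speeds up VACUUM and CREATE INDEX
checkpoint_segments          = 32
checkpoint_completion_target = 0.9
wal_sync_method              = fdatasync  # the Solaris recommendation discussed above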
[ { "msg_contents": "Given that doing a massive UPDATE SET foo = bar || ' ' || baz; on a 12 million\nrow table (with about 100 columns -- the US Census PUMS for the 2005-2009 ACS)\nis never going to be that fast, what should one do to make it faster?\n\nI set work_mem to 2048MB, but it currently is only using a little bit of memory\nand CPU. (3% and 15% according to top; on a SELECT DISTINCT ... LIMIT earlier,\nit was using 70% of the memory).\n\nThe data is not particularly sensitive; if something happened and it rolled\nback, that wouldnt be the end of the world. So I don't know if I can use\n\"dangerous\" setting for WAL checkpoints etc. There are also aren't a lot of\nconcurrent hits on the DB, though a few.\n\nI am loathe to create a new table from a select, since the indexes themselves\ntake a really long time to build.\n\nAs the title alludes, I will also be doing GROUP BY's on the data, and would\nlove to speed these up, mostly just for my own impatience... \n\n\n\n", "msg_date": "Thu, 10 Mar 2011 15:40:53 +0000 (UTC)", "msg_from": "fork <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": "On Thu, Mar 10, 2011 at 9:40 AM, fork <[email protected]> wrote:\n> Given that doing a massive UPDATE SET foo = bar || ' ' || baz; on a 12 million\n> row table (with about 100 columns -- the US Census PUMS for the 2005-2009 ACS)\n> is never going to be that fast, what should one do to make it faster?\n>\n> I set work_mem to 2048MB, but it currently is only using a little bit of memory\n> and CPU. (3% and 15% according to top; on a SELECT DISTINCT ... LIMIT earlier,\n> it was using 70% of the memory).\n>\n> The data is not particularly sensitive; if something happened and it rolled\n> back, that wouldnt be the end of the world.  So I don't know if I can use\n> \"dangerous\" setting for WAL checkpoints etc.   There are also aren't a lot of\n> concurrent hits on the DB, though a few.\n>\n> I am loathe to create a new table from a select, since the indexes themselves\n> take a really long time to build.\n\nyou are aware that updating the field for the entire table, especially\nif there is an index on it (or any field being updated), will cause\nall your indexes to be rebuilt anyways? when you update a record, it\ngets a new position in the table, and a new index entry with that\nposition. insert/select to temp, + truncate + insert/select back is\nusually going to be faster and will save you the reindex/cluster.\notoh, if you have foreign keys it can be a headache.\n\n> As the title alludes, I will also be doing GROUP BY's on the data, and would\n> love to speed these up, mostly just for my own impatience...\n\nneed to see the query here to see if you can make them go faster.\n\nmerlin\n", "msg_date": "Thu, 10 Mar 2011 10:45:08 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": "Merlin Moncure <mmoncure <at> gmail.com> writes:\n\n> > I am loathe to create a new table from a select, since the indexes themselves\n> > take a really long time to build.\n> \n> you are aware that updating the field for the entire table, especially\n> if there is an index on it (or any field being updated), will cause\n> all your indexes to be rebuilt anyways? when you update a record, it\n> gets a new position in the table, and a new index entry with that\n> position. 
\n> insert/select to temp, + truncate + insert/select back is\n> usually going to be faster and will save you the reindex/cluster.\n> otoh, if you have foreign keys it can be a headache.\n\nHmph. I guess I will have to find a way to automate it, since there will be a\nlot of times I want to do this. \n\n> > As the title alludes, I will also be doing GROUP BY's on the data, and would\n> > love to speed these up, mostly just for my own impatience...\n> \n> need to see the query here to see if you can make them go faster.\n\nI guess I was hoping for a blog entry on general guidelines given a DB that is\nreally only for batch analysis versus transaction processing. Like \"put all\nyour temp tables on a different disk\" or whatever. I will post specifics later.\n\n", "msg_date": "Thu, 10 Mar 2011 18:04:39 +0000 (UTC)", "msg_from": "fork <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": "On Thu, Mar 10, 2011 at 17:40, fork <[email protected]> wrote:\n> The data is not particularly sensitive; if something happened and it rolled\n> back, that wouldnt be the end of the world.  So I don't know if I can use\n> \"dangerous\" setting for WAL checkpoints etc.   There are also aren't a lot of\n> concurrent hits on the DB, though a few.\n\nIf you don't mind long recovery times in case of a crash, set\ncheckpoint_segments to ~100 and checkpoint_completion_target=0.9; this\nwill improve write throughput significantly.\n\nAlso, if you don't mind CORRUPTing your database after a crash,\nsetting fsync=off and full_page_writes=off gives another significant\nboost.\n\n> I am loathe to create a new table from a select, since the indexes themselves\n> take a really long time to build.\n\nUPDATE on a table with many indexes will probably be slower. If you\nwant to speed up this part, use INSERT INTO x SELECT and take this\nchance to partition your table, such that each individual partition\nand most indexes will fit in your cache. Index builds from a warm\ncache are very fast in PostgreSQL. You can create several indexes at\nonce in separate sessions, and the table will only be scanned once.\n\nDon't forget to bump up maintenance_work_mem for index builds, 256MB\nmight be a reasonable arbitrary value.\n\nThe downside is that partitioning can interfere with your read queries\nif they expect the data in a sorted order. But then, HashAggregate\ntends to be faster than GroupAggregate in many cases, so this might\nnot matter for your queries. Alternatively you can experiment with\nPostgreSQL 9.1 alpha, which has mostly fixed this shortcoming with the\n\"merge append\" plan node.\n\n> As the title alludes, I will also be doing GROUP BY's on the data, and would\n> love to speed these up, mostly just for my own impatience...\n\nI think regular tuning is the best you can do here.\n\nRegards,\nMarti\n", "msg_date": "Fri, 11 Mar 2011 00:42:33 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": "Marti Raudsepp <marti <at> juffo.org> writes:\n\n> If you don't mind long recovery times in case of a crash, set\n> checkpoint_segments to ~100 and checkpoint_completion_target=0.9; this\n> will improve write throughput significantly.\n\nSounds good.\n\n> Also, if you don't mind CORRUPTing your database after a crash,\n> setting fsync=off and full_page_writes=off gives another significant\n> boost.\n\nI probably won't do this... 
;)\n\n> UPDATE on a table with many indexes will probably be slower. If you\n> want to speed up this part, use INSERT INTO x SELECT and take this\n> chance to partition your table, \n\nLike the following? Will it rebuild the indexes in a sensical way?\n\nBEGIN;\nCREATE TABLE tempfoo as SELECT *, foo + bar AS newcol FROM bar;\nTRUNCATE foo;\nALTER TABLE foo ADD COLUMN newcol;\nINSERT INTO foo SELECT * FROM tempfoo;\nDROP TABLE tempfoo;\nEND;\n\n> such that each individual partition\n> and most indexes will fit in your cache. \n\nIs there a rule of thumb on tradeoffs in a partitioned table? About half the\ntime, I will want to do GROUP BY's that use the partition column, but about half\nthe time I won't. (I would use the partition column whatever I am most likely\nto cluster by in a single big table, right?)\n\nFor example, I might intuitively partition by age5 (into 20 tables like tab00,\ntab05, tab10, etc). Often a query would be \"SELECT ... FROM PARENTTABLE GROUP BY\nage5, race, etc\", but often it would be \"GROUP BY state\" or whatever with no\nage5 component.\n\nI know I can experiment ;), but it takes a while to load anything, and i would\nrather stand on the shoulders.\n\nThanks so much for all your helps!\n\n\n\n", "msg_date": "Fri, 11 Mar 2011 19:06:39 +0000 (UTC)", "msg_from": "fork <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": "On Fri, Mar 11, 2011 at 21:06, fork <[email protected]> wrote:\n> Like the following?  Will it rebuild the indexes in a sensical way?\n\nDon't insert data into an indexed table. A very important point with\nbulk-loading is that you should load all the data first, then create\nthe indexes. Running multiple (different) CREATE INDEX queries in\nparallel can additionally save a lot of time. Also don't move data\nback and forth between the tables, just drop the original when you're\ndone.\n\nDoing this should give a significant performance win. Partitioning\nthem to fit in cache should improve it further, but I'm not sure\nanymore that it's worthwhile considering the costs and extra\nmaintenance.\n\n> Is there a rule of thumb on tradeoffs in a partitioned table?\n\nThe only certain thing is that you'll lose \"group\" aggregate and\n\"merge join\" query plans. If you only see \"HashAggregate\" plans when\nyou EXPLAIN your GROUP BY queries then it probably won't make much of\na difference.\n\n> I would use the partition column whatever I am most likely\n> to cluster by in a single big table, right?\n\nYes.\n\nRegards,\nMarti\n", "msg_date": "Sat, 12 Mar 2011 19:07:29 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": ">Don't insert data into an indexed table. A very important point with\n\n\n>bulk-loading is that you should load all the data first, then create\n>the indexes. Running multiple (different) CREATE INDEX queries in\n>parallel can additionally save a lot of time. Also don't move data\n>back and forth between the tables, just drop the original when you're\n>done.\n\nI just saw your post and it looks similar to what I'm doing.\nWe're going to be loading 12G of data from a MySQL dump into our \npg 9.0.3 database next weekend. I've been testing this for the last\ntwo weeks. Tried removing the indexes and other constraints just for\nthe import but for a noob like me, this was too much to ask. Maybe\nwhen I get more experience. 
So I *WILL* be importing all of my data\ninto indexed tables. I timed it and it will take eight hours. \n\nI'm sure I could get it down to two or three hours for the import\nif I really knew more about postgres but that's the price you pay when\nyou \"slam dunk\" a project and your staff isn't familiar with the \ndatabase back-end. Other than being very inefficient, and consuming \nmore time than necessary, is there any other down side to importing \ninto an indexed table? In the four test imports I've done,\neverything seems to work fine, just takes a long time.\n\nSorry for hijacking your thread here!", "msg_date": "Sun, 13 Mar 2011 12:36:26 -0400", "msg_from": "runner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": "On Sun, Mar 13, 2011 at 18:36, runner <[email protected]> wrote:\n> Tried removing the indexes and other constraints just for\n> the import but for a noob like me, this was too much to ask. Maybe\n> when I get more experience.\n\npgAdmin should make it pretty easy. Choose each index and constraint,\nsave the code from the \"SQL pane\" for when you need to restore it, and\ndo a right click -> Drop\n\n> Other than being very inefficient, and consuming\n> more time than necessary, is there any other down side to importing\n> into an indexed table?\n\nDoing so will result in somewhat larger (more bloated) indexes, but\ngenerally the performance impact of this is minimal.\n\nRegards,\nMarti\n", "msg_date": "Mon, 14 Mar 2011 12:17:34 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" 
}, { "msg_contents": "On Mon, Mar 14, 2011 at 4:17 AM, Marti Raudsepp <[email protected]> wrote:\n\n> On Sun, Mar 13, 2011 at 18:36, runner <[email protected]> wrote:\n> > Other than being very inefficient, and consuming\n> > more time than necessary, is there any other down side to importing\n> > into an indexed table?\n>\n> Doing so will result in somewhat larger (more bloated) indexes, but\n> generally the performance impact of this is minimal.\n>\n>\nBulk data imports of this size I've done with minimal pain by simply\nbreaking the raw data into chunks (10M records becomes 10 files of 1M\nrecords), on a separate spindle from the database, and performing multiple\nCOPY commands but no more than 1 COPY per server core. I tested this a\nwhile back on a 4 core server and when I attempted 5 COPY's at a time the\ntime to complete went up almost 30%. I don't recall any benefit having\nfewer than 4 in this case but the server was only processing my data at the\ntime. Indexes were on the target table however I dropped all constraints.\n The UNIX split command is handy for breaking the data up into individual\nfiles.\n\nGreg\n\nOn Mon, Mar 14, 2011 at 4:17 AM, Marti Raudsepp <[email protected]> wrote:\nOn Sun, Mar 13, 2011 at 18:36, runner <[email protected]> wrote:\n> Other than being very inefficient, and consuming\n> more time than necessary, is there any other down side to importing\n> into an indexed table?\n\nDoing so will result in somewhat larger (more bloated) indexes, but\ngenerally the performance impact of this is minimal.\nBulk data imports of this size I've done with minimal pain by simply breaking the raw data into chunks (10M records becomes 10 files of 1M records), on a separate spindle from the database, and performing multiple COPY commands but no more than 1 COPY per server core.  I tested this a while back on a 4 core server and when I attempted 5 COPY's at a time the time to complete went up almost 30%.  I don't recall any benefit having fewer than 4 in this case but the server was only processing my data at the time.  Indexes were on the target table however I dropped all constraints.  The UNIX split command is handy for breaking the data up into individual files.\nGreg", "msg_date": "Mon, 14 Mar 2011 07:34:28 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" }, { "msg_contents": "> Bulk data imports of this size I've done with minimal pain by simply \n> breaking the raw data into chunks (10M records becomes 10 files of \n> 1M records), on a separate spindle from the database, and performing \n> multiple COPY commands but no more than 1 COPY per server core. \n> I tested this a while back on a 4 core server and when I attempted 5 \n> COPY's at a time the time to complete went up almost 30%. I don't \n> recall any benefit having fewer than 4 in this case but the server was \n> only processing my data at the time. Indexes were on the target table \n> however I dropped all constraints. The UNIX split command is handy \n> for breaking the data up into individual files.\n\nI'm not using COPY. My dump file is a bunch if INSERT INTO statements. I know it would be faster to use copy. If I can figure out how to do this in one hour I will try it. I did two mysqldumps, one with INSERT INTO and one as CSV to I can try COPY at a later time. 
I'm running five parallel psql processes to import the data which has been broken out by table.", "msg_date": "Mon, 14 Mar 2011 11:54:39 -0400", "msg_from": "runner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning massive UPDATES and GROUP BY's?" } ]
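Pulling the advice from this thread together, a minimal sketch of the load pattern looks like the following. The table, index, and file names are placeholders rather than the posters' real objects; the idea is simply to drop the indexes, COPY the pre-split chunks with one session per core (for example several psql processes), then rebuild the indexes with a generous maintenance_work_mem.

-- 1. Save the index definitions, then drop them before the load.
DROP INDEX IF EXISTS foo_col1_idx;
DROP INDEX IF EXISTS foo_col2_idx;

-- 2. Load each chunk produced by the UNIX 'split' command; run one of
--    these per core, each in its own session/psql process.
COPY foo FROM '/data/chunks/foo_part_aa' WITH (FORMAT csv);

-- 3. Rebuild the indexes (these CREATE INDEX statements can also run in
--    parallel sessions) and refresh planner statistics.
SET maintenance_work_mem = '256MB';
CREATE INDEX foo_col1_idx ON foo (col1);
CREATE INDEX foo_col2_idx ON foo (col2);
ANALYZE foo;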
[ { "msg_contents": "Hello, list\n\nOur company is creating a ticketing system. Of course the performance \nissues are very important to us (as to all of you I guess). To increase \nspeed of some queries stable functions are used, but somehow they don't \nact exactly as I expect, so would you please explain what am I doing (or \nexpecting) wrong...\n\nFirst of all I have the stable function witch runs fast and I have no \nproblems with it at all.\nCREATE OR REPLACE FUNCTION web_select_extra_price(prm_price_id integer, \nprm_event_id integer, prm_cashier_id integer)\n RETURNS numeric AS\n'\n........ some code here\n'\n LANGUAGE plpgsql STABLE\n COST 100;\n\nNow the test:\n\n1) query without using the function\nexplain analyze\n SELECT thtp_tick_id, price_id,\n price_price,\n price_color\n FROM ticket_price\n JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n WHERE price_event_id = 7820 and (current_timestamp >= price_date AND \ncurrent_timestamp <= price_date_till)\n ORDER BY price_id;\n\nResult:\n\"Sort (cost=132.47..133.77 rows=518 width=25) (actual time=5.125..5.842 \nrows=4335 loops=1)\"\n\" Sort Key: ticket_price.price_id\"\n\" Sort Method: quicksort Memory: 433kB\"\n\" -> Nested Loop (cost=0.00..109.12 rows=518 width=25) (actual \ntime=0.037..3.148 rows=4335 loops=1)\"\n\" -> Index Scan using index_price_event_id on ticket_price \n(cost=0.00..8.52 rows=2 width=21) (actual time=0.014..0.026 rows=7 loops=1)\"\n\" Index Cond: (price_event_id = 7820)\"\n\" Filter: ((now() >= price_date) AND (now() <= \nprice_date_till))\"\n\" -> Index Scan using idx_thtp_price_id on \nticket_has_ticket_price (cost=0.00..47.06 rows=259 width=8) (actual \ntime=0.013..0.211 rows=619 loops=7)\"\n\" Index Cond: (ticket_has_ticket_price.thtp_price_id = \nticket_price.price_id)\"\n\"Total runtime: 6.425 ms\"\n\n\n2) Query using the function\nexplain analyze\n SELECT thtp_tick_id, price_id,\n price_price, web_select_extra_price(price_id, price_event_id, 1),\n price_color\n FROM ticket_price\n JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n WHERE price_event_id = 7820 and (current_timestamp >= price_date AND \ncurrent_timestamp <= price_date_till)\n ORDER BY price_id;\n\nResult:\n\"Sort (cost=261.97..263.27 rows=518 width=29) (actual \ntime=704.224..704.927 rows=4335 loops=1)\"\n\" Sort Key: ticket_price.price_id\"\n\" Sort Method: quicksort Memory: 433kB\"\n\" -> Nested Loop (cost=0.00..238.62 rows=518 width=29) (actual \ntime=0.272..699.073 rows=4335 loops=1)\"\n\" -> Index Scan using index_price_event_id on ticket_price \n(cost=0.00..8.52 rows=2 width=25) (actual time=0.011..0.052 rows=7 loops=1)\"\n\" Index Cond: (price_event_id = 7820)\"\n\" Filter: ((now() >= price_date) AND (now() <= \nprice_date_till))\"\n\" -> Index Scan using idx_thtp_price_id on \nticket_has_ticket_price (cost=0.00..47.06 rows=259 width=8) (actual \ntime=0.017..0.582 rows=619 loops=7)\"\n\" Index Cond: (ticket_has_ticket_price.thtp_price_id = \nticket_price.price_id)\"\n\"Total runtime: 705.531 ms\"\n\n\nNow what you can think is that executing web_select_extra_price takes \nthe difference, but\n3) As STABLE function should be executed once for every different set of \nparameters I do\nSELECT web_select_extra_price(price_id, 7820, 1) FROM (\n\n SELECT distinct price_id\n FROM ticket_price\n JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n WHERE price_event_id = 7820 and (current_timestamp >= price_date AND \ncurrent_timestamp <= price_date_till)\n ) as qq;\n\nResult:\n\"Subquery Scan on qq 
(cost=110.34..110.88 rows=2 width=4) (actual \ntime=7.265..8.907 rows=7 loops=1)\"\n\" -> HashAggregate (cost=110.34..110.36 rows=2 width=4) (actual \ntime=6.866..6.873 rows=7 loops=1)\"\n\" -> Nested Loop (cost=0.00..109.05 rows=517 width=4) (actual \ntime=0.037..4.643 rows=4335 loops=1)\"\n\" -> Index Scan using index_price_event_id on \nticket_price (cost=0.00..8.52 rows=2 width=4) (actual time=0.014..0.038 \nrows=7 loops=1)\"\n\" Index Cond: (price_event_id = 7820)\"\n\" Filter: ((now() >= price_date) AND (now() <= \nprice_date_till))\"\n\" -> Index Scan using idx_thtp_price_id on \nticket_has_ticket_price (cost=0.00..47.04 rows=258 width=4) (actual \ntime=0.019..0.336 rows=619 loops=7)\"\n\" Index Cond: (ticket_has_ticket_price.thtp_price_id \n= ticket_price.price_id)\"\n\"Total runtime: 8.966 ms\"\n\n\nYou can see the query has only 7 distinct parameter sets to pass to the \nfunction but...\n4) Explain analyze\n SELECT web_select_extra_price(price_id, 7820, 1)\n FROM ticket_price\n JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n WHERE price_event_id = 7820 and (current_timestamp >= price_date AND \ncurrent_timestamp <= price_date_till)\n\nResult:\n\"Nested Loop (cost=0.00..238.30 rows=517 width=4) (actual \ntime=0.365..808.537 rows=4335 loops=1)\"\n\" -> Index Scan using index_price_event_id on ticket_price \n(cost=0.00..8.52 rows=2 width=4) (actual time=0.014..0.040 rows=7 loops=1)\"\n\" Index Cond: (price_event_id = 7820)\"\n\" Filter: ((now() >= price_date) AND (now() <= price_date_till))\"\n\" -> Index Scan using idx_thtp_price_id on ticket_has_ticket_price \n(cost=0.00..47.04 rows=258 width=4) (actual time=0.016..0.655 rows=619 \nloops=7)\"\n\" Index Cond: (ticket_has_ticket_price.thtp_price_id = \nticket_price.price_id)\"\n\"Total runtime: 810.143 ms\"\n\n\nSo I am totally confused... It seems that selecting 4335 rows is a joke \nfor Postgresql, but the great job is done then adding one of 7 possible \nvalues to the result set... Please help me understand what I am missing \nhere?...\n\nFinally the system:\nServer\nPG: Version string PostgreSQL 9.0.3 on i486-pc-linux-gnu, compiled by \nGCC gcc-4.4.real (Debian 4.4.5-10) 4.4.5, 32-bit\n\nClient\nWin XP SP3 with pgAdmin 1.12.2.\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n\n", "msg_date": "Thu, 10 Mar 2011 18:26:00 +0200", "msg_from": "Julius Tuskenis <[email protected]>", "msg_from_op": true, "msg_subject": "unexpected stable function behavior" }, { "msg_contents": "On Thu, Mar 10, 2011 at 10:26 AM, Julius Tuskenis <[email protected]> wrote:\n> Hello, list\n>\n> Our company is creating a ticketing system. Of course the performance issues\n> are very important to us (as to all of you I guess). To increase speed of\n> some queries stable functions are used, but somehow they don't act exactly\n> as I expect, so would you please explain what am I doing (or expecting)\n> wrong...\n>\n> First of all I have the stable function witch runs fast and I have no\n> problems with it at all.\n> CREATE OR REPLACE FUNCTION web_select_extra_price(prm_price_id integer,\n> prm_event_id integer, prm_cashier_id integer)\n>  RETURNS numeric AS\n> '\n> ........ 
some code here\n> '\n>  LANGUAGE plpgsql STABLE\n>  COST 100;\n>\n> Now the test:\n>\n> 1) query without using the function\n> explain analyze\n>  SELECT thtp_tick_id, price_id,\n>    price_price,\n>    price_color\n>  FROM ticket_price\n>    JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n>  WHERE price_event_id = 7820 and (current_timestamp >= price_date AND\n> current_timestamp <= price_date_till)\n>  ORDER BY price_id;\n>\n> Result:\n> \"Sort  (cost=132.47..133.77 rows=518 width=25) (actual time=5.125..5.842\n> rows=4335 loops=1)\"\n> \"  Sort Key: ticket_price.price_id\"\n> \"  Sort Method:  quicksort  Memory: 433kB\"\n> \"  ->  Nested Loop  (cost=0.00..109.12 rows=518 width=25) (actual\n> time=0.037..3.148 rows=4335 loops=1)\"\n> \"        ->  Index Scan using index_price_event_id on ticket_price\n>  (cost=0.00..8.52 rows=2 width=21) (actual time=0.014..0.026 rows=7\n> loops=1)\"\n> \"              Index Cond: (price_event_id = 7820)\"\n> \"              Filter: ((now() >= price_date) AND (now() <=\n> price_date_till))\"\n> \"        ->  Index Scan using idx_thtp_price_id on ticket_has_ticket_price\n>  (cost=0.00..47.06 rows=259 width=8) (actual time=0.013..0.211 rows=619\n> loops=7)\"\n> \"              Index Cond: (ticket_has_ticket_price.thtp_price_id =\n> ticket_price.price_id)\"\n> \"Total runtime: 6.425 ms\"\n>\n>\n> 2) Query using the function\n> explain analyze\n>  SELECT thtp_tick_id, price_id,\n>    price_price, web_select_extra_price(price_id, price_event_id, 1),\n>    price_color\n>  FROM ticket_price\n>    JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n>  WHERE price_event_id = 7820 and (current_timestamp >= price_date AND\n> current_timestamp <= price_date_till)\n>  ORDER BY price_id;\n>\n> Result:\n> \"Sort  (cost=261.97..263.27 rows=518 width=29) (actual time=704.224..704.927\n> rows=4335 loops=1)\"\n> \"  Sort Key: ticket_price.price_id\"\n> \"  Sort Method:  quicksort  Memory: 433kB\"\n> \"  ->  Nested Loop  (cost=0.00..238.62 rows=518 width=29) (actual\n> time=0.272..699.073 rows=4335 loops=1)\"\n> \"        ->  Index Scan using index_price_event_id on ticket_price\n>  (cost=0.00..8.52 rows=2 width=25) (actual time=0.011..0.052 rows=7\n> loops=1)\"\n> \"              Index Cond: (price_event_id = 7820)\"\n> \"              Filter: ((now() >= price_date) AND (now() <=\n> price_date_till))\"\n> \"        ->  Index Scan using idx_thtp_price_id on ticket_has_ticket_price\n>  (cost=0.00..47.06 rows=259 width=8) (actual time=0.017..0.582 rows=619\n> loops=7)\"\n> \"              Index Cond: (ticket_has_ticket_price.thtp_price_id =\n> ticket_price.price_id)\"\n> \"Total runtime: 705.531 ms\"\n>\n>\n> Now what you can think is that executing web_select_extra_price takes the\n> difference, but\n> 3) As STABLE function should be executed once for every different set of\n> parameters I do\n> SELECT web_select_extra_price(price_id, 7820, 1) FROM (\n>\n>  SELECT distinct price_id\n>  FROM ticket_price\n>    JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n>  WHERE price_event_id = 7820 and (current_timestamp >= price_date AND\n> current_timestamp <= price_date_till)\n>  ) as qq;\n>\n> Result:\n> \"Subquery Scan on qq  (cost=110.34..110.88 rows=2 width=4) (actual\n> time=7.265..8.907 rows=7 loops=1)\"\n> \"  ->  HashAggregate  (cost=110.34..110.36 rows=2 width=4) (actual\n> time=6.866..6.873 rows=7 loops=1)\"\n> \"        ->  Nested Loop  (cost=0.00..109.05 rows=517 width=4) (actual\n> time=0.037..4.643 rows=4335 loops=1)\"\n> \"              
->  Index Scan using index_price_event_id on ticket_price\n>  (cost=0.00..8.52 rows=2 width=4) (actual time=0.014..0.038 rows=7 loops=1)\"\n> \"                    Index Cond: (price_event_id = 7820)\"\n> \"                    Filter: ((now() >= price_date) AND (now() <=\n> price_date_till))\"\n> \"              ->  Index Scan using idx_thtp_price_id on\n> ticket_has_ticket_price  (cost=0.00..47.04 rows=258 width=4) (actual\n> time=0.019..0.336 rows=619 loops=7)\"\n> \"                    Index Cond: (ticket_has_ticket_price.thtp_price_id =\n> ticket_price.price_id)\"\n> \"Total runtime: 8.966 ms\"\n>\n>\n> You can see the query has only 7 distinct parameter sets to pass to the\n> function but...\n> 4)   Explain analyze\n>  SELECT web_select_extra_price(price_id, 7820, 1)\n>  FROM ticket_price\n>    JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n>  WHERE price_event_id = 7820 and (current_timestamp >= price_date AND\n> current_timestamp <= price_date_till)\n>\n> Result:\n> \"Nested Loop  (cost=0.00..238.30 rows=517 width=4) (actual\n> time=0.365..808.537 rows=4335 loops=1)\"\n> \"  ->  Index Scan using index_price_event_id on ticket_price\n>  (cost=0.00..8.52 rows=2 width=4) (actual time=0.014..0.040 rows=7 loops=1)\"\n> \"        Index Cond: (price_event_id = 7820)\"\n> \"        Filter: ((now() >= price_date) AND (now() <= price_date_till))\"\n> \"  ->  Index Scan using idx_thtp_price_id on ticket_has_ticket_price\n>  (cost=0.00..47.04 rows=258 width=4) (actual time=0.016..0.655 rows=619\n> loops=7)\"\n> \"        Index Cond: (ticket_has_ticket_price.thtp_price_id =\n> ticket_price.price_id)\"\n> \"Total runtime: 810.143 ms\"\n>\n>\n> So I am totally confused... It seems that selecting 4335 rows is a joke for\n> Postgresql, but the great job is done then adding one of 7 possible values\n> to the result set... Please help me understand what I am missing here?...\n>\n> Finally the system:\n> Server\n> PG: Version string    PostgreSQL 9.0.3 on i486-pc-linux-gnu, compiled by GCC\n> gcc-4.4.real (Debian 4.4.5-10) 4.4.5, 32-bit\n>\n> Client\n> Win XP SP3 with pgAdmin 1.12.2.\n\nThis is a huge problem with non trivial functions in the select list.\nPushing the result into and a subquery does NOT guarantee that the\ninner result is materialized first. Try a CTE.\n\nwith foo as\n(\n select yadda;\n)\nselect func(foo.a), foo.* from foo;\n\nmerlin\n", "msg_date": "Thu, 10 Mar 2011 15:14:50 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unexpected stable function behavior" }, { "msg_contents": "Hello, Merlin\n\nThank you for your quick response.\n\n2011.03.10 23:14, Merlin Moncure rašė:\n> This is a huge problem with non trivial functions in the select list.\n> Pushing the result into and a subquery does NOT guarantee that the\n> inner result is materialized first.\n From the postgresql documentation about STABLE functions: \"This \ncategory allows the optimizer to optimize multiple calls of the function \nto a single call.\" I thought that this means that optimizer executes the \nfunction only for now parameter sets and stores results in some \"cache\" \nand use it if the parameters are already known. I realize this is very \nnaive approach and most probably everything is much more complicated. I \nwould appreciate if someone would explain the mechanism (or provide with \nsome useful link).\n\n> Try a CTE.\n>\n> with foo as\n> (\n> select yadda;\n> )\n> select func(foo.a), foo.* from foo;\nI'm sorry, but I'm totally new to CTE. 
Would you please show me how \nshould I use the stable function and where the parameters should be put \nto improve the behavior of the optimizer for my problem?\n\nThank you in advance\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n", "msg_date": "Mon, 14 Mar 2011 10:46:07 +0200", "msg_from": "Julius Tuskenis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: unexpected stable function behavior" }, { "msg_contents": "On Mon, Mar 14, 2011 at 3:46 AM, Julius Tuskenis <[email protected]> wrote:\n> Hello, Merlin\n>\n> Thank you for your quick response.\n>\n> 2011.03.10 23:14, Merlin Moncure rašė:\n>\n> This is a huge problem with non trivial functions in the select list.\n> Pushing the result into and a subquery does NOT guarantee that the\n> inner result is materialized first.\n>\n> From the postgresql documentation about STABLE functions: \"This category\n> allows the optimizer to optimize multiple calls of the function to a single\n> call.\" I thought that this means that optimizer executes the function only\n> for now parameter sets and stores results in some \"cache\" and use it if the\n> parameters are already known. I realize this is very naive approach and most\n> probably everything is much more complicated. I would appreciate if someone\n> would explain the mechanism (or provide with some useful link).\n>\n> Try a CTE.\n>\n> with foo as\n> (\n> select yadda;\n> )\n> select func(foo.a), foo.* from foo;\n>\n> I'm sorry, but I'm totally new to CTE. 
Would you please show me how should I\n> use the stable function and where the parameters should be put to improve\n> the behavior of the optimizer for my problem?\n\nWITH results as\n(\n SELECT distinct price_id as price_id\n FROM ticket_price\n JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n WHERE price_event_id = 7820 and (current_timestamp >= price_date AND\ncurrent_timestamp <= price_date_till)\n ) as qq\n)\n SELECT web_select_extra_price(price_id, 7820, 1) from results;\n\n\nAnother way to fight this is to play with the cost planner hint\nparameter in 'create function', but I prefer the CTE -- it gives\nstrong guarantees about order of execution which is what you really\nwant. CTEs are great btw, I'd start learning them immediately.\n\nIMNSHO, this (uncontrolled number of function executions when run via\nfield select list) is a common gotcha w/postgres and a FAQ. Also the\ndocumentation is not very helpful on this point...do you agree CTE is\nthe right way to advise handling this problem...is it worth further\nnotation?\n\nmerlin\n", "msg_date": "Mon, 14 Mar 2011 08:41:38 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unexpected stable function behavior" }, { "msg_contents": "On Thursday, March 10, 2011 05:26:00 PM Julius Tuskenis wrote:\n> 3) As STABLE function should be executed once for every different set of \n> parameters\nThats not true. Thats not what any of the volatility information (like STABLE, \nIMMUTABLE, VOLATILE) does.\n\nSee http://www.postgresql.org/docs/current/interactive/xfunc-volatility.html\n\nIt *does* change how often a function is executed though. I.e.\n\nSELECT g.i, some_stable_func(1) FROM generate_series(1, 1000) g(i)\n\nwill call some_stable_func only once because it can determine all the \nparameters beforehand.\n\n\nAndres\n", "msg_date": "Mon, 14 Mar 2011 15:08:14 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unexpected stable function behavior" }, { "msg_contents": "Julius Tuskenis <[email protected]> writes:\n> From the postgresql documentation about STABLE functions: \"This \n> category allows the optimizer to optimize multiple calls of the function \n> to a single call.\" I thought that this means that optimizer executes the \n> function only for now parameter sets and stores results in some \"cache\" \n> and use it if the parameters are already known.\n\nNo, it does not. That function property *allows* the optimizer to\ninvoke the function fewer times than would happen in an un-optimized\nquery. It does not *require* it to do so. There is no such cache\nmechanism in Postgres, and it's unlikely that there ever will be,\nbecause it probably would be a net performance loss on average.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2011 13:17:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unexpected stable function behavior " }, { "msg_contents": "Hello,\n\n2011.03.14 15:41, Merlin Moncure rašė:\n> WITH results as\n> (\n> SELECT distinct price_id as price_id\n> FROM ticket_price\n> JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)\n> WHERE price_event_id = 7820 and (current_timestamp>= price_date AND\n> current_timestamp<= price_date_till)\n> ) as qq\n> )\n> SELECT web_select_extra_price(price_id, 7820, 1) from results;\n>\nThank you Merlin for your help. I have updated my function to use CTE. 
\nAlthough there was no performance improvement (I had the select with \nfunction using distinct values joined earlyer) it's good to know the \noptimizer will not change the way I want the query to be executed. Thank \nyou once again.\n\n> CTEs are great btw, I'd start learning them immediately.\nI am going to do that.\n> IMNSHO, this (uncontrolled number of function executions when run via\n> field select list) is a common gotcha w/postgres and a FAQ. Also the\n> documentation is not very helpful on this point...\nYes, I totally agree with you. I think sentence like \"Although function \nis marked as STABLE or IMMUTABLE the optimizer is not obliged to take \nadvantage of these properties.\" (sorry for my English).\n> do you agree CTE is the right way to advise handling this problem...is it worth further\n> notation?\nYes, the CTE worked fine for me. Reading some more on this topic I found \nsome comments that the optimizer has no possibility to know how many \ntimes the function is to be called in such queries (without actually \nexecuting the query), so there is no way to determine the cost. That \nexplains why not the optimal plan was chosen to my query.\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n\n", "msg_date": "Tue, 15 Mar 2011 10:04:31 +0200", "msg_from": "Julius Tuskenis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: unexpected stable function behavior" }, { "msg_contents": "Thank you, Tom for you answer\n\n2011.03.14 19:17, Tom Lane rašė:\n> That function property*allows* the optimizer to\n> invoke the function fewer times than would happen in an un-optimized\n> query. It does not*require* it to do so.\nThank you for clearing that for me. I think these 2 sentences in \ndocumentation \n(http://www.postgresql.org/docs/current/interactive/xfunc-volatility.html) \nwould prevent misunderstandings in the future.\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050", "msg_date": "Tue, 15 Mar 2011 10:12:13 +0200", "msg_from": "Julius Tuskenis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: unexpected stable function behavior" } ]
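For anyone reusing the workaround from this thread, note that the CTE as typed in the email above still carries a leftover ") as qq" from the earlier subquery version and will not parse as written. A cleaned-up form, using the column and table names from the original post, would be:

WITH results AS (
    SELECT DISTINCT price_id
    FROM ticket_price
    JOIN ticket_has_ticket_price ON (price_id = thtp_price_id)
    WHERE price_event_id = 7820
      AND current_timestamp >= price_date
      AND current_timestamp <= price_date_till
)
SELECT price_id,
       web_select_extra_price(price_id, 7820, 1) AS extra_price
FROM results;

Because a CTE is materialized before the outer query runs (in 9.0 it always acts as an optimization fence), web_select_extra_price() is called once per distinct price_id rather than once per joined row.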
[ { "msg_contents": "Hi postgressers -\n\nAs part of my work with voter file data, I pretty regularly have to \njoin one large-ish (over 500k rows) table to another. Sometimes this \nis via a text field (countyname) + integer (voter id). I've noticed \nsometimes this converges and sometimes it doesn't, seemingly \nregardless of how I index things. So I'm looking for general thoughts \non the joining of large tables, but also running into a specific issue \nwith the following slightly different query:\n\nThis one is between two tables that are a 754k row list of voters and \na 445k row list of property owners. (I'm trying to find records where \nthe owner name matches the voter name at the same address.) I have \nbtree single column indices built on all the relevant fields, and \nmulticolumn indices built across all the columns I'm matching. The \nfull schemas of both tables are below. The machine is an older-ish (3 \nyears ago) dual-core pentium w/ 4GB RAM running FreeBSD, more details \nbelow.\n\nThis is the query I've come up with so far:\n\nexplain analyze\nupdate vanalameda set ownerflag = 'exact'\n from aralameda where\n vanalameda.streetno ~~ aralameda.streetnum and\n vanalameda.streetname ~~ aralameda.streetname and\n vanalameda.lastname ~~ aralameda.ownername and\n vanalameda.firstname ~~ aralameda.ownername;\n\nIf I include the analyze, this didn't complete after running \novernight. If I drop the analyze and just explain, I get this:\n\n\"Nested Loop (cost=46690.74..15384448712.74 rows=204 width=204)\"\n\" Join Filter: (((vanalameda.streetno)::text ~~ \n(aralameda.streetnum)::text) AND ((vanalameda.streetname)::text ~~ \n(aralameda.streetname)::text) AND ((vanalameda.lastname)::text ~~ \n(aralameda.ownername)::text) AND ((vanalameda.firstname)::text ~~ \n(aralameda.ownername)::text))\"\n\" -> Seq Scan on vanalameda (cost=0.00..26597.80 rows=734780 \nwidth=204)\"\n\" -> Materialize (cost=46690.74..58735.87 rows=444613 width=113)\"\n\" -> Seq Scan on aralameda (cost=0.00..38647.13 rows=444613 \nwidth=113)\"\n\nOne general question: does the width of the tables (i.e. the numbers \nof columns not being joined and the size of those fields) matter? 
The \ntables do have a lot of extra columns that I could slice out.\n\nThanks so much!\n\nDan\n\nSystem:\nclient: pgadmin III, Mac OS\n\nserver:\nselect version();\nPostgreSQL 8.3.7 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD]\n(installed from freebsd package system, default configuration)\n\n%sysctl -a | egrep -i 'hw.machine|hw.model|hw.ncpu'\nhw.machine: i386\nhw.model: Genuine Intel(R) CPU 2160 @ 1.80GHz\nhw.ncpu: 2\nhw.machine_arch: i386\n\nw/ 4GB RAM, 1 1GB disk, no RAID.\n\nHere's the tables...\n\n Table \"public.aralameda\"\n Column | Type | Modifiers\n-----------------+-----------------------+-----------\n dt000o039001010 | character varying(13) |\n o3901010 | character varying(15) |\n dt17 | character varying(2) |\n dt046 | character varying(3) |\n streetnum | character varying(10) |\n streetname | character varying(50) |\n unitnum | character varying(10) |\n city | character varying(30) |\n zip | character varying(5) |\n unk3 | character varying(1) |\n crap1 | character varying(12) |\n crap2 | character varying(12) |\n crap3 | character varying(12) |\n crap4 | character varying(12) |\n crap5 | character varying(12) |\n crap6 | character varying(12) |\n crap7 | character varying(12) |\n crap8 | character varying(12) |\n crap9 | character varying(12) |\n crap10 | character varying(12) |\n dt2009 | character varying(4) |\n dt066114 | character varying(6) |\n crap11 | character varying(8) |\n crap12 | character varying(8) |\n ownername | character varying(50) |\n careofname | character varying(50) |\n unk4 | character varying(1) |\n maddr1 | character varying(60) |\n munitnum | character varying(10) |\n mcitystate | character varying(30) |\n mzip | character varying(5) |\n mplus4 | character varying(4) |\n dt40 | character varying(2) |\n dt4 | character varying(1) |\n crap13 | character varying(8) |\n d | character varying(1) |\n dt0500 | character varying(4) |\n unk6 | character varying(1) |\n crap14 | character varying(8) |\n unk7 | character varying(1) |\nIndexes:\n \"arall\" btree (streetnum, streetname, ownername)\n \"aroname\" btree (ownername)\n \"arstreetname\" btree (streetname)\n \"arstreetnum\" btree (streetnum)\n\n Table \"public.vanalameda\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n vanid | character varying(8) |\n lastname | character varying(25) |\n firstname | character varying(16) |\n middlename | character varying(16) |\n suffix | character varying(3) |\n streetno | character varying(5) |\n streetnohalf | character varying(3) |\n streetprefix | character varying(2) |\n streetname | character varying(24) |\n streettype | character varying(4) |\n streetsuffix | character varying(2) |\n apttype | character varying(4) |\n aptno | character varying(8) |\n city | character varying(13) |\n state | character varying(2) |\n zip5 | character varying(5) |\n zip4 | character varying(4) |\n vaddress | character varying(33) |\n maddress | character varying(41) |\n mcity | character varying(25) |\n mstate | character varying(2) |\n mzip5 | character varying(5) |\n mzip4 | character varying(4) |\n mstreetno | character varying(6) |\n mstreetnohalf | character varying(9) |\n mstreetprefix | character varying(2) |\n mstreetname | character varying(40) |\n mstreettype | character varying(4) |\n mstreetsuffix | character varying(2) |\n mapttype | character varying(4) |\n maptno | character varying(13) |\n dob | character varying(10) |\n countyfileid | character varying(7) |\n countyid | character varying(3) |\n 
affno | character varying(12) |\n ownerflag | character varying(20) |\nIndexes:\n \"vanall\" btree (streetno, streetname, lastname, firstname)\n \"vanfname\" btree (firstname)\n \"vanlname\" btree (lastname)\n \"vanstreetname\" btree (streetname)\n \"vanstreetno\" btree (streetno)", "msg_date": "Thu, 10 Mar 2011 13:25:24 -0800", "msg_from": "Dan Ancona <[email protected]>", "msg_from_op": true, "msg_subject": "big joins not converging" }, { "msg_contents": "\nOn Mar 10, 2011, at 1:25 PM, Dan Ancona wrote:\n\n> Hi postgressers -\n> \n> As part of my work with voter file data, I pretty regularly have to join one large-ish (over 500k rows) table to another. Sometimes this is via a text field (countyname) + integer (voter id). I've noticed sometimes this converges and sometimes it doesn't, seemingly regardless of how I index things. So I'm looking for general thoughts on the joining of large tables, but also running into a specific issue with the following slightly different query:\n> \n> This one is between two tables that are a 754k row list of voters and a 445k row list of property owners. (I'm trying to find records where the owner name matches the voter name at the same address.) I have btree single column indices built on all the relevant fields, and multicolumn indices built across all the columns I'm matching. The full schemas of both tables are below. The machine is an older-ish (3 years ago) dual-core pentium w/ 4GB RAM running FreeBSD, more details below.\n> \n> This is the query I've come up with so far:\n> \n> explain analyze\n> update vanalameda set ownerflag = 'exact'\n> from aralameda where\n> vanalameda.streetno ~~ aralameda.streetnum and\n> vanalameda.streetname ~~ aralameda.streetname and\n> vanalameda.lastname ~~ aralameda.ownername and\n> vanalameda.firstname ~~ aralameda.ownername;\n> \n> If I include the analyze, this didn't complete after running overnight. If I drop the analyze and just explain, I get this:\n> \n> \"Nested Loop (cost=46690.74..15384448712.74 rows=204 width=204)\"\n> \" Join Filter: (((vanalameda.streetno)::text ~~ (aralameda.streetnum)::text) AND ((vanalameda.streetname)::text ~~ (aralameda.streetname)::text) AND ((vanalameda.lastname)::text ~~ (aralameda.ownername)::text) AND ((vanalameda.firstname)::text ~~ (aralameda.ownername)::text))\"\n> \" -> Seq Scan on vanalameda (cost=0.00..26597.80 rows=734780 width=204)\"\n> \" -> Materialize (cost=46690.74..58735.87 rows=444613 width=113)\"\n> \" -> Seq Scan on aralameda (cost=0.00..38647.13 rows=444613 width=113)\"\n> \n> One general question: does the width of the tables (i.e. the numbers of columns not being joined and the size of those fields) matter? The tables do have a lot of extra columns that I could slice out.\n> \n\nIs there any reason you're using '~~' to compare values, rather than '='?\n\nIf you're intentionally using LIKE-style comparisons then there are some other things you can do, but I don't think you mean to do that, for streeno and streetname anyway.\n\nSwitching to an equality comparison should let your query use an index, most usefully one on (streetname, streetnum) probably.\n\nI'm not sure what you're intending by comparing ownername to both firstname and lastname. I don't think that'll do anything useful, and doubt it'll ever match. Are you expecting firstname and lastname to be substrings of ownername? 
If so, you might need to use wildcards with the like.\n\n(Also, performance and smart use of indexes tend to get better in newer versions of postgresql. You might want to upgrade to 9.0.3 too.)\n\nCheers,\n Steve\n\n\n", "msg_date": "Thu, 10 Mar 2011 14:13:02 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big joins not converging" }, { "msg_contents": "Steve Atkins <steve <at> blighty.com> writes:\n\n> \n> \n> On Mar 10, 2011, at 1:25 PM, Dan Ancona wrote:\n> \n> > Hi postgressers -\n> > \n> > As part of my work with voter file data, I pretty regularly have to join one\nlarge-ish (over 500k rows) table\n> to another. Sometimes this is via a text field (countyname) + integer (voter\nid). I've noticed sometimes\n> this converges and sometimes it doesn't, seemingly regardless of how I index\nthings. \n\nBy \"converge\" you mean \"finish running\" -- \"converge\" has a lot of other\novertones for us amateur math types.\n\nNote that I think you are doing \"record linkage\" which is a stepchild academic\ndiscipline of its own these days. It might bear some research. There is also a CDC\nmatching program for text files freely downloadable to windows (ack), if you\nhunt for it.\n\nFor now, my first thought is that you should try a few different matches, maybe\nvia PL/PGSQL functions, cascading the non-hits to the next step in the process\nwhile shrinking your tables. Upcase and delete all spaces, etc. First use\nequality on all columns, which should be able to use indices, and separate those\nrecords. Then try equality on a few columns. Then try some super fuzzy regexes\non a few columns. Etc. \n\nYou will also have to give some thought to scoring a match, with perfection a\none, but, say, name and birthday the same with all else different a .75, etc.\n\nAlso, soundex(), levenshtein, and other fuzzy string tools are your friend. I\nwant to write a version of SAS's COMPGED for Postgres, but I haven't got round\nto it yet.\n\n", "msg_date": "Thu, 10 Mar 2011 23:48:53 +0000 (UTC)", "msg_from": "fork <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big joins not converging" }, { "msg_contents": "On Mar 10, 2011, at 3:48 PM, fork wrote:\n[much thoughtfulness]\n\n> Steve Atkins <steve <at> blighty.com> writes:\n> [also much thoughtfulness]\n\nSteve and fork -- thank you, this is super helpful. I meant to tweak \nthat exact search before sending this around, sorry if that was \nconfusing. That was meant to be a place holder for [some set of \nmatches that works]. And yes, \"not converging\" was incorrect, I did \nmean \"not finishing.\" But together from your answers it sounds pretty \nclear that there's no particularly obvious easy solution that I'm \nmissing; this really is kind of tricky. This is a choice between \ndeveloping some in-house capacity for this and sending people to \nvarious vendors so we'll probably lean on the vendors for now, at \nleast while we work on it. I've gotten my head partway around PL/PGSQL \nfunctions, I may give that another try.\n\nAnd you're right fork, Record Linkage is in fact an entire academic \ndiscipline! 
I had no idea, this is fascinating and helpful:\n\nhttp://en.wikipedia.org/wiki/Record_linkage\n\nThanks so much!\n\nDan\n\n\n", "msg_date": "Thu, 10 Mar 2011 16:43:09 -0800", "msg_from": "Dan Ancona <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big joins not converging" }, { "msg_contents": "Dan Ancona <da <at> vizbang.com> writes:\n\n> his is a choice between \n> developing some in-house capacity for this and sending people to \n> various vendors so we'll probably lean on the vendors for now, at \n> least while we work on it. \n\nI would try to do the record matching in house and see how far you get, even if\nyou are talking to vendors concurrently. You might get lucky, and you will\nlearn a lot about your data and how much to expect and pay for vendor solutions.\n\nI would: Try building multi column indices on both tables for what you think are\nthe same rows, and match deterministically (if you have a key like social\nsecurity, then do this again on full names). Examine your data to see what\nhits, what misses, what hits multiple. If you know there is a \"good\" and an\n\"iffy\" table, you can use a left outer, otherwise you need a full outer. Then\nput all your leftovers from each into new tables, and try again with something\nfuzzy.\n\nIf you build the indices and use \"=\" and it is still slow, ask again here --\nthat shouldn't happen.\n\n> And you're right fork, Record Linkage is in fact an entire academic \n> discipline!\n\nIndeed. Look for \"blocking\" and \"editing\" with your data first, I think.\n\nI find this problem pretty interesting, so I would love to hear your results. I\nam right now matching building permits to assessor parcels.... I wish I was\nusing PG ...\n\n", "msg_date": "Fri, 11 Mar 2011 16:46:28 +0000 (UTC)", "msg_from": "fork <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big joins not converging" } ]
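A rough sketch of the cascading match strategy fork describes, written against the vanalameda/aralameda tables from this thread. The assumption that ownername is stored roughly as "LASTNAME FIRSTNAME" is mine, purely for illustration, as is the 'fuzzy' flag value; soundex() and levenshtein() come from the contrib fuzzystrmatch module (loaded from the contrib SQL scripts on 8.3), and expression indexes on the upper(...) expressions would be needed for the first pass to use index scans:

-- Pass 1: exact matches first, using equality comparisons.
UPDATE vanalameda v
SET ownerflag = 'exact'
FROM aralameda a
WHERE v.streetno = a.streetnum
  AND upper(v.streetname) = upper(a.streetname)
  AND upper(a.ownername) = upper(v.lastname || ' ' || v.firstname);

-- Pass 2: fuzzier matching, only on the rows pass 1 left unflagged.
UPDATE vanalameda v
SET ownerflag = 'fuzzy'
FROM aralameda a
WHERE v.ownerflag IS NULL
  AND v.streetno = a.streetnum
  AND soundex(v.streetname) = soundex(a.streetname)
  AND levenshtein(upper(split_part(a.ownername, ' ', 1)), upper(v.lastname)) <= 2;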
[ { "msg_contents": "Hi,\n\nI need an ANTI-JOIN (not exists SELECT something from table.../ left join table WHERE table.id IS NULL) on the same table. Acutally I have an index to serve the not exists question, but the query planner chooses to to a bitmap heap scan.\n\nThe table has 100 Mio rows, so doing a heap scan is messed up...\n\nIt would be really fast if Postgres could compare the to indicies. Does Postgres have to visit the table for this ANTI-JOIN?\n\nI know the table has to be visitied at some point to serve the MVCC, but why so early? Can NOT ESISTS only be fixed by the table, because it could miss soemthing otherwise?\n\n-- \nEmpfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir\nbelohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de\n", "msg_date": "Fri, 11 Mar 2011 16:32:05 +0100", "msg_from": "\"hans wulf\" <[email protected]>", "msg_from_op": true, "msg_subject": "ANTI-JOIN needs table, index scan not possible?" }, { "msg_contents": "> I know the table has to be visitied at some point to serve the MVCC, but why so early? Can NOT ESISTS only be fixed by the table, because it could miss soemthing otherwise?\n\nPossibly because the index entries you're anti-joining against may\npoint to deleted tuples, so you would erroneously omit rows from the\njoin result if you skip the visibility check?\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Fri, 11 Mar 2011 09:18:19 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANTI-JOIN needs table, index scan not possible?" }, { "msg_contents": "Thanks for the answer.\n\nso there's no way around this problem? A nice index bitmap merge thing would be super fast. Big table ANTI JOIN queries with only a few results expected, are totally broken, if this is true. \n\nThis way the query breaks my neck. This is a massive downside of postgres which makes this kind of query impossible. Mysql gives you the answer in a few seconds :-(\n\n\n\n> Possibly because the index entries you're anti-joining against may\n> point to deleted tuples, so you would erroneously omit rows from the\n> join result if you skip the visibility check?\n> \n> ---\n> Maciek Sakrejda | System Architect | Truviso\n> \n> 1065 E. Hillsdale Blvd., Suite 215\n> Foster City, CA 94404\n> (650) 242-3500 Main\n> www.truviso.com\n\n-- \nSchon gehört? GMX hat einen genialen Phishing-Filter in die\nToolbar eingebaut! http://www.gmx.net/de/go/toolbar\n", "msg_date": "Fri, 11 Mar 2011 18:54:39 +0100", "msg_from": "\"hans wulf\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANTI-JOIN needs table, index scan not possible?" }, { "msg_contents": "On Fri, Mar 11, 2011 at 06:54:39PM +0100, hans wulf wrote:\n> Thanks for the answer.\n> \n> so there's no way around this problem? A nice index bitmap merge thing would be super fast. Big table ANTI JOIN queries with only a few results expected, are totally broken, if this is true. \n> \n> This way the query breaks my neck. This is a massive downside of postgres which makes this kind of query impossible. Mysql gives you the answer in a few seconds :-(\n> \n> \n\nSuper! I am glad that MySQL can meet your needs. 
No software is\nperfect and you should definitely chose based on your use-case.\n\nRegards,\nKen\n", "msg_date": "Fri, 11 Mar 2011 16:16:03 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANTI-JOIN needs table, index scan not possible?" }, { "msg_contents": "Kenneth Marshall <[email protected]> wrote:\n> On Fri, Mar 11, 2011 at 06:54:39PM +0100, hans wulf wrote:\n>> so there's no way around this problem? A nice index bitmap merge\n>> thing would be super fast. Big table ANTI JOIN queries with only\n>> a few results expected, are totally broken, if this is true. \n>> \n>> This way the query breaks my neck. This is a massive downside of\n>> postgres which makes this kind of query impossible. Mysql gives\n>> you the answer in a few seconds :-(\n> \n> Super! I am glad that MySQL can meet your needs. No software is\n> perfect and you should definitely chose based on your use-case.\n \nWell, as far as I can see we haven't yet seen near enough\ninformation to diagnose the issue, suggest alternative ways to write\nthe query which might perform better, or determine whether there's\nan opportunity to improve the optimizer here.\n \nHans, please read this page and provide more detail:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Fri, 11 Mar 2011 16:32:03 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANTI-JOIN needs table, index scan not possible?" }, { "msg_contents": "On Fri, Mar 11, 2011 at 10:32 AM, hans wulf <[email protected]> wrote:\n> I need an ANTI-JOIN (not exists SELECT something from table.../ left join table WHERE table.id IS NULL) on the same table. Acutally I have an index to serve the not exists question, but the query planner chooses to to a bitmap heap scan.\n>\n> The table has 100 Mio rows, so doing a heap scan is messed up...\n>\n> It would be really fast if Postgres could compare the to indicies. Does Postgres have to visit the table for this ANTI-JOIN?\n\nA bitmap heap scan implies that a bitmap index scan is also being\ndone, so it IS using the indexes. Now that leaves open the question\nof why it's not fast... but it's hard to guess the answer to that\nquestion without seeing at least the EXPLAIN output, preferably\nEXPLAIN ANALYZE.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 23 Mar 2011 00:31:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANTI-JOIN needs table, index scan not possible?" } ]
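For reference, the two usual anti-join spellings hans describes, sketched against a hypothetical big_table/other_table pair rather than his real schema. From 8.4 on a NOT EXISTS of this shape is planned as an anti join; either way, in these releases the heap still has to be visited for tuple visibility (Maciek's point), since index-only scans do not exist yet. The EXPLAIN ANALYZE output Robert asks for is what would show where the time actually goes:

-- NOT EXISTS form:
SELECT b.id
FROM big_table b
WHERE NOT EXISTS (SELECT 1 FROM other_table o WHERE o.id = b.id);

-- Equivalent LEFT JOIN ... IS NULL form:
SELECT b.id
FROM big_table b
LEFT JOIN other_table o ON o.id = b.id
WHERE o.id IS NULL;

-- What the thread asks for before any diagnosis:
EXPLAIN ANALYZE
SELECT b.id
FROM big_table b
WHERE NOT EXISTS (SELECT 1 FROM other_table o WHERE o.id = b.id);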
[ { "msg_contents": "Hello,\n\n \n\nWe are using PostgreSQL 9.0.3, compiled by Visual C++ build 1500,\n32-bit, installed on Windows 2003 R2 32-bit.\n\n \n\nWe have an 'aisposition' table used for a GPS tracking application,\ncontaining ~30 million rows and a number of indexes. Two of these are:\n\n \n\nidx_receiveddatetime: indexes aisposition(receiveddatetime timestamp)\n\n \n\nidx_userid_receiveddatetime: indexes aisposition(userid int4 desc,\nreceiveddatetime timestamp desc)\n\n \n\nThe problem we get is that the following query is taking many minutes to\nrun:\n\n \n\nselect * from aisposition where userid = 311369000 order by userid desc,\nreceiveddatetime desc limit 1\n\n \n\nWhen we 'EXPLAIN' this query, PostgreSQL says it is using the index\nidx_receiveddatetime. The way the application is designed means that in\nvirtually all cases the query will have to scan a very long way into\nidx_receiveddatetime to find the first record where userid = 311369000.\nIf however we delete the idx_receiveddatetime index, the query uses the\nidx_userid_receiveddatetime index, and the query only takes a few\nmilliseconds.\n\n \n\nThe EXPLAIN ANALYZE output with idx_receiveddatetime in place is:\n\n \n\nLimit (cost=0.00..1.30 rows=1 width=398) (actual\ntime=1128097.540..1128097.541 rows=1 loops=1)\n\n -> Index Scan Backward using idx_receiveddatetime on aisposition\n(cost=0.00..2433441.05 rows=1875926 width=398) (actual\ntime=1128097.532..1128097.532 rows=1 loops=1)\n\n Filter: (userid = 311369000)\n\nTotal runtime: 1128097.609 ms\n\n \n\nAnd with that index deleted:\n\n \n\nLimit (cost=0.00..4.01 rows=1 width=398) (actual time=60.633..60.634\nrows=1 loops=1)\n\n -> Index Scan using idx_userid_receiveddatetime on aisposition\n(cost=0.00..7517963.47 rows=1875926 width=398) (actual\ntime=60.629..60.629 rows=1 loops=1)\n\n Index Cond: (userid = 311369000)\n\nTotal runtime: 60.736 ms\n\n \n\nWe would obviously prefer PostgreSQL to use the\nidx_userid_receiveddatetime index in all cases, because we know that\nthis will guarantee results in a timely manner, whereas using\nidx_receiveddatetime will usually require a scan of much of the table\nand our application will not work. What are we doing wrong?\n\n \n\nCheers now,\n\nJohn\n\n\nHello, We are using PostgreSQL 9.0.3, compiled by Visual C++ build 1500, 32-bit, installed on Windows 2003 R2 32-bit. We have an ‘aisposition’ table used for a GPS tracking application, containing ~30 million rows and a number of indexes.  Two of these are: idx_receiveddatetime: indexes aisposition(receiveddatetime timestamp) idx_userid_receiveddatetime: indexes aisposition(userid int4 desc, receiveddatetime timestamp desc) The problem we get is that the following query is taking many minutes to run: select * from aisposition where userid = 311369000 order by userid desc, receiveddatetime desc limit 1 When we ‘EXPLAIN’ this query, PostgreSQL says it is using the index idx_receiveddatetime.  The way the application is designed means that in virtually all cases the query will have to scan a very long way into idx_receiveddatetime to find the first record where userid = 311369000.  If however we delete the idx_receiveddatetime index, the query uses the idx_userid_receiveddatetime index, and the query only takes a few milliseconds. 
The EXPLAIN ANALYZE output with idx_receiveddatetime in place is: Limit  (cost=0.00..1.30 rows=1 width=398) (actual time=1128097.540..1128097.541 rows=1 loops=1)  ->  Index Scan Backward using idx_receiveddatetime on aisposition  (cost=0.00..2433441.05 rows=1875926 width=398) (actual time=1128097.532..1128097.532 rows=1 loops=1)        Filter: (userid = 311369000)Total runtime: 1128097.609 ms And with that index deleted: Limit  (cost=0.00..4.01 rows=1 width=398) (actual time=60.633..60.634 rows=1 loops=1)  ->  Index Scan using idx_userid_receiveddatetime on aisposition  (cost=0.00..7517963.47 rows=1875926 width=398) (actual time=60.629..60.629 rows=1 loops=1)        Index Cond: (userid = 311369000)Total runtime: 60.736 ms We would obviously prefer PostgreSQL to use the idx_userid_receiveddatetime index in all cases, because we know that this will guarantee results in a timely manner, whereas using idx_receiveddatetime will usually require a scan of much of the table and our application will not work.  What are we doing wrong? Cheers now,John", "msg_date": "Sat, 12 Mar 2011 10:07:41 -0000", "msg_from": "\"John Surcombe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Planner wrongly shuns multi-column index for select .. order by col1,\n\tcol2 limit 1" }, { "msg_contents": "\"John Surcombe\" <[email protected]> writes:\n> When we 'EXPLAIN' this query, PostgreSQL says it is using the index\n> idx_receiveddatetime. The way the application is designed means that in\n> virtually all cases the query will have to scan a very long way into\n> idx_receiveddatetime to find the first record where userid = 311369000.\n> If however we delete the idx_receiveddatetime index, the query uses the\n> idx_userid_receiveddatetime index, and the query only takes a few\n> milliseconds.\n\nThat's just bizarre ... it knows the index is applicable, and the cost\nestimates clearly favor the better index, so why did it pick the worse\none?\n\nI tried to duplicate this locally, without success, so there's some\ncontributing factor you've neglected to mention. Can you put together a\nself-contained test case that acts like this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Mar 2011 12:24:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner wrongly shuns multi-column index for select .. order by\n\tcol1, col2 limit 1" }, { "msg_contents": "I wrote:\n> \"John Surcombe\" <[email protected]> writes:\n>> When we 'EXPLAIN' this query, PostgreSQL says it is using the index\n>> idx_receiveddatetime. The way the application is designed means that in\n>> virtually all cases the query will have to scan a very long way into\n>> idx_receiveddatetime to find the first record where userid = 311369000.\n>> If however we delete the idx_receiveddatetime index, the query uses the\n>> idx_userid_receiveddatetime index, and the query only takes a few\n>> milliseconds.\n\n> That's just bizarre ... it knows the index is applicable, and the cost\n> estimates clearly favor the better index, so why did it pick the worse\n> one?\n\nNo, scratch that, I misread the plans. It *is* picking the plan it\nthinks has lower cost; it's just a mistaken cost estimate. It's strange\nthough that the less selective indexscan is getting a lower cost\nestimate. I wonder whether your table is (almost) perfectly ordered by\nreceiveddatetime, such that the one-column index has correlation close\nto 1.0. 
That could possibly lower the cost estimate to the point where\nit'd appear to dominate the other index. It'd be useful to see the\npg_stats.correlation value for both the userid and receiveddatetime\ncolumns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Mar 2011 18:10:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner wrongly shuns multi-column index for select .. order by\n\tcol1, col2 limit 1" }, { "msg_contents": "> >> When we 'EXPLAIN' this query, PostgreSQL says it is using the index\n> >> idx_receiveddatetime. The way the application is designed means\nthat\n> >> in virtually all cases the query will have to scan a very long way\n> >> into idx_receiveddatetime to find the first record where userid =\n> 311369000.\n> >> If however we delete the idx_receiveddatetime index, the query uses\n> >> the idx_userid_receiveddatetime index, and the query only takes a\nfew\n> >> milliseconds.\n> \n> > That's just bizarre ... it knows the index is applicable, and the\ncost\n> > estimates clearly favor the better index, so why did it pick the\nworse\n> > one?\n> \n> No, scratch that, I misread the plans. It *is* picking the plan it\nthinks has\n> lower cost; it's just a mistaken cost estimate. It's strange though\nthat the less\n> selective indexscan is getting a lower cost estimate. I wonder\nwhether your\n> table is (almost) perfectly ordered by receiveddatetime, such that the\none-\n> column index has correlation close to 1.0. That could possibly lower\nthe cost\n> estimate to the point where it'd appear to dominate the other index.\nIt'd be\n> useful to see the pg_stats.correlation value for both the userid and\n> receiveddatetime columns.\n\nYes, the table is indeed nearly perfectly ordered by receiveddatetime\n(correlation 0.998479). correlation on userid is -0.065556. n_distinct\non userid is also low: 1097.\n\nIs the problem perhaps something like the following: PostgreSQL is\nthinking that because there are not many userids and there is low\ncorrelation, that if it just scans the table from the top in date order,\nthis will be cheap (because receiveddatetime correlation is high so it\nwon't have to seek randomly), and it won't have to scan very far before\nit finds the first row with a matching userid.\n\nThe problem is though that in our application the userids are not\nscattered randomly. There are small regions of the table where a\nparticular userid appears frequently, interspersed with much larger\nregions (perhaps millions or tens of millions of rows) where it doesn't\nappear at all. So in fact the planner's preferred solution is often\npathologically bad.\n\nIs there a solution?\n", "msg_date": "Mon, 14 Mar 2011 09:37:40 -0000", "msg_from": "\"John Surcombe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner wrongly shuns multi-column index for select .. order by\n\tcol1, col2 limit 1" }, { "msg_contents": "\"John Surcombe\" <[email protected]> writes:\n>> It'd be\n>> useful to see the pg_stats.correlation value for both the userid and\n>> receiveddatetime columns.\n\n> Yes, the table is indeed nearly perfectly ordered by receiveddatetime\n> (correlation 0.998479). correlation on userid is -0.065556. 
n_distinct\n> on userid is also low: 1097.\n\nAh-hah.\n\n> Is the problem perhaps something like the following: PostgreSQL is\n> thinking that because there are not many userids and there is low\n> correlation, that if it just scans the table from the top in date order,\n> this will be cheap (because receiveddatetime correlation is high so it\n> won't have to seek randomly), and it won't have to scan very far before\n> it finds the first row with a matching userid.\n\nThere's some of that, but I think the main problem is that there's a\nvery high discount on the cost estimate for a perfectly-correlated\nindex, and that makes it end up looking cheaper to use than the\nuncorrelated one. (It doesn't help any that we don't do correlation\nproperly for multicolumn indexes; but given what you say above, the\ncorrelation estimate for the two-column index would be small even if\nwe'd computed it exactly.)\n\nYou might find that reducing random_page_cost would avoid the problem.\nThat should reduce the advantage conferred on the high-correlation\nindex, and it probably would represent your actual configuration better\nanyway, given the results you're showing here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2011 13:24:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner wrongly shuns multi-column index for select .. order by\n\tcol1, col2 limit 1" } ]
[ { "msg_contents": "The row estimate is way off. Is autovacuum disabled?", "msg_date": "Sun, 13 Mar 2011 12:21:47 +0000", "msg_from": "Jeremy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner wrongly shuns multi-column index for select\n\t.. order by col1, col2 limit 1" } ]
[ { "msg_contents": "Nothing?\n\nNo ideas?\n\nDid I forget to include some useful bit?\n\nOn Fri, Mar 4, 2011 at 8:22 PM, Claudio Freire <[email protected]> wrote:\n> Hello, first post to this list.\n>\n> I have this query that ran in milliseconds in postgres 8.3.7 (usually 50,\n> 100ms), and now it takes a full 10 minutes to complete.\n>\n> I tracked the problem to the usage of hash aggregates to resolve EXISTS\n> clauses in conjunction with large IN clauses, which seem to reverse the\n> execution plan - in 8.3.7, it would use indices to fetch the rows from the\n> IN, then compute the exists with a nested loop, never doing big sequential\n> scans. In 9.0.3, it computes the set of applicable entries with a hash\n> aggregate, but in order to do that it performs a huge index scan - no\n> sequential scans either, but the big index scan is worse.\n>\n> 9.0.3 always misses the estimate of how many rows will come out the hash\n> aggregate, always estimating 200, while in fact the real count is more like\n> 300.000. I've tried increasing statistics in all the columns involved, up to\n> 4000 for each, to the point where it accurately estimates the input to the\n> hash agg, but the output is always estimated to be 200 rows.\n>\n> Rewriting the query to use 0 < (select count(*)..) instead of EXISTS (select\n> * ..) does revert to the old postgres 8.3 plan, although intuitively I would\n> think it to be sub-optimal.\n>\n> The tables in question receive many updates, but never in such a volume as\n> to create enough bloat - plus, the tests I've been running are on a\n> pre-production server without much traffic (so not many updates - probably\n> none in weeks).\n>\n> The server is a Core 2 E7400 dual core with 4GB of ram running linux and a\n> pg 9.0.3 / 8.3.7 (both there, doing migration testing) built from source.\n> Quite smaller than our production server, but I've tested the issue on\n> higher-end hardware and it produces the same results.\n>\n> Any ideas as to how to work around this issue?\n>\n> I can't plug the select count() version everywhere, since I won't be using\n> this form of the query every time (it's generated programatically with an\n> ORM), and some forms of it perform incredibly worse with the select count().\n>\n> Also, any help I can provide to fix it upstream I'll be glad to - I believe\n> (I would have to check) I can even create a dump of the tables (stripping\n> sensitive info of course) - only, well, you'll see the size below - a tad\n> big to be mailing it ;-)\n>\n> pg 9.0 is configured with:\n>\n> work_mem = 64M\n> shared_buffers = 512M\n> temp_buffers = 64M\n> effective_cache_size = 128M\n>\n> pg 8.3.7 is configured with:\n>\n> work_mem = 64M\n> shared_buffers = 100M\n> temp_buffers = 64M\n> effective_cache_size = 128M\n>\n>\n> The query in question:\n>\n>> SELECT member_statistics.member_id\n>>         FROM member_statistics\n>>         WHERE member_statistics.member_id IN ( <<400 ids>> ) AND (EXISTS\n>> (SELECT mat1.tag_id\n>>         FROM member_all_tags_v AS mat1\n>>         WHERE mat1.member_id = member_statistics.member_id AND mat1.tag_id\n>> IN (640, 641, 3637, 3638, 637, 638, 639) AND mat1.polarity >= 90))\n>>\n>> -- View: member_all_tags_v\n>>\n>> CREATE OR REPLACE VIEW member_all_tags_v AS\n>>          SELECT member_tags.member_id, member_tags.last_modification_date,\n>> member_tags.polarity, member_tags.tag_id, 'mr' AS source\n>>            FROM member_tags\n>> UNION ALL\n>>          SELECT member_profile_tags.member_id,\n>> 
member_profile_tags.last_modification_date, member_profile_tags.polarity,\n>> member_profile_tags.tag_id, 'profile' AS source\n>>            FROM member_profile_tags;\n>>\n>> -- Table: member_profile_tags\n>>\n>> -- DROP TABLE member_profile_tags;\n>>\n>> CREATE TABLE member_profile_tags\n>> (\n>>   member_id integer NOT NULL,\n>>   last_modification_date timestamp without time zone NOT NULL,\n>>   polarity smallint NOT NULL,\n>>   tag_id integer NOT NULL,\n>>   CONSTRAINT member_profile_tags_pkey PRIMARY KEY (member_id, tag_id),\n>>   CONSTRAINT fka52b6e7491ac9123 FOREIGN KEY (tag_id)\n>>       REFERENCES tags (id) MATCH SIMPLE\n>>       ON UPDATE NO ACTION ON DELETE NO ACTION\n>> )\n>> WITH (\n>>   OIDS=FALSE\n>> );\n>>\n>> -- Index: idx_member_profile_tags_tag_id\n>> CREATE INDEX idx_member_profile_tags_tag_id\n>>   ON member_profile_tags\n>>   USING btree\n>>   (tag_id);\n>>\n>>\n>> -- Table: member_tags\n>>\n>> -- DROP TABLE member_tags;\n>> CREATE TABLE member_tags\n>> (\n>>   member_id integer NOT NULL,\n>>   last_modification_date timestamp without time zone NOT NULL,\n>>   polarity smallint NOT NULL,\n>>   tag_id integer NOT NULL,\n>>   CONSTRAINT member_tags_pkey PRIMARY KEY (member_id, tag_id),\n>>   CONSTRAINT fk527ef29e91ac9123 FOREIGN KEY (tag_id)\n>>       REFERENCES tags (id) MATCH SIMPLE\n>>       ON UPDATE NO ACTION ON DELETE NO ACTION\n>> )\n>> WITH (\n>>   OIDS=FALSE\n>> );\n>>\n>> -- Index: idx_member_tags_tag_id\n>> CREATE INDEX idx_member_tags_tag_id\n>>   ON member_tags\n>>   USING btree\n>>   (tag_id);\n>\n>\n> member_tags : 637M bytes, 12.7M rows\n> member_profile_tags : 1824M bytes, 36.6M rows\n> member_statistics : 581M bytes, 2.5M rows\n>\n> member_profile_tags_pkey : 785M\n> member_tags_pkey : 274M\n> member_statistics_pkey : 54M\n>\n> idx_member_tags_tag_id : 274M\n> idx_member_profile_tags_tag_id : 785M\n>\n> member_tags.member_id : 1.217.000 distinct values, mostly evenly spread\n> member_profile_tags.member_id : 947.000 distinct values, mostly evenly\n> spread\n> member_tags.tag_id : 1176 distinct values, some bias towards some values,\n> but mostly well spread\n> member_profile_tags.tag_id : 1822 distinct values, some bias towards some\n> values, but mostly well spread\n>\n>\n> Execution plan for postgresql 8.3.7 - with EXISTS :\n> (second run, so it's hitting the disk cache)\n>\n>>  Bitmap Heap Scan on member_statistics  (cost=2438.19..26177.46 rows=200\n>> width=4) (actual time=2.442..15.515 rows=256 loops=1)\n>>    Recheck Cond: (member_id = ANY ('{<400 ids>}'::integer[]))\n>>    Filter: (subplan)\n>>    ->  Bitmap Index Scan on member_statistics_pkey  (cost=0.00..2438.14\n>> rows=401 width=0) (actual time=2.105..2.105 rows=342 loops=1)\n>>          Index Cond: (member_id = ANY ('{<400 ids>}'::integer[]))\n>>    SubPlan\n>>      ->  Subquery Scan mat1  (cost=45.50..280.61 rows=31 width=4) (actual\n>> time=0.036..0.036 rows=1 loops=341)\n>>            ->  Append  (cost=45.50..280.30 rows=31 width=18) (actual\n>> time=0.036..0.036 rows=1 loops=341)\n>>                  ->  Subquery Scan \"*SELECT* 1\"  (cost=45.50..63.50 rows=3\n>> width=18) (actual time=0.021..0.021 rows=0 loops=341)\n>>                        ->  Bitmap Heap Scan on member_tags\n>> (cost=45.50..63.47 rows=3 width=18) (actual time=0.020..0.020 rows=0\n>> loops=341)\n>>                              Recheck Cond: ((member_id = $0) AND (tag_id =\n>> ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                              Filter: (polarity >= 90)\n>>                             
 ->  Bitmap Index Scan on member_tags_pkey\n>> (cost=0.00..45.50 rows=3 width=0) (actual time=0.018..0.018 rows=0\n>> loops=341)\n>>                                    Index Cond: ((member_id = $0) AND\n>> (tag_id = ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                  ->  Subquery Scan \"*SELECT* 2\"  (cost=49.48..216.81\n>> rows=28 width=18) (actual time=0.025..0.025 rows=1 loops=192)\n>>                        ->  Bitmap Heap Scan on member_profile_tags\n>> (cost=49.48..216.53 rows=28 width=18) (actual time=0.024..0.024 rows=1\n>> loops=192)\n>>                              Recheck Cond: ((member_id = $0) AND (tag_id =\n>> ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                              Filter: (polarity >= 90)\n>>                              ->  Bitmap Index Scan on\n>> member_profile_tags_pkey  (cost=0.00..49.47 rows=28 width=0) (actual\n>> time=0.023..0.023 rows=1 loops=192)\n>>                                    Index Cond: ((member_id = $0) AND\n>> (tag_id = ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>\n>\n> Execution plan for postgresql 8.3.7 - with count() :\n> (second run, so it's hitting the disk cache)\n>\n>>  Bitmap Heap Scan on member_statistics  (cost=2438.18..117455.15 rows=134\n>> width=4) (actual time=1.478..16.256 rows=256 loops=1)\n>>    Recheck Cond: (member_id = ANY ('{<400 ids>}'::integer[]))\n>>    Filter: (0 < (subplan))\n>>    ->  Bitmap Index Scan on member_statistics_pkey  (cost=0.00..2438.14\n>> rows=401 width=0) (actual time=1.208..1.208 rows=342 loops=1)\n>>          Index Cond: (member_id = ANY ('{<400 ids>}'::integer[]))\n>>    SubPlan\n>>      ->  Aggregate  (cost=280.69..280.70 rows=1 width=4) (actual\n>> time=0.042..0.042 rows=1 loops=341)\n>>            ->  Append  (cost=45.50..280.30 rows=31 width=18) (actual\n>> time=0.029..0.041 rows=1 loops=341)\n>>                  ->  Subquery Scan \"*SELECT* 1\"  (cost=45.50..63.50 rows=3\n>> width=18) (actual time=0.017..0.018 rows=0 loops=341)\n>>                        ->  Bitmap Heap Scan on member_tags\n>> (cost=45.50..63.47 rows=3 width=18) (actual time=0.016..0.016 rows=0\n>> loops=341)\n>>                              Recheck Cond: ((member_id = $0) AND (tag_id =\n>> ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                              Filter: (polarity >= 90)\n>>                              ->  Bitmap Index Scan on member_tags_pkey\n>> (cost=0.00..45.50 rows=3 width=0) (actual time=0.015..0.015 rows=0\n>> loops=341)\n>>                                    Index Cond: ((member_id = $0) AND\n>> (tag_id = ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                  ->  Subquery Scan \"*SELECT* 2\"  (cost=49.48..216.81\n>> rows=28 width=18) (actual time=0.023..0.023 rows=1 loops=341)\n>>                        ->  Bitmap Heap Scan on member_profile_tags\n>> (cost=49.48..216.53 rows=28 width=18) (actual time=0.021..0.021 rows=1\n>> loops=341)\n>>                              Recheck Cond: ((member_id = $0) AND (tag_id =\n>> ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                              Filter: (polarity >= 90)\n>>                              ->  Bitmap Index Scan on\n>> member_profile_tags_pkey  (cost=0.00..49.47 rows=28 width=0) (actual\n>> time=0.020..0.020 rows=1 loops=341)\n>>                                    Index Cond: ((member_id = $0) AND\n>> (tag_id = ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>\n> So count or exists it's the same.\n>\n>\n> Execution plan for postgresql 9.0.3 - with 
count() :\n> (second run, so it hits the disk cache)\n>\n>\n>>\n>>  Bitmap Heap Scan on member_statistics  (cost=1664.07..32728.36 rows=134\n>> width=4) (actual time=1.931..32.529 rows=292 loops=1)\n>>    Recheck Cond: (member_id = ANY ('{<400 ids>}'::integer[]))\n>>    Filter: (0 < (SubPlan 1))\n>>    ->  Bitmap Index Scan on member_statistics_pkey  (cost=0.00..1664.03\n>> rows=401 width=0) (actual time=1.555..1.555 rows=401 loops=1)\n>>          Index Cond: (member_id = ANY\n>> ('{159854,159854,32002710,146073,47034441,170998431,126544544,106848929,51215963,168108711,187048,158374569,148139,44044975,154860208,47770056,47823538,74250935,193208,128981,165102267,169419454,171518656,160916161,176834,182057667,137687748,160390262,570059,129741,129744,163637969,161163988,153832149,130312,192527065,127707,154557148,117472,160652001,127009507,170990308,50784999,193598184,183378665,140992296,52810482,151150,167910137,75057914,169769724,137658154,155280126,169863637,177933057,129653508,170738438,156801,108385032,181001,703242,177927538,145672301,142097,169247875,79110932,187384604,145810205,202885,170990369,158835492,158487527,36492073,178043690,40500011,205618,118250288,178021169,180091698,166707,166350922,150842169,184523578,46750524,43276426,164671,138048,166390593,129525899,169495369,160751415,125332301,113503054,183145296,99724114,121683,182793045,178037590,193551192,178295641,184224603,82117469,52878175,135213920,201570,177871715,159755,178455356,44462011,126577519,154833776,129670001,129906,188563,154484,197493,173958,47784176,53940031,149611388,136064,183827330,166173573,163649,169399788,205706,138124,8375182,160587235,194335635,129084308,144277,59081622,113560,183195,129508252,170139,197541,161352615,154537,183014316,191974318,125635503,183845810,78334900,170116007,165262264,148935615,32534347,126806981,172936135,170150857,148427,153832398,101419987,187423701,44440534,139316185,211930,182936539,127546338,43897827,132069,153447398,178228199,40762344,119785,46990314,128082923,207853,193338353,197618,51626995,154063860,177007606,176865,202860,154407934,158417322,154296832,161092610,178080772,177953797,187513176,169404588,160724823,177941517,137352910,176510987,197650,522414,148502,169894745,181746180,48571418,183717039,181276,197661,123935,178088992,187612193,192752674,183469095,199721,47033387,125592620,192588845,177929264,58709043,192717878,49531959,178913,207370,179381,195648,192724034,192545291,149363781,184595528,148689993,144460,177947725,183736845,178007419,164948,160008482,159790607,160191580,156787805,170974303,145243232,154351120,158820,119910,52757612,182338672,121970,186621,203893,160724086,156334768,158469653,117889,183762051,138774,156614507,195726,43654288,177921170,178128022,197783,169487513,197786,187298671,188828828,183865413,165201056,192561636,168168615,126151,124189868,115886,178216111,182873264,192599219,156800180,135009461,192641032,192529268,184554685,136060779,178005185,148389058,164043,32523462,185543,180718799,155403472,166180387,156478676,164314325,8379920,187299031,140504,173273636,193191119,154330335,140087504,56446178,93514979,195812,139384038,152807,187403497,130283,183084268,47942893,44846319,115953,167818483,171025620,158375162,125411199,139319850,181947650,161244419,165209349,51082504,162387211,153805357,183655696,153851154,175314196,137604313,158342426,197917,182856581,171296,204065,113883,182990726,152871,148815145,154998058,180477228,144537138,152879,120112,182039091,156349748,112567,143458553,178107708,45284670,177884725,150820160,125695297,169819463,154
498377,189770,124235,47764812,8338658,178265422,125013327,171019601,154787154,236884,47089209,138584,178009433,184524122,132613470,154213727,118112,164918105,126104932,142845158,178056551,177857896,148640999,178046316,126318,151584111,184473832,178205592,184227190,154178935,187748684,125523322,154664315,46991594,154146174,187430273,150857090,154219907,171074949,178077062,160212361,41971083,146929,176596368,183045016,150822290,165059988,163717,178029807,140019102,194117,107615649,193541019,127933340,51142059,189869,153855406,184212914,50566580,172972617,172503347,191762681,165557692}'::integer[]))\n>>    SubPlan 1\n>>      ->  Aggregate  (cost=73.17..73.18 rows=1 width=4) (actual\n>> time=0.070..0.071 rows=1 loops=400)\n>>            ->  Append  (cost=31.23..73.15 rows=2 width=18) (actual\n>> time=0.047..0.065 rows=1 loops=400)\n>>                  ->  Subquery Scan on \"*SELECT* 1\"  (cost=31.23..35.26\n>> rows=1 width=18) (actual time=0.025..0.027 rows=0 loops=400)\n>>                        ->  Bitmap Heap Scan on member_tags\n>> (cost=31.23..35.25 rows=1 width=18) (actual time=0.022..0.022 rows=0\n>> loops=400)\n>>                              Recheck Cond: ((member_id = $0) AND (tag_id =\n>> ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                              Filter: (polarity >= 90)\n>>                              ->  Bitmap Index Scan on member_tags_pkey\n>> (cost=0.00..31.23 rows=1 width=0) (actual time=0.018..0.018 rows=0\n>> loops=400)\n>>                                    Index Cond: ((member_id = $0) AND\n>> (tag_id = ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                  ->  Subquery Scan on \"*SELECT* 2\"  (cost=33.85..37.88\n>> rows=1 width=18) (actual time=0.029..0.031 rows=1 loops=400)\n>>                        ->  Bitmap Heap Scan on member_profile_tags\n>> (cost=33.85..37.87 rows=1 width=18) (actual time=0.025..0.026 rows=1\n>> loops=400)\n>>                              Recheck Cond: ((member_id = $0) AND (tag_id =\n>> ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>>                              Filter: (polarity >= 90)\n>>                              ->  Bitmap Index Scan on\n>> member_profile_tags_pkey  (cost=0.00..33.85 rows=1 width=0) (actual\n>> time=0.022..0.022 rows=1 loops=400)\n>>                                    Index Cond: ((member_id = $0) AND\n>> (tag_id = ANY ('{640,641,3637,3638,637,638,639}'::integer[])))\n>\n>\n> Execution plan for postgresql 9.0.3 - with EXISTS :\n> (second run, so it \"would hit\" the disk cache - only not because there's not\n> enough RAM)\n>\n>>  Nested Loop  (cost=278457.13..279957.12 rows=200 width=4) (actual\n>> time=65631.381..607728.817 rows=292 loops=1)\n>>    ->  HashAggregate  (cost=278457.13..278459.13 rows=200 width=4) (actual\n>> time=64505.008..65078.142 rows=306596 loops=1)\n>>          ->  Append  (cost=807.95..276438.85 rows=161462 width=18) (actual\n>> time=562.372..63665.663 rows=345836 loops=1)\n>>                ->  Subquery Scan on \"*SELECT* 1\"  (cost=807.95..71891.14\n>> rows=41738 width=18) (actual time=562.368..14646.508 rows=95514 loops=1)\n>>                      ->  Bitmap Heap Scan on member_tags\n>> (cost=807.95..71473.76 rows=41738 width=18) (actual time=562.364..14402.566\n>> rows=95514 loops=1)\n>>                            Recheck Cond: (tag_id = ANY\n>> ('{640,641,3637,3638,637,638,639}'::integer[]))\n>>                            Filter: (polarity >= 90)\n>>                            ->  Bitmap Index Scan on\n>> idx_member_tags_tag_id  
(cost=0.00..797.52 rows=42448 width=0) (actual\n>> time=529.863..529.863 rows=95577 loops=1)\n>>                                  Index Cond: (tag_id = ANY\n>> ('{640,641,3637,3638,637,638,639}'::integer[]))\n>>                ->  Subquery Scan on \"*SELECT* 2\"  (cost=2249.62..204547.71\n>> rows=119724 width=18) (actual time=1073.523..48170.919 rows=250322 loops=1)\n>>                      ->  Bitmap Heap Scan on member_profile_tags\n>> (cost=2249.62..203350.47 rows=119724 width=18) (actual\n>> time=1073.520..47529.880 rows=250322 loops=1)\n>>                            Recheck Cond: (tag_id = ANY\n>> ('{640,641,3637,3638,637,638,639}'::integer[]))\n>>                            Filter: (polarity >= 90)\n>>                            ->  Bitmap Index Scan on\n>> idx_member_profile_tags_tag_id  (cost=0.00..2219.68 rows=119724 width=0)\n>> (actual time=963.341..963.341 rows=250322 loops=1)\n>>                                  Index Cond: (tag_id = ANY\n>> ('{640,641,3637,3638,637,638,639}'::integer[]))\n>>    ->  Index Scan using member_statistics_pkey on member_statistics\n>> (cost=0.00..7.48 rows=1 width=4) (actual time=1.767..1.767 rows=0\n>> loops=306596)\n>>          Index Cond: (member_statistics.member_id = \"*SELECT*\n>> 1\".member_id)\n>>          Filter: (member_statistics.member_id = ANY ('{<400\n>> ids>}'::integer[]))\n>>  Total runtime: 607734.942 ms\n>\n>\n>\n>\n>\n", "msg_date": "Mon, 14 Mar 2011 12:54:56 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance regression from 8.3.7 to 9.0.3" }, { "msg_contents": "On Mon, Mar 14, 2011 at 10:54 AM, Claudio Freire <[email protected]> wrote:\n> Nothing?\n>\n> No ideas?\n>\n> Did I forget to include some useful bit?\n>\n> On Fri, Mar 4, 2011 at 8:22 PM, Claudio Freire <[email protected]> wrote:\n>> Hello, first post to this list.\n>>\n>> I have this query that ran in milliseconds in postgres 8.3.7 (usually 50,\n>> 100ms), and now it takes a full 10 minutes to complete.\n>>\n>> I tracked the problem to the usage of hash aggregates to resolve EXISTS\n>> clauses in conjunction with large IN clauses, which seem to reverse the\n>> execution plan - in 8.3.7, it would use indices to fetch the rows from the\n>> IN, then compute the exists with a nested loop, never doing big sequential\n>> scans. In 9.0.3, it computes the set of applicable entries with a hash\n>> aggregate, but in order to do that it performs a huge index scan - no\n>> sequential scans either, but the big index scan is worse.\n>>\n>> 9.0.3 always misses the estimate of how many rows will come out the hash\n>> aggregate, always estimating 200, while in fact the real count is more like\n>> 300.000. I've tried increasing statistics in all the columns involved, up to\n>> 4000 for each, to the point where it accurately estimates the input to the\n>> hash agg, but the output is always estimated to be 200 rows.\n>>\n>> Rewriting the query to use 0 < (select count(*)..) instead of EXISTS (select\n>> * ..) 
does revert to the old postgres 8.3 plan, although intuitively I would\n>> think it to be sub-optimal.\n>>\n>> The tables in question receive many updates, but never in such a volume as\n>> to create enough bloat - plus, the tests I've been running are on a\n>> pre-production server without much traffic (so not many updates - probably\n>> none in weeks).\n>>\n>> The server is a Core 2 E7400 dual core with 4GB of ram running linux and a\n>> pg 9.0.3 / 8.3.7 (both there, doing migration testing) built from source.\n>> Quite smaller than our production server, but I've tested the issue on\n>> higher-end hardware and it produces the same results.\n>>\n>> Any ideas as to how to work around this issue?\n>>\n>> I can't plug the select count() version everywhere, since I won't be using\n>> this form of the query every time (it's generated programatically with an\n>> ORM), and some forms of it perform incredibly worse with the select count().\n>>\n>> Also, any help I can provide to fix it upstream I'll be glad to - I believe\n>> (I would have to check) I can even create a dump of the tables (stripping\n>> sensitive info of course) - only, well, you'll see the size below - a tad\n>> big to be mailing it ;-)\n>>\n>> pg 9.0 is configured with:\n>>\n>> work_mem = 64M\n>> shared_buffers = 512M\n>> temp_buffers = 64M\n>> effective_cache_size = 128M\n>>\n>> pg 8.3.7 is configured with:\n>>\n>> work_mem = 64M\n>> shared_buffers = 100M\n>> temp_buffers = 64M\n>> effective_cache_size = 128M\n>>\n>>\n>> The query in question:\n>>\n>>> SELECT member_statistics.member_id\n>>>         FROM member_statistics\n>>>         WHERE member_statistics.member_id IN ( <<400 ids>> ) AND (EXISTS\n>>> (SELECT mat1.tag_id\n>>>         FROM member_all_tags_v AS mat1\n>>>         WHERE mat1.member_id = member_statistics.member_id AND mat1.tag_id\n>>> IN (640, 641, 3637, 3638, 637, 638, 639) AND mat1.polarity >= 90))\n\nhm the regression in and of itself is interesting, but I wonder if you\ncan get past your issue like this:\n\nSELECT member_statistics.member_id\n FROM member_statistics\n WHERE member_statistics.member_id IN ( <<400 ids>> ) AND (EXISTS\n (SELECT mat1.tag_id\n FROM member_all_tags_v AS mat1\n WHERE mat1.member_id = member_statistics.member_id AND mat1.tag_id\n IN (640, 641, 3637, 3638, 637, 638, 639) AND mat1.polarity >= 90))\n\nchanges to:\n\nSELECT member_statistics.member_id\n FROM member_statistics\n WHERE EXISTS\n (\n SELECT mat1.tag_id\n FROM member_all_tags_v AS mat1\n WHERE mat1.member_id = member_statistics.member_id\n AND mat1.tag_id\n IN (640, 641, 3637, 3638, 637, 638, 639) AND\nmat1.polarity >= 90\n AND mat1.member_id IN ( <<400 ids>> )\n )\n\nalso, always try to compare vs straight join version:\n\n\nSELECT member_statistics.member_id\n FROM member_statistics\n JOIN VALUES ( <<400 ids>> ) q(member_id) using (member_id)\n JOIN\n (\n SELECT mat1.member_id\n FROM member_all_tags_v AS mat1\n WHERE mat1.tag_id IN (640, 641, 3637, 3638, 637, 638, 639)\n AND mat1.polarity >= 90) p\n USING(member_id)\n ) p using(member_id);\n\nmerlin\n", "msg_date": "Mon, 14 Mar 2011 12:34:31 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression from 8.3.7 to 9.0.3" }, { "msg_contents": "On Mon, Mar 14, 2011 at 12:46 PM, Claudio Freire <[email protected]> wrote:\n> On Mon, Mar 14, 2011 at 2:34 PM, Merlin Moncure <[email protected]> wrote:\n>> changes to:\n>>\n>> SELECT member_statistics.member_id\n>>         FROM member_statistics\n>>         WHERE 
EXISTS\n>>         (\n>>           SELECT mat1.tag_id\n>>           FROM member_all_tags_v AS mat1\n>>           WHERE mat1.member_id = member_statistics.member_id\n>>             AND mat1.tag_id\n>>             IN (640, 641, 3637, 3638, 637, 638, 639) AND\n>> mat1.polarity >= 90\n>>             AND mat1.member_id  IN ( <<400 ids>> )\n>>         )\n>\n> It isn't easy to get the ORM to spit that kind of queries, but I could\n> try them by hand.\n>\n>> also, always try to compare vs straight join version:\n>>\n>>\n>> SELECT member_statistics.member_id\n>>         FROM member_statistics\n>>         JOIN VALUES ( <<400 ids>> ) q(member_id) using (member_id)\n>>         JOIN\n>>         (\n>>           SELECT mat1.member_id\n>>           FROM member_all_tags_v AS mat1\n>>           WHERE mat1.tag_id  IN (640, 641, 3637, 3638, 637, 638, 639)\n>>             AND mat1.polarity >= 90) p\n>>            USING(member_id)\n>>          ) p using(member_id);\n>>\n>> merlin\n>\n> The straight join like that was used long ago, but it replicates rows\n> unacceptably: for each row in the subquery, one copy of member_id is\n> output, which create an unacceptable overhead in the application and\n> network side. It could be perhaps fixed with distinct, but then\n> there's sorting overhead.\n\nah -- right. my mistake. well, you could always work around with\n'distinct', although the exists version should be better (semi vs full\njoin). what options *do* you have in terms of coaxing the ORM to\nproduce particular sql? :-). This is likely 100% work-aroundable via\ntweaking the SQL. I don't have the expertise to suggest a solution\nwith your exact sql, if there is one.\n\nmerlin\n", "msg_date": "Mon, 14 Mar 2011 12:59:59 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression from 8.3.7 to 9.0.3" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> CREATE OR REPLACE VIEW member_all_tags_v AS\n> SELECT member_tags.member_id, member_tags.last_modification_date,\n> member_tags.polarity, member_tags.tag_id, 'mr' AS source\n> FROM member_tags\n> UNION ALL\n> SELECT member_profile_tags.member_id,\n> member_profile_tags.last_modification_date, member_profile_tags.polarity,\n> member_profile_tags.tag_id, 'profile' AS source\n> FROM member_profile_tags;\n\nTry casting those constants to text (or something) explicitly, ie\n'mr'::text AS source etc. I forget at the moment why leaving them as\nunknown literals interferes with optimization, but it does.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2011 14:50:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression from 8.3.7 to 9.0.3 " } ]
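A note on the fix Tom Lane describes above: the UNION ALL view quoted in his reply leaves 'mr' and 'profile' as literals of unknown type, and giving them an explicit type is the whole of the suggested change. A minimal sketch of the revised view definition, using exactly the column list quoted in the thread (only the ::text casts are new):

CREATE OR REPLACE VIEW member_all_tags_v AS
SELECT member_tags.member_id, member_tags.last_modification_date,
       member_tags.polarity, member_tags.tag_id,
       'mr'::text AS source          -- explicit cast, per the suggestion above
  FROM member_tags
UNION ALL
SELECT member_profile_tags.member_id, member_profile_tags.last_modification_date,
       member_profile_tags.polarity, member_profile_tags.tag_id,
       'profile'::text AS source     -- explicit cast, per the suggestion above
  FROM member_profile_tags;

With both branches producing a declared text column instead of an unknown-typed literal, the view should again be eligible for the planner's usual flattening of the UNION ALL, which appears to be what the untyped literals were blocking.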
[ { "msg_contents": "This is postgresql 9.0.3:\n\n\nQuery:\n\nselect\n\tsum(stat_responses) * 100.0 / sum(stat_invites) as stat_response_rate,\n\tsum(real_responses) * 100.0 / sum(real_invites) as real_response_rate\nfrom (\n\tselect\n\t\tms.invites as stat_invites,\n\t\t(select count(*) from invites i join deliveries d on d.id = i.delivery_id\n\t\t\twhere i.member_id = ms.member_id\n\t\t\tand d.recontact_number = 0\n\t\t\tand d.delivery_type = 1) as real_invites,\n\t\tms.responses as stat_responses,\n\t\t(select count(*) from track_logs tl join tracks t on t.id = tl.track_id\n\t\t\twhere t.member_id = ms.member_id\n\t\t\tand t.partner_id is null and t.recontact_number = 0 and\nt.contact_method_id = 1\n\t\t\tand t.delivery_type = 1\n\t\t\tand tl.track_status_id = 10) as real_responses\n\tfrom member_statistics ms\n\tjoin livra_users lu on lu.id = ms.member_id\n\twhere lu.country_id = 2 and lu.is_panelist and lu.confirmed and not\nlu.unregistered\n) as rtab;\n\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=224382.22..225969.27 rows=1 width=12)\n -> Hash Join (cost=88355.09..221837.46 rows=254475 width=12)\n Hash Cond: (ms.member_id = lu.id)\n -> Seq Scan on member_statistics ms (cost=0.00..99539.50\nrows=2511850 width=12)\n -> Hash (cost=85174.15..85174.15 rows=254475 width=4)\n -> Bitmap Heap Scan on livra_users lu\n(cost=14391.40..85174.15 rows=254475 width=4)\n Recheck Cond: (country_id = 2)\n Filter: (is_panelist AND confirmed AND (NOT unregistered))\n -> Bitmap Index Scan on ix_user_state\n(cost=0.00..14327.78 rows=763100 width=0)\n Index Cond: (country_id = 2)\n SubPlan 1\n -> Aggregate (cost=181.25..181.26 rows=1 width=0)\n -> Nested Loop (cost=0.00..181.19 rows=24 width=0)\n -> Index Scan using idx_tracks_partner_id_member_id\non tracks t (cost=0.00..49.83 rows=9 width=8)\n Index Cond: ((partner_id IS NULL) AND (member_id = $0))\n Filter: ((recontact_number = 0) AND\n(contact_method_id = 1) AND (delivery_type = 1))\n -> Index Scan using idx_track_logs_track_id on\ntrack_logs tl (cost=0.00..14.56 rows=3 width=8)\n Index Cond: (tl.track_id = t.id)\n Filter: (tl.track_status_id = 10)\n SubPlan 2\n -> Aggregate (cost=1405.75..1405.76 rows=1 width=0)\n -> Nested Loop (cost=0.00..1405.45 rows=119 width=0)\n -> Index Scan using\nidx_invites_member_id_delivery_id on invites i (cost=0.00..431.03\nrows=119 width=4)\n Index Cond: (member_id = $0)\n -> Index Scan using deliveries_pkey on deliveries d\n(cost=0.00..8.18 rows=1 width=4)\n Index Cond: (d.id = i.delivery_id)\n Filter: ((d.recontact_number = 0) AND\n(d.delivery_type = 1))\n(27 rows)\n\n\nIf you inspect the plan, it's not computing the total expected cost correctly.\n\nThe top \"Aggregate\" node acts as a nested loop on SubPlan 1 & 2, but\nit's only adding the cost of the subplans without regard as to how\nmany iterations it will perform (254475)\n\nExplain analyze didn't end in an hour of runtime, running on a Core2\nwith 4G RAM.\n\nIt's not a big issue to me, I can work around it, but it should\nperhaps be looked into.\n", "msg_date": "Mon, 14 Mar 2011 14:24:31 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in the planner?" 
}, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> This is postgresql 9.0.3:\n> Query:\n\n> select\n> \tsum(stat_responses) * 100.0 / sum(stat_invites) as stat_response_rate,\n> \tsum(real_responses) * 100.0 / sum(real_invites) as real_response_rate\n> from (\n> \tselect\n> \t\tms.invites as stat_invites,\n> \t\t(select count(*) from invites i join deliveries d on d.id = i.delivery_id\n> \t\t\twhere i.member_id = ms.member_id\n> \t\t\tand d.recontact_number = 0\n> \t\t\tand d.delivery_type = 1) as real_invites,\n> \t\tms.responses as stat_responses,\n> \t\t(select count(*) from track_logs tl join tracks t on t.id = tl.track_id\n> \t\t\twhere t.member_id = ms.member_id\n> \t\t\tand t.partner_id is null and t.recontact_number = 0 and\n> t.contact_method_id = 1\n> \t\t\tand t.delivery_type = 1\n> \t\t\tand tl.track_status_id = 10) as real_responses\n> \tfrom member_statistics ms\n> \tjoin livra_users lu on lu.id = ms.member_id\n> \twhere lu.country_id = 2 and lu.is_panelist and lu.confirmed and not\n> lu.unregistered\n> ) as rtab;\n\n> The top \"Aggregate\" node acts as a nested loop on SubPlan 1 & 2, but\n> it's only adding the cost of the subplans without regard as to how\n> many iterations it will perform (254475)\n\nHmm, interesting. The reason is that it's computing the cost of the\noutput SELECT list on the basis of the number of times that select list\nwill be evaluated, ie, once. But the aggregate function argument\nexpressions will be evaluated more times than that. Most of the time an\naggregate is applied to something trivial like a Var reference, so\nnobody's noticed that the cost of its input expression is underestimated.\n\n> Explain analyze didn't end in an hour of runtime, running on a Core2\n> with 4G RAM.\n\nA better estimate isn't going to make that go any faster :-(.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Mar 2011 20:56:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in the planner? " } ]
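Claudio mentions he can work around the mis-costed plan; one possible shape for such a workaround (not taken from the thread, shown only as an untested sketch against the tables and filters in the query above) is to pre-aggregate the two correlated subselects and join them in by member_id, so each underlying table is scanned and aggregated once instead of once per output row of the outer query:

-- Sketch only: assumes the same table and column names as the original query,
-- and that members with no matching invites or responses should count as zero.
SELECT
    sum(ms.responses) * 100.0 / sum(ms.invites)                 AS stat_response_rate,
    sum(coalesce(rr.cnt, 0)) * 100.0 / sum(coalesce(ri.cnt, 0)) AS real_response_rate
FROM member_statistics ms
JOIN livra_users lu ON lu.id = ms.member_id
LEFT JOIN (SELECT i.member_id, count(*) AS cnt
             FROM invites i
             JOIN deliveries d ON d.id = i.delivery_id
            WHERE d.recontact_number = 0 AND d.delivery_type = 1
            GROUP BY i.member_id) ri ON ri.member_id = ms.member_id
LEFT JOIN (SELECT t.member_id, count(*) AS cnt
             FROM track_logs tl
             JOIN tracks t ON t.id = tl.track_id
            WHERE t.partner_id IS NULL AND t.recontact_number = 0
              AND t.contact_method_id = 1 AND t.delivery_type = 1
              AND tl.track_status_id = 10
            GROUP BY t.member_id) rr ON rr.member_id = ms.member_id
WHERE lu.country_id = 2 AND lu.is_panelist AND lu.confirmed AND NOT lu.unregistered;

Whether this is actually faster depends on how selective the livra_users filter is compared with scanning invites and track_logs once each, but it at least gives the planner ordinary hash or merge joins to cost instead of a per-row SubPlan whose repetition the cost model ignores.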
[ { "msg_contents": "hi all,\n\nSetup:\nSparc, Solaris 10, Postgres 9.0.2, using streaming replication and hot\nstandby. 1 master 1 slave\n\nEverything works fine (w.r.t replication), but the pg_xlog size grows\ncontinuously, though i had no operations going on. Also the archiving to the\nother side filled up the other side FS.\nls -l /var/postgres/data/pg_xlog | wc -l\n103\nAt start, there were only 15 files. The max_wal_segments is 32, but not sure\nwhy iam seeing 103 files. Also the archiving dir size doubled (w.r.t number\nof files archived). and filled up the filesystem.\nI manually logged into postgres and run checkpoint; did not see any file\nreduction\n\nPasting some of the relevant conf values\nThe checkpoint_segments is commented out, so default to 3 segments right?\n\npostgresql.conf\n-----------------------\nwal_level = hot_standby\ncheckpoint_warning = 30s\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB\neach\n#checkpoint_timeout = 5min # range 30s-1h\n#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -\n1.0\n\narchive_mode = on # allows archiving to be done\n # (change requires restart)\narchive_command = 'cp %p /var/postgres/walfiles/%f' # this dir\nis NFS mounted dir on the slave node\narchive_timeout = 30\n\nmax_wal_senders = 5\nwal_keep_segments = 32\n\nhot_standby = on\n\ntrack_counts = on\nautovacuum = on\n\n-----------------------------------------\nI want to ensure it only keeps up to certain number of files, not keep on\ngrowing and filling up my filesystem and requiring manual intervention.\nAppreciate any tips or pointers to documentation. Thanks in advance. I\nlooked in the archives, but one user had this problem because he had the\nwal_keep_segments set to 300 (whereas i have it only at 32)\n\nhi all,Setup:Sparc, Solaris 10, Postgres 9.0.2, using streaming replication and hot standby. 1 master 1 slaveEverything works fine (w.r.t replication), but the pg_xlog size grows continuously, though i had no operations going on. Also the archiving to the other side filled up the other side FS.\n\nls -l /var/postgres/data/pg_xlog | wc -l 103 At start, there were only 15 files. The max_wal_segments is 32, but not sure why iam seeing 103 files. Also the archiving dir size doubled (w.r.t number of files archived). and filled up the filesystem. \nI manually logged into postgres and run checkpoint; did not see any file reduction \nPasting some of the relevant conf valuesThe checkpoint_segments is commented out, so default to 3 segments right?postgresql.conf-----------------------wal_level = hot_standby checkpoint_warning = 30s   \n\n#checkpoint_segments = 3                # in logfile segments, min 1, 16MB each#checkpoint_timeout = 5min              # range 30s-1h#checkpoint_completion_target = 0.5     # checkpoint target duration, 0.0 - 1.0\narchive_mode = on               # allows archiving to be done                                # (change requires restart)archive_command = 'cp %p /var/postgres/walfiles/%f'             #  this dir is NFS mounted dir on the slave node\n\narchive_timeout = 30           max_wal_senders = 5      wal_keep_segments = 32hot_standby = on   track_counts = onautovacuum = on  -----------------------------------------I want to ensure it only keeps up to certain number of files, not keep on growing and filling up my filesystem and requiring manual intervention. Appreciate any tips or pointers to documentation. Thanks in advance. 
I looked in the archives, but one user had this problem because he had the wal_keep_segments set to 300 (whereas i have it only at 32)", "msg_date": "Tue, 15 Mar 2011 11:09:19 -0400", "msg_from": "Tech Madhu <[email protected]>", "msg_from_op": true, "msg_subject": "pg_xlog size" }, { "msg_contents": "Em 15-03-2011 12:09, Tech Madhu escreveu:\n\n[This is not a performance question, next time post at the appropriate list, \nthat is -general]\n\n> Everything works fine (w.r.t replication), but the pg_xlog size grows\n> continuously, though i had no operations going on. Also the archiving to\n> the other side filled up the other side FS.\n> ls -l /var/postgres/data/pg_xlog | wc -l\n> 103\nDid you consider using pg_archivecleanup [1]?\n\n> At start, there were only 15 files. The max_wal_segments is 32, but not\n> sure why iam seeing 103 files. Also the archiving dir size doubled\n> (w.r.t number of files archived). and filled up the filesystem.\n> I manually logged into postgres and run checkpoint; did not see any file\n> reduction\n>\nmax_wal_segments [2] is *not* related to archiving activity.\n\n\n[1] http://www.postgresql.org/docs/9.0/static/pgarchivecleanup.html\n[2] \nhttp://www.postgresql.org/docs/9.0/static/runtime-config-wal.html#GUC-WAL-KEEP-SEGMENTS\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Wed, 16 Mar 2011 14:27:36 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog size" }, { "msg_contents": "Thank you. I had pg_archivecleanup added in recovery.conf, but on second\nlook had a typo in the archive dir path. After this change in recovery.conf\nand postgres restart, its fine now. Once my archive dir got cleaned up , i\nnoticed my /var/postgres/data/pg_xlog dir on master also got cleaned up\n\nOn Wed, Mar 16, 2011 at 1:27 PM, Euler Taveira de Oliveira <\[email protected]> wrote:\n\n> Em 15-03-2011 12:09, Tech Madhu escreveu:\n>\n> [This is not a performance question, next time post at the appropriate\n> list, that is -general]\n>\n>\n> Everything works fine (w.r.t replication), but the pg_xlog size grows\n>> continuously, though i had no operations going on. Also the archiving to\n>> the other side filled up the other side FS.\n>> ls -l /var/postgres/data/pg_xlog | wc -l\n>> 103\n>>\n> Did you consider using pg_archivecleanup [1]?\n>\n>\n> At start, there were only 15 files. The max_wal_segments is 32, but not\n>> sure why iam seeing 103 files. Also the archiving dir size doubled\n>> (w.r.t number of files archived). and filled up the filesystem.\n>> I manually logged into postgres and run checkpoint; did not see any file\n>> reduction\n>>\n>> max_wal_segments [2] is *not* related to archiving activity.\n>\n>\n> [1] http://www.postgresql.org/docs/9.0/static/pgarchivecleanup.html\n> [2]\n> http://www.postgresql.org/docs/9.0/static/runtime-config-wal.html#GUC-WAL-KEEP-SEGMENTS\n>\n>\n> --\n> Euler Taveira de Oliveira\n> http://www.timbira.com/\n>\n\nThank you. I had pg_archivecleanup added in recovery.conf, but on second look had a typo in the archive dir path. After this change in recovery.conf and postgres restart, its fine now. 
Once my archive dir got cleaned up , i noticed my /var/postgres/data/pg_xlog dir on master also got cleaned up \nOn Wed, Mar 16, 2011 at 1:27 PM, Euler Taveira de Oliveira <[email protected]> wrote:\nEm 15-03-2011 12:09, Tech Madhu escreveu:\n\n[This is not a performance question, next time post at the appropriate list, that is -general]\n\n\nEverything works fine (w.r.t replication), but the pg_xlog size grows\ncontinuously, though i had no operations going on. Also the archiving to\nthe other side filled up the other side FS.\nls -l /var/postgres/data/pg_xlog | wc -l\n103\n\nDid you consider using pg_archivecleanup [1]?\n\n\nAt start, there were only 15 files. The max_wal_segments is 32, but not\nsure why iam seeing 103 files. Also the archiving dir size doubled\n(w.r.t number of files archived). and filled up the filesystem.\nI manually logged into postgres and run checkpoint; did not see any file\nreduction\n\n\nmax_wal_segments [2] is *not* related to archiving activity.\n\n\n[1] http://www.postgresql.org/docs/9.0/static/pgarchivecleanup.html\n[2] http://www.postgresql.org/docs/9.0/static/runtime-config-wal.html#GUC-WAL-KEEP-SEGMENTS\n\n\n-- \n  Euler Taveira de Oliveira\n  http://www.timbira.com/", "msg_date": "Wed, 16 Mar 2011 17:27:17 -0400", "msg_from": "Tech Madhu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_xlog size" }, { "msg_contents": "On Wed, Mar 16, 2011 at 12:09 AM, Tech Madhu <[email protected]> wrote:\n> hi all,\n>\n> Setup:\n> Sparc, Solaris 10, Postgres 9.0.2, using streaming replication and hot\n> standby. 1 master 1 slave\n>\n> Everything works fine (w.r.t replication), but the pg_xlog size grows\n> continuously, though i had no operations going on. Also the archiving to the\n> other side filled up the other side FS.\n> ls -l /var/postgres/data/pg_xlog | wc -l\n> 103\n> At start, there were only 15 files. The max_wal_segments is 32, but not sure\n> why iam seeing 103 files. Also the archiving dir size doubled (w.r.t number\n> of files archived). and filled up the filesystem.\n> I manually logged into postgres and run checkpoint; did not see any file\n> reduction\n>\n> Pasting some of the relevant conf values\n> The checkpoint_segments is commented out, so default to 3 segments right?\n>\n> postgresql.conf\n> -----------------------\n> wal_level = hot_standby\n> checkpoint_warning = 30s\n> #checkpoint_segments = 3                # in logfile segments, min 1, 16MB\n> each\n> #checkpoint_timeout = 5min              # range 30s-1h\n> #checkpoint_completion_target = 0.5     # checkpoint target duration, 0.0 -\n> 1.0\n>\n> archive_mode = on               # allows archiving to be done\n>                                 # (change requires restart)\n> archive_command = 'cp %p /var/postgres/walfiles/%f'             #  this dir\n> is NFS mounted dir on the slave node\n> archive_timeout = 30\n\nSince the setting \"archive_timeout\" makes the server create new WAL\nsegment file for each 30 seconds (even if there is no write transaction),\nthe size of pg_xlog directory continuously grows up.\n\nRegards,\n\n-- \nFujii Masao\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n", "msg_date": "Thu, 17 Mar 2011 12:09:20 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog size" } ]
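For reference, the standby-side piece this thread ends up relying on is recovery.conf's archive_cleanup_command, which runs pg_archivecleanup after each restartpoint; %r is replaced by the name of the oldest WAL file the standby still needs, and older files are removed from the archive directory. A minimal sketch, assuming the /var/postgres/walfiles directory used as the archive destination in the original post:

# recovery.conf on the standby, alongside the existing standby_mode /
# primary_conninfo entries; %r expands to the oldest file still required
archive_cleanup_command = 'pg_archivecleanup /var/postgres/walfiles %r'

As Fujii notes, archive_timeout = 30 also forces a 16MB segment switch roughly every 30 seconds even on an otherwise idle system, so either this kind of cleanup or a larger archive_timeout is needed to keep the archive directory (and, once archiving stops failing for lack of space, the master's pg_xlog) bounded.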
[ { "msg_contents": "Hi all,\n\nWe added an index to a table (to support some different functionality) then\nran into cases where the new index (on month, bl_number in the schema below)\nmade performance of some existing queries ~20,000 times worse. While we do\nhave a workaround (using a CTE to force the proper index to be used) that\ngets us down to ~twice the original performance (i.e. without the new\nindex), I'm wondering if there's a better workaround that can get us closer\nto the original performance. It also seems like kind of a strange case so\nI'm wondering if there's something weird going on in the optimizer. The #\nof rows estimates are pretty accurate so it's guessing that about right, but\nthe planner seems to be putting way too much weight on using a sorted index\nvs. looking up. This is all after an analyze.\n\nNear as I can guess the planner seems to be weighting scanning what should\nbe an expected 100k rows (though in practice it will have to do about 35\nmillion, because the assumption of independence between columns is\nincorrect) given an expected selectivity of 48K rows out of 45 million over\nscanning ~48k rows (using the index) and doing a top-n 100 sort on them\n(actual row count is 43k so pretty close on that). Even giving the\noptimizer the benefit of column independence I don't see how that first plan\ncould possibly come out ahead. It would really help if explain would print\nout the number of rows it expects to scan and analyze would print out the\nnumber of rows it actually scanned (instead of just the number that matched\nthe filter/limit), see the expensive query explain analyze output below.\n\nAt the bottom I have some info on the contents and probability.\n\n\n## The original Query:\n\nexplain analyze\nSELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT\nNULL AND buyer_id IN\n(1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))\nORDER BY month DESC LIMIT 100 OFFSET 0;\n-----------------------------------\n Limit (cost=184626.64..184626.89 rows=100 width=908) (actual\ntime=102.630..102.777 rows=100 loops=1)\n -> Sort (cost=184626.64..184748.19 rows=48623 width=908) (actual\ntime=102.628..102.683 rows=100 loops=1)\n Sort Key: month\n Sort Method: top-N heapsort Memory: 132kB\n -> Bitmap Heap Scan on customs_records (cost=1054.22..182768.30\nrows=48623 width=908) (actual time=5.809..44.832 rows=43352 loops=1)\n Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n -> Bitmap Index Scan on index_customs_records_on_buyer_id\n(cost=0.00..1042.07 rows=48623 width=0) (actual time=4.588..4.588 rows=43352\nloops=1)\n Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n Total runtime: 102.919 ms\n\n\n## 
Same query after adding the new index\n### NOTE - it would be very useful here if explain would print out the\nnumber of rows it expects to scan in the index and analyze dumped out the\nnumber of rows actually scanned. Instead analyze is printing the rows\nactually outputed and explain appears to be outputting the number of rows\nexpected to match the filter ignoring the limit... (it exactly matches the\nrow count in the query above)\n##\n\nexplain analyze\nSELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT\nNULL AND buyer_id IN\n(1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))\nORDER BY month DESC LIMIT 100 OFFSET 0;\n--------------------------------------------\n Limit (cost=0.00..161295.58 rows=100 width=908) (actual\ntime=171344.185..3858893.743 rows=100 loops=1)\n -> Index Scan Backward using\nindex_customs_records_on_month_and_bl_number on customs_records\n(cost=0.00..78426750.74 rows=48623 width=908) (actual\ntime=171344.182..3858893.588 rows=100 loops=1)\n Filter: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n Total runtime: 3858893.908 ms\n\n\n############################################################\nMy workaround is to use a CTE query to force the planner to not use the\nmonth index for sorting (using a subselect is not enough since the planner\nis too smart for that). 
However this is still twice as slow as the original\nquery...\n############################################################\n\nexplain analyze\nwith foo as (select customs_records.* FROM \"customs_records\" WHERE\n(((buyer_id IS NOT NULL AND buyer_id IN\n(1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585)))))\nselect * from foo order by month desc limit 100 ;\n-----------------------------------------------------------\n Limit (cost=185599.10..185599.35 rows=100 width=5325) (actual\ntime=196.968..197.105 rows=100 loops=1)\n CTE foo\n -> Bitmap Heap Scan on customs_records (cost=1054.22..182768.30\nrows=48623 width=908) (actual time=5.765..43.489 rows=43352 loops=1)\n Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n -> Bitmap Index Scan on index_customs_records_on_buyer_id\n(cost=0.00..1042.07 rows=48623 width=0) (actual time=4.544..4.544 rows=43352\nloops=1)\n Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n -> Sort (cost=2830.80..2952.35 rows=48623 width=5325) (actual\ntime=196.966..197.029 rows=100 loops=1)\n Sort Key: foo.month\n Sort Method: top-N heapsort Memory: 132kB\n -> CTE Scan on foo (cost=0.00..972.46 rows=48623 width=5325)\n(actual time=5.770..153.322 rows=43352 loops=1)\n Total runtime: 207.282 ms\n\n\n#### Table information\n\nTable information - the schema is the table below (with some columns removed\nfor succinctness). There are ~45 million rows, the rows are also fairly\nwide about 80 columns total. buyer_id is null ~30% of the time (as is\nsupplier_id). A given buyer id maps to between 1 and ~100,000 records (in a\ndecreasing distribution, about 1 million unique buyer id values).\nSupplier_id is similar. Note buyer_id and month columns are not always\nindependent (for some buyer_ids there is a strong correlation as in this\ncase where the buyer_ids are associated with only older months, though for\nothers there isn't), though even so I'm still not clear on why it would pick\nthe plan that it does. 
We can consider these table never updated or inserted\ninto (updates are done in a new db offline that is periodically swapped in).\n\n Table \"public.customs_records\"\n Column | Type\n| Modifiers\n--------------------------+------------------------+--------------------------------------------------------------\n id | integer | not null default\nnextval('customs_records_id_seq'::regclass)\n....\n bl_number | character varying(16) |\n....\n month | date |\n....\n buyer_id | integer |\n...\n supplier_id | integer |\n...\nIndexes:\n \"customs_records_pkey\" PRIMARY KEY, btree (id) WITH (fillfactor=100)\n \"index_customs_records_on_month_and_bl_number\" UNIQUE, btree (month,\nbl_number) WITH (fillfactor=100)\n \"index_customs_records_on_buyer_id\" btree (buyer_id) WITH\n(fillfactor=100) WHERE buyer_id IS NOT NULL\n \"index_customs_records_on_supplier_id_and_buyer_id\" btree (supplier_id,\nbuyer_id) WITH (fillfactor=100) CLUSTER\n\n\ndb version =>\n PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n (enterprise db build)\nubuntu 8.04 LTS is the host\n\nHi all,We added an index to a table (to support some different functionality) then ran into cases where the new index (on month, bl_number in the schema below) made performance of some existing queries ~20,000 times worse.  While we do have a workaround (using a CTE to force the proper index to be used) that gets us down to ~twice the original performance (i.e. without the new index), I'm wondering if there's a better workaround that can get us closer to the original performance. It also seems like kind of a strange case so I'm wondering if there's something weird going on in the optimizer.  The # of rows estimates are pretty accurate so it's guessing that about right, but the planner seems to be putting way too much weight on using a sorted index vs. looking up. This is all after an analyze.\nNear as I can guess the planner seems to be weighting scanning what should be an expected 100k rows (though in practice it will have to do about 35 million, because the assumption of independence between columns is incorrect) given an expected selectivity of 48K rows out of 45 million over scanning ~48k rows (using the index) and doing a top-n 100 sort on them (actual row count is 43k so pretty close on that).  Even giving the optimizer the benefit of column independence I don't see how that first plan could possibly come out ahead.  
It would really help if explain would print out the number of rows it expects to scan and analyze would print out the number of rows it actually scanned (instead of just the number that matched the filter/limit), see the expensive query explain analyze output below.\nAt the bottom I have some info on the contents and probability.## The original Query:explain analyzeSELECT\n customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL \nAND buyer_id IN \n(1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585)))) \n ORDER BY month DESC LIMIT 100 OFFSET 0;----------------------------------- Limit  (cost=184626.64..184626.89 rows=100 width=908) (actual time=102.630..102.777 rows=100 loops=1)\n\n   ->  Sort  (cost=184626.64..184748.19 rows=48623 width=908) (actual time=102.628..102.683 rows=100 loops=1)         Sort Key: month         Sort Method:  top-N heapsort  Memory: 132kB         ->  Bitmap Heap Scan on customs_records  (cost=1054.22..182768.30 rows=48623 width=908) (actual time=5.809..44.832 rows=43352 loops=1)\n\n               Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n               ->  Bitmap Index Scan on index_customs_records_on_buyer_id  (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.588..4.588 rows=43352 loops=1)                     Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n Total runtime: 102.919 ms## Same query after adding the new index### NOTE - it would be very useful here if explain would print out the number of rows it expects to scan in the index and analyze dumped out the number of rows actually scanned.  Instead analyze is printing the rows actually outputed and explain appears to be outputting the number of rows expected to match the filter ignoring the limit... 
(it exactly matches the row count in the query above)\n\n##explain analyzeSELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL AND buyer_id IN (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))  ORDER BY month DESC LIMIT 100 OFFSET 0;\n\n-------------------------------------------- Limit  (cost=0.00..161295.58 rows=100 width=908) (actual time=171344.185..3858893.743 rows=100 loops=1)   ->  Index Scan Backward using index_customs_records_on_month_and_bl_number on customs_records  (cost=0.00..78426750.74 rows=48623 width=908) (actual time=171344.182..3858893.588 rows=100 loops=1)\n\n         Filter: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n Total runtime: 3858893.908 ms############################################################My workaround is to use a CTE query to force the planner to not use the month index for sorting (using a subselect is not enough since the planner is too smart for that). However this is still twice as slow as the original query...\n\n############################################################explain analyzewith foo as (select customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL AND buyer_id IN (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))) select * from foo order by month desc limit 100 ;\n\n----------------------------------------------------------- Limit  (cost=185599.10..185599.35 rows=100 width=5325) (actual time=196.968..197.105 rows=100 loops=1)   CTE foo     ->  Bitmap Heap Scan on customs_records  (cost=1054.22..182768.30 rows=48623 width=908) (actual time=5.765..43.489 rows=43352 loops=1)\n\n           Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n           ->  Bitmap Index Scan on index_customs_records_on_buyer_id  (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.544..4.544 rows=43352 loops=1)                 Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n   ->  Sort  (cost=2830.80..2952.35 rows=48623 width=5325) (actual time=196.966..197.029 rows=100 loops=1)         Sort Key: foo.month         Sort Method:  top-N heapsort  Memory: 132kB         ->  CTE Scan on foo  (cost=0.00..972.46 rows=48623 width=5325) 
(actual time=5.770..153.322 rows=43352 loops=1)\n\n Total runtime: 207.282 ms#### Table informationTable information - the schema is the table below (with some columns removed for succinctness).  There are ~45 million rows, the rows are also fairly wide about 80 columns total. buyer_id is null ~30% of the time (as is supplier_id). A given buyer id maps to between 1 and ~100,000 records (in a decreasing distribution, about 1 million unique buyer id values).  Supplier_id is similar.  Note buyer_id and month columns are not always independent (for some buyer_ids there is a strong correlation as in this case where the buyer_ids are associated with only older months, though for others there isn't), though even so I'm still not clear on why it would pick the plan that it does. We can consider these table never updated or inserted into (updates are done in a new db offline that is periodically swapped in).\n                                          Table \"public.customs_records\"          Column          |          Type          |                          Modifiers                           \n--------------------------+------------------------+-------------------------------------------------------------- id                       | integer                | not null default nextval('customs_records_id_seq'::regclass)\n.... bl_number                | character varying(16)  | \n.... month                    | date                   | \n.... buyer_id                 | integer                | \n... supplier_id              | integer                | \n...Indexes:\n    \"customs_records_pkey\" PRIMARY KEY, btree (id) WITH (fillfactor=100)    \"index_customs_records_on_month_and_bl_number\" UNIQUE, btree (month, bl_number) WITH (fillfactor=100)\n    \"index_customs_records_on_buyer_id\" btree (buyer_id) WITH (fillfactor=100) WHERE buyer_id IS NOT NULL\n    \"index_customs_records_on_supplier_id_and_buyer_id\" btree (supplier_id, buyer_id) WITH (fillfactor=100) CLUSTER\ndb version => PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit (enterprise db build)ubuntu 8.04 LTS is the host", "msg_date": "Tue, 15 Mar 2011 14:23:17 -0400", "msg_from": "Timothy Garnett <[email protected]>", "msg_from_op": true, "msg_subject": "Adding additional index causes 20,000x slowdown for certain select\n\tqueries - postgres 9.0.3" }, { "msg_contents": "Forgot to include our non-default config settings and server info, not that\nit probably makes a difference for this.\n\nfrom pg_settings:\n name | current_setting\n version | PostgreSQL 9.0.3 on\nx86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat\n4.1.2-46), 64-bit\n bytea_output | escape\n checkpoint_completion_target | 0.9\n checkpoint_segments | 24\n effective_cache_size | 24GB\n effective_io_concurrency | 2\n lc_collate | en_US.utf8\n lc_ctype | en_US.utf8\n listen_addresses | *\n log_checkpoints | on\n log_connections | on\n log_disconnections | on\n log_hostname | on\n log_line_prefix | %t\n logging_collector | on\n maintenance_work_mem | 256MB\n max_connections | 120\n max_stack_depth | 2MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 4GB\n synchronous_commit | off\n tcp_keepalives_idle | 180\n TimeZone | US/Eastern\n track_activity_query_size | 8192\n wal_buffers | 16MB\n wal_writer_delay | 330ms\n work_mem | 512MB\n\nThis is a dual dual-core 64bit intel machine (hyperthreaded so 8 logical\ncpus) with 24GB of memory running basically just the db against a raid 
5\ndiskarray.\n\nTim\n\nOn Tue, Mar 15, 2011 at 2:23 PM, Timothy Garnett <[email protected]>wrote:\n\n> Hi all,\n>\n> We added an index to a table (to support some different functionality) then\n> ran into cases where the new index (on month, bl_number in the schema below)\n> made performance of some existing queries ~20,000 times worse. While we do\n> have a workaround (using a CTE to force the proper index to be used) that\n> gets us down to ~twice the original performance (i.e. without the new\n> index), I'm wondering if there's a better workaround that can get us closer\n> to the original performance. It also seems like kind of a strange case so\n> I'm wondering if there's something weird going on in the optimizer. The #\n> of rows estimates are pretty accurate so it's guessing that about right, but\n> the planner seems to be putting way too much weight on using a sorted index\n> vs. looking up. This is all after an analyze.\n>\n> Near as I can guess the planner seems to be weighting scanning what should\n> be an expected 100k rows (though in practice it will have to do about 35\n> million, because the assumption of independence between columns is\n> incorrect) given an expected selectivity of 48K rows out of 45 million over\n> scanning ~48k rows (using the index) and doing a top-n 100 sort on them\n> (actual row count is 43k so pretty close on that). Even giving the\n> optimizer the benefit of column independence I don't see how that first plan\n> could possibly come out ahead. It would really help if explain would print\n> out the number of rows it expects to scan and analyze would print out the\n> number of rows it actually scanned (instead of just the number that matched\n> the filter/limit), see the expensive query explain analyze output below.\n>\n> At the bottom I have some info on the contents and probability.\n>\n>\n> ## The original Query:\n>\n> explain analyze\n> SELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT\n> NULL AND buyer_id IN\n> (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))\n> ORDER BY month DESC LIMIT 100 OFFSET 0;\n> -----------------------------------\n> Limit (cost=184626.64..184626.89 rows=100 width=908) (actual\n> time=102.630..102.777 rows=100 loops=1)\n> -> Sort (cost=184626.64..184748.19 rows=48623 width=908) (actual\n> time=102.628..102.683 rows=100 loops=1)\n> Sort Key: month\n> Sort Method: top-N heapsort Memory: 132kB\n> -> Bitmap Heap Scan on customs_records (cost=1054.22..182768.30\n> rows=48623 width=908) (actual time=5.809..44.832 rows=43352 loops=1)\n> Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n> -> Bitmap Index Scan on index_customs_records_on_buyer_id\n> (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.588..4.588 rows=43352\n> loops=1)\n> Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id =\n> ANY\n> 
('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n> Total runtime: 102.919 ms\n>\n>\n> ## Same query after adding the new index\n> ### NOTE - it would be very useful here if explain would print out the\n> number of rows it expects to scan in the index and analyze dumped out the\n> number of rows actually scanned. Instead analyze is printing the rows\n> actually outputed and explain appears to be outputting the number of rows\n> expected to match the filter ignoring the limit... (it exactly matches the\n> row count in the query above)\n> ##\n>\n> explain analyze\n> SELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT\n> NULL AND buyer_id IN\n> (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))\n> ORDER BY month DESC LIMIT 100 OFFSET 0;\n> --------------------------------------------\n> Limit (cost=0.00..161295.58 rows=100 width=908) (actual\n> time=171344.185..3858893.743 rows=100 loops=1)\n> -> Index Scan Backward using\n> index_customs_records_on_month_and_bl_number on customs_records\n> (cost=0.00..78426750.74 rows=48623 width=908) (actual\n> time=171344.182..3858893.588 rows=100 loops=1)\n> Filter: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n> Total runtime: 3858893.908 ms\n>\n>\n> ############################################################\n> My workaround is to use a CTE query to force the planner to not use the\n> month index for sorting (using a subselect is not enough since the planner\n> is too smart for that). 
However this is still twice as slow as the original\n> query...\n> ############################################################\n>\n> explain analyze\n> with foo as (select customs_records.* FROM \"customs_records\" WHERE\n> (((buyer_id IS NOT NULL AND buyer_id IN\n> (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585)))))\n> select * from foo order by month desc limit 100 ;\n> -----------------------------------------------------------\n> Limit (cost=185599.10..185599.35 rows=100 width=5325) (actual\n> time=196.968..197.105 rows=100 loops=1)\n> CTE foo\n> -> Bitmap Heap Scan on customs_records (cost=1054.22..182768.30\n> rows=48623 width=908) (actual time=5.765..43.489 rows=43352 loops=1)\n> Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n> -> Bitmap Index Scan on index_customs_records_on_buyer_id\n> (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.544..4.544 rows=43352\n> loops=1)\n> Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n> -> Sort (cost=2830.80..2952.35 rows=48623 width=5325) (actual\n> time=196.966..197.029 rows=100 loops=1)\n> Sort Key: foo.month\n> Sort Method: top-N heapsort Memory: 132kB\n> -> CTE Scan on foo (cost=0.00..972.46 rows=48623 width=5325)\n> (actual time=5.770..153.322 rows=43352 loops=1)\n> Total runtime: 207.282 ms\n>\n>\n> #### Table information\n>\n> Table information - the schema is the table below (with some columns\n> removed for succinctness). There are ~45 million rows, the rows are also\n> fairly wide about 80 columns total. buyer_id is null ~30% of the time (as is\n> supplier_id). A given buyer id maps to between 1 and ~100,000 records (in a\n> decreasing distribution, about 1 million unique buyer id values).\n> Supplier_id is similar. Note buyer_id and month columns are not always\n> independent (for some buyer_ids there is a strong correlation as in this\n> case where the buyer_ids are associated with only older months, though for\n> others there isn't), though even so I'm still not clear on why it would pick\n> the plan that it does. 
We can consider these table never updated or inserted\n> into (updates are done in a new db offline that is periodically swapped in).\n>\n> Table \"public.customs_records\"\n> Column | Type\n> | Modifiers\n>\n> --------------------------+------------------------+--------------------------------------------------------------\n> id | integer | not null default\n> nextval('customs_records_id_seq'::regclass)\n> ....\n> bl_number | character varying(16) |\n> ....\n> month | date |\n> ....\n> buyer_id | integer |\n> ...\n> supplier_id | integer |\n> ...\n> Indexes:\n> \"customs_records_pkey\" PRIMARY KEY, btree (id) WITH (fillfactor=100)\n> \"index_customs_records_on_month_and_bl_number\" UNIQUE, btree (month,\n> bl_number) WITH (fillfactor=100)\n> \"index_customs_records_on_buyer_id\" btree (buyer_id) WITH\n> (fillfactor=100) WHERE buyer_id IS NOT NULL\n> \"index_customs_records_on_supplier_id_and_buyer_id\" btree (supplier_id,\n> buyer_id) WITH (fillfactor=100) CLUSTER\n>\n>\n> db version =>\n> PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n> 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n> (enterprise db build)\n> ubuntu 8.04 LTS is the host\n>\n>\n\nForgot to include our non-default config settings and server info, not that it probably makes a difference for this.from pg_settings: name                         | current_setting\n version                      | PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n bytea_output                 | escape checkpoint_completion_target | 0.9\n checkpoint_segments          | 24 effective_cache_size         | 24GB\n effective_io_concurrency     | 2 lc_collate                   | en_US.utf8\n lc_ctype                     | en_US.utf8 listen_addresses             | *\n log_checkpoints              | on log_connections              | on\n log_disconnections           | on log_hostname                 | on\n log_line_prefix              | %t logging_collector            | on\n maintenance_work_mem         | 256MB max_connections              | 120\n max_stack_depth              | 2MB port                         | 5432\n server_encoding              | UTF8 shared_buffers               | 4GB\n synchronous_commit           | off tcp_keepalives_idle          | 180\n TimeZone                     | US/Eastern track_activity_query_size    | 8192\n wal_buffers                  | 16MB wal_writer_delay             | 330ms\n work_mem                     | 512MBThis is a dual dual-core 64bit intel machine (hyperthreaded so 8 logical cpus) with 24GB of memory running basically just the db against a raid 5 diskarray.\nTimOn Tue, Mar 15, 2011 at 2:23 PM, Timothy Garnett <[email protected]> wrote:\nHi all,We added an index to a table (to support some different functionality) then ran into cases where the new index (on month, bl_number in the schema below) made performance of some existing queries ~20,000 times worse.  While we do have a workaround (using a CTE to force the proper index to be used) that gets us down to ~twice the original performance (i.e. without the new index), I'm wondering if there's a better workaround that can get us closer to the original performance. It also seems like kind of a strange case so I'm wondering if there's something weird going on in the optimizer.  The # of rows estimates are pretty accurate so it's guessing that about right, but the planner seems to be putting way too much weight on using a sorted index vs. looking up. 
This is all after an analyze.\nNear as I can guess the planner seems to be weighting scanning what should be an expected 100k rows (though in practice it will have to do about 35 million, because the assumption of independence between columns is incorrect) given an expected selectivity of 48K rows out of 45 million over scanning ~48k rows (using the index) and doing a top-n 100 sort on them (actual row count is 43k so pretty close on that).  Even giving the optimizer the benefit of column independence I don't see how that first plan could possibly come out ahead.  It would really help if explain would print out the number of rows it expects to scan and analyze would print out the number of rows it actually scanned (instead of just the number that matched the filter/limit), see the expensive query explain analyze output below.\nAt the bottom I have some info on the contents and probability.## The original Query:explain analyzeSELECT\n customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL \nAND buyer_id IN \n(1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585)))) \n ORDER BY month DESC LIMIT 100 OFFSET 0;----------------------------------- Limit  (cost=184626.64..184626.89 rows=100 width=908) (actual time=102.630..102.777 rows=100 loops=1)\n\n\n   ->  Sort  (cost=184626.64..184748.19 rows=48623 width=908) (actual time=102.628..102.683 rows=100 loops=1)         Sort Key: month         Sort Method:  top-N heapsort  Memory: 132kB         ->  Bitmap Heap Scan on customs_records  (cost=1054.22..182768.30 rows=48623 width=908) (actual time=5.809..44.832 rows=43352 loops=1)\n\n\n               Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n               ->  Bitmap Index Scan on index_customs_records_on_buyer_id  (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.588..4.588 rows=43352 loops=1)                     Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n Total runtime: 102.919 ms## Same query after adding the new index### NOTE - it would be very useful here if explain would print out the number of rows it expects to scan in the index and analyze dumped out the number of rows actually scanned.  Instead analyze is printing the rows actually outputed and explain appears to be outputting the number of rows expected to match the filter ignoring the limit... 
(it exactly matches the row count in the query above)\n\n\n##explain analyzeSELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL AND buyer_id IN (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))  ORDER BY month DESC LIMIT 100 OFFSET 0;\n\n\n-------------------------------------------- Limit  (cost=0.00..161295.58 rows=100 width=908) (actual time=171344.185..3858893.743 rows=100 loops=1)   ->  Index Scan Backward using index_customs_records_on_month_and_bl_number on customs_records  (cost=0.00..78426750.74 rows=48623 width=908) (actual time=171344.182..3858893.588 rows=100 loops=1)\n\n\n         Filter: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n Total runtime: 3858893.908 ms############################################################My workaround is to use a CTE query to force the planner to not use the month index for sorting (using a subselect is not enough since the planner is too smart for that). However this is still twice as slow as the original query...\n\n\n############################################################explain analyzewith foo as (select customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL AND buyer_id IN (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))) select * from foo order by month desc limit 100 ;\n\n\n----------------------------------------------------------- Limit  (cost=185599.10..185599.35 rows=100 width=5325) (actual time=196.968..197.105 rows=100 loops=1)   CTE foo     ->  Bitmap Heap Scan on customs_records  (cost=1054.22..182768.30 rows=48623 width=908) (actual time=5.765..43.489 rows=43352 loops=1)\n\n\n           Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n           ->  Bitmap Index Scan on index_customs_records_on_buyer_id  (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.544..4.544 rows=43352 loops=1)                 Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n   ->  Sort  (cost=2830.80..2952.35 rows=48623 width=5325) (actual time=196.966..197.029 rows=100 loops=1)         Sort Key: foo.month         Sort Method:  top-N heapsort  Memory: 132kB         ->  CTE Scan on foo  (cost=0.00..972.46 
rows=48623 width=5325) (actual time=5.770..153.322 rows=43352 loops=1)\n\n\n Total runtime: 207.282 ms#### Table informationTable information - the schema is the table below (with some columns removed for succinctness).  There are ~45 million rows, the rows are also fairly wide about 80 columns total. buyer_id is null ~30% of the time (as is supplier_id). A given buyer id maps to between 1 and ~100,000 records (in a decreasing distribution, about 1 million unique buyer id values).  Supplier_id is similar.  Note buyer_id and month columns are not always independent (for some buyer_ids there is a strong correlation as in this case where the buyer_ids are associated with only older months, though for others there isn't), though even so I'm still not clear on why it would pick the plan that it does. We can consider these table never updated or inserted into (updates are done in a new db offline that is periodically swapped in).\n                                          Table \"public.customs_records\"          Column          |          Type          |                          Modifiers                           \n--------------------------+------------------------+-------------------------------------------------------------- id                       | integer                | not null default nextval('customs_records_id_seq'::regclass)\n.... bl_number                | character varying(16)  | \n.... month                    | date                   | \n.... buyer_id                 | integer                | \n... supplier_id              | integer                | \n...Indexes:\n    \"customs_records_pkey\" PRIMARY KEY, btree (id) WITH (fillfactor=100)    \"index_customs_records_on_month_and_bl_number\" UNIQUE, btree (month, bl_number) WITH (fillfactor=100)\n    \"index_customs_records_on_buyer_id\" btree (buyer_id) WITH (fillfactor=100) WHERE buyer_id IS NOT NULL\n    \"index_customs_records_on_supplier_id_and_buyer_id\" btree (supplier_id, buyer_id) WITH (fillfactor=100) CLUSTER\ndb version => PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit (enterprise db build)ubuntu 8.04 LTS is the host", "msg_date": "Tue, 15 Mar 2011 14:39:34 -0400", "msg_from": "Timothy Garnett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding additional index causes 20,000x slowdown for certain\n\tselect queries - postgres 9.0.3" }, { "msg_contents": "Sorry meant with 32GB of memory.\n\nTim\n\nOn Tue, Mar 15, 2011 at 2:39 PM, Timothy Garnett <[email protected]>wrote:\n\n> Forgot to include our non-default config settings and server info, not that\n> it probably makes a difference for this.\n>\n> from pg_settings:\n> name | current_setting\n>\n> version | PostgreSQL 9.0.3 on\n> x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat\n> 4.1.2-46), 64-bit\n> bytea_output | escape\n> checkpoint_completion_target | 0.9\n> checkpoint_segments | 24\n> effective_cache_size | 24GB\n> effective_io_concurrency | 2\n> lc_collate | en_US.utf8\n> lc_ctype | en_US.utf8\n> listen_addresses | *\n> log_checkpoints | on\n> log_connections | on\n> log_disconnections | on\n> log_hostname | on\n> log_line_prefix | %t\n> logging_collector | on\n> maintenance_work_mem | 256MB\n> max_connections | 120\n> max_stack_depth | 2MB\n> port | 5432\n> server_encoding | UTF8\n> shared_buffers | 4GB\n> synchronous_commit | off\n> tcp_keepalives_idle | 180\n> TimeZone | US/Eastern\n> track_activity_query_size | 8192\n> wal_buffers | 16MB\n> 
wal_writer_delay | 330ms\n> work_mem | 512MB\n>\n> This is a dual dual-core 64bit intel machine (hyperthreaded so 8 logical\n> cpus) with 24GB of memory running basically just the db against a raid 5\n> diskarray.\n>\n> Tim\n>\n>\n> On Tue, Mar 15, 2011 at 2:23 PM, Timothy Garnett <[email protected]>wrote:\n>\n>> Hi all,\n>>\n>> We added an index to a table (to support some different functionality)\n>> then ran into cases where the new index (on month, bl_number in the schema\n>> below) made performance of some existing queries ~20,000 times worse. While\n>> we do have a workaround (using a CTE to force the proper index to be used)\n>> that gets us down to ~twice the original performance (i.e. without the new\n>> index), I'm wondering if there's a better workaround that can get us closer\n>> to the original performance. It also seems like kind of a strange case so\n>> I'm wondering if there's something weird going on in the optimizer. The #\n>> of rows estimates are pretty accurate so it's guessing that about right, but\n>> the planner seems to be putting way too much weight on using a sorted index\n>> vs. looking up. This is all after an analyze.\n>>\n>> Near as I can guess the planner seems to be weighting scanning what should\n>> be an expected 100k rows (though in practice it will have to do about 35\n>> million, because the assumption of independence between columns is\n>> incorrect) given an expected selectivity of 48K rows out of 45 million over\n>> scanning ~48k rows (using the index) and doing a top-n 100 sort on them\n>> (actual row count is 43k so pretty close on that). Even giving the\n>> optimizer the benefit of column independence I don't see how that first plan\n>> could possibly come out ahead. It would really help if explain would print\n>> out the number of rows it expects to scan and analyze would print out the\n>> number of rows it actually scanned (instead of just the number that matched\n>> the filter/limit), see the expensive query explain analyze output below.\n>>\n>> At the bottom I have some info on the contents and probability.\n>>\n>>\n>> ## The original Query:\n>>\n>> explain analyze\n>> SELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT\n>> NULL AND buyer_id IN\n>> (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))\n>> ORDER BY month DESC LIMIT 100 OFFSET 0;\n>> -----------------------------------\n>> Limit (cost=184626.64..184626.89 rows=100 width=908) (actual\n>> time=102.630..102.777 rows=100 loops=1)\n>> -> Sort (cost=184626.64..184748.19 rows=48623 width=908) (actual\n>> time=102.628..102.683 rows=100 loops=1)\n>> Sort Key: month\n>> Sort Method: top-N heapsort Memory: 132kB\n>> -> Bitmap Heap Scan on customs_records (cost=1054.22..182768.30\n>> rows=48623 width=908) (actual time=5.809..44.832 rows=43352 loops=1)\n>> Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n>> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n>> -> Bitmap Index Scan on index_customs_records_on_buyer_id\n>> (cost=0.00..1042.07 rows=48623 width=0) (actual 
time=4.588..4.588 rows=43352\n>> loops=1)\n>> Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id =\n>> ANY\n>> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n>> Total runtime: 102.919 ms\n>>\n>>\n>> ## Same query after adding the new index\n>> ### NOTE - it would be very useful here if explain would print out the\n>> number of rows it expects to scan in the index and analyze dumped out the\n>> number of rows actually scanned. Instead analyze is printing the rows\n>> actually outputed and explain appears to be outputting the number of rows\n>> expected to match the filter ignoring the limit... (it exactly matches the\n>> row count in the query above)\n>> ##\n>>\n>> explain analyze\n>> SELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT\n>> NULL AND buyer_id IN\n>> (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))\n>> ORDER BY month DESC LIMIT 100 OFFSET 0;\n>> --------------------------------------------\n>> Limit (cost=0.00..161295.58 rows=100 width=908) (actual\n>> time=171344.185..3858893.743 rows=100 loops=1)\n>> -> Index Scan Backward using\n>> index_customs_records_on_month_and_bl_number on customs_records\n>> (cost=0.00..78426750.74 rows=48623 width=908) (actual\n>> time=171344.182..3858893.588 rows=100 loops=1)\n>> Filter: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n>> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n>> Total runtime: 3858893.908 ms\n>>\n>>\n>> ############################################################\n>> My workaround is to use a CTE query to force the planner to not use the\n>> month index for sorting (using a subselect is not enough since the planner\n>> is too smart for that). 
However this is still twice as slow as the original\n>> query...\n>> ############################################################\n>>\n>> explain analyze\n>> with foo as (select customs_records.* FROM \"customs_records\" WHERE\n>> (((buyer_id IS NOT NULL AND buyer_id IN\n>> (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585)))))\n>> select * from foo order by month desc limit 100 ;\n>> -----------------------------------------------------------\n>> Limit (cost=185599.10..185599.35 rows=100 width=5325) (actual\n>> time=196.968..197.105 rows=100 loops=1)\n>> CTE foo\n>> -> Bitmap Heap Scan on customs_records (cost=1054.22..182768.30\n>> rows=48623 width=908) (actual time=5.765..43.489 rows=43352 loops=1)\n>> Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n>> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n>> -> Bitmap Index Scan on index_customs_records_on_buyer_id\n>> (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.544..4.544 rows=43352\n>> loops=1)\n>> Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY\n>> ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n>> -> Sort (cost=2830.80..2952.35 rows=48623 width=5325) (actual\n>> time=196.966..197.029 rows=100 loops=1)\n>> Sort Key: foo.month\n>> Sort Method: top-N heapsort Memory: 132kB\n>> -> CTE Scan on foo (cost=0.00..972.46 rows=48623 width=5325)\n>> (actual time=5.770..153.322 rows=43352 loops=1)\n>> Total runtime: 207.282 ms\n>>\n>>\n>> #### Table information\n>>\n>> Table information - the schema is the table below (with some columns\n>> removed for succinctness). There are ~45 million rows, the rows are also\n>> fairly wide about 80 columns total. buyer_id is null ~30% of the time (as is\n>> supplier_id). A given buyer id maps to between 1 and ~100,000 records (in a\n>> decreasing distribution, about 1 million unique buyer id values).\n>> Supplier_id is similar. Note buyer_id and month columns are not always\n>> independent (for some buyer_ids there is a strong correlation as in this\n>> case where the buyer_ids are associated with only older months, though for\n>> others there isn't), though even so I'm still not clear on why it would pick\n>> the plan that it does. 
We can consider these table never updated or inserted\n>> into (updates are done in a new db offline that is periodically swapped in).\n>>\n>> Table \"public.customs_records\"\n>> Column | Type\n>> | Modifiers\n>>\n>> --------------------------+------------------------+--------------------------------------------------------------\n>> id | integer | not null default\n>> nextval('customs_records_id_seq'::regclass)\n>> ....\n>> bl_number | character varying(16) |\n>> ....\n>> month | date |\n>> ....\n>> buyer_id | integer |\n>> ...\n>> supplier_id | integer |\n>> ...\n>> Indexes:\n>> \"customs_records_pkey\" PRIMARY KEY, btree (id) WITH (fillfactor=100)\n>> \"index_customs_records_on_month_and_bl_number\" UNIQUE, btree (month,\n>> bl_number) WITH (fillfactor=100)\n>> \"index_customs_records_on_buyer_id\" btree (buyer_id) WITH\n>> (fillfactor=100) WHERE buyer_id IS NOT NULL\n>> \"index_customs_records_on_supplier_id_and_buyer_id\" btree\n>> (supplier_id, buyer_id) WITH (fillfactor=100) CLUSTER\n>>\n>>\n>> db version =>\n>> PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n>> 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n>> (enterprise db build)\n>> ubuntu 8.04 LTS is the host\n>>\n>>\n>\n\nSorry meant with 32GB of memory.TimOn Tue, Mar 15, 2011 at 2:39 PM, Timothy Garnett <[email protected]> wrote:\nForgot to include our non-default config settings and server info, not that it probably makes a difference for this.\nfrom pg_settings: name                         | current_setting\n version                      | PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n bytea_output                 | escape checkpoint_completion_target | 0.9\n checkpoint_segments          | 24 effective_cache_size         | 24GB\n effective_io_concurrency     | 2 lc_collate                   | en_US.utf8\n lc_ctype                     | en_US.utf8 listen_addresses             | *\n log_checkpoints              | on log_connections              | on\n log_disconnections           | on log_hostname                 | on\n log_line_prefix              | %t logging_collector            | on\n maintenance_work_mem         | 256MB max_connections              | 120\n max_stack_depth              | 2MB port                         | 5432\n server_encoding              | UTF8 shared_buffers               | 4GB\n synchronous_commit           | off tcp_keepalives_idle          | 180\n TimeZone                     | US/Eastern track_activity_query_size    | 8192\n wal_buffers                  | 16MB wal_writer_delay             | 330ms\n work_mem                     | 512MBThis is a dual dual-core 64bit intel machine (hyperthreaded so 8 logical cpus) with 24GB of memory running basically just the db against a raid 5 diskarray.\nTimOn Tue, Mar 15, 2011 at 2:23 PM, Timothy Garnett <[email protected]> wrote:\n\nHi all,We added an index to a table (to support some different functionality) then ran into cases where the new index (on month, bl_number in the schema below) made performance of some existing queries ~20,000 times worse.  While we do have a workaround (using a CTE to force the proper index to be used) that gets us down to ~twice the original performance (i.e. without the new index), I'm wondering if there's a better workaround that can get us closer to the original performance. It also seems like kind of a strange case so I'm wondering if there's something weird going on in the optimizer.  
The # of rows estimates are pretty accurate so it's guessing that about right, but the planner seems to be putting way too much weight on using a sorted index vs. looking up. This is all after an analyze.\nNear as I can guess the planner seems to be weighting scanning what should be an expected 100k rows (though in practice it will have to do about 35 million, because the assumption of independence between columns is incorrect) given an expected selectivity of 48K rows out of 45 million over scanning ~48k rows (using the index) and doing a top-n 100 sort on them (actual row count is 43k so pretty close on that).  Even giving the optimizer the benefit of column independence I don't see how that first plan could possibly come out ahead.  It would really help if explain would print out the number of rows it expects to scan and analyze would print out the number of rows it actually scanned (instead of just the number that matched the filter/limit), see the expensive query explain analyze output below.\nAt the bottom I have some info on the contents and probability.## The original Query:explain analyzeSELECT\n customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL \nAND buyer_id IN \n(1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585)))) \n ORDER BY month DESC LIMIT 100 OFFSET 0;----------------------------------- Limit  (cost=184626.64..184626.89 rows=100 width=908) (actual time=102.630..102.777 rows=100 loops=1)\n\n\n\n   ->  Sort  (cost=184626.64..184748.19 rows=48623 width=908) (actual time=102.628..102.683 rows=100 loops=1)         Sort Key: month         Sort Method:  top-N heapsort  Memory: 132kB         ->  Bitmap Heap Scan on customs_records  (cost=1054.22..182768.30 rows=48623 width=908) (actual time=5.809..44.832 rows=43352 loops=1)\n\n\n\n               Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n\n               ->  Bitmap Index Scan on index_customs_records_on_buyer_id  (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.588..4.588 rows=43352 loops=1)                     Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n\n Total runtime: 102.919 ms## Same query after adding the new index### NOTE - it would be very useful here if explain would print out the number of rows it expects to scan in the index and analyze dumped out the number of rows actually scanned.  Instead analyze is printing the rows actually outputed and explain appears to be outputting the number of rows expected to match the filter ignoring the limit... 
(it exactly matches the row count in the query above)\n\n\n\n##explain analyzeSELECT customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL AND buyer_id IN (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))  ORDER BY month DESC LIMIT 100 OFFSET 0;\n\n\n\n-------------------------------------------- Limit  (cost=0.00..161295.58 rows=100 width=908) (actual time=171344.185..3858893.743 rows=100 loops=1)   ->  Index Scan Backward using index_customs_records_on_month_and_bl_number on customs_records  (cost=0.00..78426750.74 rows=48623 width=908) (actual time=171344.182..3858893.588 rows=100 loops=1)\n\n\n\n         Filter: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n\n Total runtime: 3858893.908 ms############################################################My workaround is to use a CTE query to force the planner to not use the month index for sorting (using a subselect is not enough since the planner is too smart for that). However this is still twice as slow as the original query...\n\n\n\n############################################################explain analyzewith foo as (select customs_records.* FROM \"customs_records\" WHERE (((buyer_id IS NOT NULL AND buyer_id IN (1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585))))) select * from foo order by month desc limit 100 ;\n\n\n\n----------------------------------------------------------- Limit  (cost=185599.10..185599.35 rows=100 width=5325) (actual time=196.968..197.105 rows=100 loops=1)   CTE foo     ->  Bitmap Heap Scan on customs_records  (cost=1054.22..182768.30 rows=48623 width=908) (actual time=5.765..43.489 rows=43352 loops=1)\n\n\n\n           Recheck Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n\n           ->  Bitmap Index Scan on index_customs_records_on_buyer_id  (cost=0.00..1042.07 rows=48623 width=0) (actual time=4.544..4.544 rows=43352 loops=1)                 Index Cond: ((buyer_id IS NOT NULL) AND (buyer_id = ANY ('{1172672,1570888,1336461,1336464,1336471,1189285,1336463,1336460,1654709,1000155646,1191114,1336480,1336479,1000384928,1161787,1811495,1507188,1159339,1980765,1200258,1980770,1980768,1980767,1980769,1980766,1980772,1000350850,1000265917,1980764,1980761,1170019,1980762,1184356,1985585}'::integer[])))\n\n\n\n   ->  Sort  (cost=2830.80..2952.35 rows=48623 width=5325) (actual time=196.966..197.029 rows=100 loops=1)         Sort Key: foo.month         Sort Method:  top-N heapsort  Memory: 132kB         ->  CTE Scan on foo  
(cost=0.00..972.46 rows=48623 width=5325) (actual time=5.770..153.322 rows=43352 loops=1)\n\n\n\n Total runtime: 207.282 ms#### Table informationTable information - the schema is the table below (with some columns removed for succinctness).  There are ~45 million rows, the rows are also fairly wide about 80 columns total. buyer_id is null ~30% of the time (as is supplier_id). A given buyer id maps to between 1 and ~100,000 records (in a decreasing distribution, about 1 million unique buyer id values).  Supplier_id is similar.  Note buyer_id and month columns are not always independent (for some buyer_ids there is a strong correlation as in this case where the buyer_ids are associated with only older months, though for others there isn't), though even so I'm still not clear on why it would pick the plan that it does. We can consider these table never updated or inserted into (updates are done in a new db offline that is periodically swapped in).\n                                          Table \"public.customs_records\"          Column          |          Type          |                          Modifiers                           \n--------------------------+------------------------+-------------------------------------------------------------- id                       | integer                | not null default nextval('customs_records_id_seq'::regclass)\n.... bl_number                | character varying(16)  | \n.... month                    | date                   | \n.... buyer_id                 | integer                | \n... supplier_id              | integer                | \n...Indexes:\n    \"customs_records_pkey\" PRIMARY KEY, btree (id) WITH (fillfactor=100)    \"index_customs_records_on_month_and_bl_number\" UNIQUE, btree (month, bl_number) WITH (fillfactor=100)\n    \"index_customs_records_on_buyer_id\" btree (buyer_id) WITH (fillfactor=100) WHERE buyer_id IS NOT NULL\n    \"index_customs_records_on_supplier_id_and_buyer_id\" btree (supplier_id, buyer_id) WITH (fillfactor=100) CLUSTER\ndb version => PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit (enterprise db build)ubuntu 8.04 LTS is the host", "msg_date": "Tue, 15 Mar 2011 14:42:23 -0400", "msg_from": "Timothy Garnett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding additional index causes 20,000x slowdown for certain\n\tselect queries - postgres 9.0.3" }, { "msg_contents": "Timothy Garnett <[email protected]> wrote:\n \n>>> -> Index Scan Backward using\n>>> index_customs_records_on_month_and_bl_number on customs_records\n>>> (cost=0.00..78426750.74 rows=48623 width=908) (actual\n>>> time=171344.182..3858893.588 rows=100 loops=1)\n \nWe've seen a lot of those lately -- Index Scan Backward performing\nfar worse than alternatives. One part of it is that disk sectors\nare arranged for optimal performance on forward scans; but I don't\nthink we've properly accounted for the higher cost of moving\nbackward through our btree indexes, either. To quote from the\nREADME for the btree AM:\n \n| A backwards scan has one additional bit of complexity: after\n| following the left-link we must account for the possibility that\n| the left sibling page got split before we could read it. So, we\n| have to move right until we find a page whose right-link matches\n| the page we came from. 
(Actually, it's even harder than that; see\n| deletion discussion below.)\n \nI'm wondering whether the planner should have some multiplier or\nother adjustment to attempt to approximate the known higher cost of\nbackward scans.\n \n-Kevin\n", "msg_date": "Wed, 16 Mar 2011 11:40:50 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding additional index causes 20,000x slowdown\n\tfor certain select queries - postgres 9.0.3" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Timothy Garnett <[email protected]> wrote:\n> -> Index Scan Backward using\n> index_customs_records_on_month_and_bl_number on customs_records\n> (cost=0.00..78426750.74 rows=48623 width=908) (actual\n> time=171344.182..3858893.588 rows=100 loops=1)\n \n> We've seen a lot of those lately -- Index Scan Backward performing\n> far worse than alternatives.\n\nIt's not clear to me that that has anything to do with Tim's problem.\nIt certainly wouldn't be 20000x faster if it were a forward scan.\n\n> One part of it is that disk sectors\n> are arranged for optimal performance on forward scans; but I don't\n> think we've properly accounted for the higher cost of moving\n> backward through our btree indexes, either. To quote from the\n> README for the btree AM:\n \n> | A backwards scan has one additional bit of complexity: after\n> | following the left-link we must account for the possibility that\n> | the left sibling page got split before we could read it. So, we\n> | have to move right until we find a page whose right-link matches\n> | the page we came from. (Actually, it's even harder than that; see\n> | deletion discussion below.)\n\nThat's complicated, but it's not slow, except in the extremely\ninfrequent case where there actually was an index page split while your\nscan was in flight to the page. The normal code path will only spend\none extra comparison to verify that no such split happened, and then it\ngoes on about its business.\n\nThe point about disk page layout is valid, so I could believe that in\na recently-built index there might be a significant difference in\nforward vs backward scan speed, if none of the index were in memory.\nThe differential would degrade pretty rapidly due to page splits though\n...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Mar 2011 13:01:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Adding additional index causes 20,\n\t000x slowdown for certain select queries - postgres 9.0.3" }, { "msg_contents": "On 03/15/2011 01:23 PM, Timothy Garnett wrote:\n\n> Column | Type\n> --------------------------+------------------------+\n> id | integer |\n> bl_number | character varying(16) |\n> month | date |\n> buyer_id | integer |\n> supplier_id | integer |\n\nOk. In your table description, you don't really talk about the \ndistribution of bl_number. But this part of your query:\n\nORDER BY month DESC LIMIT 100 OFFSET 0\n\nIs probably tricking the planner into using that index. But there's the \nfun thing about dates: we almost always want them in order of most \nrecent to least recent. 
So you might want to try again with your \nindex_customs_records_on_month_and_bl_number declared like this instead:\n\nCREATE INDEX index_customs_records_on_month_and_bl_number\n ON customs_records (month DESC, bl_number);\n\nOr, if bl_number is more selective anyway, but you need both columns for \nother queries and you want this one to ignore it:\n\nCREATE INDEX index_customs_records_on_month_and_bl_number\n ON customs_records (bl_number, month DESC);\n\nEither way, I bet you'll find that your other queries that use this \nindex are also doing a backwards index scan, which will always be slower \nby about two orders of magnitude, since backwards reads act basically \nlike random reads.\n\nThe effect you're getting is clearly exaggerated, and I've run into it \non occasion for effectively the entire history of PostgreSQL. Normally \nincreasing the statistics on the affected columns and re-analyzing fixes \nit, but on a composite index, that won't necessarily be the case.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Wed, 16 Mar 2011 12:05:06 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding additional index causes 20,000x slowdown for\n\tcertain select queries - postgres 9.0.3" }, { "msg_contents": "Shaun Thomas <[email protected]> writes:\n> Ok. In your table description, you don't really talk about the \n> distribution of bl_number. But this part of your query:\n\n> ORDER BY month DESC LIMIT 100 OFFSET 0\n\n> Is probably tricking the planner into using that index. But there's the \n> fun thing about dates: we almost always want them in order of most \n> recent to least recent. So you might want to try again with your \n> index_customs_records_on_month_and_bl_number declared like this instead:\n\n> CREATE INDEX index_customs_records_on_month_and_bl_number\n> ON customs_records (month DESC, bl_number);\n\nThat isn't going to dissuade the planner from using that index for this\nquery. It would result in the scan being a forward indexscan instead of\nbackwards. Now it'd be worth trying that, to see if you and Kevin are\nright that it's the backwards aspect that's hurting. I'm not convinced\nthough. I suspect the issue is that the planner is expecting the target\nrecords (the ones selected by the filter condition) to be approximately\nequally distributed in the month ordering, but really there is a\ncorrelation which causes them to be much much further back in the index\nthan it expects. So a lot more of the index has to be scanned than it's\nexpecting.\n\n> Or, if bl_number is more selective anyway, but you need both columns for \n> other queries and you want this one to ignore it:\n\n> CREATE INDEX index_customs_records_on_month_and_bl_number\n> ON customs_records (bl_number, month DESC);\n\nFlipping bl_number around to the front would prevent this index from\nbeing used in this way, but it might also destroy the usefulness of the\nindex for its intended purpose. 
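A quick way to separate the two effects being discussed here -- the cost of a backward scan versus the buyer_id/month correlation -- is to probe the data directly. The sketch below uses only the table and columns already shown in this thread; the buyer_id list is abbreviated and the test index name is made up, so treat it as an illustration rather than a recipe:

-- 1. Does the correlation theory fit?  If the newest month for these buyers
--    lags far behind the newest month in the table, a scan in month order
--    must walk a long way before it can return 100 matching rows.
SELECT max(month) AS newest_overall FROM customs_records;
SELECT max(month) AS newest_for_buyers, count(*) AS matching_rows
  FROM customs_records
 WHERE buyer_id IN (1172672, 1570888, 1336461);   -- abbreviated id list

-- 2. Is the backward direction itself the problem?  A descending index lets
--    the same ORDER BY month DESC be satisfied by a forward scan, so a
--    before/after EXPLAIN ANALYZE comparison is informative.
CREATE INDEX test_customs_records_month_desc
    ON customs_records (month DESC, bl_number);   -- hypothetical test index
EXPLAIN ANALYZE
SELECT * FROM customs_records
 WHERE buyer_id IS NOT NULL
   AND buyer_id IN (1172672, 1570888, 1336461)    -- abbreviated id list
 ORDER BY month DESC LIMIT 100;
DROP INDEX test_customs_records_month_desc;

If the per-buyer max(month) sits far in the past, the correlation explanation fits; if the forward (DESC) variant is no faster, the backward-scan overhead is probably not the main culprit.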
Tim didn't show us the queries he\nwanted this index for, so it's hard to say if he can fix it by\nredefining the index or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Mar 2011 13:38:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding additional index causes 20,\n\t000x slowdown for certain select queries - postgres 9.0.3" }, { "msg_contents": "Hi all,\n\nThe bl_number is nearly a unique value per a row (some small portion are\nduplicated on a handful or rows).\n\nWe need the unique on pair of bl_number and month, but evaluating current\nusage we don't make use of selecting on just month currently (though we\nexpect to have usage scenarios that do that in the not too distant future,\ni.e. pulling out all the records that match a given month date). But for\nthe time being we've gone with the suggestion here of flipping the order of\nthe index columns to (bl_number, month) which rescues the original\nperformance (since the new index can no longer be used with the query).\n\nWe'd still be interested in other suggestions for convincing the query\nplanner not to pick the bad plan in this case (since we'll eventually need\nan index on month) without having to use the slower CTE form. To me the\nproblem seems two fold,\n (1) planner doesn't know there's a correlation between month and particular\nbuyer_ids (some are randomly distributed across month)\n (2) even in cases where there isn't a correlation (not all of our buyer\nid's are correlated with month) it still seems really surprising to me the\nplanner thought this plan would be faster, the estimated selectivity of the\nbuyer fields is 48k / 45million ~ 1/1000 so for limit 100 it should expect\nto backward index scan ~100K rows, vs. looking up the expected 48k rows and\ndoing a top-100 sort on them, I'd expect the latter plan to be faster in\nalmost all situations (unless we're clustered on month perhaps, but we're\nactually clustered on supplier_id, buyer_id which would favor the latter\nplan as well I'd think).\n\n(an aside) there's also likely some benefit from clustering in the original\nplan before the new index, since we cluster on supplier_id, buyer_id and a\ngiven buyer_id while having up to 100k rows will generally only have a few\nsupplier ids\n\nTim\n\nOn Wed, Mar 16, 2011 at 1:05 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 03/15/2011 01:23 PM, Timothy Garnett wrote:\n>\n> Column | Type\n>> --------------------------+------------------------+\n>> id | integer |\n>>\n>> bl_number | character varying(16) |\n>> month | date |\n>> buyer_id | integer |\n>> supplier_id | integer |\n>>\n>\n> Ok. In your table description, you don't really talk about the distribution\n> of bl_number. But this part of your query:\n>\n>\n> ORDER BY month DESC LIMIT 100 OFFSET 0\n>\n> Is probably tricking the planner into using that index. But there's the fun\n> thing about dates: we almost always want them in order of most recent to\n> least recent. 
So you might want to try again with your\n> index_customs_records_on_month_and_bl_number declared like this instead:\n>\n> CREATE INDEX index_customs_records_on_month_and_bl_number\n> ON customs_records (month DESC, bl_number);\n>\n> Or, if bl_number is more selective anyway, but you need both columns for\n> other queries and you want this one to ignore it:\n>\n> CREATE INDEX index_customs_records_on_month_and_bl_number\n> ON customs_records (bl_number, month DESC);\n>\n> Either way, I bet you'll find that your other queries that use this index\n> are also doing a backwards index scan, which will always be slower by about\n> two orders of magnitude, since backwards reads act basically like random\n> reads.\n>\n> The effect you're getting is clearly exaggerated, and I've run into it on\n> occasion for effectively the entire history of PostgreSQL. Normally\n> increasing the statistics on the affected columns and re-analyzing fixes it,\n> but on a composite index, that won't necessarily be the case.\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n\nHi all,The bl_number is nearly a unique value per a row (some small portion are duplicated on a handful or rows).We need the unique on pair of bl_number and month, but evaluating current usage we don't make use of selecting on just month currently (though we expect to have usage scenarios that do that in the not too distant future, i.e. pulling out all the records that match a given month date).  But for the time being we've gone with the suggestion here of flipping the order of the index columns to (bl_number, month) which rescues the original performance (since the new index can no longer be used with the query).\nWe'd still be interested in other suggestions for convincing the query planner not to pick the bad plan in this case (since we'll eventually need an index on month) without having to use the slower CTE form.  To me the problem seems two fold,\n\n (1) planner doesn't know there's a correlation between month and particular buyer_ids (some are randomly distributed across month) (2) even in cases where there isn't a correlation (not all of our buyer id's are correlated with month) it still seems really surprising to me the planner thought this plan would be faster, the estimated selectivity of the buyer fields is 48k / 45million ~ 1/1000 so for limit 100 it should expect to backward index scan ~100K rows, vs. 
looking up the expected 48k rows and doing a top-100 sort on them, I'd expect the latter plan to be faster in almost all situations (unless we're clustered on month perhaps, but we're actually clustered on supplier_id, buyer_id which would favor the latter plan as well I'd think).\n(an aside) there's also likely some benefit from clustering in the original plan before the new index, since we cluster on supplier_id, buyer_id and a given buyer_id while having up to 100k rows will generally only have a few supplier ids\nTimOn Wed, Mar 16, 2011 at 1:05 PM, Shaun Thomas <[email protected]> wrote:\n\nOn 03/15/2011 01:23 PM, Timothy Garnett wrote:\n\n\n          Column          |          Type\n--------------------------+------------------------+\n id                       | integer                |\n bl_number                | character varying(16)  |\n month                    | date                   |\n buyer_id                 | integer                |\n supplier_id              | integer                |\n\n\nOk. In your table description, you don't really talk about the distribution of bl_number. But this part of your query:\n\nORDER BY month DESC LIMIT 100 OFFSET 0\n\nIs probably tricking the planner into using that index. But there's the fun thing about dates: we almost always want them in order of most recent to least recent. So you might want to try again with your index_customs_records_on_month_and_bl_number declared like this instead:\n\nCREATE INDEX index_customs_records_on_month_and_bl_number\n    ON customs_records (month DESC, bl_number);\n\nOr, if bl_number is more selective anyway, but you need both columns for other queries and you want this one to ignore it:\n\nCREATE INDEX index_customs_records_on_month_and_bl_number\n    ON customs_records (bl_number, month DESC);\n\nEither way, I bet you'll find that your other queries that use this index are also doing a backwards index scan, which will always be slower by about two orders of magnitude, since backwards reads act basically like random reads.\n\nThe effect you're getting is clearly exaggerated, and I've run into it on occasion for effectively the entire history of PostgreSQL. Normally increasing the statistics on the affected columns and re-analyzing fixes it, but on a composite index, that won't necessarily be the case.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee  http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email", "msg_date": "Thu, 17 Mar 2011 12:55:10 -0400", "msg_from": "Timothy Garnett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding additional index causes 20,000x slowdown for\n\tcertain select queries - postgres 9.0.3" }, { "msg_contents": "Timothy Garnett <[email protected]> wrote:\n \n> We'd still be interested in other suggestions for convincing the\n> query planner not to pick the bad plan in this case\n \nYou could try boosting cpu_tuple_cost. I've seen some evidence that\nthe default number is a bit low in general, so it wouldn't\nnecessarily be bad to try your whole load with a higher setting. If\nthat doesn't work you could set it for the one query. 
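For the single-query form, a minimal sketch of what that could look like follows; the 0.06 value is only there to illustrate the mechanism (the default is 0.01) and the id list is abbreviated, so both are placeholders rather than recommendations:

-- Try it session-wide first:
SET cpu_tuple_cost = 0.06;    -- illustrative value; default is 0.01

-- Or scope it to one query so the rest of the workload keeps the defaults:
BEGIN;
SET LOCAL cpu_tuple_cost = 0.06;
SELECT customs_records.*
  FROM customs_records
 WHERE buyer_id IS NOT NULL
   AND buyer_id IN (1172672, 1570888, 1336461)   -- abbreviated id list
 ORDER BY month DESC
 LIMIT 100;
COMMIT;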
If that\nsetting alone doesn't do it, you could either decrease both page\ncost numbers or multiply all the cpu numbers (again, probably\nboosting cpu_tuple_cost relative to the others).\n \n-Kevin\n", "msg_date": "Thu, 17 Mar 2011 13:13:53 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding additional index causes 20,000x slowdown\n\tfor certain select queries - postgres 9.0.3" }, { "msg_contents": "Thanks, we'll give these a try.\n\nTim\n\nOn Thu, Mar 17, 2011 at 2:13 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Timothy Garnett <[email protected]> wrote:\n>\n> > We'd still be interested in other suggestions for convincing the\n> > query planner not to pick the bad plan in this case\n>\n> You could try boosting cpu_tuple_cost. I've seen some evidence that\n> the default number is a bit low in general, so it wouldn't\n> necessarily be bad to try your whole load with a higher setting. If\n> that doesn't work you could set it for the one query. If that\n> setting alone doesn't do it, you could either decrease both page\n> cost numbers or multiply all the cpu numbers (again, probably\n> boosting cpu_tuple_cost relative to the others).\n>\n> -Kevin\n>\n\nThanks, we'll give these a try.TimOn Thu, Mar 17, 2011 at 2:13 PM, Kevin Grittner <[email protected]> wrote:\nTimothy Garnett <[email protected]> wrote:\n\n> We'd still be interested in other suggestions for convincing the\n> query planner not to pick the bad plan in this case\n\nYou could try boosting cpu_tuple_cost.  I've seen some evidence that\nthe default number is a bit low in general, so it wouldn't\nnecessarily be bad to try your whole load with a higher setting.  If\nthat doesn't work you could set it for the one query.  If that\nsetting alone doesn't do it, you could either decrease both page\ncost numbers or multiply all the cpu numbers (again, probably\nboosting cpu_tuple_cost relative to the others).\n\n-Kevin", "msg_date": "Thu, 17 Mar 2011 16:33:03 -0400", "msg_from": "Timothy Garnett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding additional index causes 20,000x slowdown for\n\tcertain select queries - postgres 9.0.3" }, { "msg_contents": "On Wed, Mar 16, 2011 at 1:38 PM, Tom Lane <[email protected]> wrote:\n> That isn't going to dissuade the planner from using that index for this\n> query.  It would result in the scan being a forward indexscan instead of\n> backwards.  Now it'd be worth trying that, to see if you and Kevin are\n> right that it's the backwards aspect that's hurting.  I'm not convinced\n> though.  I suspect the issue is that the planner is expecting the target\n> records (the ones selected by the filter condition) to be approximately\n> equally distributed in the month ordering, but really there is a\n> correlation which causes them to be much much further back in the index\n> than it expects.  So a lot more of the index has to be scanned than it's\n> expecting.\n\nThis has got to be one of the five most frequently reported planner\nproblems, and it's nearly always with a *backward* index scan. So I\nagree with Kevin that we probably ought to have a Todo to make\nbackward index scans look more expensive than forward index scans,\nmaybe related in some way to the correlation estimates for the\nrelevant columns.\n\nBut I don't really think that's the root of the problem. When\nconfronted with this type of query, you can either filter-then-sort,\nor index-scan-in-desired-order-then-filter. 
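When comparing those two strategies by hand, the planner's enable_* switches are sometimes used purely for diagnosis (never as a production fix). A sketch, with the query abbreviated:

-- Discourage the filter-then-top-N-sort plan, leaving the ordered index scan:
BEGIN;
SET LOCAL enable_sort = off;
EXPLAIN SELECT * FROM customs_records
 WHERE buyer_id IN (1172672, 1570888, 1336461)   -- abbreviated id list
 ORDER BY month DESC LIMIT 100;
ROLLBACK;

-- Discourage the plain index scan, leaving the bitmap-scan-plus-sort plan:
BEGIN;
SET LOCAL enable_indexscan = off;
EXPLAIN SELECT * FROM customs_records
 WHERE buyer_id IN (1172672, 1570888, 1336461)   -- abbreviated id list
 ORDER BY month DESC LIMIT 100;
ROLLBACK;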
I think the heart of the\nproblem is that we're able to estimate the cost of the first plan much\nmore accurately than the cost of the second one. In many cases, the\nfilter is done using a sequential scan, which is easy to cost, and\neven if it's done using a bitmap index scan the cost of that is also\npretty simple to estimate, as long as our selectivity estimate is\nsomewhere in the ballpark. The cost of the sort depends primarily on\nhow many rows we need to sort, and if the qual is something like an\nequality condition, as in this case, then we'll know that pretty\naccurately as well. So we're good.\n\nOn the other hand, when we use an index scan to get the rows in order,\nand then apply the filter condition to them, the cost of the index\nscan is heavily dependent on how far we have to scan through the\nindex, and that depends on the distribution of values in the qual\ncolumn relative to the distribution of values in the index column. We\nhave no data that allow us to estimate that, so we are basically\nshooting in the dark. This is a multi-column statistics problem, but\nI think it's actually harder than what we usually mean by multi-column\nstatistics, where we only need to estimate selectivity. A system that\ncan perfectly estimate the selectivity of state = $1 and zipcode = $2\nmight still be unable to tell us much about how many zipcodes we'd\nhave to read in ascending order to find a given number in some\nparticular state.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 18 Apr 2011 12:51:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding additional index causes 20,000x slowdown for\n\tcertain select queries - postgres 9.0.3" } ]
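One more workaround for the thread above, offered only as a sketch since it did not come up in the original exchange: wrapping the ORDER BY column in a trivial expression keeps the result order identical but stops the planner from matching the sort to the (month, bl_number) index, so it falls back to the bitmap-scan-plus-top-N plan without needing the CTE. The id list is again abbreviated:

SELECT customs_records.*
  FROM customs_records
 WHERE buyer_id IS NOT NULL
   AND buyer_id IN (1172672, 1570888, 1336461)   -- abbreviated id list
 ORDER BY month + 0 DESC    -- date + integer yields a date; same order as month DESC
 LIMIT 100;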
[ { "msg_contents": "Hi,\n\nI'm having trouble with some sql statements which use an expression with\nmany columns and distinct in the column list of the select.\nselect distinct col1,col2,.....col20,col21\nfrom table1 left join table2 on <join condition>,...\nwhere\n <other expressions>;\n\nThe negative result is a big sort with teporary files.\n -> Sort (cost=5813649.93..5853067.63 rows=15767078 width=80)\n(actual time=79027.079..81556.059 rows=12076838 loops=1)\n Sort Method: external sort Disk: 1086096kB\nBy the way - for this query I have a work_mem of 1 GB - so raising this\nfurther is not generally possible - also not for one special command, due to\nparallelism.\n\nHow do I get around this?\nI have one idea and like to know if there any other approaches or an even\nknown better solution to that problem. By using group by I don't need the\nbig sort for the distinct - I reduce it (theoreticly) to the key columns.\n\nselect <list of key columns>,<non key column>\nfrom tables1left join table2 on <join condition>,...\nwhere\n <other conditions>\ngroup by <list of key columns>\n\nAnother question would be what's the aggregate function which needs as less\nas possible resources (time).\nBelow is a list of sql statements which shows a reduced sample of the sql\nand one of the originating sqls.\n\nAny hints are welcome. They may safe hours\nBest Regards,\nUwe\n\ncreate table a(a_id int,a_a1 int, a_a2 int, a_a3 int, a_a4 int, a_a5 int,\na_a6 int, a_a7 int, a_a8 int, a_a9 int, a_a10 int, primary key (a_id));\ncreate table b(b_id int,b_a1 int, b_a2 int, b_a3 int, b_a4 int, b_a5 int,\nb_a6 int, b_a7 int, b_a8 int, b_a9 int, b_a10 int, primary key (b_id));\ninsert into a select\ngenerate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000);\ninsert into b select\ngenerate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000);\n\n\n\n-- current state\n------------------------------\npostgres=# explain analyze verbose select distinct\na_id,b_id,coalesce(a_a1,0), a_a2, a_a3, a_a4, a_a5, a_a6, a_a7, a_a8 , a_a9,\na_a10,b_a1, b_a2, b_a3, b_a4, b_a5, b_a6, b_a7, b_a8 , b_a9, b_a10 from a\nleft join b on a_id=b_id;\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------\n HashAggregate (cost=128694.42..138694.64 rows=1000022 width=88) (actual\ntime=3127.884..3647.814 rows=1000000 loops=1)\n Output: a.a_id, b.b_id, (COALESCE(a.a_a1, 0)), a.a_a2, a.a_a3, a.a_a4,\na.a_a5, a.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10, b.b_a1, b.b_a2, b.b_a3,\nb.b_a4, b.b\n_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n -> Hash Left Join (cost=31846.50..73693.21 rows=1000022 width=88)\n(actual time=361.938..2010.894 rows=1000000 loops=1)\n Output: a.a_id, b.b_id, COALESCE(a.a_a1, 0), a.a_a2, a.a_a3,\na.a_a4, a.a_a5, a.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10, b.b_a1, b.b_a2,\nb.b_a3, b.b_a4,\n b.b_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n Hash Cond: (a.a_id = b.b_id)\n -> Seq Scan on a 
(cost=0.00..19346.22 rows=1000022 width=44)\n(actual time=0.014..118.918 rows=1000000 loops=1)\n Output: a.a_id, a.a_a1, a.a_a2, a.a_a3, a.a_a4, a.a_a5,\na.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10\n -> Hash (cost=19346.22..19346.22 rows=1000022 width=44) (actual\ntime=361.331..361.331 rows=1000000 loops=1)\n Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5,\nb.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n -> Seq Scan on b (cost=0.00..19346.22 rows=1000022\nwidth=44) (actual time=0.008..119.711 rows=1000000 loops=1)\n Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5,\nb.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n Total runtime: 3695.845 ms\n\n\n\n\n\n-- possible future state\n------------------------------\n\npostgres=# explain analyze verbose select a_id,b_id, min(coalesce(a_a1,0)),\nmin( a_a2), min( a_a3), min( a_a4), min( a_a5), min( a_a6), min( a_a7), min(\na_a8 ), min( a_a9), min( a_a10), min(b_a1), min( b_a2), min( b_a3), min(\nb_a4), min( b_a5), min( b_a6), min( b_a7), min( b_a8 ), min( b_a9), min(\nb_a10) from a left join b on a_id=b_id group by a_id,b_id;\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=128694.42..188695.74 rows=1000022 width=88) (actual\ntime=3057.096..3884.902 rows=1000000 loops=1)\n Output: a.a_id, b.b_id, min(COALESCE(a.a_a1, 0)), min(a.a_a2),\nmin(a.a_a3), min(a.a_a4), min(a.a_a5), min(a.a_a6), min(a.a_a7),\nmin(a.a_a8), min(a.a_a9), m\nin(a.a_a10), min(b.b_a1), min(b.b_a2), min(b.b_a3), min(b.b_a4),\nmin(b.b_a5), min(b.b_a6), min(b.b_a7), min(b.b_a8), min(b.b_a9),\nmin(b.b_a10)\n -> Hash Left Join (cost=31846.50..73693.21 rows=1000022 width=88)\n(actual time=362.611..1809.991 rows=1000000 loops=1)\n Output: a.a_id, a.a_a1, a.a_a2, a.a_a3, a.a_a4, a.a_a5, a.a_a6,\na.a_a7, a.a_a8, a.a_a9, a.a_a10, b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4,\nb.b_a5, b.b_\na6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n Hash Cond: (a.a_id = b.b_id)\n -> Seq Scan on a (cost=0.00..19346.22 rows=1000022 width=44)\n(actual time=0.014..119.920 rows=1000000 loops=1)\n Output: a.a_id, a.a_a1, a.a_a2, a.a_a3, a.a_a4, a.a_a5,\na.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10\n -> Hash (cost=19346.22..19346.22 rows=1000022 width=44) (actual\ntime=362.002..362.002 rows=1000000 loops=1)\n Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5,\nb.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n -> Seq Scan on b (cost=0.00..19346.22 rows=1000022\nwidth=44) (actual time=0.010..121.665 rows=1000000 loops=1)\n Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5,\nb.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n Total runtime: 3934.146 ms\n\nOne of my originating sql statements is:\nselect distinct\n l.link_id, ra.bridge, ra.tunnel, ra.urban, ra.long_haul, ra.stub,\n COALESCE(rac.admin_class, 0), ra.functional_class,\nra.speed_category,ra.travel_direction,\n ra.paved, ra.private, ra.tollway, ra.boat_ferry, ra.rail_ferry,\n ra.multi_digitized, ra.in_process_data, ra.automobiles, ra.buses, ra.taxis,\n ra.carpools, ra.pedestrians, ra.trucks, ra.through_traffic, ra.deliveries,\nra.emergency_vehicles,\n ra.ramp,ra.roundabout,ra.square, ra.parking_lot_road, ra.controlled_access,\nra.frontage,\n CASE WHEN COALESCE(cl.INTERSECTION_ID,0) = 0 THEN 'N' ELSE 'Y' END,\n ra.TRANSPORT_VERIFIED, ra.PLURAL_JUNCTION, ra.vignette, 
ra.SCENIC_ROUTE,\nra.four_wheel_drive\nfrom nds.link l\njoin nndb.link_road_attributes lra on l.nndb_id = lra.feature_id\njoin nndb.road_attributes ra on lra.road_attr_id = ra.road_attr_id\nleft join nds.road_admin_class rac on rac.nndb_feature_id = l.nndb_id\nleft join nndb.complex_intersection_link cl on l.nndb_id = cl.link_id\n\nHi,I'm having trouble with some sql statements which use an expression with many columns and distinct in the column list of the select.select distinct col1,col2,.....col20,col21from table1 left join table2 on <join condition>,...\nwhere <other expressions>;The negative result is a big sort with teporary files.              ->  Sort  (cost=5813649.93..5853067.63 rows=15767078 width=80) (actual time=79027.079..81556.059 rows=12076838 loops=1)\n                    Sort Method:  external sort  Disk: 1086096kBBy the way - for this query I have a work_mem of 1 GB - so raising this further is not generally possible - also not for one special command, due to parallelism. \nHow do I get around this?I have one idea and like to know if there any other approaches or an even known better solution to that problem. By using group by I don't need the big sort for the distinct - I reduce it (theoreticly) to the key columns.\nselect <list of key columns>,<non key column>from tables1left join table2 on <join condition>,...where <other conditions>group by <list of key columns>Another question would be what's the aggregate function which needs as less as possible resources (time).\nBelow is a list of sql statements which shows a reduced sample of the sql and one of the originating sqls.Any hints are welcome. They may safe hoursBest Regards,Uwecreate table a(a_id int,a_a1 int, a_a2 int, a_a3 int, a_a4 int, a_a5 int, a_a6 int, a_a7 int, a_a8 int, a_a9 int, a_a10 int, primary key (a_id));\ncreate table b(b_id int,b_a1 int, b_a2 int, b_a3 int, b_a4 int, b_a5 int, b_a6 int, b_a7 int, b_a8 int, b_a9 int, b_a10 int, primary key (b_id));insert into a select generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000);\ninsert into b select generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000);\n-- current state------------------------------postgres=# explain analyze verbose select distinct a_id,b_id,coalesce(a_a1,0), a_a2, a_a3, a_a4, a_a5, a_a6, a_a7, a_a8\n , a_a9, a_a10,b_a1, b_a2, b_a3, b_a4, b_a5, b_a6, b_a7, b_a8 , b_a9, \nb_a10 from a left join b on a_id=b_id;                                                                                                  QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------- HashAggregate  (cost=128694.42..138694.64 rows=1000022 width=88) (actual time=3127.884..3647.814 rows=1000000 loops=1)   Output: a.a_id, b.b_id, (COALESCE(a.a_a1, 0)), a.a_a2, a.a_a3, a.a_a4, a.a_a5, a.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b\n_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10   ->  Hash Left Join  
(cost=31846.50..73693.21 rows=1000022 width=88) (actual time=361.938..2010.894 rows=1000000 loops=1)         Output: a.a_id, b.b_id, COALESCE(a.a_a1, 0), a.a_a2, a.a_a3, a.a_a4, a.a_a5, a.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10, b.b_a1, b.b_a2, b.b_a3, b.b_a4,\n b.b_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10         Hash Cond: (a.a_id = b.b_id)         ->  Seq Scan on a  (cost=0.00..19346.22 rows=1000022 width=44) (actual time=0.014..118.918 rows=1000000 loops=1)\n               Output: a.a_id, a.a_a1, a.a_a2, a.a_a3, a.a_a4, a.a_a5, a.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10         ->  Hash  (cost=19346.22..19346.22 rows=1000022 width=44) (actual time=361.331..361.331 rows=1000000 loops=1)\n               Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10               ->  Seq Scan on b  (cost=0.00..19346.22 rows=1000022 width=44) (actual time=0.008..119.711 rows=1000000 loops=1)\n                     Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10 Total runtime: 3695.845 ms-- possible future state\n------------------------------\npostgres=# explain analyze verbose select a_id,b_id, min(coalesce(a_a1,0)), min( a_a2), min( a_a3), min( a_a4), min( a_a5), min( a_a6), min( a_a7), min( a_a8 ), min( a_a9), min( a_a10), min(b_a1), min( b_a2), min( b_a3), min( b_a4), min( b_a5), min( b_a6), min( b_a7), min( b_a8 ), min( b_a9), min( b_a10)  from a left join b on a_id=b_id group by a_id,b_id;\n                                                                                                                                                 QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=128694.42..188695.74 rows=1000022 width=88) (actual time=3057.096..3884.902 rows=1000000 loops=1)\n   Output: a.a_id, b.b_id, min(COALESCE(a.a_a1, 0)), min(a.a_a2), min(a.a_a3), min(a.a_a4), min(a.a_a5), min(a.a_a6), min(a.a_a7), min(a.a_a8), min(a.a_a9), min(a.a_a10), min(b.b_a1), min(b.b_a2), min(b.b_a3), min(b.b_a4), min(b.b_a5), min(b.b_a6), min(b.b_a7), min(b.b_a8), min(b.b_a9), min(b.b_a10)\n   ->  Hash Left Join  (cost=31846.50..73693.21 rows=1000022 width=88) (actual time=362.611..1809.991 rows=1000000 loops=1)         Output: a.a_id, a.a_a1, a.a_a2, a.a_a3, a.a_a4, a.a_a5, a.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10, b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5, b.b_\na6, b.b_a7, b.b_a8, b.b_a9, b.b_a10         Hash Cond: (a.a_id = b.b_id)         ->  Seq Scan on a  (cost=0.00..19346.22 rows=1000022 width=44) (actual time=0.014..119.920 rows=1000000 loops=1)               Output: a.a_id, a.a_a1, a.a_a2, a.a_a3, a.a_a4, a.a_a5, a.a_a6, a.a_a7, a.a_a8, a.a_a9, a.a_a10\n         ->  Hash  (cost=19346.22..19346.22 rows=1000022 width=44) (actual time=362.002..362.002 rows=1000000 loops=1)               Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n               ->  Seq Scan on b  (cost=0.00..19346.22 rows=1000022 width=44) (actual time=0.010..121.665 rows=1000000 loops=1)                     Output: b.b_id, b.b_a1, b.b_a2, b.b_a3, b.b_a4, b.b_a5, b.b_a6, b.b_a7, b.b_a8, b.b_a9, b.b_a10\n Total runtime: 3934.146 msOne of my originating sql statements is:select distinct l.link_id, ra.bridge, 
ra.tunnel, ra.urban, ra.long_haul, ra.stub,  COALESCE(rac.admin_class, 0), ra.functional_class, ra.speed_category,ra.travel_direction,\n ra.paved, ra.private, ra.tollway, ra.boat_ferry, ra.rail_ferry, ra.multi_digitized, ra.in_process_data, ra.automobiles, ra.buses, ra.taxis, ra.carpools, ra.pedestrians, ra.trucks, ra.through_traffic,  ra.deliveries, ra.emergency_vehicles,\n ra.ramp,ra.roundabout,ra.square, ra.parking_lot_road, ra.controlled_access, ra.frontage, CASE WHEN COALESCE(cl.INTERSECTION_ID,0) = 0 THEN 'N' ELSE 'Y' END, ra.TRANSPORT_VERIFIED, ra.PLURAL_JUNCTION, ra.vignette, ra.SCENIC_ROUTE, ra.four_wheel_drive \nfrom nds.link ljoin nndb.link_road_attributes lra on l.nndb_id = lra.feature_id join nndb.road_attributes ra on lra.road_attr_id = ra.road_attr_idleft join nds.road_admin_class rac on rac.nndb_feature_id = l.nndb_id\nleft join nndb.complex_intersection_link cl  on l.nndb_id = cl.link_id", "msg_date": "Wed, 16 Mar 2011 09:45:30 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "big distinct clause vs. group by" }, { "msg_contents": "On Wed, Mar 16, 2011 at 4:45 AM, Uwe Bartels <[email protected]> wrote:\n> I'm having trouble with some sql statements which use an expression with\n> many columns and distinct in the column list of the select.\n> select distinct col1,col2,.....col20,col21\n> from table1 left join table2 on <join condition>,...\n> where\n>  <other expressions>;\n>\n> The negative result is a big sort with teporary files.\n>               ->  Sort  (cost=5813649.93..5853067.63 rows=15767078 width=80)\n> (actual time=79027.079..81556.059 rows=12076838 loops=1)\n>                     Sort Method:  external sort  Disk: 1086096kB\n> By the way - for this query I have a work_mem of 1 GB - so raising this\n> further is not generally possible - also not for one special command, due to\n> parallelism.\n>\n> How do I get around this?\n\nHmm. It seems to me that there's no way to work out the distinct\nvalues without either sorting or hashing the output, which will\nnecessarily be slow if you have a lot of data.\n\n> I have one idea and like to know if there any other approaches or an even\n> known better solution to that problem. By using group by I don't need the\n> big sort for the distinct - I reduce it (theoreticly) to the key columns.\n>\n> select <list of key columns>,<non key column>\n> from tables1left join table2 on <join condition>,...\n> where\n>  <other conditions>\n> group by <list of key columns>\n\nYou might try SELECT DISTINCT ON (key columns) <key columns> <non-key\ncolumns> FROM ...\n\n> Another question would be what's the aggregate function which needs as less\n> as possible resources (time).\n\nNot sure I follow this part.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 18 Apr 2011 12:19:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big distinct clause vs. 
group by" }, { "msg_contents": "Hi Robert,\n\nthanks for your answer.\nthe aggregate function I was talking about is the function I need to use for\nthe non-group by columns like min() in my example.\nThere are of course several function to choose from, and I wanted to know\nwhich causes as less as possible resources.\n\nbest regards,\nUwe\n\n\nOn 18 April 2011 18:19, Robert Haas <[email protected]> wrote:\n\n> On Wed, Mar 16, 2011 at 4:45 AM, Uwe Bartels <[email protected]>\n> wrote:\n> > I'm having trouble with some sql statements which use an expression with\n> > many columns and distinct in the column list of the select.\n> > select distinct col1,col2,.....col20,col21\n> > from table1 left join table2 on <join condition>,...\n> > where\n> > <other expressions>;\n> >\n> > The negative result is a big sort with teporary files.\n> > -> Sort (cost=5813649.93..5853067.63 rows=15767078\n> width=80)\n> > (actual time=79027.079..81556.059 rows=12076838 loops=1)\n> > Sort Method: external sort Disk: 1086096kB\n> > By the way - for this query I have a work_mem of 1 GB - so raising this\n> > further is not generally possible - also not for one special command, due\n> to\n> > parallelism.\n> >\n> > How do I get around this?\n>\n> Hmm. It seems to me that there's no way to work out the distinct\n> values without either sorting or hashing the output, which will\n> necessarily be slow if you have a lot of data.\n>\n> > I have one idea and like to know if there any other approaches or an even\n> > known better solution to that problem. By using group by I don't need the\n> > big sort for the distinct - I reduce it (theoreticly) to the key columns.\n> >\n> > select <list of key columns>,<non key column>\n> > from tables1left join table2 on <join condition>,...\n> > where\n> > <other conditions>\n> > group by <list of key columns>\n>\n> You might try SELECT DISTINCT ON (key columns) <key columns> <non-key\n> columns> FROM ...\n>\n> > Another question would be what's the aggregate function which needs as\n> less\n> > as possible resources (time).\n>\n> Not sure I follow this part.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi Robert,thanks for your answer.the aggregate function I was talking about is the function I need to use for the non-group by columns like min() in my example.There are of course several function to choose from, and I wanted to know which causes as less as possible resources.\nbest regards,UweOn 18 April 2011 18:19, Robert Haas <[email protected]> wrote:\nOn Wed, Mar 16, 2011 at 4:45 AM, Uwe Bartels <[email protected]> wrote:\n> I'm having trouble with some sql statements which use an expression with\n> many columns and distinct in the column list of the select.\n> select distinct col1,col2,.....col20,col21\n> from table1 left join table2 on <join condition>,...\n> where\n>  <other expressions>;\n>\n> The negative result is a big sort with teporary files.\n>               ->  Sort  (cost=5813649.93..5853067.63 rows=15767078 width=80)\n> (actual time=79027.079..81556.059 rows=12076838 loops=1)\n>                     Sort Method:  external sort  Disk: 1086096kB\n> By the way - for this query I have a work_mem of 1 GB - so raising this\n> further is not generally possible - also not for one special command, due to\n> parallelism.\n>\n> How do I get around this?\n\nHmm.  
It seems to me that there's no way to work out the distinct\nvalues without either sorting or hashing the output, which will\nnecessarily be slow if you have a lot of data.\n\n> I have one idea and like to know if there any other approaches or an even\n> known better solution to that problem. By using group by I don't need the\n> big sort for the distinct - I reduce it (theoreticly) to the key columns.\n>\n> select <list of key columns>,<non key column>\n> from tables1left join table2 on <join condition>,...\n> where\n>  <other conditions>\n> group by <list of key columns>\n\nYou might try SELECT DISTINCT ON (key columns) <key columns> <non-key\ncolumns> FROM ...\n\n> Another question would be what's the aggregate function which needs as less\n> as possible resources (time).\n\nNot sure I follow this part.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 18 Apr 2011 19:13:28 +0200", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "On Mon, Apr 18, 2011 at 7:13 PM, Uwe Bartels <[email protected]> wrote:\n> the aggregate function I was talking about is the function I need to use for\n> the non-group by columns like min() in my example.\n> There are of course several function to choose from, and I wanted to know\n> which causes as less as possible resources.\n\nIf you do not care about the output of the non key columns, why do you\ninclude them in the query at all? That would certainly be the\ncheapest option.\n\nIf you need _any_ column value you can use a constant.\n\nrklemme=> select * from t1;\n k | v\n---+---\n 0 | 0\n 0 | 1\n 1 | 2\n 1 | 3\n 2 | 4\n 2 | 5\n 3 | 6\n 3 | 7\n 4 | 8\n 4 | 9\n(10 rows)\n\nrklemme=> select k, 99 as v from t1 group by k order by k;\n k | v\n---+----\n 0 | 99\n 1 | 99\n 2 | 99\n 3 | 99\n 4 | 99\n(5 rows)\n\nrklemme=>\n\nGreetings from Paderborn\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n", "msg_date": "Tue, 19 Apr 2011 10:24:18 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "Hi Robert,\n\nOh, I do care about these columns.\nBut by using an group by on the key columns, I cannot select the columns as\nthey are. Otherwise you get an error message.\nSo I have to use an aggregate functionlike min().\n\nBest...\nUwe\n\n\nOn 19 April 2011 10:24, Robert Klemme <[email protected]> wrote:\n\n> On Mon, Apr 18, 2011 at 7:13 PM, Uwe Bartels <[email protected]>\n> wrote:\n> > the aggregate function I was talking about is the function I need to use\n> for\n> > the non-group by columns like min() in my example.\n> > There are of course several function to choose from, and I wanted to know\n> > which causes as less as possible resources.\n>\n> If you do not care about the output of the non key columns, why do you\n> include them in the query at all? 
That would certainly be the\n> cheapest option.\n>\n> If you need _any_ column value you can use a constant.\n>\n> rklemme=> select * from t1;\n> k | v\n> ---+---\n> 0 | 0\n> 0 | 1\n> 1 | 2\n> 1 | 3\n> 2 | 4\n> 2 | 5\n> 3 | 6\n> 3 | 7\n> 4 | 8\n> 4 | 9\n> (10 rows)\n>\n> rklemme=> select k, 99 as v from t1 group by k order by k;\n> k | v\n> ---+----\n> 0 | 99\n> 1 | 99\n> 2 | 99\n> 3 | 99\n> 4 | 99\n> (5 rows)\n>\n> rklemme=>\n>\n> Greetings from Paderborn\n>\n> robert\n>\n> --\n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n>\n\nHi Robert,Oh, I do care about these columns.But by using an group by on the key columns, I cannot select the columns as they are. Otherwise you get an error message.So I have to use an aggregate functionlike min().\nBest...Uwe\nOn 19 April 2011 10:24, Robert Klemme <[email protected]> wrote:\nOn Mon, Apr 18, 2011 at 7:13 PM, Uwe Bartels <[email protected]> wrote:\n> the aggregate function I was talking about is the function I need to use for\n> the non-group by columns like min() in my example.\n> There are of course several function to choose from, and I wanted to know\n> which causes as less as possible resources.\n\nIf you do not care about the output of the non key columns, why do you\ninclude them in the query at all?  That would certainly be the\ncheapest option.\n\nIf you need _any_ column value you can use a constant.\n\nrklemme=> select * from t1;\n k | v\n---+---\n 0 | 0\n 0 | 1\n 1 | 2\n 1 | 3\n 2 | 4\n 2 | 5\n 3 | 6\n 3 | 7\n 4 | 8\n 4 | 9\n(10 rows)\n\nrklemme=> select k, 99 as v from t1 group by k order by k;\n k | v\n---+----\n 0 | 99\n 1 | 99\n 2 | 99\n 3 | 99\n 4 | 99\n(5 rows)\n\nrklemme=>\n\nGreetings from Paderborn\n\nrobert\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/", "msg_date": "Tue, 19 Apr 2011 10:47:16 +0200", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "On Tue, Apr 19, 2011 at 10:47 AM, Uwe Bartels <[email protected]> wrote:\n> Oh, I do care about these columns.\n> But by using an group by on the key columns, I cannot select the columns as\n> they are. Otherwise you get an error message.\n> So I have to use an aggregate functionlike min().\n\nI find that slightly contradictory: either you do care about the\nvalues then your business requirements dictate the aggregate function.\n If you only want to pick any value actually in the table but do not\ncare about which one (e.g. MIN or MAX or any other) then you don't\nactually care about the value. Because \"SELECT a, MAX(b) ... GROUP BY\na\" and \"SELECT a, MIN(b) ... GROUP BY a\" are not equivalent. And, if\nyou do not care then there is probably no point in selecting them at\nall. At best you could use a constant for any legal value then.\n\nCheers\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n", "msg_date": "Tue, 19 Apr 2011 11:07:38 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "On Tue, Apr 19, 2011 at 11:07 AM, Robert Klemme\n<[email protected]> wrote:\n> I find that slightly contradictory: either you do care about the\n> values then your business requirements dictate the aggregate function.\n>  If you only want to pick any value actually in the table but do not\n> care about which one (e.g. 
MIN or MAX or any other) then you don't\n> actually care about the value.  Because \"SELECT a, MAX(b) ... GROUP BY\n> a\" and \"SELECT a, MIN(b) ... GROUP BY a\" are not equivalent.  And, if\n> you do not care then there is probably no point in selecting them at\n> all.  At best you could use a constant for any legal value then.\n\nI know it sounds weird, but there are at times when you only want one\nof the actual values - but don't care which one precisely.\n\nIt happened to me at least once.\n\nSo, it may sound as nonsense, but it is probably not. Just uncommon.\n", "msg_date": "Tue, 19 Apr 2011 11:22:05 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "On Apr 18, 2011, at 1:13 PM, Uwe Bartels <[email protected]> wrote:\n> Hi Robert,\n> \n> thanks for your answer.\n> the aggregate function I was talking about is the function I need to use for the non-group by columns like min() in my example.\n> There are of course several function to choose from, and I wanted to know which causes as less as possible resources.\n\nOh, I see. min() is probably as good as anything. You could also create a custom aggregate that just always returns its first input. I've occasionally wished we had such a thing as a built-in.\n\nAnother option is to try to rewrite the query with a subselect so that you do the aggregation first and then add the extra columns by joining against the output of the aggregate. If this can be done without joining the same table twice, it's often much faster, but it isn't always possible. :-(\n\n...Robert", "msg_date": "Sat, 23 Apr 2011 15:34:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "On 23 April 2011 21:34, Robert Haas <[email protected]> wrote:\n\n> On Apr 18, 2011, at 1:13 PM, Uwe Bartels <[email protected]> wrote:\n> > Hi Robert,\n> >\n> > thanks for your answer.\n> > the aggregate function I was talking about is the function I need to use\n> for the non-group by columns like min() in my example.\n> > There are of course several function to choose from, and I wanted to know\n> which causes as less as possible resources.\n>\n> Oh, I see. min() is probably as good as anything. You could also create a\n> custom aggregate that just always returns its first input. I've occasionally\n> wished we had such a thing as a built-in.\n>\nyes. something like a first match without bothering about alle the rows\ncoming after - especially without sorting everything for throwing them away\nfinally. I'll definitely check this out.\n\n\n>\n> Another option is to try to rewrite the query with a subselect so that you\n> do the aggregation first and then add the extra columns by joining against\n> the output of the aggregate. If this can be done without joining the same\n> table twice, it's often much faster, but it isn't always possible. :-(\n>\nYes, abut I'm talking about big resultset on machines with already 140GB\nRAM. If I start joining these afterwards this gets too expensive. I tried it\nalready. But thanks anyway. 
Often small hint helps you a lot.\n\nBest Regards and happy Easter.\nUwe\n\n\n\n>\n> ...Robert\n\n\nOn 23 April 2011 21:34, Robert Haas <[email protected]> wrote:\nOn Apr 18, 2011, at 1:13 PM, Uwe Bartels <[email protected]> wrote:\n> Hi Robert,\n>\n> thanks for your answer.\n> the aggregate function I was talking about is the function I need to use for the non-group by columns like min() in my example.\n> There are of course several function to choose from, and I wanted to know which causes as less as possible resources.\n\nOh, I see. min() is probably as good as anything. You could also create a custom aggregate that just always returns its first input. I've occasionally wished we had such a thing as a built-in.\nyes. something like a first match without bothering about alle the rows coming after - especially without sorting everything for throwing them away finally. I'll definitely check this out. \n\nAnother option is to try to rewrite the query with a subselect so that you do the aggregation first and then add the extra columns by joining against the output of the aggregate. If this can be done without joining the same table twice, it's often much faster, but it isn't always possible.  :-(\nYes, abut I'm talking about big resultset on machines with already 140GB RAM. If I start joining these afterwards this gets too expensive. I tried it already. But thanks anyway. Often small hint helps you a lot.\nBest Regards and happy Easter.Uwe \n\n...Robert", "msg_date": "Sun, 24 Apr 2011 21:01:36 +0200", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "2011/4/23 Robert Haas <[email protected]>\n\n> On Apr 18, 2011, at 1:13 PM, Uwe Bartels <[email protected]> wrote:\n> > Hi Robert,\n> >\n> > thanks for your answer.\n> > the aggregate function I was talking about is the function I need to use\n> for the non-group by columns like min() in my example.\n> > There are of course several function to choose from, and I wanted to know\n> which causes as less as possible resources.\n>\n> Oh, I see. min() is probably as good as anything. You could also create a\n> custom aggregate that just always returns its first input. I've occasionally\n> wished we had such a thing as a built-in.\n>\n>\nI've once done \"single\" grouping function - it checks that all it's input\nvalues are equal (non-null ones) and returns the value or raises an error if\nthere are two different values.\n\nBest regards, Vitalii Tymchyshyn\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2011/4/23 Robert Haas <[email protected]>\nOn Apr 18, 2011, at 1:13 PM, Uwe Bartels <[email protected]> wrote:\n> Hi Robert,\n>\n> thanks for your answer.\n> the aggregate function I was talking about is the function I need to use for the non-group by columns like min() in my example.\n> There are of course several function to choose from, and I wanted to know which causes as less as possible resources.\n\nOh, I see. min() is probably as good as anything. You could also create a custom aggregate that just always returns its first input. I've occasionally wished we had such a thing as a built-in.\nI've once done \"single\" grouping function - it checks that all it's input values are equal (non-null ones) and returns the value or raises an error if there are two different values. 
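A rough sketch of the kind of 'single' aggregate described above; this is not the actual code, only an illustration of the idea, and the names single/single_agg are made up. The transition function adopts the first non-null value it sees and raises an error if a later row in the group carries a different non-null value:

CREATE OR REPLACE FUNCTION single_agg(anyelement, anyelement)
RETURNS anyelement LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    -- state still null: adopt the first value we see
    IF $1 IS NULL THEN
        RETURN $2;
    END IF;
    -- ignore nulls and repeats of the same value
    IF $2 IS NULL OR $1 = $2 THEN
        RETURN $1;
    END IF;
    RAISE EXCEPTION 'single(): group contains two different values (% and %)', $1, $2;
END;
$$;

CREATE AGGREGATE single(anyelement) (
    SFUNC = single_agg,
    STYPE = anyelement
);

Used as single(a_a2) instead of min(a_a2) in the GROUP BY variant, it returns the common value of the group, or fails loudly if the group is not uniform.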
\nBest regards, Vitalii Tymchyshyn -- Best regards, Vitalii Tymchyshyn", "msg_date": "Mon, 25 Apr 2011 17:22:23 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big distinct clause vs. group by" }, { "msg_contents": "Hi Vitalii,\n\nthis sounds promising, can you send me that?\n\nBest Regards,\nUwe\n\n\n2011/4/25 Віталій Тимчишин <[email protected]>\n\n>\n>\n> 2011/4/23 Robert Haas <[email protected]>\n>\n>> On Apr 18, 2011, at 1:13 PM, Uwe Bartels <[email protected]> wrote:\n>> > Hi Robert,\n>> >\n>> > thanks for your answer.\n>> > the aggregate function I was talking about is the function I need to use\n>> for the non-group by columns like min() in my example.\n>> > There are of course several function to choose from, and I wanted to\n>> know which causes as less as possible resources.\n>>\n>> Oh, I see. min() is probably as good as anything. You could also create a\n>> custom aggregate that just always returns its first input. I've occasionally\n>> wished we had such a thing as a built-in.\n>>\n>>\n> I've once done \"single\" grouping function - it checks that all it's input\n> values are equal (non-null ones) and returns the value or raises an error if\n> there are two different values.\n>\n> Best regards, Vitalii Tymchyshyn\n>\n>\n>\n> --\n> Best regards,\n> Vitalii Tymchyshyn\n>\n\nHi Vitalii,this sounds promising, can you send me that?Best Regards,Uwe\n2011/4/25 Віталій Тимчишин <[email protected]>\n2011/4/23 Robert Haas <[email protected]>\nOn Apr 18, 2011, at 1:13 PM, Uwe Bartels <[email protected]> wrote:\n> Hi Robert,\n>\n> thanks for your answer.\n> the aggregate function I was talking about is the function I need to use for the non-group by columns like min() in my example.\n> There are of course several function to choose from, and I wanted to know which causes as less as possible resources.\n\nOh, I see. min() is probably as good as anything. You could also create a custom aggregate that just always returns its first input. I've occasionally wished we had such a thing as a built-in.\nI've once done \"single\" grouping function - it checks that all it's input values are equal (non-null ones) and returns the value or raises an error if there are two different values. \nBest regards, Vitalii Tymchyshyn -- Best regards, Vitalii Tymchyshyn", "msg_date": "Mon, 25 Apr 2011 21:01:09 +0200", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big distinct clause vs. group by" } ]
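A minimal sketch of the custom 'return the first input' aggregate Robert Haas mentions above; PostgreSQL has no such built-in, so the names first_agg/first are illustrative only. The transition function is STRICT, so the state is initialised with the group's first non-null input and every later call simply keeps it:

CREATE OR REPLACE FUNCTION first_agg(anyelement, anyelement)
RETURNS anyelement LANGUAGE sql IMMUTABLE STRICT AS
$$ SELECT $1 $$;

CREATE AGGREGATE first(anyelement) (
    SFUNC = first_agg,
    STYPE = anyelement
);

Against the a/b test tables from the start of the thread, the GROUP BY variant would then look roughly like:

SELECT a_id, b_id,
       first(coalesce(a_a1, 0)), first(a_a2), first(a_a3),
       first(b_a1), first(b_a2), first(b_a3)
FROM a LEFT JOIN b ON a_id = b_id
GROUP BY a_id, b_id;

Whether this is actually cheaper than min() would have to be measured; it only saves the per-row comparison work, the hash aggregate itself stays the same.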
[ { "msg_contents": "Dear all,\n\nI am facing a problem while creating the index to make the below query \nrun faster. My table size is near about 1065 MB and 428467 rows.\n\nexplain analyze select count(*) from page_content where \npublishing_date like '%2010%' and content_language='en' and content is \nnot null and isprocessable = 1 and (content like '%Militant%'\nOR content like '%jihad%' OR content like '%Mujahid%' OR\n content like '%fedayeen%' OR content like '%insurgent%' OR content \nlike '%terrorist%' OR\n content like '%cadre%' OR content like '%civilians%' OR content like \n'%police%' OR content like '%defence%' OR content like '%cops%' OR \ncontent like '%crpf%' OR content like '%dsf%' OR content like '%ssb%') \nAND (content like '%kill%' or content like '%injure%');\n\n*Output:\n\n* Aggregate (cost=107557.78..107557.79 rows=1 width=0) (actual \ntime=18564.631..18564.631 rows=1 loops=1)\n -> Seq Scan on page_content (cost=0.00..107466.82 rows=36381 \nwidth=0) (actual time=0.146..18529.371 rows=59918 loops=1)\n Filter: ((content IS NOT NULL) AND (publishing_date ~~ \n'%2010%'::text) AND (content_language = 'en'::bpchar) AND (isprocessable \n= 1) AND (((content)\n::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND \n(((content)::text ~~ '%Militant%'::text) OR ((content)::text ~~ \n'%jihad%'::text) OR (\n(content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~ \n'%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR \n((content)::text ~~ '%terrori\nst%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text \n~~ '%civilians%'::text) OR ((content)::text ~~ '%police%'::text) OR \n((content)::text\n~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR \n((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~ \n'%dsf%'::text) OR ((content)::text\n ~~ '%ssb%'::text)))\n Total runtime: 18564.673 ms\n\n\n*Index on that Table :\n\n*CREATE INDEX idx_page_id\n ON page_content\n USING btree\n (crawled_page_id);\n\n*Index I create :*\nCREATE INDEX idx_page_id_content\n ON page_content\n USING btree\n (crawled_page_id,content_language,publishing_date,isprocessable);\n\n*Index that fail to create:\n\n*CREATE INDEX idx_page_id_content1\n ON page_content\n USING btree\n (crawled_page_id,content);\n\nError :-ERROR: index row requires 13240 bytes, maximum size is 8191\n********** Error **********\n\nERROR: index row requires 13240 bytes, maximum size is 8191\nSQL state: 54000\n\nHow to resolve this error\nPlease give any suggestion to tune the query.\n\nThanks & best Regards,\n\nAdarsh Sharma\n\n\n\n\n\n\nDear all,\n\nI am facing a problem while  creating the index to make the below query\nrun faster. 
My table  size is near about 1065 MB and 428467 rows.\n\nexplain analyze select  count(*)  from page_content where\npublishing_date like '%2010%' and content_language='en'  and content is\nnot null and isprocessable = 1 and (content like '%Militant%' \nOR content like '%jihad%' OR  content like '%Mujahid%'  OR \n content like '%fedayeen%' OR content like '%insurgent%'  OR content\nlike '%terrorist%' OR \n  content like '%cadre%'  OR content like '%civilians%' OR content like\n'%police%' OR content like '%defence%' OR content like '%cops%' OR\ncontent like '%crpf%' OR content like '%dsf%' OR content like '%ssb%')\nAND (content like '%kill%' or content like '%injure%');\n\nOutput:\n\n Aggregate  (cost=107557.78..107557.79 rows=1 width=0) (actual\ntime=18564.631..18564.631 rows=1 loops=1)\n   ->  Seq Scan on page_content  (cost=0.00..107466.82 rows=36381\nwidth=0) (actual time=0.146..18529.371 rows=59918 loops=1)\n         Filter: ((content IS NOT NULL) AND (publishing_date ~~\n'%2010%'::text) AND (content_language = 'en'::bpchar) AND\n(isprocessable = 1) AND (((content)\n::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND\n(((content)::text ~~ '%Militant%'::text) OR ((content)::text ~~\n'%jihad%'::text) OR (\n(content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~\n'%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR\n((content)::text ~~ '%terrori\nst%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text\n~~ '%civilians%'::text) OR ((content)::text ~~ '%police%'::text) OR\n((content)::text \n~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR\n((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~\n'%dsf%'::text) OR ((content)::text\n ~~ '%ssb%'::text)))\n Total runtime: 18564.673 ms\n\n\nIndex on that Table :\n\nCREATE INDEX idx_page_id\n  ON page_content\n  USING btree\n  (crawled_page_id);\n\nIndex I create :\nCREATE INDEX idx_page_id_content\n  ON page_content\n  USING btree\n  (crawled_page_id,content_language,publishing_date,isprocessable);\n\nIndex that fail to create:\n\nCREATE INDEX idx_page_id_content1\n  ON page_content\n  USING btree\n  (crawled_page_id,content);\n\nError :-ERROR:  index row requires 13240 bytes, maximum size is 8191\n********** Error **********\n\nERROR: index row requires 13240 bytes, maximum size is 8191\nSQL state: 54000\n\nHow to resolve this error \nPlease give any suggestion to tune the query.\n\nThanks & best Regards,\n\nAdarsh Sharma", "msg_date": "Wed, 16 Mar 2011 14:43:38 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Help with Query Tuning" }, { "msg_contents": "On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:\n> Dear all,\n>\n> I am facing a problem while creating the index to make the below query run \n> faster. 
My table size is near about 1065 MB and 428467 rows.\n>\n> explain analyze select count(*) from page_content where publishing_date \n> like '%2010%' and content_language='en' and content is not null and \n> isprocessable = 1 and (content like '%Militant%'\n> OR content like '%jihad%' OR content like '%Mujahid%' OR\n> content like '%fedayeen%' OR content like '%insurgent%' OR content like \n> '%terrorist%' OR\n> content like '%cadre%' OR content like '%civilians%' OR content like \n> '%police%' OR content like '%defence%' OR content like '%cops%' OR content \n> like '%crpf%' OR content like '%dsf%' OR content like '%ssb%') AND (content \n> like '%kill%' or content like '%injure%');\n>\n> *Output:\n>\n> * Aggregate (cost=107557.78..107557.79 rows=1 width=0) (actual \n> time=18564.631..18564.631 rows=1 loops=1)\n> -> Seq Scan on page_content (cost=0.00..107466.82 rows=36381 width=0) \n> (actual time=0.146..18529.371 rows=59918 loops=1)\n> Filter: ((content IS NOT NULL) AND (publishing_date ~~ \n> '%2010%'::text) AND (content_language = 'en'::bpchar) AND (isprocessable = \n> 1) AND (((content)\n> ::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND \n> (((content)::text ~~ '%Militant%'::text) OR ((content)::text ~~ \n> '%jihad%'::text) OR (\n> (content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~ \n> '%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR \n> ((content)::text ~~ '%terrori\n> st%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text ~~ \n> '%civilians%'::text) OR ((content)::text ~~ '%police%'::text) OR \n> ((content)::text\n> ~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR \n> ((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~ '%dsf%'::text) \n> OR ((content)::text\n> ~~ '%ssb%'::text)))\n> Total runtime: 18564.673 ms\n>\n>\n> *Index on that Table :\n>\n> *CREATE INDEX idx_page_id\n> ON page_content\n> USING btree\n> (crawled_page_id);\n>\n> *Index I create :*\n> CREATE INDEX idx_page_id_content\n> ON page_content\n> USING btree\n> (crawled_page_id,content_language,publishing_date,isprocessable);\n>\n> *Index that fail to create:\n>\n> *CREATE INDEX idx_page_id_content1\n> ON page_content\n> USING btree\n> (crawled_page_id,content);\n>\n> Error :-ERROR: index row requires 13240 bytes, maximum size is 8191\n> ********** Error **********\n>\n> ERROR: index row requires 13240 bytes, maximum size is 8191\n> SQL state: 54000\n>\n> How to resolve this error\n> Please give any suggestion to tune the query.\n>\n> Thanks & best Regards,\n>\n> Adarsh Sharma\n>\n\nYou should probably be looking at using full-text indexing:\n\nhttp://www.postgresql.org/docs/9.0/static/textsearch.html\n\nor limit the size of content for the index.\n\nCheers,\nKen\n", "msg_date": "Wed, 16 Mar 2011 11:36:53 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with Query Tuning" }, { "msg_contents": "On 03/16/2011 05:13 AM, Adarsh Sharma wrote:\n> Dear all,\n>\n> I am facing a problem while creating the index to make the below query run faster. 
My table size is near about 1065 MB and 428467 rows.\n>\n> explain analyze select count(*) from page_content where publishing_date like '%2010%' and content_language='en' and content is not\n> null and isprocessable = 1 and (content like '%Militant%'\n> OR content like '%jihad%' OR content like '%Mujahid%' OR\n> content like '%fedayeen%' OR content like '%insurgent%' OR content like '%terrorist%' OR\n> content like '%cadre%' OR content like '%civilians%' OR content like '%police%' OR content like '%defence%' OR content like '%cops%'\n> OR content like '%crpf%' OR content like '%dsf%' OR content like '%ssb%') AND (content like '%kill%' or content like '%injure%');\n>\n> *Output:\n>\n> * Aggregate (cost=107557.78..107557.79 rows=1 width=0) (actual time=18564.631..18564.631 rows=1 loops=1)\n> -> Seq Scan on page_content (cost=0.00..107466.82 rows=36381 width=0) (actual time=0.146..18529.371 rows=59918 loops=1)\n> Filter: ((content IS NOT NULL) AND (publishing_date ~~ '%2010%'::text) AND (content_language = 'en'::bpchar) AND (isprocessable = 1)\n> AND (((content)\n> ::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND (((content)::text ~~ '%Militant%'::text) OR ((content)::text\n> ~~ '%jihad%'::text) OR (\n> (content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~ '%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR\n> ((content)::text ~~ '%terrori\n> st%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text ~~ '%civilians%'::text) OR ((content)::text ~~\n> '%police%'::text) OR ((content)::text\n> ~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR ((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~\n> '%dsf%'::text) OR ((content)::text\n> ~~ '%ssb%'::text)))\n> Total runtime: 18564.673 ms\n>\n\nYou should read the documentation regarding indices and pattern matching as well as fts.\n\nhttp://www.postgresql.org/docs/8.3/static/indexes-types.html\n\nThe optimizer can also use a B-tree index for queries involving the pattern matching operators LIKE and ~ if the pattern is a \nconstant and is anchored to the beginning of the string � for example, col LIKE 'foo%' or col ~ '^foo', but not col LIKE '%bar'. \nHowever, if your server does not use the C locale you will need to create the index with a special operator class to support \nindexing of pattern-matching queries. See Section 11.9 below. It is also possible to use B-tree indexes for ILIKE and ~*, but only \nif the pattern starts with non-alphabetic characters, i.e. characters that are not affected by upper/lower case conversion.\n\nI believe that your query as written using '%pattern%' will always be forced to use sequential scans.\n", "msg_date": "Wed, 16 Mar 2011 13:24:30 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with Query Tuning" }, { "msg_contents": "Thanks Marshall, would I need to change the data type of *content \n*column to tsvector and create a Gist Index on it.\n\nBest Regards,\nAdarsh\n\n\nKenneth Marshall wrote:\n> On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:\n> \n>> Dear all,\n>>\n>> I am facing a problem while creating the index to make the below query run \n>> faster. 
My table size is near about 1065 MB and 428467 rows.\n>>\n>> explain analyze select count(*) from page_content where publishing_date \n>> like '%2010%' and content_language='en' and content is not null and \n>> isprocessable = 1 and (content like '%Militant%'\n>> OR content like '%jihad%' OR content like '%Mujahid%' OR\n>> content like '%fedayeen%' OR content like '%insurgent%' OR content like \n>> '%terrorist%' OR\n>> content like '%cadre%' OR content like '%civilians%' OR content like \n>> '%police%' OR content like '%defence%' OR content like '%cops%' OR content \n>> like '%crpf%' OR content like '%dsf%' OR content like '%ssb%') AND (content \n>> like '%kill%' or content like '%injure%');\n>>\n>> *Output:\n>>\n>> * Aggregate (cost=107557.78..107557.79 rows=1 width=0) (actual \n>> time=18564.631..18564.631 rows=1 loops=1)\n>> -> Seq Scan on page_content (cost=0.00..107466.82 rows=36381 width=0) \n>> (actual time=0.146..18529.371 rows=59918 loops=1)\n>> Filter: ((content IS NOT NULL) AND (publishing_date ~~ \n>> '%2010%'::text) AND (content_language = 'en'::bpchar) AND (isprocessable = \n>> 1) AND (((content)\n>> ::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND \n>> (((content)::text ~~ '%Militant%'::text) OR ((content)::text ~~ \n>> '%jihad%'::text) OR (\n>> (content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~ \n>> '%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR \n>> ((content)::text ~~ '%terrori\n>> st%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text ~~ \n>> '%civilians%'::text) OR ((content)::text ~~ '%police%'::text) OR \n>> ((content)::text\n>> ~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR \n>> ((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~ '%dsf%'::text) \n>> OR ((content)::text\n>> ~~ '%ssb%'::text)))\n>> Total runtime: 18564.673 ms\n>>\n>>\n>> *Index on that Table :\n>>\n>> *CREATE INDEX idx_page_id\n>> ON page_content\n>> USING btree\n>> (crawled_page_id);\n>>\n>> *Index I create :*\n>> CREATE INDEX idx_page_id_content\n>> ON page_content\n>> USING btree\n>> (crawled_page_id,content_language,publishing_date,isprocessable);\n>>\n>> *Index that fail to create:\n>>\n>> *CREATE INDEX idx_page_id_content1\n>> ON page_content\n>> USING btree\n>> (crawled_page_id,content);\n>>\n>> Error :-ERROR: index row requires 13240 bytes, maximum size is 8191\n>> ********** Error **********\n>>\n>> ERROR: index row requires 13240 bytes, maximum size is 8191\n>> SQL state: 54000\n>>\n>> How to resolve this error\n>> Please give any suggestion to tune the query.\n>>\n>> Thanks & best Regards,\n>>\n>> Adarsh Sharma\n>>\n>> \n>\n> You should probably be looking at using full-text indexing:\n>\n> http://www.postgresql.org/docs/9.0/static/textsearch.html\n>\n> or limit the size of content for the index.\n>\n> Cheers,\n> Ken\n> \n\n\n\n\n\n\n\nThanks Marshall, would I need to change the data type  of content column\nto tsvector and create a Gist Index on it.\n\nBest Regards,\nAdarsh\n\n\nKenneth Marshall wrote:\n\nOn Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:\n \n\nDear all,\n\nI am facing a problem while creating the index to make the below query run \nfaster. 
My table size is near about 1065 MB and 428467 rows.\n\nexplain analyze select count(*) from page_content where publishing_date \nlike '%2010%' and content_language='en' and content is not null and \nisprocessable = 1 and (content like '%Militant%'\nOR content like '%jihad%' OR content like '%Mujahid%' OR\ncontent like '%fedayeen%' OR content like '%insurgent%' OR content like \n'%terrorist%' OR\n content like '%cadre%' OR content like '%civilians%' OR content like \n'%police%' OR content like '%defence%' OR content like '%cops%' OR content \nlike '%crpf%' OR content like '%dsf%' OR content like '%ssb%') AND (content \nlike '%kill%' or content like '%injure%');\n\n*Output:\n\n* Aggregate (cost=107557.78..107557.79 rows=1 width=0) (actual \ntime=18564.631..18564.631 rows=1 loops=1)\n -> Seq Scan on page_content (cost=0.00..107466.82 rows=36381 width=0) \n(actual time=0.146..18529.371 rows=59918 loops=1)\n Filter: ((content IS NOT NULL) AND (publishing_date ~~ \n'%2010%'::text) AND (content_language = 'en'::bpchar) AND (isprocessable = \n1) AND (((content)\n::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND \n(((content)::text ~~ '%Militant%'::text) OR ((content)::text ~~ \n'%jihad%'::text) OR (\n(content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~ \n'%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR \n((content)::text ~~ '%terrori\nst%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text ~~ \n'%civilians%'::text) OR ((content)::text ~~ '%police%'::text) OR \n((content)::text\n~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR \n((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~ '%dsf%'::text) \nOR ((content)::text\n~~ '%ssb%'::text)))\nTotal runtime: 18564.673 ms\n\n\n*Index on that Table :\n\n*CREATE INDEX idx_page_id\n ON page_content\n USING btree\n (crawled_page_id);\n\n*Index I create :*\nCREATE INDEX idx_page_id_content\n ON page_content\n USING btree\n (crawled_page_id,content_language,publishing_date,isprocessable);\n\n*Index that fail to create:\n\n*CREATE INDEX idx_page_id_content1\n ON page_content\n USING btree\n (crawled_page_id,content);\n\nError :-ERROR: index row requires 13240 bytes, maximum size is 8191\n********** Error **********\n\nERROR: index row requires 13240 bytes, maximum size is 8191\nSQL state: 54000\n\nHow to resolve this error\nPlease give any suggestion to tune the query.\n\nThanks & best Regards,\n\nAdarsh Sharma\n\n \n\n\nYou should probably be looking at using full-text indexing:\n\nhttp://www.postgresql.org/docs/9.0/static/textsearch.html\n\nor limit the size of content for the index.\n\nCheers,\nKen", "msg_date": "Thu, 17 Mar 2011 10:18:36 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with Query Tuning" }, { "msg_contents": "Thanks, I understand it know :-\n\nBut My one doubt which isn't clear :\n\n*Original Query :-*\n\nselect count(*) from page_content where (content like '%Militant%'\nOR content like '%jihad%' OR content like '%Mujahid%' OR\n content like '%fedayeen%' OR content like '%insurgent%' OR content \nlike '%terrORist%' OR\n content like '%cadre%' OR content like '%civilians%' OR content like \n'%police%' OR content like '%defence%' OR content like '%cops%' OR \ncontent like '%crpf%' OR content like '%dsf%' OR content like '%ssb%') \nAND (content like '%kill%' OR content like '%injure%');\n\n*Output :-*\n count\n-------\n 57061\n(1 row)\n\nTime: 19726.555 ms\n\nI need to tune it , use full-text searching as 
:\n\n*Modified Query :-\n\n*SELECT count(*) from page_content\nWHERE publishing_date like '%2010%' and content_language='en' and \ncontent is not null and isprocessable = 1 and \nto_tsvectOR('english',content) @@ to_tsquery('english','Mujahid' || \n'jihad' || 'Militant' || 'fedayeen' || 'insurgent' || 'terrORist' || \n'cadre' || 'civilians' || 'police' || 'defence' || 'cops' || 'crpf' || \n'dsf' || 'ssb');\n\n*Output :-*\n count\n-------\n 0\n(1 row)\n\nTime: 194685.125 ms\n*\n*I try, SELECT count(*) from page_content\nWHERE publishing_date like '%2010%' and content_language='en' and \ncontent is not null and isprocessable = 1 and \nto_tsvectOR('english',content) @@ to_tsquery('english','%Mujahid%' || \n'%jihad%' || '%Militant%' || '%fedayeen%' || '%insurgent%' || \n'%terrORist%' || '%cadre%' || '%civilians%' || '%police%' || '%defence%' \n|| '%cops%' || '%crpf%' || '%dsf%' || '%ssb%');\n\n count\n-------\n 0\n(1 row)\n\nTime: 194722.468 ms\n\nI know I have to create index but index is the next step, first you have \nto get the correct result .\n\nCREATE INDEX pgweb_idx ON page_content USING gin(to_tsvector('english', \ncontent));\n\n\nPlease guide me where I am going wrong.\n\n\nThanks & best Regards,\n\nAdarsh Sharma\nKenneth Marshall wrote:\n> On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:\n> \n>> Dear all,\n>>\n>> I am facing a problem while creating the index to make the below query run \n>> faster. My table size is near about 1065 MB and 428467 rows.\n>>\n>> explain analyze select count(*) from page_content where publishing_date \n>> like '%2010%' and content_language='en' and content is not null and \n>> isprocessable = 1 and (content like '%Militant%'\n>> OR content like '%jihad%' OR content like '%Mujahid%' OR\n>> content like '%fedayeen%' OR content like '%insurgent%' OR content like \n>> '%terrorist%' OR\n>> content like '%cadre%' OR content like '%civilians%' OR content like \n>> '%police%' OR content like '%defence%' OR content like '%cops%' OR content \n>> like '%crpf%' OR content like '%dsf%' OR content like '%ssb%') AND (content \n>> like '%kill%' or content like '%injure%');\n>>\n>> *Output:\n>>\n>> * Aggregate (cost=107557.78..107557.79 rows=1 width=0) (actual \n>> time=18564.631..18564.631 rows=1 loops=1)\n>> -> Seq Scan on page_content (cost=0.00..107466.82 rows=36381 width=0) \n>> (actual time=0.146..18529.371 rows=59918 loops=1)\n>> Filter: ((content IS NOT NULL) AND (publishing_date ~~ \n>> '%2010%'::text) AND (content_language = 'en'::bpchar) AND (isprocessable = \n>> 1) AND (((content)\n>> ::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND \n>> (((content)::text ~~ '%Militant%'::text) OR ((content)::text ~~ \n>> '%jihad%'::text) OR (\n>> (content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~ \n>> '%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR \n>> ((content)::text ~~ '%terrori\n>> st%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text ~~ \n>> '%civilians%'::text) OR ((content)::text ~~ '%police%'::text) OR \n>> ((content)::text\n>> ~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR \n>> ((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~ '%dsf%'::text) \n>> OR ((content)::text\n>> ~~ '%ssb%'::text)))\n>> Total runtime: 18564.673 ms\n>>\n>>\n>> *Index on that Table :\n>>\n>> *CREATE INDEX idx_page_id\n>> ON page_content\n>> USING btree\n>> (crawled_page_id);\n>>\n>> *Index I create :*\n>> CREATE INDEX idx_page_id_content\n>> ON page_content\n>> USING btree\n>> 
(crawled_page_id,content_language,publishing_date,isprocessable);\n>>\n>> *Index that fail to create:\n>>\n>> *CREATE INDEX idx_page_id_content1\n>> ON page_content\n>> USING btree\n>> (crawled_page_id,content);\n>>\n>> Error :-ERROR: index row requires 13240 bytes, maximum size is 8191\n>> ********** Error **********\n>>\n>> ERROR: index row requires 13240 bytes, maximum size is 8191\n>> SQL state: 54000\n>>\n>> How to resolve this error\n>> Please give any suggestion to tune the query.\n>>\n>> Thanks & best Regards,\n>>\n>> Adarsh Sharma\n>>\n>> \n>\n> You should probably be looking at using full-text indexing:\n>\n> http://www.postgresql.org/docs/9.0/static/textsearch.html\n>\n> or limit the size of content for the index.\n>\n> Cheers,\n> Ken\n> \n\n\n\n\n\n\n\nThanks, I understand it know :-\n\nBut My one doubt which isn't clear  :\n\nOriginal Query :-\n\nselect  count(*)  from page_content where (content like '%Militant%' \nOR content like '%jihad%' OR  content like '%Mujahid%'  OR \n content like '%fedayeen%' OR content like '%insurgent%'  OR content\nlike '%terrORist%' OR \n  content like '%cadre%'  OR content like '%civilians%' OR content like\n'%police%' OR content like '%defence%' OR content like '%cops%' OR\ncontent like '%crpf%' OR content like '%dsf%' OR content like '%ssb%')\nAND (content like '%kill%' OR content like '%injure%');\n\nOutput :-\n count \n-------\n 57061\n(1 row)\n\nTime: 19726.555 ms\n\nI need to tune it , use full-text searching as :\n\nModified Query :-\n\nSELECT count(*)  from page_content \nWHERE publishing_date like '%2010%' and content_language='en' and\ncontent is not null and isprocessable = 1 and\nto_tsvectOR('english',content) @@ to_tsquery('english','Mujahid' ||\n'jihad' || 'Militant' || 'fedayeen' || 'insurgent' || 'terrORist' ||\n'cadre' || 'civilians' || 'police' || 'defence' || 'cops' || 'crpf' ||\n'dsf' || 'ssb');\n\nOutput :-\n count \n-------\n     0\n(1 row)\n\nTime: 194685.125 ms\n\nI try, SELECT count(*)  from page_content \nWHERE publishing_date like '%2010%' and content_language='en' and\ncontent is not null and isprocessable = 1 and\nto_tsvectOR('english',content) @@ to_tsquery('english','%Mujahid%' ||\n'%jihad%' || '%Militant%' || '%fedayeen%' || '%insurgent%' ||\n'%terrORist%' || '%cadre%' || '%civilians%' || '%police%' ||\n'%defence%' || '%cops%' || '%crpf%' || '%dsf%' || '%ssb%');\n\n count \n-------\n     0\n(1 row)\n\nTime: 194722.468 ms\n\nI know I have to create index but index is the next step, first you\nhave to get the correct result .\n\nCREATE INDEX pgweb_idx ON page_content USING gin(to_tsvector('english',\ncontent));\n\n\nPlease guide me where I am going wrong.\n\n\nThanks & best Regards,\n\nAdarsh Sharma\nKenneth Marshall wrote:\n\nOn Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:\n \n\nDear all,\n\nI am facing a problem while creating the index to make the below query run \nfaster. 
My table size is near about 1065 MB and 428467 rows.\n\nexplain analyze select count(*) from page_content where publishing_date \nlike '%2010%' and content_language='en' and content is not null and \nisprocessable = 1 and (content like '%Militant%'\nOR content like '%jihad%' OR content like '%Mujahid%' OR\ncontent like '%fedayeen%' OR content like '%insurgent%' OR content like \n'%terrorist%' OR\n content like '%cadre%' OR content like '%civilians%' OR content like \n'%police%' OR content like '%defence%' OR content like '%cops%' OR content \nlike '%crpf%' OR content like '%dsf%' OR content like '%ssb%') AND (content \nlike '%kill%' or content like '%injure%');\n\n*Output:\n\n* Aggregate (cost=107557.78..107557.79 rows=1 width=0) (actual \ntime=18564.631..18564.631 rows=1 loops=1)\n -> Seq Scan on page_content (cost=0.00..107466.82 rows=36381 width=0) \n(actual time=0.146..18529.371 rows=59918 loops=1)\n Filter: ((content IS NOT NULL) AND (publishing_date ~~ \n'%2010%'::text) AND (content_language = 'en'::bpchar) AND (isprocessable = \n1) AND (((content)\n::text ~~ '%kill%'::text) OR ((content)::text ~~ '%injure%'::text)) AND \n(((content)::text ~~ '%Militant%'::text) OR ((content)::text ~~ \n'%jihad%'::text) OR (\n(content)::text ~~ '%Mujahid%'::text) OR ((content)::text ~~ \n'%fedayeen%'::text) OR ((content)::text ~~ '%insurgent%'::text) OR \n((content)::text ~~ '%terrori\nst%'::text) OR ((content)::text ~~ '%cadre%'::text) OR ((content)::text ~~ \n'%civilians%'::text) OR ((content)::text ~~ '%police%'::text) OR \n((content)::text\n~~ '%defence%'::text) OR ((content)::text ~~ '%cops%'::text) OR \n((content)::text ~~ '%crpf%'::text) OR ((content)::text ~~ '%dsf%'::text) \nOR ((content)::text\n~~ '%ssb%'::text)))\nTotal runtime: 18564.673 ms\n\n\n*Index on that Table :\n\n*CREATE INDEX idx_page_id\n ON page_content\n USING btree\n (crawled_page_id);\n\n*Index I create :*\nCREATE INDEX idx_page_id_content\n ON page_content\n USING btree\n (crawled_page_id,content_language,publishing_date,isprocessable);\n\n*Index that fail to create:\n\n*CREATE INDEX idx_page_id_content1\n ON page_content\n USING btree\n (crawled_page_id,content);\n\nError :-ERROR: index row requires 13240 bytes, maximum size is 8191\n********** Error **********\n\nERROR: index row requires 13240 bytes, maximum size is 8191\nSQL state: 54000\n\nHow to resolve this error\nPlease give any suggestion to tune the query.\n\nThanks & best Regards,\n\nAdarsh Sharma\n\n \n\n\nYou should probably be looking at using full-text indexing:\n\nhttp://www.postgresql.org/docs/9.0/static/textsearch.html\n\nor limit the size of content for the index.\n\nCheers,\nKen", "msg_date": "Thu, 17 Mar 2011 11:55:21 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with Query Tuning" }, { "msg_contents": "> *Modified Query :-\n>\n> *SELECT count(*) from page_content\n> WHERE publishing_date like '%2010%' and content_language='en' and\n> content is not null and isprocessable = 1 and\n> to_tsvectOR('english',content) @@ to_tsquery('english','Mujahid' ||\n> 'jihad' || 'Militant' || 'fedayeen' || 'insurgent' || 'terrORist' ||\n> 'cadre' || 'civilians' || 'police' || 'defence' || 'cops' || 'crpf' ||\n> 'dsf' || 'ssb');\n\nI guess there should be spaces between the words. This way it's just one\nvery long word 'MujahidjihadMilitantfedayeen....' 
and I doubt that's what\nyou're looking for.\n\nregards\nTomas\n\n", "msg_date": "Thu, 17 Mar 2011 10:34:54 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Help with Query Tuning" }, { "msg_contents": "Thanks , it works now .. :-)\n\nHere is the output :\n\npdc_uima=# SELECT count(*) from page_content WHERE publishing_date like \n'%2010%' and\npdc_uima-# content_language='en' and content is not null and \nisprocessable = 1 and\npdc_uima-# to_tsvector('english',content) @@ \nto_tsquery('english','Mujahid' || ' | '\npdc_uima(# || 'jihad' || ' | ' || 'Militant' || ' | ' || 'fedayeen' || ' | '\npdc_uima(# || 'insurgent' || ' | ' || 'terrORist' || ' | ' || 'cadre' || \n' | '\npdc_uima(# || 'civilians' || ' | ' || 'police' || ' | ' || 'cops' || \n'crpf' || ' | '\npdc_uima(# || 'defence' || ' | ' || 'dsf' || ' | ' || 'ssb' );\n\n count \n--------\n 137193\n(1 row)\n\nTime: 195441.894 ms\n\n\nBut my original query is to use AND also i.e\n\nselect count(*) from page_content where publishing_date like '%2010%' \nand content_language='en' and content is not null and isprocessable = 1 \nand (content like '%Militant%'\nOR content like '%jihad%' OR content like '%Mujahid%' OR\n content like '%fedayeen%' OR content like '%insurgent%' OR content \nlike '%terrORist%' OR\n content like '%cadre%' OR content like '%civilians%' OR content like \n'%police%' OR content like '%defence%' OR content like '%cops%' OR \ncontent like '%crpf%' OR content like '%dsf%' OR content like '%ssb%') \nAND (content like '%kill%' OR content like '%injure%');\n\n count\n-------\n 57061\n(1 row)\n\nTime: 19423.087 ms\n\n\nNow I have to add AND condition ( AND (content like '%kill%' OR content \nlike '%injure%') ) also.\n\n\nThanks & Regards,\nAdarsh Sharma\n\n\n\[email protected] wrote:\n>> [email protected] wrote:\n>> \n>>>> Yes , I think we caught the problem but it results in the below error :\n>>>>\n>>>> SELECT count(*) from page_content\n>>>> WHERE publishing_date like '%2010%' and content_language='en' and\n>>>> content is not null and isprocessable = 1 and\n>>>> to_tsvector('english',content) @@ to_tsquery('english','Mujahid ' ||\n>>>> 'jihad ' || 'Militant ' || 'fedayeen ' || 'insurgent ' || 'terrORist '\n>>>> || 'cadre ' || 'civilians ' || 'police ' || 'defence ' || 'cops ' ||\n>>>> 'crpf ' || 'dsf ' || 'ssb');\n>>>>\n>>>> ERROR: syntax error in tsquery: \"Mujahid jihad Militant fedayeen\n>>>> insurgent terrORist cadre civilians police defence cops crpf dsf ssb\"\n>>>>\n>>>> \n>>> The text passed to to_tsquery has to be a proper query, i.e. single\n>>> tokens\n>>> separated by boolean operators. In your case, you should put there '|'\n>>> (which means OR) to get something like this\n>>>\n>>> 'Mujahid | jihad | Militant | ...'\n>>>\n>>> or you can use plainto_tsquery() as that accepts simple text, but it\n>>> puts\n>>> '&' (AND) between the tokens and I guess that's not what you want.\n>>>\n>>> Tomas\n>>>\n>>>\n>>> \n>> What to do to make it satisfies the OR condition to match any of the\n>> to_tsquery values as we got it right through like '%Mujahid' or .....\n>> or ....\n>> \n>\n> You can't force the plainto_tsquery to somehow use the OR instead of AND.\n> You need to modify the piece of code that produces the search text to put\n> there '|' characters. 
So do something like this\n>\n> SELECT count(*) from page_content WHERE publishing_date like '%2010%' and\n> content_language='en' and content is not null and isprocessable = 1 and\n> to_tsvector('english',content) @@ to_tsquery('english','Mujahid' || ' | '\n> || 'jihad' || ' | ' || 'Militant' || ' | ' || 'fedayeen);\n>\n> Not sure where does this text come from, but you can do this in a higher\n> level language, e.g. in PHP. Something like this\n>\n> $words = implode(' | ', explode(' ',$text));\n>\n> and then pass the $words into the query. Or something like that.\n>\n> Tomas\n>
", "msg_date": "Fri, 18 Mar 2011 09:47:38 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with Query Tuning" }, { "msg_contents": "\nOn 03/18/2011 12:17 AM, Adarsh Sharma wrote:\n> Thanks , it works now ..:-)\n>\n> Here is the output :\n>\n> pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like '%2010%' and\n> pdc_uima-# content_language='en' and content is not null and isprocessable = 1 and\n> pdc_uima-# to_tsvector('english',content) @@ to_tsquery('english','Mujahid' || ' | '\n> pdc_uima(# || 'jihad' || ' | ' || 'Militant' || ' | ' || 'fedayeen' || ' | '\n> pdc_uima(# || 'insurgent' || ' | ' || 'terrORist' || ' | ' || 'cadre' || ' | '\n> pdc_uima(# || 'civilians' || ' | ' || 'police' || ' | ' || 'cops' || 'crpf' || ' | '\n> pdc_uima(# || 'defence' || ' | ' || 'dsf' || ' | ' || 'ssb' );\n>\n> count\n> --------\n> 137193\n> (1 row)\n>\n> Time: 195441.894 ms\n\nwhat is the type/content for column publishing_date?\nbased on what you show above, I assume it's text? -- if so, whats the format of the date string?\n\n", "msg_date": "Fri, 18 Mar 2011 10:05:33 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with Query Tuning" }, { "msg_contents": "> Thanks , it works now .. 
:-)\n>\n> Here is the output :\n>\n> pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like\n> '%2010%' and\n> pdc_uima-# content_language='en' and content is not null and\n> isprocessable = 1 and\n> pdc_uima-# to_tsvector('english',content) @@\n> to_tsquery('english','Mujahid' || ' | '\n> pdc_uima(# || 'jihad' || ' | ' || 'Militant' || ' | ' || 'fedayeen' || ' |\n> '\n> pdc_uima(# || 'insurgent' || ' | ' || 'terrORist' || ' | ' || 'cadre' ||\n> ' | '\n> pdc_uima(# || 'civilians' || ' | ' || 'police' || ' | ' || 'cops' ||\n> 'crpf' || ' | '\n> pdc_uima(# || 'defence' || ' | ' || 'dsf' || ' | ' || 'ssb' );\n>\n> count\n> --------\n> 137193\n> (1 row)\n>\n> Time: 195441.894 ms\n>\n>\n> But my original query is to use AND also i.e\n\nHi, just replace \"AND\" and \"OR\" (used with LIKE operator) for \"&\" and \"|\"\n(used with to_tsquery).\n\nSo this\n\n(content like '%Militant%' OR content like '%jihad%') AND (content like\n'%kill%' OR content like '%injure%')\n\nbecomes\n\nto_tsvector('english',content) @@ to_tsquery('english', '(Militant |\njihad) & (kill | injure)')\n\nBTW it seems you somehow believe you'll get exactly the same result from\nthose two queries (LIKE vs. tsearch) - that's false expectation. I believe\nthe fulltext query is much better and more appropriate in this case, just\ndon't expect the same results.\n\nregards\nTomas\n\n", "msg_date": "Fri, 18 Mar 2011 16:30:19 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Help with Query Tuning" } ]
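A minimal sketch that pulls the advice in this thread together — the table, columns and search terms are the ones from the messages above; it is illustrative only, not a drop-in replacement:

  CREATE INDEX pgweb_idx ON page_content
      USING gin (to_tsvector('english', content));

  SELECT count(*)
  FROM page_content
  WHERE to_tsvector('english', content) @@ to_tsquery('english',
        '(Militant | jihad | Mujahid | fedayeen | insurgent | terrorist | cadre | civilians | police | defence | cops | crpf | dsf | ssb) & (kill | injure)');

As Tomas notes, full-text search stems and tokenizes the text, so the count from this form is not expected to match the LIKE version exactly.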
[ { "msg_contents": "Hey!\nI'm having some trouble optimizing a query that uses a custom operator class.\n#Postgres has given me a solution for natural sort -\nhttp://www.rhodiumtoad.org.uk/junk/naturalsort.sql\n\nI'm trying to run it over a huge table - when running it on demand,\nthe data needs to be dumped to memory and sorted.\n\nSort (cost=31299.83..31668.83 rows=369 width=31)\n Sort Key: name\n -> Seq Scan on solutions_textbookpage (cost=0.00..25006.55\nrows=369 width=31)\n Filter: (active AND (textbook_id = 263))\n\nThat's obviously too slow. I've created an index using the custom\noperator class, so I don't have to do the sort every time I try to\nsort.\n\n Index Scan Backward using natural_page_name_textbook on\nsolutions_textbookpage (cost=0.00..650.56 rows=371 width=31) (actual\ntime=0.061..0.962 rows=369 loops=1)\n Index Cond: (textbook_id = 263)\n Filter: active\n\nObviously a little faster!\n\n\nThe problem I'm having is that because operator classes have a low\ncost estimation pg missestimates and tries to do the sort on demand\nrather than using the index.\n\nI can get pg to use the index by either jacking up cpu_operator_cost\nor lowering random_page_cost. Is this the best way to do that, or is\nthere a smarter way to ensure that pg uses this index when I need it.\n", "msg_date": "Wed, 16 Mar 2011 09:10:02 -0500", "msg_from": "Ben Beecher <[email protected]>", "msg_from_op": true, "msg_subject": "Custom operator class costs" }, { "msg_contents": "On Wed, Mar 16, 2011 at 10:10 AM, Ben Beecher <[email protected]> wrote:\n> Hey!\n> I'm having some trouble optimizing a query that uses a custom operator class.\n> #Postgres has given me a solution for natural sort -\n> http://www.rhodiumtoad.org.uk/junk/naturalsort.sql\n>\n> I'm trying to run it over a huge table - when running it on demand,\n> the data needs to be dumped to memory and sorted.\n>\n> Sort  (cost=31299.83..31668.83 rows=369 width=31)\n>  Sort Key: name\n>  ->  Seq Scan on solutions_textbookpage  (cost=0.00..25006.55\n> rows=369 width=31)\n>        Filter: (active AND (textbook_id = 263))\n>\n> That's obviously too slow. I've created an index using the custom\n> operator class, so I don't have to do the sort every time I try to\n> sort.\n>\n>  Index Scan Backward using natural_page_name_textbook on\n> solutions_textbookpage  (cost=0.00..650.56 rows=371 width=31) (actual\n> time=0.061..0.962 rows=369 loops=1)\n>   Index Cond: (textbook_id = 263)\n>   Filter: active\n>\n> Obviously a little faster!\n\nNot totally obvious, since the sort output doesn't show how long it\nactually took.\n\n> The problem I'm having is that because operator classes have a low\n> cost estimation pg missestimates and tries to do the sort on demand\n> rather than using the index.\n>\n> I can get pg to use the index by either jacking up cpu_operator_cost\n> or lowering random_page_cost. Is this the best way to do that, or is\n> there a smarter way to ensure that pg uses this index when I need it.\n\nIt's pretty often necessary to lower random_page_cost, and sometimes\nseq_page_cost, too. If, for example, the database is fully cached,\nyou might try 0.1/0.1 rather than the default 4/1. 
Raising the cpu_*\ncosts is equivalent, but I think it's easier to keep in your head if\nyou think about 1 as the nominal cost of reading a page sequentially\nfrom disk, and then lower the value you actually assign to reflect the\nfact that you'll normally be reading from the OS cache or perhaps even\nhitting shared_buffers.\n\nYou might also need to tune effective_cache_size.\n\nIs your operator class function unusually expensive? Are you having\ntrouble with PG not using other indexes it should be picking up, or\njust your custom one?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 18 Apr 2011 12:24:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom operator class costs" } ]
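A sketch of the cost-settings change Robert describes, for a data set that is effectively fully cached; the 0.1 figures are his example rather than universal values, and the database name is a placeholder:

  -- try per-session first, then re-run EXPLAIN ANALYZE on the sorted query
  SET random_page_cost = 0.1;
  SET seq_page_cost = 0.1;

  -- persist once the index scan is being chosen reliably
  ALTER DATABASE my_db SET random_page_cost = 0.1;
  ALTER DATABASE my_db SET seq_page_cost = 0.1;

effective_cache_size should also reflect the RAM actually available for caching, as mentioned above.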
[ { "msg_contents": "Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n \n>> We've seen a lot of those lately -- Index Scan Backward\n>> performing far worse than alternatives.\n> \n> It's not clear to me that that has anything to do with Tim's\n> problem. It certainly wouldn't be 20000x faster if it were a\n> forward scan.\n \nWell, that's one way of looking at it. Another would be that the\nslower plan with the backward scan was only estimated to be 14.5%\nless expensive than the fast plan, so a pretty moderate modifier\nwould have avoided this particular problem. The fact that the\nbackward scan mis-estimate may be combining multiplicatively with\nother mis-estimates doesn't make it less important.\n \n-Kevin\n\n", "msg_date": "Wed, 16 Mar 2011 12:44:31 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Adding additional index causes 20,000x\n\tslowdown for certain select queries - postgres 9.0.3" }, { "msg_contents": "On 03/16/2011 12:44 PM, Kevin Grittner wrote:\n\n> Well, that's one way of looking at it. Another would be that the\n> slower plan with the backward scan was only estimated to be 14.5%\n> less expensive than the fast plan, so a pretty moderate modifier\n> would have avoided this particular problem.\n\nI was wondering about that myself. Considering any backwards scan would \nnecessarily be 10-100x slower than a forward scan unless the data was on \nan SSD, I assumed the planner was already using a multiplier to \ndiscourage its use.\n\nIf not, it seems like a valid configurable. We set our random_page_cost \nto 1.5 once the DB was backed by NVRAM. I could see that somehow \ninfluencing precedence of a backwards index scan. But even then, SSDs \nand their ilk react more like RAM than even a large RAID... so should \nthere be a setting that passes such useful info to the planner?\n\nMaybe a good attribute to associate with the tablespace, if nothing else.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Wed, 16 Mar 2011 13:34:52 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Adding additional index causes 20,000x slowdown\n\tfor certain select queries - postgres 9.0.3" }, { "msg_contents": "On Wed, Mar 16, 2011 at 3:34 PM, Shaun Thomas <[email protected]> wrote:\n> If not, it seems like a valid configurable. We set our random_page_cost to\n> 1.5 once the DB was backed by NVRAM. I could see that somehow influencing\n> precedence of a backwards index scan. But even then, SSDs and their ilk\n> react more like RAM than even a large RAID... so should there be a setting\n> that passes such useful info to the planner?\n\nForgive the naive question...\nbut...\n\nAren't all index scans, forward or backward, random IO?\n", "msg_date": "Wed, 16 Mar 2011 15:39:26 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Adding additional index causes 20,000x slowdown for\n\tcertain select queries - postgres 9.0.3" }, { "msg_contents": "Claudio Freire <[email protected]> wrote:\n \n> Forgive the naive question...\n> but...\n> \n> Aren't all index scans, forward or backward, random IO?\n \nNo. 
Some could approach that; but, for example, an index scan\nimmediately following a CLUSTER on the index would be totally\nsequential on the heap file access and would tend to be fairly close\nto sequential on the index itself. It would certainly trigger OS\nlevel read-ahead for the heap, and quite possibly for the index. So\nfor a lot of pages, the difference might be between copying a page\nfrom the OS cache to the database cache versus a random disk seek.\n \nTo a lesser degree than CLUSTER you could get some degree of\nsequencing from a bulk load or even from normal data insert\npatterns. Consider a primary key which is sequentially assigned, or\na timestamp column, or receipt numbers, etc.\n \nAs Tom points out, some usage patterns may scramble this natural\norder pretty quickly. Some won't.\n \n-Kevin\n", "msg_date": "Wed, 16 Mar 2011 14:42:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Adding additional index causes 20,000x\n\tslowdown for certain select queries - postgres 9.0.3" } ]
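A rough way to check how close a given table is to the clustered case Kevin describes — table, index and column names here are placeholders, not from the thread:

  SELECT attname, correlation
  FROM pg_stats
  WHERE tablename = 'my_table' AND attname = 'my_column';

  CLUSTER my_table USING my_column_idx;
  ANALYZE my_table;

A correlation close to 1 (or -1) means index order and heap order largely agree, so an index scan touches the heap nearly sequentially; values near 0 mean it degenerates toward the random I/O being discussed.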
[ { "msg_contents": "Greetings.\n\nI recently ran into a problem with a planner opting for a sequential scan\nrather than a bitmap heap scan because the stats suggested that my delete\nquery was going to affect 33% of the rows, rather than the 1% it really\nwas. I was able to follow the planner's logic and came to the realization\nthat it was a result of the histogram_bounds for that column being out of\ndate.\n\nThe table is regularly purged of some of it's oldest data, and new data is\nconstantly added. It seems to me that PostgreSQL *should* be able to\nidentify a query which is going to delete all rows within a histogram\nbucket, and could possibly react by updating the histogram_bounds at\ncommit-time, rather than needing an additional analyze or needing\nauto-analyze settings jacked way up.\n\nAlternatively, it might be nice to be able to manually describe the table\n(I've been following the \"no hints\" discussion) by providing information\nalong the lines of \"always assume that column event_date is uniformly\ndistributed\". This would be provided as schema information, not additional\nSQL syntax for hints.\n\nIs this something that is remotely feasible, has the suggestion been made\nbefore, or am I asking for something where a solution already exists?\n\nThanks,\n\nDerrick\n\nGreetings.I recently ran into a problem with a planner opting for a sequential scan rather than a bitmap heap scan because the stats suggested that my delete query was going to affect 33% of the rows, rather than the 1% it really was.  I was able to follow the planner's logic and came to the realization that it was a result of the histogram_bounds for that column being out of date.\nThe table is regularly purged of some of it's oldest data, and new data is constantly added.  It seems to me that PostgreSQL *should* be able to identify a query which is going to delete all rows within a histogram bucket, and could possibly react by updating the histogram_bounds at commit-time, rather than needing an additional analyze or needing auto-analyze settings jacked way up.\nAlternatively, it might be nice to be able to manually describe the table (I've been following the \"no hints\" discussion) by providing information along the lines of \"always assume that column event_date is uniformly distributed\".  This would be provided as schema information, not additional SQL syntax for hints.\nIs this something that is remotely feasible, has the suggestion been made before, or am I asking for something where a solution already exists?Thanks,Derrick", "msg_date": "Wed, 16 Mar 2011 15:40:55 -0400", "msg_from": "Derrick Rice <[email protected]>", "msg_from_op": true, "msg_subject": "Updating histogram_bounds after a delete" }, { "msg_contents": "Oh, I'm using 8.2\n\nOn Wed, Mar 16, 2011 at 3:40 PM, Derrick Rice <[email protected]>wrote:\n\n> Greetings.\n>\n> I recently ran into a problem with a planner opting for a sequential scan\n> rather than a bitmap heap scan because the stats suggested that my delete\n> query was going to affect 33% of the rows, rather than the 1% it really\n> was. I was able to follow the planner's logic and came to the realization\n> that it was a result of the histogram_bounds for that column being out of\n> date.\n>\n> The table is regularly purged of some of it's oldest data, and new data is\n> constantly added. 
It seems to me that PostgreSQL *should* be able to\n> identify a query which is going to delete all rows within a histogram\n> bucket, and could possibly react by updating the histogram_bounds at\n> commit-time, rather than needing an additional analyze or needing\n> auto-analyze settings jacked way up.\n>\n> Alternatively, it might be nice to be able to manually describe the table\n> (I've been following the \"no hints\" discussion) by providing information\n> along the lines of \"always assume that column event_date is uniformly\n> distributed\". This would be provided as schema information, not additional\n> SQL syntax for hints.\n>\n> Is this something that is remotely feasible, has the suggestion been made\n> before, or am I asking for something where a solution already exists?\n>\n> Thanks,\n>\n> Derrick\n>\n\nOh, I'm using 8.2On Wed, Mar 16, 2011 at 3:40 PM, Derrick Rice <[email protected]> wrote:\nGreetings.I recently ran into a problem with a planner opting for a sequential scan rather than a bitmap heap scan because the stats suggested that my delete query was going to affect 33% of the rows, rather than the 1% it really was.  I was able to follow the planner's logic and came to the realization that it was a result of the histogram_bounds for that column being out of date.\nThe table is regularly purged of some of it's oldest data, and new data is constantly added.  It seems to me that PostgreSQL *should* be able to identify a query which is going to delete all rows within a histogram bucket, and could possibly react by updating the histogram_bounds at commit-time, rather than needing an additional analyze or needing auto-analyze settings jacked way up.\nAlternatively, it might be nice to be able to manually describe the table (I've been following the \"no hints\" discussion) by providing information along the lines of \"always assume that column event_date is uniformly distributed\".  This would be provided as schema information, not additional SQL syntax for hints.\nIs this something that is remotely feasible, has the suggestion been made before, or am I asking for something where a solution already exists?Thanks,Derrick", "msg_date": "Wed, 16 Mar 2011 15:41:55 -0400", "msg_from": "Derrick Rice <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updating histogram_bounds after a delete" }, { "msg_contents": "Derrick Rice <[email protected]> wrote:\n \n> I recently ran into a problem with a planner opting for a\n> sequential scan rather than a bitmap heap scan because the stats\n> suggested that my delete query was going to affect 33% of the\n> rows, rather than the 1% it really was.\n \n> could possibly react by updating the histogram_bounds at\n> commit-time, rather than needing an additional analyze or needing\n> auto-analyze settings jacked way up.\n \nI recommend you try version 9.0 with default autovacuum settings and\nsee how things go. If you still have an issue, let's talk then. 
\nBesides numerous autovacuum improvements, which make it more\nreliable and less likely to noticeably affect runtime of your\nqueries, there is a feature to probe the end of an index's range in\nsituations where data skew was often causing less than optimal plans\nto be chosen.\n \n>From what you've told us, I suspect you won't see this problem in\n9.0 unless you shoot yourself in the foot by crippling autovacuum.\n \n-Kevin\n", "msg_date": "Wed, 16 Mar 2011 16:56:11 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating histogram_bounds after a delete" }, { "msg_contents": "On Wed, Mar 16, 2011 at 5:56 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> there is a feature to probe the end of an index's range in\n> situations where data skew was often causing less than optimal plans\n> to be chosen.\n>\n\nWas this introduced in 9.0 or was it earlier?  My company hasn't introduced\nintegrated support for 9.0 yet, but I can go to 8.4.\n\nIt was suggested that I change my SQL from:\n\ndelete from my_table where event_date < now() - interval '12 hours';\n\nto:\n\ndelete from my_table where event_date < now() - interval '12 hours'\nand event_date >= (select min(event_date) from my_table);\n\nWhich, even if the stats are out of date, will be more accurate as it will\nnot consider the histogram buckets that are empty due to previous deletes.\nSeems like exactly what the feature you mentioned would do, no?\n\nThanks for the help,\n\nDerrick", "msg_date": "Thu, 17 Mar 2011 09:27:41 -0400", "msg_from": "Derrick Rice <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updating histogram_bounds after a delete" }, { "msg_contents": "Derrick Rice <[email protected]> wrote:\n> Kevin Grittner <[email protected] wrote:\n> \n>> there is a feature to probe the end of an index's range in\n>> situations where data skew was often causing less than optimal\n>> plans to be chosen.\n> \n> Was this introduced in 9.0 or was it earlier?\n \nI don't remember when it was added.  I took a stab at searching for\nit, but didn't get it figured out; if nobody who knows off-hand\njumps in, I'll try again when I have more time.\n \n> It was suggested that I change my SQL from:\n> \n> delete from my_table where event_date < now() - interval '12\n> hours';\n> \n> to:\n> \n> delete from my_table where event_date < now() - interval '12\n> hours' and event_date >= (select min(event_date) from my_table);\n \nThat seems like a reasonable workaround.\n \n> Seems like exactly what the feature you mentioned would do, no?\n \nI know it helps with inserts off the end of the range; I'm less\ncertain about deletes. 
I *think* that's covered, but I'd have to\ndig into the code or do some testing to confirm.\n \n-Kevin\n", "msg_date": "Thu, 17 Mar 2011 09:49:45 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating histogram_bounds after a delete" }, { "msg_contents": "On Thu, Mar 17, 2011 at 09:49:45AM -0500, Kevin Grittner wrote:\n> Derrick Rice <[email protected]> wrote:\n> > Kevin Grittner <[email protected] wrote:\n> > \n> >> there is a feature to probe the end of an index's range in\n> >> situations where data skew was often causing less than optimal\n> >> plans to be chosen.\n> > \n> > Was this introduced in 9.0 or was it earlier?\n> \n> I don't remember when it was added. I took a stab at searching for\n> it, but didn't get it figured out; if nobody who knows off-hand\n> jumps in, I'll try again when I have more time.\n> \n\nI think this is it:\n\nhttp://archives.postgresql.org/pgsql-committers/2010-01/msg00021.php\n\nRegards,\nKen\n", "msg_date": "Thu, 17 Mar 2011 09:55:29 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating histogram_bounds after a delete" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Derrick Rice <[email protected]> wrote:\n>> Kevin Grittner <[email protected] wrote:\n>>> there is a feature to probe the end of an index's range in\n>>> situations where data skew was often causing less than optimal\n>>> plans to be chosen.\n\n>> Was this introduced in 9.0 or was it earlier?\n \n> I don't remember when it was added. I took a stab at searching for\n> it, but didn't get it figured out; if nobody who knows off-hand\n> jumps in, I'll try again when I have more time.\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL9_0_BR [40608e7f9] 2010-01-04 02:44:40 +0000\n\n When estimating the selectivity of an inequality \"column > constant\" or\n \"column < constant\", and the comparison value is in the first or last\n histogram bin or outside the histogram entirely, try to fetch the actual\n column min or max value using an index scan (if there is an index on the\n column). If successful, replace the lower or upper histogram bound with\n that value before carrying on with the estimate. This limits the\n estimation error caused by moving min/max values when the comparison\n value is close to the min or max. Per a complaint from Josh Berkus.\n \n It is tempting to consider using this mechanism for mergejoinscansel as well,\n but that would inject index fetches into main-line join estimation not just\n endpoint cases. I'm refraining from that until we can get a better handle\n on the costs of doing this type of lookup.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Mar 2011 11:00:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating histogram_bounds after a delete " }, { "msg_contents": "Kenneth Marshall <[email protected]> wrote:\n \n> I think this is it:\n> \n>\nhttp://archives.postgresql.org/pgsql-committers/2010-01/msg00021.php\n \nLooks like it. Based on the commit date, that would be a 9.0\nchange. Based on the description, I'm not sure it fixes Derrick's\nproblem; the workaround of explicitly using min() for the low end of\na range may need to be a long-term approach.\n \nIt does seem odd, though, that the statistics would be off by that\nmuch. Unless the query is run immediately after a mass delete,\nautovacuum should be fixing that. 
Perhaps the autovacuum\nimprovements in later releases will solve the problem. If not, an\nexplicit ANALYZE (or perhaps better, VACUUM ANALYZE) immediately\nafter a mass delete would be wise.\n \n-Kevin\n", "msg_date": "Thu, 17 Mar 2011 10:05:02 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating histogram_bounds after a delete" } ]
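The two workarounds discussed in this thread, shown together as a sketch (my_table and event_date are the names used above):

  -- bound the range explicitly so empty, stale histogram buckets
  -- below the real minimum are not counted
  DELETE FROM my_table
  WHERE event_date < now() - interval '12 hours'
    AND event_date >= (SELECT min(event_date) FROM my_table);

  -- and refresh statistics right after any mass delete
  VACUUM ANALYZE my_table;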
[ { "msg_contents": "Hi,\n\nI've been looking around for information on doing a pg_restore as fast as\npossible. It is for a backup machine so I am not interested in anything like\ncrash recovery or anything else that would impact speed of load. I just want\nto go from no database to database there as fast as possible. The server is\nfor postgresql only and this is the only database, sp both system at\npostgres can be set however is required for the fast load.\n\nCurrently I am using a twin processor box with 2GB of memory and raid 5\ndisk.\n\nI start postgres before my load with these settings, which have been\nsuggested.\n\n\nshared_buffers = 496MB\nmaintenance_work_mem = 160MB\ncheckpoint_segments = 30\nautovacuum = false\nfull_page_writes=false\n\nmaintenance_work_mem and checkpoint_segments were advised to be increased,\nwhich I have done, but these are just guess values as I couldn't see any\nadvise for values, other than \"bigger\".\n\n\nI restore like this;\n\npg_restore -Fc -j 4 -i -O -d my_db my_db_dump.tbz\n\n\nEven as this, it is still slower than I would like.\n\nCan someone suggest some optimal settings (for postgresql 9) that will get\nthis as quick as it can be?\n\nThanks.\n\nHi,\nI've been looking around for information on doing a pg_restore as fast as possible. It is for a backup machine so I am not interested in anything like crash recovery or anything else that would impact speed of load. I just want to go from no database to database there as fast as possible. The server is for postgresql only and this is the only database, sp both system at postgres can be set however is required for the fast load.\nCurrently I am using a twin processor box with 2GB of memory and raid 5 disk.I start postgres before my load with these settings, which have been suggested.\nshared_buffers = 496MBmaintenance_work_mem = 160MBcheckpoint_segments = 30autovacuum = falsefull_page_writes=falsemaintenance_work_mem and checkpoint_segments were advised to be increased, which I have done, but these are just guess values as I couldn't see any advise for values, other than \"bigger\".\nI restore like this;pg_restore -Fc -j 4 -i -O -d my_db my_db_dump.tbzEven as this, it is still slower than I would like.\nCan someone suggest some optimal settings (for postgresql 9) that will get this as quick as it can be?Thanks.", "msg_date": "Thu, 17 Mar 2011 14:25:16 +0000", "msg_from": "Michael Andreasen <[email protected]>", "msg_from_op": true, "msg_subject": "Fastest pq_restore?" }, { "msg_contents": "On 03/17/2011 09:25 AM, Michael Andreasen wrote:\n> Hi,\n>\n> I've been looking around for information on doing a pg_restore as fast as possible. It is for a backup machine so I am not interested in anything like crash recovery or anything else that would impact speed of load. I just want to go from no database to database there as fast as possible. 
The server is for postgresql only and this is the only database, sp both system at postgres can be set however is required for the fast load.\n>\n> Currently I am using a twin processor box with 2GB of memory and raid 5 disk.\n>\n> I start postgres before my load with these settings, which have been suggested.\n>\n>\n> shared_buffers = 496MB\n> maintenance_work_mem = 160MB\n> checkpoint_segments = 30\n> autovacuum = false\n> full_page_writes=false\n>\n> maintenance_work_mem and checkpoint_segments were advised to be increased, which I have done, but these are just guess values as I couldn't see any advise for values, other than \"bigger\".\n>\n>\n> I restore like this;\n>\n> pg_restore -Fc -j 4 -i -O -d my_db my_db_dump.tbz\n>\n>\n> Even as this, it is still slower than I would like.\n>\n> Can someone suggest some optimal settings (for postgresql 9) that will get this as quick as it can be?\n>\n> Thanks.\n>\n>\n>\n>\n>\n\nautovacuum = off\nfsync = off\nsynchronous_commit = off\nfull_page_writes = off\nbgwriter_lru_maxpages = 0\n\n\n\n-Andy\n", "msg_date": "Thu, 17 Mar 2011 20:08:05 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest pq_restore?" }, { "msg_contents": "Andy Colson <[email protected]> wrote:\n> On 03/17/2011 09:25 AM, Michael Andreasen wrote:\n \n>> I've been looking around for information on doing a pg_restore as\n>> fast as possible.\n\n>> I am using a twin processor box with 2GB of memory\n \n>> shared_buffers = 496MB\n \nProbably about right.\n \n>> maintenance_work_mem = 160MB\n \nYou might get a benefit from a bit more there; hard to say what's\nbest with so little RAM.\n \n>> checkpoint_segments = 30\n \nThis one is hard to call without testing. Oddly, some machines do\nbetter with the default of 3. Nobody knows why.\n \n>> autovacuum = false\n>> full_page_writes=false\n \nGood.\n \n> fsync = off\n> synchronous_commit = off\n \nAbsolutely.\n \n> bgwriter_lru_maxpages = 0\n \nI hadn't thought much about that last one -- do you have benchmarks\nto confirm that it helped with a bulk load?\n \nYou might want to set max_connections to something lower to free up\nmore RAM for caching, especially considering that you have so little\nRAM.\n \n-Kevin\n", "msg_date": "Fri, 18 Mar 2011 09:38:05 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest pq_restore?" }, { "msg_contents": "On 3/18/2011 9:38 AM, Kevin Grittner wrote:\n> Andy Colson<[email protected]> wrote:\n>> On 03/17/2011 09:25 AM, Michael Andreasen wrote:\n>\n>>> I've been looking around for information on doing a pg_restore as\n>>> fast as possible.\n>\n>> bgwriter_lru_maxpages = 0\n>\n> I hadn't thought much about that last one -- do you have benchmarks\n> to confirm that it helped with a bulk load?\n>\n\nNope, I got it from the \"running with scissors\" thread (I think), (maybe \nfrom Greg Smith)\n\n\nor here:\n\nhttp://rhaas.blogspot.com/2010/06/postgresql-as-in-memory-only-database_24.html\n\nI dont recall exactly. I saw it, add added a comment to my .conf just \nincase I ever needed it.\n\n-Andy\n", "msg_date": "Fri, 18 Mar 2011 14:10:05 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest pq_restore?" 
}, { "msg_contents": "On Thu, Mar 17, 2011 at 7:25 AM, Michael Andreasen <[email protected]> wrote:\n> Currently I am using a twin processor box with 2GB of memory and raid 5\n> disk.\n> I start postgres before my load with these settings, which have been\n> suggested.\n>\n> I restore like this;\n> pg_restore -Fc -j 4 -i -O -d my_db my_db_dump.tbz\n>\n\nJust throwing this out there, but you have 4 parallel jobs running the\nrestore (-j 4), with two processors? They are multi-core? You might be\nseeing some contention there if they aren't.\n", "msg_date": "Sat, 19 Mar 2011 09:58:32 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest pq_restore?" } ]
[ { "msg_contents": "hey folks,\n\nRunning into some odd performance issues between a few of our db \nboxes. While trying to speed up a query I ran it on another box and \nit was twice as fast. The plans are identical and various portions of \nthe query run in the same amount of time - it all boils down to most \nof the time being spent in a join filter. The plan is as good as it \nis going to get but the thing that is concerning me, which hopefully \nsome folks here may have some insight on, is the very large difference \nin runtime.\n\nthree boxes:\n\tA: Intel(R) Xeon(R) CPU E5345 @ 2.33GHz (Runs query \nfastest)\n\t\t4MB cache\n\tB: Quad-Core AMD Opteron(tm) Processor 2352 (2.1GHZ) (Main production \nbox, currently, middle speed)\n\t\t512k cache\n\tC: Quad-Core AMD Opteron(tm) Processor 2378 (2.4GHZ)\n\t\t512k cache\n\nA & B are running PG 8.4.2 (yes, I know it desperately need to be \nupgraded). C was also on 8.4.2 and since it was not in production I \nupgraded it to 8.4.7 and got the same performance as 8.4.2. Dataset \non A & B is the same C is mostly the same, but is missing a couple \nweeks of data (but since this query runs over 3 years of data, it is \nnegligable - plus C runs the slowest!)\n\nAll three running FC10 with kernel Linux db06 \n2.6.27.19-170.2.35.fc10.x86_64 #1 SMP Mon Feb 23 13:00:23 EST 2009 \nx86_64 x86_64 x86_64 GNU/Linux\n\nLoad is very low on each box. The query is running from shared_buffers \n- no real IO is occuring.\n\nThe average timing for the query in question is 90ms on A, 180ms on B \nand 190ms on C.\n\nNow here's where some odd stuff starts piling up: explain analyze \noverhead on said queries:\n20ms on A, 50ms on B and 85ms on C(!!)\n\nWe had one thought about potential NUMA issues, but doing a series \n(100) of connect, query, disconnect and looking at the timings reveals \nthem all to be solid... but even still we wouldn't expect it to be \nthat awful. The smaller cache of the opterons is also a valid argument.\n\nI know we're running an old kernel, I'm tempted to upgrade to see what \nwill happen, but at the same time I'm afraid it'll upgrade to a kernel \nwith a broken [insert major subsystem here] which has happened before.\n\nAnybody have some insight into this or run into this before?\n\nbtw, little more background on the query:\n\n -> Nested Loop (cost=5.87..2763.69 rows=9943 width=0) (actual \ntime=0.571..2\n74.750 rows=766 loops=1)\n Join Filter: (ce.eventdate >= (md.date - '6 days'::interval))\n -> Nested Loop (cost=5.87..1717.98 rows=27 width=8) \n(actual time=0.53\n3..8.301 rows=159 loops=1)\n\t\t[stuff removed here]\n -> Index Scan using xxxxxxx_date_idx on xxxxxx md\n(cost=0.00..19.50 rows=1099 width=8) (actual time=0.023..0.729 \nrows=951 loops=15\n9)\n Index Cond: (ce.eventdate <= md.date)\n\n\nOn all three boxes that inner nestloop completes in about the same \namount of time - it is that join filter that is causing the pain and \nagony. (If you are noticing the timing differences, that is because \nthe numbers above are the actual numbers, not explain analyze). The \nquery is pulling up a rolling window of events that occured on a \nspecific date. This query pulls up al the data for a period of time. \nce.eventdate is indexed, and is used in the outer nestloop. Thinking \nmore about what is going on cache thrashing is certainly a possibility.\n\nthe amazing explain analyze overhead is also very curious - we all \nknow it adds overhead, but 85ms? 
Yow.\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Thu, 17 Mar 2011 11:13:58 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Xeon twice the performance of opteron" }, { "msg_contents": "On Thu, Mar 17, 2011 at 10:13 AM, Jeff <[email protected]> wrote:\n> hey folks,\n>\n> Running into some odd performance issues between a few of our db boxes.\n\nWe've noticed similar results both in OLTP and data warehousing conditions here.\n\nOpteron machines just seem to lag behind *especially* in data\nwarehousing. Smaller\ncache for sorting/etc... is what I'd always chalked it up to, but I'm\nopen to other theories\nif they exist.\n", "msg_date": "Thu, 17 Mar 2011 11:42:09 -0500", "msg_from": "J Sisson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xeon twice the performance of opteron" }, { "msg_contents": "On Thu, Mar 17, 2011 at 1:42 PM, J Sisson <[email protected]> wrote:\n> On Thu, Mar 17, 2011 at 10:13 AM, Jeff <[email protected]> wrote:\n>> hey folks,\n>>\n>> Running into some odd performance issues between a few of our db boxes.\n>\n> We've noticed similar results both in OLTP and data warehousing conditions here.\n>\n> Opteron machines just seem to lag behind *especially* in data\n> warehousing.  Smaller\n> cache for sorting/etc... is what I'd always chalked it up to, but I'm\n> open to other theories\n> if they exist.\n\nIt's my theory as well - you know, this could be solved by JITting\ncomplex expressions.\n\nBad cache behavior in application often comes as a side-effect of\ninterpreted execution (in this case, of expressions, conditions,\nfunctions). A JIT usually solves this cache inefficiency.\n\nI know, adding any kind of JIT to pg could be a major task.\n", "msg_date": "Thu, 17 Mar 2011 13:51:37 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xeon twice the performance of opteron" }, { "msg_contents": "On 3/17/11 9:42 AM, J Sisson wrote:\n> On Thu, Mar 17, 2011 at 10:13 AM, Jeff<[email protected]> wrote:\n>> hey folks,\n>>\n>> Running into some odd performance issues between a few of our db boxes.\n> We've noticed similar results both in OLTP and data warehousing conditions here.\n>\n> Opteron machines just seem to lag behind *especially* in data\n> warehousing. Smaller\n> cache for sorting/etc... is what I'd always chalked it up to, but I'm\n> open to other theories\n> if they exist.\nWe had a similar result with a different CPU-intensive open-source package, and discovered that if we compiled it on the Opteron it ran almost twice as fast as binaries compiled on Intel hardware. We thought we could compile once, run everywhere, but it's not true. It must have been some specific optimization difference between Intel and AMD that the gcc compiler knows about. I don't know if that's the case here, but it's a thought.\n\nCraig\n", "msg_date": "Thu, 17 Mar 2011 09:54:17 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xeon twice the performance of opteron" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Jeff\n> Sent: Thursday, March 17, 2011 9:14 AM\n> To: [email protected]\n> Cc: Brian Ristuccia\n> Subject: [PERFORM] Xeon twice the performance of opteron\n> \n> hey folks,\n> \n> Running into some odd performance issues between a few of our db\n> boxes. 
While trying to speed up a query I ran it on another box and\n> it was twice as fast. The plans are identical and various portions of\n> the query run in the same amount of time - it all boils down to most\n> of the time being spent in a join filter. The plan is as good as it\n> is going to get but the thing that is concerning me, which hopefully\n> some folks here may have some insight on, is the very large difference\n> in runtime.\n> \n> three boxes:\n> \tA: Intel(R) Xeon(R) CPU E5345 @ 2.33GHz (Runs query\n> fastest)\n> \t\t4MB cache\n> \tB: Quad-Core AMD Opteron(tm) Processor 2352 (2.1GHZ) (Main\n> production\n> box, currently, middle speed)\n> \t\t512k cache\n> \tC: Quad-Core AMD Opteron(tm) Processor 2378 (2.4GHZ)\n> \t\t512k cache\n> \n> A & B are running PG 8.4.2 (yes, I know it desperately need to be\n> upgraded). C was also on 8.4.2 and since it was not in production I\n> upgraded it to 8.4.7 and got the same performance as 8.4.2. Dataset\n> on A & B is the same C is mostly the same, but is missing a couple\n> weeks of data (but since this query runs over 3 years of data, it is\n> negligable - plus C runs the slowest!)\n> \n> All three running FC10 with kernel Linux db06\n> 2.6.27.19-170.2.35.fc10.x86_64 #1 SMP Mon Feb 23 13:00:23 EST 2009\n> x86_64 x86_64 x86_64 GNU/Linux\n> \n> Load is very low on each box. The query is running from shared_buffers\n> - no real IO is occuring.\n> \n> The average timing for the query in question is 90ms on A, 180ms on B\n> and 190ms on C.\n> \n> Now here's where some odd stuff starts piling up: explain analyze\n> overhead on said queries:\n> 20ms on A, 50ms on B and 85ms on C(!!)\n> \n> We had one thought about potential NUMA issues, but doing a series\n> (100) of connect, query, disconnect and looking at the timings reveals\n> them all to be solid... but even still we wouldn't expect it to be\n> that awful. The smaller cache of the opterons is also a valid\n> argument.\n> \n> I know we're running an old kernel, I'm tempted to upgrade to see what\n> will happen, but at the same time I'm afraid it'll upgrade to a kernel\n> with a broken [insert major subsystem here] which has happened before.\n> \n> Anybody have some insight into this or run into this before?\n> \n> btw, little more background on the query:\n> \n> -> Nested Loop (cost=5.87..2763.69 rows=9943 width=0) (actual\n> time=0.571..2\n> 74.750 rows=766 loops=1)\n> Join Filter: (ce.eventdate >= (md.date - '6 days'::interval))\n> -> Nested Loop (cost=5.87..1717.98 rows=27 width=8)\n> (actual time=0.53\n> 3..8.301 rows=159 loops=1)\n> \t\t[stuff removed here]\n> -> Index Scan using xxxxxxx_date_idx on xxxxxx md\n> (cost=0.00..19.50 rows=1099 width=8) (actual time=0.023..0.729\n> rows=951 loops=15\n> 9)\n> Index Cond: (ce.eventdate <= md.date)\n> \n> \n> On all three boxes that inner nestloop completes in about the same\n> amount of time - it is that join filter that is causing the pain and\n> agony. (If you are noticing the timing differences, that is because\n> the numbers above are the actual numbers, not explain analyze). The\n> query is pulling up a rolling window of events that occured on a\n> specific date. This query pulls up al the data for a period of time.\n> ce.eventdate is indexed, and is used in the outer nestloop. Thinking\n> more about what is going on cache thrashing is certainly a possibility.\n> \n> the amazing explain analyze overhead is also very curious - we all\n> know it adds overhead, but 85ms? 
Yow.\n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.stuarthamm.net/\n> http://www.dellsmartexitin.com/\n\nI am sure you might have already checked for this, but just incase...\nDid you verify that no power savings stuff is turned on in the BIOS or at\nthe kernel ?\n\nI have to set ours to something HP calls static high performance or\nsomething like that if I want boxes that are normally pretty idle to execute\nin a predictable fashion for sub second queries. \n\nI assume you checked with a steam benchmark results on the AMD machines to\nmake sure they are getting in the ballpark of where they are supposed to ? \n\n\n\n\n\n\n", "msg_date": "Thu, 17 Mar 2011 19:24:09 -0600", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xeon twice the performance of opteron" }, { "msg_contents": "On Thu, Mar 17, 2011 at 9:13 AM, Jeff <[email protected]> wrote:\n> hey folks,\n>\n> Running into some odd performance issues between a few of our db boxes.\n>  While trying to speed up a query I ran it on another box and it was twice\n> as fast.  The plans are identical and various portions of the query run in\n> the same amount of time - it all boils down to most of the time being spent\n> in a join filter.  The plan is as good as it is going to get but the thing\n> that is concerning me, which hopefully some folks here may have some insight\n> on, is the very large difference in runtime.\n\nMy experience puts the 23xx series opterons in a same general\nneighborhood as the E5300 and a little behind the E5400 series Xeons.\nOTOH, the newer Magny Cours Opterons stomp both of those into the\nground.\n\nDo any of those machines have zone.reclaim.mode = 1 ???\n\ni.e.:\n\nsysctl -a|grep zone.reclaim\nvm.zone_reclaim_mode = 0\n\nI had a machine that had just high enough interzone communications\ncost to get it turned on by default and it slowed it right to a crawl\nunder pgsql.\n", "msg_date": "Thu, 17 Mar 2011 19:39:11 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xeon twice the performance of opteron" }, { "msg_contents": "\nOn Mar 17, 2011, at 9:39 PM, Scott Marlowe wrote:\n\n>\n> My experience puts the 23xx series opterons in a same general\n> neighborhood as the E5300 and a little behind the E5400 series Xeons.\n> OTOH, the newer Magny Cours Opterons stomp both of those into the\n> ground.\n>\n> Do any of those machines have zone.reclaim.mode = 1 ???\n>\n> i.e.:\n>\n> sysctl -a|grep zone.reclaim\n> vm.zone_reclaim_mode = 0\n>\n> I had a machine that had just high enough interzone communications\n> cost to get it turned on by default and it slowed it right to a crawl\n> under pgsql.\n\n\nIt is set to zero on this machine.\n\nI've tried PG compiled on the box itself, same result.\n\nAs for power savings, according to cpuinfo all the cores are running \nat 2.1ghz\n\nWe had another machine which typically runs as a web server running on \nan AMD Opteron(tm) Processor 6128\nwhich after diddling the speed governor to performance (thus bumping \ncpu speed to 2ghz from 800mhz) query speed increased to 100ms, still \nnot as fast as the xeon, but close enough.\n\nI think I'm just hitting some wall of the architecture. I tried \ngetting some oprofile love from it but oprofile seems to not work on \nthat box. 
however it worked on the xeon box:\n33995 9.6859 postgres j2date\n21925 6.2469 postgres ExecMakeFunctionResultNoSets\n20500 5.8409 postgres slot_deform_tuple\n17623 5.0212 postgres BitmapHeapNext\n13059 3.7208 postgres dt2time\n12271 3.4963 postgres slot_getattr\n11509\n\naside from j2date (probably coming up due to that Join filter I'd \nwager) nothing unexpected.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Fri, 18 Mar 2011 08:14:57 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Xeon twice the performance of opteron" }, { "msg_contents": "On 03/17/2011 11:13 AM, Jeff wrote:\n> three boxes:\n> A: Intel(R) Xeon(R) CPU E5345 @ 2.33GHz (Runs query \n> fastest)\n> 4MB cache\n> B: Quad-Core AMD Opteron(tm) Processor 2352 (2.1GHZ) (Main \n> production box, currently, middle speed)\n> 512k cache\n> C: Quad-Core AMD Opteron(tm) Processor 2378 (2.4GHZ)\n> 512k cache\n\nIt's possible that transfer speed between the CPU and memory are very \ndifferent between these systems when running a single-core operation. \nIntel often has an advantage there; I don't have any figures on this \ngeneration of processors to know for sure though. If you can get some \nidle time to run my stream-scaling tool from \nhttps://github.com/gregs1104/stream-scaling that might give you some \ninsight.\n\n> Now here's where some odd stuff starts piling up: explain analyze \n> overhead on said queries:\n> 20ms on A, 50ms on B and 85ms on C(!!)\n\nI found an example in my book where EXPLAIN ANALYZE took a trivial \nCOUNT(*) query from 8ms to 70ms. It's really not cheap for some sorts \nof things.\n\n> I know we're running an old kernel, I'm tempted to upgrade to see what \n> will happen, but at the same time I'm afraid it'll upgrade to a kernel \n> with a broken [insert major subsystem here] which has happened before.\n\nRunning a production server on Fedora Core is a scary operation pretty \nmuch all the time. That said, I wouldn't consider 2.6.27 to be an old \nkernel--not when RHEL5 is still using 2.6.18. The kernel version you \nget for FC10 is probably quite behind on updates, though, relative to a \nkernel.org one that has kept getting bug fixes.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 28 Mar 2011 02:17:45 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xeon twice the performance of opteron" } ]
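A quick checklist form of the diagnostics suggested in this thread; the paths and tools are typical for Linux systems of that era and may need adjusting for the distribution in use:

  # NUMA reclaim mode -- a value of 1 can cripple PostgreSQL, per Scott
  sysctl vm.zone_reclaim_mode

  # CPU frequency governor -- ondemand/powersave can mask the real clock speed
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # e.g. with cpufrequtils (one core shown; repeat per core)
  cpufreq-set -c 0 -g performance

  # compare memory bandwidth between the boxes, per Greg
  git clone https://github.com/gregs1104/stream-scaling

Single-threaded sorting and join filtering are sensitive to memory latency and cache size, so differences there can easily outweigh raw clock speed.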
[ { "msg_contents": "Hello,\n\nAt MusicBrainz we're looking to get a new database server, and are\nhoping to buy this in the next couple of days. I'm mostly a software\nguy, but I'm posting this on behalf of Rob, who's actually going to be\nbuying the hardware. Here's a quote of what we're looking to get:\n\n I'm working to spec out a bad-ass 1U database server with loads of\n cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0\n configuration:\n\n 1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS\n drive bays 2\n 2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W\n Six-Core Server Processor 2\n 2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,\n CT3KIT51272BV1339 1\n 1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])\n or\n 1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)\n or\n 1 * Adaptec RAID 3405 controller ($354)\n 4 * Fujitsu MBA3147RC 147GB 15000 RPM\n\n SuperMicro machines have treated us really well over time (better\n than Dell or Sun boxes), so I am really happy to throw more money in\n their direction. Redundant power supplies seem like a good idea for\n a database server.\n\n For $400 more we can get hexa core processors as opposed to quad\n core processors at 2.66Ghz. This seems like a really good deal --\n any thoughts on this?\n\n Crucial memory has also served us really well, so that is a\n no-brainer.\n\n The RAID controller cards are where I need to most feedback! Of the\n LSI, Highpoint or Adaptec cards, which one is likely to have native\n linux support that does not require custom drivers to be installed?\n The LSI card has great specs at a great price point with Linux\n support, but installing the custom driver sounds like a pain. Does\n anyone have any experience with these cards?\n\n We've opted to not go for SSD drives in the server just yet -- it\n doesn't seem clear how well SSDs do in a driver environment.\n\n That's it -- anyone have any feedback?\n\nJust a quick bit more information. Our database is certainly weighted\ntowards being read heavy, rather than write heavy (with a read-only web\nservice accounting for ~90% of our traffic). Our tables vary in size,\nwith the upperbound being around 10mil rows.\n\nI'm not sure exactly what more to say - but any feedback is definitely\nappreciated. We're hoping to purchase this server on Monday, I\nbelieve. Any questions, ask away!\n\nThanks,\n- Ollie\n\n[1]: http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sas/entry_line/megaraid_sas_9240-4i/index.html\n", "msg_date": "Fri, 18 Mar 2011 00:51:52 +0000", "msg_from": "Oliver Charles <[email protected]>", "msg_from_op": true, "msg_subject": "Request for feedback on hardware for a new database server" }, { "msg_contents": "\nOn Mar 17, 2011, at 5:51 PM, Oliver Charles wrote:\n\n> Hello,\n> \n> At MusicBrainz we're looking to get a new database server, and are\n> hoping to buy this in the next couple of days. I'm mostly a software\n> guy, but I'm posting this on behalf of Rob, who's actually going to be\n> buying the hardware. 
Here's a quote of what we're looking to get:\n> \n> I'm working to spec out a bad-ass 1U database server with loads of\n> cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0\n> configuration:\n> \n> 1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS\n> drive bays 2\n> 2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W\n> Six-Core Server Processor 2\n> 2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,\n> CT3KIT51272BV1339 1\n> 1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])\n> or\n> 1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)\n> or\n> 1 * Adaptec RAID 3405 controller ($354)\n> 4 * Fujitsu MBA3147RC 147GB 15000 RPM\n\n> \n> That's it -- anyone have any feedback?\n\n\nI'm no expert, but...\n\nThat's very few drives. Even if you turn them into a single array\n(rather than separating out a raid pair for OS and a raid pair\nfor WAL and raid 10 array for data) that's going to give you\nvery little IO bandwidth, especially for typical random\naccess work.\n\nUnless your entire database active set fits in RAM I'd expect your\ncores to sit idle waiting on disk IO much of the time.\n\nDon't forget that you need a BBU for whichever RAID controller\nyou need, or it won't be able to safely do writeback caching, and\nyou'll lose a lot of the benefit.\n\nCheers,\n Steve\n\n", "msg_date": "Thu, 17 Mar 2011 19:05:46 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database server" }, { "msg_contents": "On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles\n<[email protected]> wrote:\n> Hello,\n>\n> At MusicBrainz we're looking to get a new database server, and are\n> hoping to buy this in the next couple of days. I'm mostly a software\n> guy, but I'm posting this on behalf of Rob, who's actually going to be\n> buying the hardware. Here's a quote of what we're looking to get:\n>\n>    I'm working to spec out a bad-ass 1U database server with loads of\n>    cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0\n>    configuration:\n>\n>    1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS\n>    drive bays 2\n>    2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W\n>    Six-Core Server Processor 2\n>    2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,\n>    CT3KIT51272BV1339 1\n>    1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])\n>    or\n>    1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)\n>    or\n>    1 * Adaptec RAID 3405 controller ($354)\n>    4 * Fujitsu MBA3147RC 147GB 15000 RPM\n>\n>    SuperMicro machines have treated us really well over time (better\n>    than Dell or Sun boxes), so I am really happy to throw more money in\n>    their direction.  Redundant power supplies seem like a good idea for\n>    a database server.\n>\n>    For $400 more we can get hexa core processors as opposed to quad\n>    core processors at 2.66Ghz. This seems like a really good deal --\n>    any thoughts on this?\n>\n>    Crucial memory has also served us really well, so that is a\n>    no-brainer.\n>\n>    The RAID controller cards are where I need to most feedback! Of the\n>    LSI, Highpoint or Adaptec cards, which one is likely to have native\n>    linux support that does not require custom drivers to be installed?\n>    The LSI card has great specs at a great price point with Linux\n>    support, but installing the custom driver sounds like a pain. 
Does\n>    anyone have any experience with these cards?\n>\n>    We've opted to not go for SSD drives in the server just yet -- it\n>    doesn't seem clear how well SSDs do in a driver environment.\n>\n>    That's it -- anyone have any feedback?\n>\n> Just a quick bit more information. Our database is certainly weighted\n> towards being read heavy, rather than write heavy (with a read-only web\n> service accounting for ~90% of our traffic). Our tables vary in size,\n> with the upperbound being around 10mil rows.\n>\n> I'm not sure exactly what more to say - but any feedback is definitely\n> appreciated. We're hoping to purchase this server on Monday, I\n> believe. Any questions, ask away!\n\nI order my boxes from a white box builder called Aberdeen. They'll\ntest whatever hardware you want with whatever OS you want to make sure\nit works before sending it out. As far as I know the LSI card should\njust work with linux, if not, the previous rev should work fine (the\nLSI 8888). I prefer Areca RAID 1680/1880 cards, they run cooler and\nfaster than the LSIs.\n\nAnother point. My experience with 1U chassis and cooling is that they\ndon't move enough air across their cards to make sure they stay cool.\nYou'd be better off ordering a 2U chassis with 8 3.5\" drive bays so\nyou can add drives later if you need to, and it'll provide more\ncooling air across the card.\n\nOur current big 48 core servers are running plain LSI SAS adapters\nwithout HW RAID because the LSI 8888s we were using overheated and\ncooked themselves to death after about 3 months. Those are 1U chassis\nmachines, and our newer machines are all 2U boxes now. BTW, if you\never need more than 2 sockets, right now the Magny Cours AMDs are the\nfastest in that arena. For 2 sockets the Nehalem based machines are\nabout equal to them.\n\nThe high point RAID controllers are toys (or at least they were last I checked).\n\nIf you have to go with 4 drives just make it one big RAID-10 array and\nthen partition that out into 3 or 4 partitions. It's important to put\npg_xlog on a different partition even if it's on the same array, as it\nallows the OS to fsync it separately.\n", "msg_date": "Thu, 17 Mar 2011 21:02:28 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database server" }, { "msg_contents": "On 2011-03-18 01:51, Oliver Charles wrote:\n> Hello,\n>\n> At MusicBrainz we're looking to get a new database server, and are\n> hoping to buy this in the next couple of days. I'm mostly a software\n> guy, but I'm posting this on behalf of Rob, who's actually going to be\n> buying the hardware. Here's a quote of what we're looking to get:\n\nI think most of it has been said already:\n* Battery backed write cache\n* See if you can get enough memory to make all of your \"active\"\n dataset fit in memory. (typically not that hard in 2011).\n* Dependent on your workload of-course, you're typically not\n bottlenecked by the amount of cpu-cores, so strive for fewer\n faster cores.\n* As few sockets as you can screeze you memory and cpu-requirements\n onto.\n* If you can live with (or design around) the tradeoffs with SSD it\n will buy you way more performance than any significant number\n of rotating drives. 
(a good backup plan with full WAL-log to a second\n system as an example).\n\n\n-- \nJesper\n", "msg_date": "Fri, 18 Mar 2011 07:19:04 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database\n server" }, { "msg_contents": "On 18-3-2011 4:02 Scott Marlowe wrote:\n> On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles\n> <[email protected]> wrote:\n>\n> Another point. My experience with 1U chassis and cooling is that they\n> don't move enough air across their cards to make sure they stay cool.\n> You'd be better off ordering a 2U chassis with 8 3.5\" drive bays so\n> you can add drives later if you need to, and it'll provide more\n> cooling air across the card.\n>\n> Our current big 48 core servers are running plain LSI SAS adapters\n> without HW RAID because the LSI 8888s we were using overheated and\n> cooked themselves to death after about 3 months. Those are 1U chassis\n> machines, and our newer machines are all 2U boxes now.\n\nWe have several 1U boxes (mostly Dell and Sun) running and had several \nin the past. And we've never had any heating problems with them. That \nincludes machines with more power hungry processors than are currently \navailable, all power slurping FB-dimm slots occupied and two raid cards \ninstalled.\n\nBut than again, a 2U box will likely have more cooling capacity, no \nmatter how you look at it.\n\nAnother tip that may be useful; look at 2.5\" drives. Afaik there is no \nreally good reason to use 3.5\" drives for new servers. The 2.5\" drives \nsave power and room - and thus may allow more air flowing through the \nenclosure - and offer the same performance and reliability (the first I \nknow for sure, the second I'm pretty sure of but haven't seen much proof \nof lately).\n\nYou could even have a 8- or 10-disk 1U enclosure in that way or up to 24 \ndisks in 2U. But those configurations will require some attention to \ncooling again.\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 18 Mar 2011 08:16:56 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database\n server" }, { "msg_contents": "On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden\n<[email protected]> wrote:\n> On 18-3-2011 4:02 Scott Marlowe wrote:\n>>\n>> On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles\n>> <[email protected]>  wrote:\n>>\n>> Another point.  My experience with 1U chassis and cooling is that they\n>> don't move enough air across their cards to make sure they stay cool.\n>> You'd be better off ordering a 2U chassis with 8 3.5\" drive bays so\n>> you can add drives later if you need to, and it'll provide more\n>> cooling air across the card.\n>>\n>> Our current big 48 core servers are running plain LSI SAS adapters\n>> without HW RAID because the LSI 8888s we were using overheated and\n>> cooked themselves to death after about 3 months.  Those are 1U chassis\n>> machines, and our newer machines are all 2U boxes now.\n>\n> We have several 1U boxes (mostly Dell and Sun) running and had several in\n> the past. And we've never had any heating problems with them. That includes\n> machines with more power hungry processors than are currently available, all\n> power slurping FB-dimm slots occupied and two raid cards installed.\n\nNote I am talking specifically about the ability to cool the RAID\ncard, not the CPUS etc. 
Many 1U boxes have poor air flow across the\nexpansion slots for PCI / etc cards, while doing a great job cooling\nthe CPUs and memory. If you don't use high performance RAID cards\n(LSI 9xxx Areca 16xx 18xx) then it's not an issue. Open up your 1U\nand look at the air flow for the expansion slots, it's often just not\nvery much.\n", "msg_date": "Fri, 18 Mar 2011 03:11:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database server" }, { "msg_contents": "On 18-3-2011 10:11, Scott Marlowe wrote:\n> On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden\n> <[email protected]> wrote:\n>> On 18-3-2011 4:02 Scott Marlowe wrote:\n>> We have several 1U boxes (mostly Dell and Sun) running and had several in\n>> the past. And we've never had any heating problems with them. That includes\n>> machines with more power hungry processors than are currently available, all\n>> power slurping FB-dimm slots occupied and two raid cards installed.\n>\n> Note I am talking specifically about the ability to cool the RAID\n> card, not the CPUS etc. Many 1U boxes have poor air flow across the\n> expansion slots for PCI / etc cards, while doing a great job cooling\n> the CPUs and memory. If you don't use high performance RAID cards\n> (LSI 9xxx Areca 16xx 18xx) then it's not an issue. Open up your 1U\n> and look at the air flow for the expansion slots, it's often just not\n> very much.\n>\n\nI was referring to amongst others two machines that have both a Dell \nPerc 5/i for internal disks and a Perc 5/e for an external disk \nenclosure. Those also had processors that produce quite some heat (2x \nX5160 and 2x X5355) combined with all fb-dimm (8x 2GB) slots filled, \nwhich also produce a lot of heat. Those Dell Perc's are similar to the \nLSI's from the same period in time.\n\nSo the produced heat form the other components was already pretty high. \nStill, I've seen no problems with heat for any component, including all \nfour raid controllers. But I agree, there are some 1U servers that skimp \non fans and thus air flow in the system. We've not had that problem with \nany of our systems. 
But both Sun and Dell seem to add quite a bit of \nfans in the middle of the system, where others may do it a bit less \nheavy duty and less over-dimensioned.\n\nBest regards,\n\nArjen\n\n", "msg_date": "Fri, 18 Mar 2011 13:44:51 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database\n server" }, { "msg_contents": "On Fri, Mar 18, 2011 at 3:19 AM, Jesper Krogh <[email protected]> wrote:\n> * Dependent on your workload of-course, you're typically not\n>  bottlenecked by the amount of cpu-cores, so strive for fewer\n>  faster cores.\n\nDepending on your workload again, but faster memory is even more\nimportant than faster math.\n\nSo go for the architecture with the fastest memory bus.\n", "msg_date": "Fri, 18 Mar 2011 13:02:40 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database server" }, { "msg_contents": "On Fri, Mar 18, 2011 at 6:44 AM, Arjen van der Meijden\n<[email protected]> wrote:\n> On 18-3-2011 10:11, Scott Marlowe wrote:\n>>\n>> On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden\n>> <[email protected]>  wrote:\n>>>\n>>> On 18-3-2011 4:02 Scott Marlowe wrote:\n>>> We have several 1U boxes (mostly Dell and Sun) running and had several in\n>>> the past. And we've never had any heating problems with them. That\n>>> includes\n>>> machines with more power hungry processors than are currently available,\n>>> all\n>>> power slurping FB-dimm slots occupied and two raid cards installed.\n>>\n>> Note I am talking specifically about the ability to cool the RAID\n>> card, not the CPUS etc.  Many 1U boxes have poor air flow across the\n>> expansion slots for PCI / etc cards, while doing a great job cooling\n>> the CPUs and memory.  If you don't use high performance RAID cards\n>> (LSI 9xxx  Areca 16xx 18xx) then it's not an issue.  Open up your 1U\n>> and look at the air flow for the expansion slots, it's often just not\n>> very much.\n>>\n>\n> I was referring to amongst others two machines that have both a Dell Perc\n> 5/i for internal disks and a Perc 5/e for an external disk enclosure. Those\n> also had processors that produce quite some heat (2x X5160 and 2x X5355)\n> combined with all fb-dimm (8x 2GB) slots filled, which also produce a lot of\n> heat. Those Dell Perc's are similar to the LSI's from the same period in\n> time.\n>\n> So the produced heat form the other components was already pretty high.\n> Still, I've seen no problems with heat for any component, including all four\n> raid controllers. But I agree, there are some 1U servers that skimp on fans\n> and thus air flow in the system. We've not had that problem with any of our\n> systems. But both Sun and Dell seem to add quite a bit of fans in the middle\n> of the system, where others may do it a bit less heavy duty and less\n> over-dimensioned.\n\nMost machines have different pathways for cooling airflow over their\nRAID cards, and they don't share that air flow with the CPUs. Also,\nthe PERC RAID controllers do not produce a lot of heat. The CPUs on\nthe high performance LSI or Areca controllers are often dual core high\nperformance CPUs in their own right, and those cards have heat sinks\nwith fans on them to cool them. The cards themselves are what make so\nmuch heat and don't get enough cooling in many 1U servers. 
It has\nnothing to do with what else is in the server, again because the\nairflow for the cards is usually separate.\n", "msg_date": "Fri, 18 Mar 2011 10:32:06 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database server" }, { "msg_contents": "On Fri, Mar 18, 2011 at 10:32 AM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Mar 18, 2011 at 6:44 AM, Arjen van der Meijden\n> <[email protected]> wrote:\n>> On 18-3-2011 10:11, Scott Marlowe wrote:\n>>>\n>>> On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden\n>>> <[email protected]>  wrote:\n>>>>\n>>>> On 18-3-2011 4:02 Scott Marlowe wrote:\n>>>> We have several 1U boxes (mostly Dell and Sun) running and had several in\n>>>> the past. And we've never had any heating problems with them. That\n>>>> includes\n>>>> machines with more power hungry processors than are currently available,\n>>>> all\n>>>> power slurping FB-dimm slots occupied and two raid cards installed.\n>>>\n>>> Note I am talking specifically about the ability to cool the RAID\n>>> card, not the CPUS etc.  Many 1U boxes have poor air flow across the\n>>> expansion slots for PCI / etc cards, while doing a great job cooling\n>>> the CPUs and memory.  If you don't use high performance RAID cards\n>>> (LSI 9xxx  Areca 16xx 18xx) then it's not an issue.  Open up your 1U\n>>> and look at the air flow for the expansion slots, it's often just not\n>>> very much.\n>>>\n>>\n>> I was referring to amongst others two machines that have both a Dell Perc\n>> 5/i for internal disks and a Perc 5/e for an external disk enclosure. Those\n>> also had processors that produce quite some heat (2x X5160 and 2x X5355)\n>> combined with all fb-dimm (8x 2GB) slots filled, which also produce a lot of\n>> heat. Those Dell Perc's are similar to the LSI's from the same period in\n>> time.\n>>\n>> So the produced heat form the other components was already pretty high.\n>> Still, I've seen no problems with heat for any component, including all four\n>> raid controllers. But I agree, there are some 1U servers that skimp on fans\n>> and thus air flow in the system. We've not had that problem with any of our\n>> systems. But both Sun and Dell seem to add quite a bit of fans in the middle\n>> of the system, where others may do it a bit less heavy duty and less\n>> over-dimensioned.\n>\n> Most machines have different pathways for cooling airflow over their\n> RAID cards, and they don't share that air flow with the CPUs.  Also,\n> the PERC RAID controllers do not produce a lot of heat.  The CPUs on\n> the high performance LSI or Areca controllers are often dual core high\n> performance CPUs in their own right, and those cards have heat sinks\n> with fans on them to cool them.  The cards themselves are what make so\n> much heat and don't get enough cooling in many 1U servers.  It has\n> nothing to do with what else is in the server, again because the\n> airflow for the cards is usually separate.\n\nAs a followup to this subject, the problem wasn't bad until the server\nload increased, thus increasing the load on the LSI MegaRAID card, at\nwhich point it started producing more heat than it had before. When\nthe machine wasn't working too hard the LSI was fine. 
Once we started\nhitting higher and higher load is when the card had issues.\n", "msg_date": "Fri, 18 Mar 2011 14:29:06 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database server" }, { "msg_contents": "On Thu, Mar 17, 2011 at 7:51 PM, Oliver Charles\n<[email protected]> wrote:\n> Hello,\n>\n> At MusicBrainz we're looking to get a new database server, and are\n> hoping to buy this in the next couple of days. I'm mostly a software\n> guy, but I'm posting this on behalf of Rob, who's actually going to be\n> buying the hardware. Here's a quote of what we're looking to get:\n>\n>    I'm working to spec out a bad-ass 1U database server with loads of\n>    cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0\n>    configuration:\n>\n>    1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS\n>    drive bays 2\n>    2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W\n>    Six-Core Server Processor 2\n>    2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,\n>    CT3KIT51272BV1339 1\n>    1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])\n>    or\n>    1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)\n>    or\n>    1 * Adaptec RAID 3405 controller ($354)\n>    4 * Fujitsu MBA3147RC 147GB 15000 RPM\n>\n>    SuperMicro machines have treated us really well over time (better\n>    than Dell or Sun boxes), so I am really happy to throw more money in\n>    their direction.  Redundant power supplies seem like a good idea for\n>    a database server.\n>\n>    For $400 more we can get hexa core processors as opposed to quad\n>    core processors at 2.66Ghz. This seems like a really good deal --\n>    any thoughts on this?\n>\n>    Crucial memory has also served us really well, so that is a\n>    no-brainer.\n>\n>    The RAID controller cards are where I need to most feedback! Of the\n>    LSI, Highpoint or Adaptec cards, which one is likely to have native\n>    linux support that does not require custom drivers to be installed?\n>    The LSI card has great specs at a great price point with Linux\n>    support, but installing the custom driver sounds like a pain. Does\n>    anyone have any experience with these cards?\n>\n>    We've opted to not go for SSD drives in the server just yet -- it\n>    doesn't seem clear how well SSDs do in a driver environment.\n>\n>    That's it -- anyone have any feedback?\n>\n> Just a quick bit more information. Our database is certainly weighted\n> towards being read heavy, rather than write heavy (with a read-only web\n> service accounting for ~90% of our traffic). Our tables vary in size,\n> with the upperbound being around 10mil rows.\n\nIt doesn't sound like SSD are a good fit for you -- you have small\nenough data that you can easily buffer in RAM and not enough writing\nto bottleneck you on the I/O side. The #1 server building mistake is\nfocusing too much on cpu and not enough on i/o, but as noted by others\nyou should be ok with a decent raid controller with a bbu on it. A\nbbu will make a tremendous difference in server responsiveness to\nsudden write bursts (like vacuum), which is particularly critical with\nyour whole setup being on a single physical volume.\n\nKeeping your o/s and the db on the same LUN is a dangerous btw because\nit can limit your ability to log in and deal with certain classes of\nemergency situations. 
It's possible to do a hybrid type setup where\nyou keep your o/s mounted on a CF or even a thumb drive(s) (most 1U\nservers now have internal usb ports for exactly this purpose) but this\ntakes a certain bit of preparation and understanding what is sane to\ndo with flash..\n\nMy other concern with your setup is you might not have room for\nexpansion unless you have an unallocated pci-e slot in the back (some\n1U have 1, some have 2). With an extra slot, you can pop a sas hba in\nthe future attached to an enclosure if your storage requirements go up\nsignificantly.\n\nOption '2' is to go all out on the raid controller right now, so that\nyou have both internal and external sas ports, although these tend to\nbe much more expensive. Option '3' is to just 2U now, leaving\nyourself room for backplane expansion.\n\nPutting it all together, I am not a fan of 1U database boxes unless\nyou are breaking the storage out -- there are ways you can get burned\nso that you have to redo all your storage volumes (assuming you are\nnot using LVM, which I have very mixed feelings about) or even buy a\ncompletely new server -- both scenarios can be expensive in terms of\ndowntime.\n\nmerlin\n", "msg_date": "Tue, 22 Mar 2011 08:54:15 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request for feedback on hardware for a new database server" } ]
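[Editor's note] Before deciding whether the quoted 24GB of RAM can hold the active dataset, it helps to check the current on-disk sizes. A rough sketch using the standard size functions (run against the production database; the 24GB figure is only the spec quoted above):

    -- total size of the current database
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- ten largest tables, including their indexes and TOAST data
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;
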
[ { "msg_contents": "Hello\n\nfor example queries with LIMIT clause can be significantly faster with\nnested loop. But you don't need to disable nested loop globally.\n\nYou can wrap your query to sql functions and disable nested loop just\nfor these functions.\n\nRegards\n\nPavel Stehule\n\n2011/3/18 Anssi Kääriäinen <[email protected]>:\n> Hello list,\n>\n> I am working on a Entity-Attribute-Value (EAV) database using PostgreSQL\n> 8.4.7. The basic problem is that when joining multiple times different\n> entities the planner thinks that there is vastly less rows to join than\n> there is in reality and decides to use multiple nested loops for the join\n> chain. This results in queries where when nested loops are enabled, query\n> time is somewhere around 35 seconds, but with nested loops disabled, the\n> performance is somewhere around 100ms. I don't think there is much hope for\n> getting better statistics, as EAV is just not statistics friendly. The\n> values of an attribute depend on the type of the attribute, and different\n> entities have different attributes defined. The planner has no idea of these\n> correlations.\n>\n> Now, my question is: if I disable nested loops completely for the users of\n> the EAV database what kind of worst case performance loss can I expect? I\n> don't mind if a query that normally runs in 100ms now takes 200ms, but about\n> problems where the query will take much more time to complete than with\n> nested loops enabled. As far as I understand these cases should be pretty\n> rare if non-existent?\n>\n>  - Anssi\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 18 Mar 2011 08:02:05 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disabling nested loops - worst case performance" }, { "msg_contents": "Hello list,\n\nI am working on a Entity-Attribute-Value (EAV) database using PostgreSQL \n8.4.7. The basic problem is that when joining multiple times different \nentities the planner thinks that there is vastly less rows to join than \nthere is in reality and decides to use multiple nested loops for the \njoin chain. This results in queries where when nested loops are enabled, \nquery time is somewhere around 35 seconds, but with nested loops \ndisabled, the performance is somewhere around 100ms. I don't think there \nis much hope for getting better statistics, as EAV is just not \nstatistics friendly. The values of an attribute depend on the type of \nthe attribute, and different entities have different attributes defined. \nThe planner has no idea of these correlations.\n\nNow, my question is: if I disable nested loops completely for the users \nof the EAV database what kind of worst case performance loss can I \nexpect? I don't mind if a query that normally runs in 100ms now takes \n200ms, but about problems where the query will take much more time to \ncomplete than with nested loops enabled. 
As far as I understand these \ncases should be pretty rare if non-existent?\n\n - Anssi\n\n\n\n", "msg_date": "Fri, 18 Mar 2011 09:15:51 +0200", "msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Disabling nested loops - worst case performance" }, { "msg_contents": "18.03.11 09:15, Anssi Kääriäinen написав(ла):\n> Hello list,\n>\n> I am working on a Entity-Attribute-Value (EAV) database using \n> PostgreSQL 8.4.7. The basic problem is that when joining multiple \n> times different entities the planner thinks that there is vastly less \n> rows to join than there is in reality and decides to use multiple \n> nested loops for the join chain. This results in queries where when \n> nested loops are enabled, query time is somewhere around 35 seconds, \n> but with nested loops disabled, the performance is somewhere around \n> 100ms. I don't think there is much hope for getting better statistics, \n> as EAV is just not statistics friendly. The values of an attribute \n> depend on the type of the attribute, and different entities have \n> different attributes defined. The planner has no idea of these \n> correlations.\n\nHello.\n\nIf your queries work on single attribute, you can try adding partial \nindexes for different attributes. Note that in this case parameterized \nstatements may prevent index usage, so check also with attribute id inlined.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Fri, 18 Mar 2011 12:52:03 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disabling nested loops - worst case performance" }, { "msg_contents": "On 03/18/2011 09:02 AM, Pavel Stehule wrote:\n> for example queries with LIMIT clause can be significantly faster with\n> nested loop. But you don't need to disable nested loop globally.\n>\n> You can wrap your query to sql functions and disable nested loop just\n> for these functions.\n\nThank you for your help, the LIMIT example was something I was not aware of.\n\nThe problem is we are replacing an old database, and we need to \nreplicate certain views for external users. Minimal impact for these \nusers is required. Maybe it would be best to create special user \naccounts for these external users and disable nested loops only for \nthose accounts. Otherwise we will disable nested loops when absolutely \nnecessary.\n\n - Anssi\n", "msg_date": "Fri, 18 Mar 2011 12:58:48 +0200", "msg_from": "=?UTF-8?B?QW5zc2kgS8Okw6RyacOkaW5lbg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disabling nested loops - worst case performance" }, { "msg_contents": "Anssi Kääriäinen, 18.03.2011 08:15:\n> Hello list,\n>\n> I am working on a Entity-Attribute-Value (EAV) database using\n> PostgreSQL 8.4.7. 
The basic problem is that when joining multiple\n> times different entities the planner thinks that there is vastly less\n> rows to join than there is in reality and decides to use multiple\n> nested loops for the join chain.\n\nDid you consider using hstore instead?\n\nI think in the PostgreSQL world, this is a better alternative than EAV and most probably faster as well.\n\nRegards\nThomas\n\n", "msg_date": "Fri, 18 Mar 2011 12:14:05 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disabling nested loops - worst case performance" }, { "msg_contents": "On 03/18/2011 12:52 PM, Vitalii Tymchyshyn wrote:\n> If your queries work on single attribute, you can try adding partial\n> indexes for different attributes. Note that in this case parameterized\n> statements may prevent index usage, so check also with attribute id inlined.\n>\n> Best regards, Vitalii Tymchyshyn\n\nUnfortunately this does not help for the statistics, and (I guess) \nnested loops will still be used when joining:\n\nhot2=> explain analyze select * from attr_value where attr_tunniste = \n'suhde_hyvaksytty' and arvo_text = 't';\n QUERY \nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using attr_value_arvo_text_idx1 on attr_value \n(cost=0.00..343.59 rows=152 width=118) (actual time=0.076..7.768 \nrows=3096 loops=1)\n Index Cond: (arvo_text = 't'::text)\n Filter: ((attr_tunniste)::text = 'suhde_hyvaksytty'::text)\n Total runtime: 10.855 ms\n(4 rows)\n\nhot2=> create index suhde_hyvaksytty_idx on attr_value(arvo_text) where \nattr_tunniste = 'suhde_hyvaksytty';\nCREATE INDEX\nhot2=> analyze attr_value;\nhot2=> explain analyze select * from attr_value where attr_tunniste = \n'suhde_hyvaksytty' and arvo_text = 't';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using suhde_hyvaksytty_idx on attr_value (cost=0.00..43.72 \nrows=152 width=118) (actual time=0.093..4.776 rows=3096 loops=1)\n Index Cond: (arvo_text = 't'::text)\n Total runtime: 7.817 ms\n(3 rows)\n\n - Anssi\n", "msg_date": "Fri, 18 Mar 2011 13:59:29 +0200", "msg_from": "=?UTF-8?B?QW5zc2kgS8Okw6RyacOkaW5lbg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disabling nested loops - worst case performance" }, { "msg_contents": "On 03/18/2011 01:14 PM, Thomas Kellerer wrote:\n> Did you consider using hstore instead?\n>\n> I think in the PostgreSQL world, this is a better alternative than EAV and most probably faster as well.\nNo, we did not. The reason is that we want to track each attribute with \nbi-temporal timestamps. 
The actual database schema for the attribute \nvalue table is:\n\nCREATE TABLE attr_value (\n id SERIAL PRIMARY KEY,\n olio_id INTEGER NOT NULL REFERENCES base_olio, -- entity identifier\n attr_tunniste VARCHAR(20) NOT NULL REFERENCES base_attr, -- attr \nidentifier\n kieli_tunniste VARCHAR(20) REFERENCES kieli, -- lang identifier\n arvo_number DECIMAL(18, 9), -- value number\n arvo_ts timestamptz, -- value timestamp\n arvo_text TEXT, -- value text\n arvo_valinta_tunniste VARCHAR(20), -- for choice lists: \n\"value_choice_identifier\"\n real_valid_from TIMESTAMPTZ NOT NULL, -- real_valid_from - \nreal_valid_until define when things have been in \"real\" world\n real_valid_until TIMESTAMPTZ NOT NULL,\n db_valid_from TIMESTAMPTZ NOT NULL, -- db_valid_* defines when \nthings have been in the database\n db_valid_until TIMESTAMPTZ NOT NULL,\n tx_id_insert INTEGER default txid_current(),\n tx_id_delete INTEGER,\n -- foreign keys & checks skipped\n);\n\nNaturally, we have other tables defining the objects, joins between \nobjects and metadata for the EAV. All data modifications are done \nthrough procedures, which ensure uniqueness etc. for the attributes and \njoins.\n\nThe data set is small, and performance in general is not that important, \nas long as the UI is responsive and data can be transferred to other \nsystems in reasonable time. Insert performance is at least 10x worse \nthan when using traditional schema, but it doesn't matter (we have \nsomewhere around 1000 inserts / updates a day max). The only real \nproblem so far is the chained nested loop problem, which really kills \nperformance for some queries.\n\nSurprisingly (at least to me) this schema has worked really well, \nalthough sometimes there is a feeling that we are implementing a \ndatabase using a database...\n\n - Anssi\n", "msg_date": "Fri, 18 Mar 2011 14:19:08 +0200", "msg_from": "=?UTF-8?B?QW5zc2kgS8Okw6RyacOkaW5lbg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disabling nested loops - worst case performance" }, { "msg_contents": "On Fri, Mar 18, 2011 at 7:52 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n> 18.03.11 09:15, Anssi Kääriäinen написав(ла):\n> Hello.\n>\n> If your queries work on single attribute, you can try adding partial indexes\n> for different attributes. Note that in this case parameterized statements\n> may prevent index usage, so check also with attribute id inlined.\n\nAnd if your queries work on a single entity instead, you can partition\nthe table per-entity thus \"teach\" the database enging about the\ncorrelation.\n", "msg_date": "Fri, 18 Mar 2011 12:26:20 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disabling nested loops - worst case performance" } ]
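[Editor's note] Pavel's suggestion of scoping the planner override, and the idea of special accounts for external users, can both be done without touching postgresql.conf. A hedged sketch against the attr_value table shown above — the function name and the role name are made up for illustration; 8.3+ (so the 8.4.7 in question) supports per-function SET clauses:

    -- disable nested loops only inside this function
    CREATE OR REPLACE FUNCTION attrs_for_entity(p_olio_id integer)
    RETURNS SETOF attr_value AS $$
        SELECT * FROM attr_value WHERE olio_id = $1;
    $$ LANGUAGE sql STABLE
       SET enable_nestloop = off;

    -- or disable it only for the external reporting accounts
    ALTER ROLE external_reader SET enable_nestloop = off;
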
[ { "msg_contents": "Hi all,\n\nOur system has a postgres database that has a table for statistic which is\nupdated every hour by about 10K clients. Each client only make update to its\nown row in the table. So far I am only seeing one core out of eight cores on\nmy server being active which tells me that the update is being done serial\ninstead of being parallel. Do you know if there is a way for me to make\nthese independent updates happen in parallel?\n\nThank you, your help is very much appreciated!\n\nHi all,Our system has a postgres database that has a table for statistic which is updated every hour by about 10K clients. Each client only make update to its own row in the table. So far I am only seeing one core out of eight cores on my server being active which tells me that the update is being done serial instead of being parallel. Do you know if there is a way for me to make these independent updates happen in parallel?\nThank you, your help is very much appreciated!", "msg_date": "Fri, 18 Mar 2011 09:05:23 -0400", "msg_from": "Red Maple <[email protected]>", "msg_from_op": true, "msg_subject": "Help: massive parallel update to the same table" }, { "msg_contents": ">From: [email protected] [mailto:[email protected]] On Behalf Of Red Maple\n>Sent: Friday, March 18, 2011 9:05 AM\n>To: [email protected]\n>Subject: [PERFORM] Help: massive parallel update to the same table\n>\n>Hi all,\n>\n>Our system has a postgres database that has a table for statistic which is updated every hour by about 10K clients. Each client only make update to its own row in the table. So far >I am only seeing one core out of eight cores on my server being active which tells me that the update is being done serial instead of being parallel. Do you know if there is a way >for me to make these independent updates happen in parallel?\n>\n>Thank you, your help is very much appreciated!\n\nIf they are all happening on one core, you are probably using one DB connection to do the updates. To split them across multiple cores, you need to use multiple DB connections. Be careful if/when you restructure things to filter these requests into a reasonable number of backend DB connections - turning a huge number of clients loose against a DB is not going end well. \n\nBrad.\n", "msg_date": "Fri, 18 Mar 2011 14:23:57 +0000", "msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help: massive parallel update to the same table" }, { "msg_contents": "Red Maple <[email protected]> wrote:\n \n> Our system has a postgres database that has a table for statistic\n> which is updated every hour by about 10K clients. Each client only\n> make update to its own row in the table. So far I am only seeing\n> one core out of eight cores on my server being active which tells\n> me that the update is being done serial instead of being parallel.\n> Do you know if there is a way for me to make these independent\n> updates happen in parallel?\n \nIt should be parallel by default. Are you taking out any explicit\nlocks?\n \nAlso, it seems like you're only doing about three updates per\nsecond. I would expect a single-row update to run in a couple ms or\nless, so it would be rare that two requests would be active at the\nsame time, so you wouldn't often see multiple cores active at the\nsame time. (Of course, the background writer, autovacuum, etc.,\nshould occasionally show up concurrently with update queries.)\n \nIs there some particular problem you're trying to solve? 
(For\nexample, is something too slow?)\n \n-Kevin\n", "msg_date": "Fri, 18 Mar 2011 09:28:02 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help: massive parallel update to the same table" } ]
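[Editor's note] While waiting for the function source, one quick way to confirm whether the single busy core is a locking issue is to look for waiting backends while the clients are running. Column names below are as of 8.x/9.0 (procpid, waiting, current_query), before the later pg_stat_activity renames:

    -- backends currently blocked on a lock, and what they are running
    SELECT procpid, waiting, current_query
    FROM pg_stat_activity
    WHERE waiting;

    -- the ungranted lock requests themselves
    SELECT pid, locktype, relation::regclass AS relation, mode
    FROM pg_locks
    WHERE NOT granted;
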
[ { "msg_contents": "[rearranged - please don't top-post]\n\n[also, bringing this back to the list - please keep the list copied]\n \nRed Maple <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n \n>> It should be parallel by default. Are you taking out any\n>> explicit locks?\n \n> my clients use psql to remotely run an update function on the\n> postgres server. Each client run its own psql to connect to the\n> server. What I have noticed is that if I commented out the update\n> in the function so that only query is being done then all the core\n> would kick in and run at 100%. However if I allow the update on\n> the function then only one core would run.\n \n> Currently it take 40min to update all the client statistics\n \nPlease show us the part you commented out to get the faster run\ntime, and the source code for the function you mentioned.\n \n> Do you know if I have configured something incorrectly?\n> \n> I am running postgres 9.0.2 on fedora core 14. Here is my\n> postgres.conf file\n> \n> \n> [over 500 lines of configuration, mostly comments, wrapped]\n \nIf you're going to post that, please strip the comments or post the\nresults of this query:\n \n http://wiki.postgresql.org/wiki/Server_Configuration \n \nI don't think anything in your configuration will affect this\nparticular problem, but it seems likely that you could do some\noverall tuning. If you want to do that, you should probably start a\nnew thread after this issue is sorted out.\n \n-Kevin\n\n", "msg_date": "Fri, 18 Mar 2011 11:06:53 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help: massive parallel update to the same table" }, { "msg_contents": "Hi,\n\nHere is my function. If I comment out the update then it would run all the\ncores, if not then only one core will run....\n\n\nCREATE OR REPLACE FUNCTION my_update_device(this_mac text, number_of_devices\ninteger, this_sysuptime integer)\n RETURNS integer AS\n$BODY$\n DECLARE\n fake_mac macaddr;\n this_id integer;\n new_avgld integer;\n BEGIN\n new_avgld = (this_sysuptime / 120) % 100;\n for i in 1..Number_of_devices loop\n fake_mac = substring(this_mac from 1 for 11) || ':' ||\nupper(to_hex((i-1)/256)) || ':' || upper(to_hex((i-1)%256));\n select into this_id id from ap where lan_mac =\nupper(fake_mac::text);\n if not found then\n return -1;\n end if;\n select into this_sysuptime sysuptime from ap_sysuptime where\nap_id = this_id for update;\n-- \n==============================================================================\n-- >>>>>>>> if I comment out the next update then all cores will be running,\nelse only one core will be running\n-- \n==============================================================================\n update ap_sysuptime set sysuptime = this_sysuptime, last_contacted\n= now() where ap_id = this_id;\n select into new_avgld avg_ld_1min from colubris_device\nwhere node_id = this_id for update;\n new_avgld = (this_avgld / 120 ) % 100;\n end loop;\n return this_id;\n END;\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\n\n\n\n\n\n\nOn Fri, Mar 18, 2011 at 12:06 PM, Kevin Grittner <\[email protected]> wrote:\n\n> [rearranged - please don't top-post]\n>\n> [also, bringing this back to the list - please keep the list copied]\n>\n> Red Maple <[email protected]> wrote:\n> > Kevin Grittner <[email protected]> wrote:\n>\n> >> It should be parallel by default. 
Are you taking out any\n> >> explicit locks?\n>\n> > my clients use psql to remotely run an update function on the\n> > postgres server. Each client run its own psql to connect to the\n> > server. What I have noticed is that if I commented out the update\n> > in the function so that only query is being done then all the core\n> > would kick in and run at 100%. However if I allow the update on\n> > the function then only one core would run.\n>\n> > Currently it take 40min to update all the client statistics\n>\n> Please show us the part you commented out to get the faster run\n> time, and the source code for the function you mentioned.\n>\n> > Do you know if I have configured something incorrectly?\n> >\n> > I am running postgres 9.0.2 on fedora core 14. Here is my\n> > postgres.conf file\n> >\n> >\n> > [over 500 lines of configuration, mostly comments, wrapped]\n>\n> If you're going to post that, please strip the comments or post the\n> results of this query:\n>\n> http://wiki.postgresql.org/wiki/Server_Configuration\n>\n> I don't think anything in your configuration will affect this\n> particular problem, but it seems likely that you could do some\n> overall tuning. If you want to do that, you should probably start a\n> new thread after this issue is sorted out.\n>\n> -Kevin\n>\n>\n\nHi,\n \nHere is my function. If I comment out the update then it would run all the cores, if not then only one core will run....\n \n \nCREATE OR REPLACE FUNCTION my_update_device(this_mac text, number_of_devices integer, this_sysuptime integer)  RETURNS integer AS$BODY$       DECLARE        fake_mac macaddr;        this_id integer;\n        new_avgld integer; BEGIN     new_avgld = (this_sysuptime / 120) % 100;     for i in 1..Number_of_devices loop           fake_mac = substring(this_mac from 1 for 11) ||  ':' || upper(to_hex((i-1)/256)) || ':' || upper(to_hex((i-1)%256));\n           select into this_id id from ap where lan_mac = upper(fake_mac::text);\n           if not found then              return -1;           end if;\n           select into this_sysuptime sysuptime from ap_sysuptime where ap_id = this_id for update;\n-- ==============================================================================-- >>>>>>>> if I comment out the next update then all cores will be running, else only one core will be running \n-- ==============================================================================          update ap_sysuptime set sysuptime = this_sysuptime, last_contacted = now() where ap_id = this_id;                                         select into new_avgld avg_ld_1min from colubris_device where node_id = this_id for update;\n                      new_avgld = (this_avgld / 120 ) % 100;\n         end loop;  return this_id;  END;$BODY$  LANGUAGE plpgsql VOLATILE  COST 100;\n \n\n\n \n \n \n \nOn Fri, Mar 18, 2011 at 12:06 PM, Kevin Grittner <[email protected]> wrote:\n[rearranged - please don't top-post][also, bringing this back to the list - please keep the list copied]\nRed Maple <[email protected]> wrote:> Kevin Grittner <[email protected]> wrote:\n\n>> It should be parallel by default.  Are you taking out any>> explicit locks?\n> my clients use psql to remotely run an update function on the> postgres server. Each client run its own psql to connect to the> server. What I have noticed is that if I commented out the update\n> in the function so that only query is being done then all the core> would kick in and run at 100%. 
However if I allow the update on> the function then only one core would run.\n> Currently it take 40min to update all the client statisticsPlease show us the part you commented out to get the faster runtime, and the source code for the function you mentioned.\n> Do you know if I have configured something incorrectly?>> I am running postgres 9.0.2 on fedora core 14. Here is my> postgres.conf file>>> [over 500 lines of configuration, mostly comments, wrapped]\nIf you're going to post that, please strip the comments or post theresults of this query: http://wiki.postgresql.org/wiki/Server_Configuration\nI don't think anything in your configuration will affect thisparticular problem, but it seems likely that you could do someoverall tuning.  If you want to do that, you should probably start anew thread after this issue is sorted out.\n-Kevin", "msg_date": "Fri, 18 Mar 2011 14:21:09 -0400", "msg_from": "Red Maple <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help: massive parallel update to the same table" }, { "msg_contents": "Red Maple <[email protected]> wrote:\n \n> Here is my function. If I comment out the update then it would run\n> all the cores, if not then only one core will run....\n \n> CREATE OR REPLACE FUNCTION\n \n> [...]\n\n> select sysuptime\n> into this_sysuptime\n> from ap_sysuptime\n> where ap_id = this_id\n> for update;\n> \n> -- ==================================================\n> -- >>>>>>>> if I comment out the next update\n> -- >>>>>>>> then all cores will be running,\n> -- >>>>>>>> else only one core will be running\n> -- ==================================================\n> update ap_sysuptime\n> set sysuptime = this_sysuptime,\n> last_contacted = now()\n> where ap_id = this_id;\n \nThis proves that you're not showing us the important part. The\nupdate locks the same row previously locked by the SELECT FOR\nUPDATE, so any effect at the row level would be a serialization\nfailure based on a write conflict, which doesn't sound like your\nproblem. They get different locks at the table level, though:\n \nhttp://www.postgresql.org/docs/9.0/interactive/explicit-locking.html#LOCKING-TABLES\n \nSomewhere in code you're not showing us you're acquiring a lock on\nthe ap_sysuptime table which conflicts with a ROW EXCLUSIVE lock but\nnot with a ROW SHARE lock. The lock types which could do that are\nSHARE and SHARE ROW EXCLUSIVE. CREATE INDEX (without CONCURRENTLY)\ncould do that; otherwise it seems that you would need to be\nexplicitly issuing a LOCK statement at one of these levels somewhere\nin your transaction. That is what is causing the transactions to\nrun one at a time.\n \n-Kevin\n", "msg_date": "Fri, 18 Mar 2011 14:21:16 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help: massive parallel update to the same table" }, { "msg_contents": "Hi,\n\nI have found the bug in my code that made the update to the same row in the\ntable instead of two different row. Now I have all cores up and running\n100%.\n\nThank you for all your help.\n\nOn Fri, Mar 18, 2011 at 3:21 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Red Maple <[email protected]> wrote:\n>\n> > Here is my function. 
If I comment out the update then it would run\n> > all the cores, if not then only one core will run....\n>\n> > CREATE OR REPLACE FUNCTION\n>\n> > [...]\n>\n> > select sysuptime\n> > into this_sysuptime\n> > from ap_sysuptime\n> > where ap_id = this_id\n> > for update;\n> >\n> > -- ==================================================\n> > -- >>>>>>>> if I comment out the next update\n> > -- >>>>>>>> then all cores will be running,\n> > -- >>>>>>>> else only one core will be running\n> > -- ==================================================\n> > update ap_sysuptime\n> > set sysuptime = this_sysuptime,\n> > last_contacted = now()\n> > where ap_id = this_id;\n>\n> This proves that you're not showing us the important part. The\n> update locks the same row previously locked by the SELECT FOR\n> UPDATE, so any effect at the row level would be a serialization\n> failure based on a write conflict, which doesn't sound like your\n> problem. They get different locks at the table level, though:\n>\n>\n> http://www.postgresql.org/docs/9.0/interactive/explicit-locking.html#LOCKING-TABLES\n>\n> Somewhere in code you're not showing us you're acquiring a lock on\n> the ap_sysuptime table which conflicts with a ROW EXCLUSIVE lock but\n> not with a ROW SHARE lock. The lock types which could do that are\n> SHARE and SHARE ROW EXCLUSIVE. CREATE INDEX (without CONCURRENTLY)\n> could do that; otherwise it seems that you would need to be\n> explicitly issuing a LOCK statement at one of these levels somewhere\n> in your transaction. That is what is causing the transactions to\n> run one at a time.\n>\n> -Kevin\n>\n\nHi,I have found the bug in my code that made the update to the same row in the table instead of two different row. Now I have all cores up and running 100%.Thank you for all your help.\nOn Fri, Mar 18, 2011 at 3:21 PM, Kevin Grittner <[email protected]> wrote:\nRed Maple <[email protected]> wrote:\n\n> Here is my function. If I comment out the update then it would run\n> all the cores, if not then only one core will run....\n\n> CREATE OR REPLACE FUNCTION\n\n> [...]\n\n>       select sysuptime\n>         into this_sysuptime\n>         from ap_sysuptime\n>         where ap_id = this_id\n>         for update;\n>\n>       -- ==================================================\n>       -- >>>>>>>> if I comment out the next update\n>       -- >>>>>>>>   then all cores will be running,\n>       -- >>>>>>>>   else only one core will be running\n>       -- ==================================================\n>       update ap_sysuptime\n>         set sysuptime      = this_sysuptime,\n>             last_contacted = now()\n>         where ap_id = this_id;\n\nThis proves that you're not showing us the important part.  The\nupdate locks the same row previously locked by the SELECT FOR\nUPDATE, so any effect at the row level would be a serialization\nfailure based on a write conflict, which doesn't sound like your\nproblem.  They get different locks at the table level, though:\n\nhttp://www.postgresql.org/docs/9.0/interactive/explicit-locking.html#LOCKING-TABLES\n\nSomewhere in code you're not showing us you're acquiring a lock on\nthe ap_sysuptime table which conflicts with a ROW EXCLUSIVE lock but\nnot with a ROW SHARE lock.  The lock types which could do that are\nSHARE and SHARE ROW EXCLUSIVE.  CREATE INDEX (without CONCURRENTLY)\ncould do that; otherwise it seems that you would need to be\nexplicitly issuing a LOCK statement at one of these levels somewhere\nin your transaction.  
That is what is causing the transactions to\nrun one at a time.\n\n-Kevin", "msg_date": "Tue, 22 Mar 2011 10:13:00 -0400", "msg_from": "Red Maple <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help: massive parallel update to the same table" } ]
[ { "msg_contents": "I have a large table but not as large as the kind of numbers that get\ndiscussed on this list. It has 125 million rows.\n\nREINDEXing the table takes half a day, and it's still not finished.\n\nTo write this post I did \"SELECT COUNT(*)\", and here's the output -- so long!\n\n select count(*) from links;\n count\n -----------\n 125418191\n (1 row)\n\n Time: 1270405.373 ms\n\nThat's 1270 seconds!\n\nI suppose the vaccuum analyze is not doing its job? As you can see\nfrom settings below, I have autovacuum set to ON, and there's also a\ncronjob every 10 hours to do a manual vacuum analyze on this table,\nwhich is largest.\n\nPG is version 8.2.9.\n\nAny thoughts on what I can do to improve performance!?\n\nBelow are my settings.\n\n\n\nmax_connections = 300\nshared_buffers = 500MB\neffective_cache_size = 1GB\nmax_fsm_relations = 1500\nmax_fsm_pages = 950000\n\nwork_mem = 100MB\ntemp_buffers = 4096\nauthentication_timeout = 10s\nssl = off\ncheckpoint_warning = 3600\nrandom_page_cost = 1\n\nautovacuum = on\nautovacuum_vacuum_cost_delay = 20\n\nvacuum_cost_delay = 20\nvacuum_cost_limit = 600\n\nautovacuum_naptime = 10\nstats_start_collector = on\nstats_row_level = on\nautovacuum_vacuum_threshold = 75\nautovacuum_analyze_threshold = 25\nautovacuum_analyze_scale_factor = 0.02\nautovacuum_vacuum_scale_factor = 0.01\n\nwal_buffers = 64\ncheckpoint_segments = 128\ncheckpoint_timeout = 900\nfsync = on\nmaintenance_work_mem = 512MB\n", "msg_date": "Sat, 19 Mar 2011 11:07:33 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Fri, Mar 18, 2011 at 9:07 PM, Phoenix Kiula <[email protected]> wrote:\n> I have a large table but not as large as the kind of numbers that get\n> discussed on this list. It has 125 million rows.\n>\n> REINDEXing the table takes half a day, and it's still not finished.\n>\n> To write this post I did \"SELECT COUNT(*)\", and here's the output -- so long!\n>\n>    select count(*) from links;\n>       count\n>    -----------\n>     125418191\n>    (1 row)\n>\n>    Time: 1270405.373 ms\n>\n> That's 1270 seconds!\n>\n> I suppose the vaccuum analyze is not doing its job? As you can see\n> from settings below, I have autovacuum set to ON, and there's also a\n> cronjob every 10 hours to do a manual vacuum analyze on this table,\n> which is largest.\n>\n> PG is version 8.2.9.\n>\n> Any thoughts on what I can do to improve performance!?\n>\n> Below are my settings.\n>\n>\n>\n> max_connections              = 300\n> shared_buffers               = 500MB\n> effective_cache_size         = 1GB\n> max_fsm_relations            = 1500\n> max_fsm_pages                = 950000\n>\n> work_mem                     = 100MB\n\nWhat is the output of running vacuum verbose as a superuser (you can\nrun it on the postgres database so it returns fast.) We're looking\nfor the output that looks like this:\n\nINFO: free space map contains 1930193 pages in 749 relations\nDETAIL: A total of 1787744 page slots are in use (including overhead).\n1787744 page slots are required to track all free space.\nCurrent limits are: 10000000 page slots, 3000 relations, using 58911 kB.\n\nIf the space needed exceeds page slots then you need to crank up your\nfree space map. 
If the relations exceeds the available then you'll\nneed to crank up max relations.\n", "msg_date": "Fri, 18 Mar 2011 22:58:17 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Fri, Mar 18, 2011 at 9:07 PM, Phoenix Kiula <[email protected]> wrote:\n> autovacuum                   = on\n> autovacuum_vacuum_cost_delay = 20\n>\n> vacuum_cost_delay            = 20\n> vacuum_cost_limit            = 600\n>\n> autovacuum_naptime           = 10\n\nalso, if vacuum can't keep up you can increase the vacuum cost limit,\nand lower the cost delay. Anything above 1ms is still quite a wait\ncompared to 0. And most systems don't have the real granularity to go\nthat low anyway, so 5ms is about as low as you can go and get a change\nbefore 0. Also, if you've got a lot of large relations you might need\nto increase the max workers as well.\n", "msg_date": "Fri, 18 Mar 2011 23:30:15 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "Thanks Scott.\n\n> What is the output of running vacuum verbose as a superuser (you can\n> run it on the postgres database so it returns fast.)\n\n\nHere's the output for postgres DB:\n\n INFO: free space map contains 110614 pages in 33 relations\n DETAIL: A total of 110464 page slots are in use (including overhead).\n 110464 page slots are required to track all free space.\n Current limits are: 950000 page slots, 1500 relations, using 5665 kB.\n VACUUM\n\n\nDoes running it on a postgres database also show the relevant info for\nother databases?\n\n From above it seems fine, right?\n\n\n\n> also, if vacuum can't keep up you can increase the vacuum cost limit,\n> and lower the cost delay.  Anything above 1ms is still quite a wait\n> compared to 0.  And most systems don't have the real granularity to go\n> that low anyway, so 5ms is about as low as you can go and get a change\n> before 0.  Also, if you've got a lot of large relations you might need\n> to increase the max workers as well.\n\n\nI'm not sure I understand this.\n\n(1) I should increase \"max workers\". But I am on version 8.2.9 -- did\nthis version have \"autovacuum_max_workers\"? It seems to be a more\nrecent thing: http://sn.im/27nxe1\n\n(2) The big table in my database (with 125 million rows) has about\n5,000 rows that get DELETEd every day, about 100,000 new INSERTs, and\nabout 12,000 UPDATEs.\n\n(3) What's that thing about cost delay. Which values from vacuum\nshould I check to determine the cost delay -- what's the specific\nformula?\n\nThanks!\n\n\n\n\nOn Sat, Mar 19, 2011 at 12:58 PM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Mar 18, 2011 at 9:07 PM, Phoenix Kiula <[email protected]> wrote:\n>> I have a large table but not as large as the kind of numbers that get\n>> discussed on this list. It has 125 million rows.\n>>\n>> REINDEXing the table takes half a day, and it's still not finished.\n>>\n>> To write this post I did \"SELECT COUNT(*)\", and here's the output -- so long!\n>>\n>>    select count(*) from links;\n>>       count\n>>    -----------\n>>     125418191\n>>    (1 row)\n>>\n>>    Time: 1270405.373 ms\n>>\n>> That's 1270 seconds!\n>>\n>> I suppose the vaccuum analyze is not doing its job? 
As you can see\n>> from settings below, I have autovacuum set to ON, and there's also a\n>> cronjob every 10 hours to do a manual vacuum analyze on this table,\n>> which is largest.\n>>\n>> PG is version 8.2.9.\n>>\n>> Any thoughts on what I can do to improve performance!?\n>>\n>> Below are my settings.\n>>\n>>\n>>\n>> max_connections              = 300\n>> shared_buffers               = 500MB\n>> effective_cache_size         = 1GB\n>> max_fsm_relations            = 1500\n>> max_fsm_pages                = 950000\n>>\n>> work_mem                     = 100MB\n>\n> What is the output of running vacuum verbose as a superuser (you can\n> run it on the postgres database so it returns fast.)  We're looking\n> for the output that looks like this:\n>\n> INFO:  free space map contains 1930193 pages in 749 relations\n> DETAIL:  A total of 1787744 page slots are in use (including overhead).\n> 1787744 page slots are required to track all free space.\n> Current limits are:  10000000 page slots, 3000 relations, using 58911 kB.\n>\n> If the space needed exceeds page slots then you need to crank up your\n> free space map.  If the relations exceeds the available then you'll\n> need to crank up max relations.\n>\n", "msg_date": "Sun, 20 Mar 2011 16:04:06 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Fri, Mar 18, 2011 at 10:07 PM, Phoenix Kiula <[email protected]> wrote:\n> I have a large table but not as large as the kind of numbers that get\n> discussed on this list. It has 125 million rows.\n>\n> REINDEXing the table takes half a day, and it's still not finished.\n>\n> To write this post I did \"SELECT COUNT(*)\", and here's the output -- so long!\n>\n>    select count(*) from links;\n>       count\n>    -----------\n>     125418191\n>    (1 row)\n>\n>    Time: 1270405.373 ms\n>\n> That's 1270 seconds!\n>\n> I suppose the vaccuum analyze is not doing its job? As you can see\n> from settings below, I have autovacuum set to ON, and there's also a\n> cronjob every 10 hours to do a manual vacuum analyze on this table,\n> which is largest.\n>\n> PG is version 8.2.9.\n>\n> Any thoughts on what I can do to improve performance!?\n>\n> Below are my settings.\n>\n>\n>\n> max_connections              = 300\n> shared_buffers               = 500MB\n> effective_cache_size         = 1GB\n> max_fsm_relations            = 1500\n> max_fsm_pages                = 950000\n>\n> work_mem                     = 100MB\n> temp_buffers                 = 4096\n> authentication_timeout       = 10s\n> ssl                          = off\n> checkpoint_warning           = 3600\n> random_page_cost             = 1\n>\n> autovacuum                   = on\n> autovacuum_vacuum_cost_delay = 20\n>\n> vacuum_cost_delay            = 20\n> vacuum_cost_limit            = 600\n>\n> autovacuum_naptime           = 10\n> stats_start_collector        = on\n> stats_row_level              = on\n> autovacuum_vacuum_threshold  = 75\n> autovacuum_analyze_threshold = 25\n> autovacuum_analyze_scale_factor  = 0.02\n> autovacuum_vacuum_scale_factor   = 0.01\n>\n> wal_buffers                  = 64\n> checkpoint_segments          = 128\n> checkpoint_timeout           = 900\n> fsync                        = on\n> maintenance_work_mem         = 512MB\n\nhow much memory do you have? you might want to consider raising\nmaintenance_work_mem to 1GB. Are other things going on in the\ndatabase while you are rebuilding your indexes? 
Is it possible you\nare blocked waiting on a lock for a while?\n\nHow much index data is there? Can we see the table definition along\nwith create index statements?\n\nmerlin\n", "msg_date": "Mon, 21 Mar 2011 08:28:35 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Mon, Mar 21, 2011 at 8:14 PM, Phoenix Kiula <[email protected]> wrote:\n> Thanks Merlin, Scott.\n>\n> First, yes, I can increase maintenance_work_memory. I have 8GB RAM in\n> total, and sure, I can dedicate 1GB of it to PG. Currently PG is the\n> most intensive software here.\n\nIf we're talking maintenance work mem, then you might want to set it\nfor a single connection.\n\nset maintenance_work_mem='1000MB';\nreindex yada yada;\n\netc. So it's not global, just local.\n\n> Second, how can I check if there are other things going on in the\n> database while i REINDEX? Maybe some kind of vacuum is going on, but\n> isn't that supposed to wait while REINDEX is happening for at least\n> this table?\n\nOK, my main point has been that if autovacuum is running well enough,\nthen you don't need reindex, and if you are running it it's a\nmaintenance thing you shouldn't have to schedule all the time, but\nonly run until you get autovac tuned up enough to handle your db\nduring the day. however, I know sometimes you're stuck with what\nyou're stuck with.\n\nYou can see what else is running with the pg_stats_activity view,\nwhich will show you all running queries. That and iotop cna show you\nwhich processes are chewing up how much IO. The other pg_stat_*\ntables can get you a good idea of what's happening to your tables in\nthe database. iostat and vmstat can give you an idea how much IO\nbandwidth you're using.\n\nIf a vacuum starts after the reindex it will either wait or abort and\nnot get in the way. If a vacuum is already running I'm not sure if it\nwill get killed or not.\n", "msg_date": "Mon, 21 Mar 2011 20:48:34 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "Sorry, rejuvenating a thread that was basically unanswered.\n\nI closed the database for any kinds of access to focus on maintenance\noperations, killed all earlier processes so that my maintenance is the\nonly stuff going on.\n\nREINDEX is still taking 3 hours -- and it is still not finished!\n\nSimilarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,\nthis too seems to just hang there on my big table.\n\nI changed the maintenance_work_men to 2GB for this operation. It's\nhighly worrisome -- the above slow times are with 2GB of my server\ndedicated to Postgresql!!!!\n\nSurely this is not tenable for enterprise environments? I am on a\n64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\ncalled. Postgres is 8.2.9.\n\nHow do DB folks do this with small maintenance windows? This is for a\nvery high traffic website so it's beginning to get embarrassing.\n\nWould appreciate any thoughts or pointers.\n\nThanks!\n\n\n\nOn Mon, Mar 21, 2011 at 9:28 PM, Merlin Moncure <[email protected]> wrote:\n> On Fri, Mar 18, 2011 at 10:07 PM, Phoenix Kiula <[email protected]> wrote:\n>> I have a large table but not as large as the kind of numbers that get\n>> discussed on this list. 
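A sketch of the two checks being described, using the 8.2-era catalog names (procpid and current_query were renamed in 9.2):

    -- what else is running right now
    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start;

    -- is anything (e.g. the REINDEX) stuck waiting on a lock?
    SELECT locktype, relation::regclass, mode, granted
    FROM pg_locks
    WHERE NOT granted;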
It has 125 million rows.\n>>\n>> REINDEXing the table takes half a day, and it's still not finished.\n>>\n>> To write this post I did \"SELECT COUNT(*)\", and here's the output -- so long!\n>>\n>>    select count(*) from links;\n>>       count\n>>    -----------\n>>     125418191\n>>    (1 row)\n>>\n>>    Time: 1270405.373 ms\n>>\n>> That's 1270 seconds!\n>>\n>> I suppose the vaccuum analyze is not doing its job? As you can see\n>> from settings below, I have autovacuum set to ON, and there's also a\n>> cronjob every 10 hours to do a manual vacuum analyze on this table,\n>> which is largest.\n>>\n>> PG is version 8.2.9.\n>>\n>> Any thoughts on what I can do to improve performance!?\n>>\n>> Below are my settings.\n>>\n>>\n>>\n>> max_connections              = 300\n>> shared_buffers               = 500MB\n>> effective_cache_size         = 1GB\n>> max_fsm_relations            = 1500\n>> max_fsm_pages                = 950000\n>>\n>> work_mem                     = 100MB\n>> temp_buffers                 = 4096\n>> authentication_timeout       = 10s\n>> ssl                          = off\n>> checkpoint_warning           = 3600\n>> random_page_cost             = 1\n>>\n>> autovacuum                   = on\n>> autovacuum_vacuum_cost_delay = 20\n>>\n>> vacuum_cost_delay            = 20\n>> vacuum_cost_limit            = 600\n>>\n>> autovacuum_naptime           = 10\n>> stats_start_collector        = on\n>> stats_row_level              = on\n>> autovacuum_vacuum_threshold  = 75\n>> autovacuum_analyze_threshold = 25\n>> autovacuum_analyze_scale_factor  = 0.02\n>> autovacuum_vacuum_scale_factor   = 0.01\n>>\n>> wal_buffers                  = 64\n>> checkpoint_segments          = 128\n>> checkpoint_timeout           = 900\n>> fsync                        = on\n>> maintenance_work_mem         = 512MB\n>\n> how much memory do you have? you might want to consider raising\n> maintenance_work_mem to 1GB.  Are other things going on in the\n> database while you are rebuilding your indexes?  Is it possible you\n> are blocked waiting on a lock for a while?\n>\n> How much index data is there?  Can we see the table definition along\n> with create index statements?\n>\n> merlin\n>\n", "msg_date": "Sun, 17 Apr 2011 23:30:30 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sun, Apr 17, 2011 at 9:30 AM, Phoenix Kiula <[email protected]> wrote:\n> Sorry, rejuvenating a thread that was basically unanswered.\n>\n> I closed the database for any kinds of access to focus on maintenance\n> operations, killed all earlier processes so that my maintenance is the\n> only stuff going on.\n>\n> REINDEX is still taking 3 hours -- and it is still not finished!\n>\n> Similarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,\n> this too seems to just hang there on my big table.\n>\n> I changed the maintenance_work_men to 2GB for this operation. It's\n> highly worrisome -- the above slow times are with 2GB of my server\n> dedicated to Postgresql!!!!\n>\n> Surely this is not tenable for enterprise environments? I am on a\n> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n> called. Postgres is 8.2.9.\n>\n> How do DB folks do this with small maintenance windows? This is for a\n> very high traffic website so it's beginning to get embarrassing.\n>\n> Would appreciate any thoughts or pointers.\n\nUpgrade to something more modern than 8.2.x. 
Autovacuum was still\nvery much in its infancy back then. 9.0 or higher is a good choice.\nWhat do iostat -xd 10 and vmstat 10 and top say about these processes\nwhen they're running. \"It's taking a really long time and seems like\nit's hanging\" tells us nothing useful. Your OS has tools to let you\nfigure out what's bottlenecking your operations, so get familiar with\nthem and let us know what they tell you. These are all suggestions I\nmade before which you have now classified as \"not answering your\nquestions\" so I'm getting a little tired of helping you when you don't\nseem interested in helping yourself.\n\nWhat are your vacuum and autovacuum costing values set to? Can you\nmake vacuum and / or autovacuum more aggresive?\n", "msg_date": "Sun, 17 Apr 2011 09:44:04 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "> \n> \n> How do DB folks do this with small maintenance windows? This is for a\n> very high traffic website so it's beginning to get embarrassing.\n\nNormally there is no need to issue reindex. What's your reason for the need?\n\nJesper\n\n> \n\nHow do DB folks do this with small maintenance windows? This is for avery high traffic website so it's beginning to get embarrassing.Normally there is no need to issue reindex.  What's your reason for the need?Jesper", "msg_date": "Sun, 17 Apr 2011 17:45:24 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sun, Apr 17, 2011 at 9:44 AM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Apr 17, 2011 at 9:30 AM, Phoenix Kiula <[email protected]> wrote:\n>> Sorry, rejuvenating a thread that was basically unanswered.\n>>\n>> I closed the database for any kinds of access to focus on maintenance\n>> operations, killed all earlier processes so that my maintenance is the\n>> only stuff going on.\n>>\n>> REINDEX is still taking 3 hours -- and it is still not finished!\n>>\n>> Similarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,\n>> this too seems to just hang there on my big table.\n>>\n>> I changed the maintenance_work_men to 2GB for this operation. It's\n>> highly worrisome -- the above slow times are with 2GB of my server\n>> dedicated to Postgresql!!!!\n>>\n>> Surely this is not tenable for enterprise environments? I am on a\n>> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n>> called. Postgres is 8.2.9.\n>>\n>> How do DB folks do this with small maintenance windows? This is for a\n>> very high traffic website so it's beginning to get embarrassing.\n>>\n>> Would appreciate any thoughts or pointers.\n>\n> Upgrade to something more modern than 8.2.x.  Autovacuum was still\n> very much in its infancy back then.  9.0 or higher is a good choice.\n> What do iostat -xd 10 and vmstat 10 and top say about these processes\n> when they're running.  \"It's taking a really long time and seems like\n> it's hanging\" tells us nothing useful.  Your OS has tools to let you\n> figure out what's bottlenecking your operations, so get familiar with\n> them and let us know what they tell you.  These are all suggestions I\n> made before which you have now classified as \"not answering your\n> questions\" so I'm getting a little tired of helping you when you don't\n> seem interested in helping yourself.\n>\n> What are your vacuum and autovacuum costing values set to?  
Can you\n> make vacuum and / or autovacuum more aggresive?\n\nAlso a few more questions, what are you using for storage? How many\ndrives, RAID controller if any, RAID configuration etc.?\n", "msg_date": "Sun, 17 Apr 2011 10:09:45 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "Thanks Scott.\n\nI have shared huge amounts of info in my emails to Merlin and you.\nIntentionally not shared in public. Apologies if you are feeling\ntired.\n\nThe reason I need to REINDEX is because a simple SELECT query based on\nthe index column is taking ages. It used to take less than a second. I\nwant to make sure that the index is properly in place, at least.\n\nWe went through some BLOAT reports. Apparently Merlin told me there's\nno significant bloat.\n\nA manual VACUUM right now takes ages too. AUTOVACUUM settings are below.\n\nIt's a RAID 1 setup. Two Raptor 10000rpm disks.\n\nTOP does not show much beyond \"postmaster\". How should I use TOP and\nwhat info can I give you? This is what it looks like:\n\n\n14231 root 18 0 4028 872 728 R 93.8 0.0 28915:37\nexim_dbmbuild\n11001 root 25 0 4056 864 716 R 93.8 0.0 23111:06\nexim_dbmbuild\n16400 root 25 0 4824 864 720 R 92.5 0.0 33843:52\nexim_dbmbuild\n 4799 postgres 15 0 532m 94m 93m D 0.7 1.2 0:00.14\npostmaster\n12292 nobody 15 0 48020 14m 5088 S 0.7 0.2 0:00.06 httpd\n12943 root 17 0 2828 1224 776 R 0.7 0.0 0:00.04 top\n 7236 mysql 16 0 224m 64m 3692 S 0.3 0.8 26:43.46 mysqld\n31421 postgres 15 0 530m 12m 12m S 0.3 0.2 0:03.08\npostmaster\n31430 postgres 15 0 10456 576 224 S 0.3 0.0 0:00.08\npostmaster\n 955 postgres 15 0 532m 91m 90m S 0.3 1.1 0:00.15\npostmaster\n 1054 postgres 15 0 532m 196m 195m S 0.3 2.4 0:00.37\npostmaster\n 1232 postgres 15 0 532m 99m 98m D 0.3 1.2 0:00.27\npostmaster\n 1459 postgres 15 0 532m 86m 85m S 0.3 1.1 0:00.12\npostmaster\n 4552 postgres 15 0 532m 86m 85m S 0.3 1.1 0:00.08\npostmaster\n 7187 postgres 15 0 532m 157m 155m S 0.3 1.9 0:00.19\npostmaster\n 7587 postgres 15 0 532m 175m 173m D 0.3 2.2 0:00.23\npostmaster\n 8131 postgres 15 0 532m 154m 152m S 0.3 1.9 0:00.15\npostmaster\n 9473 nobody 16 0 48268 15m 5800 S 0.3 0.2 0:00.34 httpd\n 9474 nobody 15 0 48096 14m 5472 S 0.3 0.2 0:00.27 httpd\n10688 nobody 16 0 0 0 0 Z 0.3 0.0 0:00.20 httpd\n<defunct>\n12261 nobody 15 0 47956 13m 4296 S 0.3 0.2 0:00.08 httpd\n12278 nobody 15 0 47956 13m 4052 S 0.3 0.2 0:00.04 httpd\n12291 nobody 15 0 47972 14m 4956 S 0.3 0.2 0:00.07 httpd\n12673 nobody 15 0 47912 13m 4180 S 0.3 0.2 0:00.02 httpd\n12674 nobody 15 0 47936 13m 4924 S 0.3 0.2 0:00.02 httpd\n12678 nobody 16 0 47912 13m 4060 S 0.3 0.2 0:00.01 httpd\n12727 nobody 15 0 47912 13m 4024 S 0.3 0.2 0:00.03 httpd\n12735 nobody 15 0 47912 13m 4144 S 0.3 0.2 0:00.02 httpd\n\n\nVMSTAT 10 shows this:\n\n\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 3 14 99552 17900 41108 7201712 0 0 42 11 0 0 8 34 41 16\n 2 17 99552 16468 41628 7203012 0 0 1326 84 1437 154810 7 66 12 15\n 3 7 99476 16796 41056 7198976 0 0 1398 96 1453 156211 7 66 21 6\n 3 17 99476 17228 39132 7177240 0 0 1325 68 1529 156111 8 65 16 11\n\n\n\n\nThe results of \"iostat -xd 10\" is:\n\n\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.24 24.55 9.33 4.41 111.31 231.75 55.65 115.88\n 24.97 0.17 12.09 6.67 9.17\nsdb 0.06 97.65 2.21 3.97 91.59 389.58 45.80 194.79\n 77.84 0.06 9.95 2.73 1.69\nsdc 1.46 62.71 187.20 29.13 132.43 311.72 
66.22\n155.86 2.05 0.36 1.65 1.12 24.33\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.00 7.41 0.30 3.50 2.40 87.29 1.20 43.64\n 23.58 0.13 32.92 10.03 3.81\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00 0.00\nsdc 0.00 18.32 158.26 4.10 2519.32 180.98 1259.66\n90.49 16.63 13.04 79.91 6.17 100.11\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\nsda 0.00 6.21 0.00 1.40 0.00 60.86 0.00 30.43\n 43.43 0.03 20.07 15.00 2.10\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00 0.00\nsdc 0.10 10.31 159.06 2.50 2635.44 101.70 1317.72\n50.85 16.94 12.82 79.44 6.20 100.12\n\n\n\n\n8GB memory in total. 1GB devoted to PGSQL during these operations.\nOtherwise, my settings are as follows (and yes I did make the vacuum\nsettings more aggressive based on your email, which has had no\napparent impact) --\n\nmax_connections = 350\nshared_buffers = 500MB\neffective_cache_size = 1250MB\nmax_fsm_relations = 1500\nmax_fsm_pages = 950000\nwork_mem = 100MB\nmaintenance_work_mem = 200MB\ntemp_buffers = 4096\nauthentication_timeout = 10s\nssl = off\ncheckpoint_warning = 3600\nrandom_page_cost = 1\n\n\n\nWhat else can I share?\n\nThanks much for offering to help.\n\n\n\nOn Sun, Apr 17, 2011 at 11:44 PM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Apr 17, 2011 at 9:30 AM, Phoenix Kiula <[email protected]> wrote:\n>> Sorry, rejuvenating a thread that was basically unanswered.\n>>\n>> I closed the database for any kinds of access to focus on maintenance\n>> operations, killed all earlier processes so that my maintenance is the\n>> only stuff going on.\n>>\n>> REINDEX is still taking 3 hours -- and it is still not finished!\n>>\n>> Similarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,\n>> this too seems to just hang there on my big table.\n>>\n>> I changed the maintenance_work_men to 2GB for this operation. It's\n>> highly worrisome -- the above slow times are with 2GB of my server\n>> dedicated to Postgresql!!!!\n>>\n>> Surely this is not tenable for enterprise environments? I am on a\n>> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n>> called. Postgres is 8.2.9.\n>>\n>> How do DB folks do this with small maintenance windows? This is for a\n>> very high traffic website so it's beginning to get embarrassing.\n>>\n>> Would appreciate any thoughts or pointers.\n>\n> Upgrade to something more modern than 8.2.x.  Autovacuum was still\n> very much in its infancy back then.  9.0 or higher is a good choice.\n> What do iostat -xd 10 and vmstat 10 and top say about these processes\n> when they're running.  \"It's taking a really long time and seems like\n> it's hanging\" tells us nothing useful.  Your OS has tools to let you\n> figure out what's bottlenecking your operations, so get familiar with\n> them and let us know what they tell you.  These are all suggestions I\n> made before which you have now classified as \"not answering your\n> questions\" so I'm getting a little tired of helping you when you don't\n> seem interested in helping yourself.\n>\n> What are your vacuum and autovacuum costing values set to?  
Can you\n> make vacuum and / or autovacuum more aggresive?\n>\n", "msg_date": "Mon, 18 Apr 2011 00:59:44 +0800", "msg_from": "Phoenix <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On April 17, 2011, Phoenix <[email protected]> wrote:\n> >> Surely this is not tenable for enterprise environments? I am on a\n> >> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n> >> called. Postgres is 8.2.9.\n> >> \n\n.. and you have essentially 1 disk drive. Your hardware is not sized for a \ndatabase server.\n\n>> it's a RAID 1 setup. Two Raptor 10000rpm disks.\n\n\nOn April 17, 2011, Phoenix <[email protected]> wrote:\n> >> Surely this is not tenable for enterprise environments? I am on a\n> >> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n> >> called. Postgres is 8.2.9.\n> >> \n\n.. and you have essentially 1 disk drive. Your hardware is not sized for a database server.\n\n>> it's a RAID 1 setup. Two Raptor 10000rpm disks.", "msg_date": "Sun, 17 Apr 2011 11:13:52 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "People are running larger InnoDB databases on poorer hardware. Note\nthat I wouldn't dream of it because I care about data integrity and\nstability, but this discussion is purely about performance and I know\nit is possible.\n\nI am sure throwing hardware at it is not the solution. Just trying to\nhighlight what the root cause is. Raptor disks are not that bad, even\nif there's just \"one\" disk with RAID1, especially for a SELECT-heavy\nweb app.\n\nScott's idea of upgrading to 9.x is a good one. But it's not been easy\nin the past. There have been issues related to UTF-8, after the whole\nRPM stuff on CentOS has been sorted out.\n\nQUESTION:\nIf auto_vaccum is ON, and I'm running a manual vacuum, will they\ncoflict with each other or will basically one of them wait for the\nother to finish?\n\n\n\nOn Mon, Apr 18, 2011 at 2:13 AM, Alan Hodgson <[email protected]> wrote:\n> On April 17, 2011, Phoenix <[email protected]> wrote:\n>\n>> >> Surely this is not tenable for enterprise environments? I am on a\n>\n>> >> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n>\n>> >> called. Postgres is 8.2.9.\n>\n>> >>\n>\n> .. and you have essentially 1 disk drive. Your hardware is not sized for a\n> database server.\n>\n>>> it's a RAID 1 setup. Two Raptor 10000rpm disks.\n", "msg_date": "Mon, 18 Apr 2011 02:30:31 +0800", "msg_from": "Shashank Tripathi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sun, Apr 17, 2011 at 10:59 AM, Phoenix <[email protected]> wrote:\n> TOP does not show much beyond \"postmaster\". How should I use TOP and\n> what info can I give you? 
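On the autovacuum question: a quick way to see whether autovacuum has been reaching the big table at all is the statistics view columns added in 8.2 (assuming the table is the links table from earlier in the thread):

    SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'links';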
This is what it looks like:\n\nWe're basically looking to see if the postmaster process doing the\nvacuuming or reindexing is stuck in a D state, which means it's\nwaiting on IO.\nhot the c key while it's running and you should get a little more info\non which processes are what.\n\n>  4799 postgres  15   0  532m  94m  93m D  0.7  1.2   0:00.14\n> postmaster\n\nThat is likely the postmaster that is waiting on IO.\n\n> VMSTAT 10 shows this:\n>\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa\n>  3 14  99552  17900  41108 7201712    0    0    42    11    0     0  8 34 41 16\n>  2 17  99552  16468  41628 7203012    0    0  1326    84 1437 154810  7 66 12 15\n>  3  7  99476  16796  41056 7198976    0    0  1398    96 1453 156211  7 66 21  6\n>  3 17  99476  17228  39132 7177240    0    0  1325    68 1529 156111  8 65 16 11\n\nSo, we're at 11 to 15% io wait. I'm gonna guess you have 8 cores /\nthreads in your CPUs, and 1/8th ot 100% is 12% so looks like you're\nprobably IO bound here. iostat tells us more:\n\n> The results of \"iostat -xd 10\" is:\n> Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s\n> avgrq-sz avgqu-sz   await  svctm  %util\n> sda          0.00   7.41  0.30  3.50    2.40   87.29     1.20    43.64\n>   23.58     0.13   32.92  10.03   3.81\n> sdb          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00\n>    0.00     0.00    0.00   0.00   0.00\n> sdc          0.00  18.32 158.26  4.10 2519.32  180.98  1259.66\n> 90.49    16.63    13.04   79.91   6.17 100.11\n\n100% IO utilization, so yea, it's likely that your sdc drive is your\nbottleneck. Given our little data is actually moving through the sdc\ndrive, it's not very fast.\n\n> Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s\n\n> 8GB memory in total. 1GB devoted to PGSQL during these operations.\n> Otherwise, my settings are as follows (and yes I did make the vacuum\n> settings more aggressive based on your email, which has had no\n> apparent impact) --\n\nYeah, as it gets more aggressive it can use more of your IO bandwidth.\n Since you\n\n> What else can I share?\n\nThat's a lot of help. I'm assuming you're running software or\nmotherboard fake-raid on this RAID-1 set? I'd suggest buying a $500\nor so battery backed caching RAID controller first, the improvements\nin performance are huge with such a card. You might wanna try testing\nthe current RAID-1 set with bonnie++ to get an idea of how fast it is.\n", "msg_date": "Sun, 17 Apr 2011 12:38:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "Thanks for these suggestions.\n\nI am beginning to wonder if the issue is deeper.\n\nI set autovacuum to off, then turned off all the connections to the\ndatabase, and did a manual vacuum just to see how long it takes.\n\nThis was last night my time. I woke up this morning and it has still\nnot finished.\n\nThe maintenance_men given to the DB for this process was 2GB.\n\nThere is nothing else going on on the server! Now, even REINDEX is\njust failing in the middle:\n\n\n# REINDEX INDEX new_idx_userid;\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\n\nWhat else could be wrong?\n\n\n\n\nOn Mon, Apr 18, 2011 at 2:38 AM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Apr 17, 2011 at 10:59 AM, Phoenix <[email protected]> wrote:\n>> TOP does not show much beyond \"postmaster\". How should I use TOP and\n>> what info can I give you? This is what it looks like:\n>\n> We're basically looking to see if the postmaster process doing the\n> vacuuming or reindexing is stuck in a D state, which means it's\n> waiting on IO.\n> hot the c key while it's running and you should get a little more info\n> on which processes are what.\n>\n>>  4799 postgres  15   0  532m  94m  93m D  0.7  1.2   0:00.14\n>> postmaster\n>\n> That is likely the postmaster that is waiting on IO.\n>\n>> VMSTAT 10 shows this:\n>>\n>>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa\n>>  3 14  99552  17900  41108 7201712    0    0    42    11    0     0  8 34 41 16\n>>  2 17  99552  16468  41628 7203012    0    0  1326    84 1437 154810  7 66 12 15\n>>  3  7  99476  16796  41056 7198976    0    0  1398    96 1453 156211  7 66 21  6\n>>  3 17  99476  17228  39132 7177240    0    0  1325    68 1529 156111  8 65 16 11\n>\n> So, we're at 11 to 15% io wait.  I'm gonna guess you have 8 cores /\n> threads in your CPUs, and 1/8th ot 100% is 12% so looks like you're\n> probably IO bound here.  iostat tells us more:\n>\n>> The results of \"iostat -xd 10\" is:\n>> Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s\n>> avgrq-sz avgqu-sz   await  svctm  %util\n>> sda          0.00   7.41  0.30  3.50    2.40   87.29     1.20    43.64\n>>   23.58     0.13   32.92  10.03   3.81\n>> sdb          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00\n>>    0.00     0.00    0.00   0.00   0.00\n>> sdc          0.00  18.32 158.26  4.10 2519.32  180.98  1259.66\n>> 90.49    16.63    13.04   79.91   6.17 100.11\n>\n> 100% IO utilization, so yea, it's likely that your sdc drive is your\n> bottleneck.  Given our little data is actually moving through the sdc\n> drive, it's not very fast.\n>\n>> Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s\n>\n>> 8GB memory in total. 1GB devoted to PGSQL during these operations.\n>> Otherwise, my settings are as follows (and yes I did make the vacuum\n>> settings more aggressive based on your email, which has had no\n>> apparent impact) --\n>\n> Yeah, as it gets more aggressive it can use more of your IO bandwidth.\n>  Since you\n>\n>> What else can I share?\n>\n> That's a lot of help.  I'm assuming you're running software or\n> motherboard fake-raid on this RAID-1 set?  I'd suggest buying a $500\n> or so battery backed caching RAID controller first,  the improvements\n> in performance are huge with such a card.  You might wanna try testing\n> the current RAID-1 set with bonnie++ to get an idea of how fast it is.\n>\n", "msg_date": "Mon, 18 Apr 2011 13:14:34 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "Btw, hardware is not an issue. My db has been working fine for a\nwhile. Smaller poorer systems around the web run InnoDB databases. I\nwouldn't touch that with a barge pole.\n\nI have a hardware RAID controller, not \"fake\". 
It's a good quality\nbattery-backed 3Ware:\nhttp://192.19.193.26/products/serial_ata2-9000.asp\n\n\n\nOn Mon, Apr 18, 2011 at 1:14 PM, Phoenix Kiula <[email protected]> wrote:\n> Thanks for these suggestions.\n>\n> I am beginning to wonder if the issue is deeper.\n>\n> I set autovacuum to off, then turned off all the connections to the\n> database, and did a manual vacuum just to see how long it takes.\n>\n> This was last night my time. I woke up this morning and it has still\n> not finished.\n>\n> The maintenance_men given to the DB for this process was 2GB.\n>\n> There is nothing else going on on the server! Now, even REINDEX is\n> just failing in the middle:\n>\n>\n> # REINDEX INDEX new_idx_userid;\n> server closed the connection unexpectedly\n>        This probably means the server terminated abnormally\n>        before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n>\n> What else could be wrong?\n>\n>\n>\n>\n> On Mon, Apr 18, 2011 at 2:38 AM, Scott Marlowe <[email protected]> wrote:\n>> On Sun, Apr 17, 2011 at 10:59 AM, Phoenix <[email protected]> wrote:\n>>> TOP does not show much beyond \"postmaster\". How should I use TOP and\n>>> what info can I give you? This is what it looks like:\n>>\n>> We're basically looking to see if the postmaster process doing the\n>> vacuuming or reindexing is stuck in a D state, which means it's\n>> waiting on IO.\n>> hot the c key while it's running and you should get a little more info\n>> on which processes are what.\n>>\n>>>  4799 postgres  15   0  532m  94m  93m D  0.7  1.2   0:00.14\n>>> postmaster\n>>\n>> That is likely the postmaster that is waiting on IO.\n>>\n>>> VMSTAT 10 shows this:\n>>>\n>>>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa\n>>>  3 14  99552  17900  41108 7201712    0    0    42    11    0     0  8 34 41 16\n>>>  2 17  99552  16468  41628 7203012    0    0  1326    84 1437 154810  7 66 12 15\n>>>  3  7  99476  16796  41056 7198976    0    0  1398    96 1453 156211  7 66 21  6\n>>>  3 17  99476  17228  39132 7177240    0    0  1325    68 1529 156111  8 65 16 11\n>>\n>> So, we're at 11 to 15% io wait.  I'm gonna guess you have 8 cores /\n>> threads in your CPUs, and 1/8th ot 100% is 12% so looks like you're\n>> probably IO bound here.  iostat tells us more:\n>>\n>>> The results of \"iostat -xd 10\" is:\n>>> Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s\n>>> avgrq-sz avgqu-sz   await  svctm  %util\n>>> sda          0.00   7.41  0.30  3.50    2.40   87.29     1.20    43.64\n>>>   23.58     0.13   32.92  10.03   3.81\n>>> sdb          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00\n>>>    0.00     0.00    0.00   0.00   0.00\n>>> sdc          0.00  18.32 158.26  4.10 2519.32  180.98  1259.66\n>>> 90.49    16.63    13.04   79.91   6.17 100.11\n>>\n>> 100% IO utilization, so yea, it's likely that your sdc drive is your\n>> bottleneck.  Given our little data is actually moving through the sdc\n>> drive, it's not very fast.\n>>\n>>> Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s\n>>\n>>> 8GB memory in total. 1GB devoted to PGSQL during these operations.\n>>> Otherwise, my settings are as follows (and yes I did make the vacuum\n>>> settings more aggressive based on your email, which has had no\n>>> apparent impact) --\n>>\n>> Yeah, as it gets more aggressive it can use more of your IO bandwidth.\n>>  Since you\n>>\n>>> What else can I share?\n>>\n>> That's a lot of help.  
I'm assuming you're running software or\n>> motherboard fake-raid on this RAID-1 set?  I'd suggest buying a $500\n>> or so battery backed caching RAID controller first,  the improvements\n>> in performance are huge with such a card.  You might wanna try testing\n>> the current RAID-1 set with bonnie++ to get an idea of how fast it is.\n>>\n>\n", "msg_date": "Mon, 18 Apr 2011 13:19:03 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Mon, Apr 18, 2011 at 7:14 AM, Phoenix Kiula <[email protected]> wrote:\n> # REINDEX INDEX new_idx_userid;\n> server closed the connection unexpectedly\n>        This probably means the server terminated abnormally\n>        before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n>\n> What else could be wrong?\n\n\nThat's hardly enough information to guess, but since we're trying to\nguess, maybe your maintainance_mem went overboard and your server ran\nout of RAM. Or disk space.\n\nAside from a bug, that's the only reason I can think for a pg backend\nto bail out like that. Well, the connection could have been cut off by\nother means (ie: someone tripped on the cable or something), but lets\nnot dwell on those options.\n", "msg_date": "Mon, 18 Apr 2011 08:39:58 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Mon, Apr 18, 2011 at 8:39 AM, Claudio Freire <[email protected]> wrote:\n> Aside from a bug, that's the only reason I can think for a pg backend\n> to bail out like that. Well, the connection could have been cut off by\n> other means (ie: someone tripped on the cable or something), but lets\n> not dwell on those options.\n\n\nSorry for the double-post, but I should add, you really should set up\nsome kind of monitoring, like cacti[0] with snmp or a similar setup,\nso you can monitor the state of your server in detail without having\nto stare at it.\n\n[0] http://www.cacti.net/\n", "msg_date": "Mon, 18 Apr 2011 08:42:08 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sun, Apr 17, 2011 at 11:19 PM, Phoenix Kiula <[email protected]> wrote:\n> Btw, hardware is not an issue. My db has been working fine for a\n> while. Smaller poorer systems around the web run InnoDB databases. I\n> wouldn't touch that with a barge pole.\n\nDid you or someone in an earlier post say that you didn't have\nproblems with table bloat? I can't remember for sure.\n\nAnyway if it's not hardware then it's drivers or your OS. The output\nof iostat is abysmally bad. 100% utilization but actual throughput is\npretty low. Have you used the CLI utility for your RAID card to check\nfor possible problems or errors? Maybe your battery is dead or\nnon-functioning? Don't just rule out hardware until you're sure yours\nis working well.\n", "msg_date": "Mon, 18 Apr 2011 01:26:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Mon, Apr 18, 2011 at 1:26 AM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Apr 17, 2011 at 11:19 PM, Phoenix Kiula <[email protected]> wrote:\n>> Btw, hardware is not an issue. 
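A few quick checks along the lines suggested above; the tw_cli commands assume 3ware's stock CLI is installed and the card is controller c0, and the log path depends on the local logging setup:

    # did the kernel kill the backend, or did the box run out of disk?
    dmesg | grep -i -E 'out of memory|oom|killed process'
    df -h

    # what PostgreSQL itself logged around the lost connection
    tail -n 200 /var/lib/pgsql/data/pg_log/postgresql-*.log

    # 3ware controller, unit and battery status
    tw_cli /c0 show
    tw_cli /c0/bbu show all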
My db has been working fine for a\n>> while. Smaller poorer systems around the web run InnoDB databases. I\n>> wouldn't touch that with a barge pole.\n>\n> Did you or someone in an earlier post say that you didn't have\n> problems with table bloat?  I can't remember for sure.\n>\n> Anyway if it's not hardware then it's drivers or your OS.  The output\n> of iostat is abysmally bad.  100% utilization but actual throughput is\n> pretty low.  Have you used the CLI utility for your RAID card to check\n> for possible problems or errors?  Maybe your battery is dead or\n> non-functioning?  Don't just rule out hardware until you're sure yours\n> is working well.\n\nFor instance, here is what I get from iostat on my very CPU bound 8\ncore opteron machine with a battery backed caching controller:\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0.00 9.50 0.30 11.20 2.40 1826.40 159.03\n 0.01 0.54 0.50 0.58\nsdb 42.40 219.80 114.60 41.10 27982.40 2088.80\n193.14 0.26 1.67 1.42 22.16\n\nNote that sda is the system / pg_xlog drive, and sdb is the /data/base\ndir, minus pg_xlog. I'm reading ~19MB/s and writing ~1MB/s on sdb and\nthat's using 22% of the IO approximately. My CPUs are all pegged at\n100% and I'm getting ~2500 tps.\n\nI'm betting pgbench on your system will get something really low like\n200 tps and be maxing out your %util.\n", "msg_date": "Mon, 18 Apr 2011 01:35:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sun, Apr 17, 2011 at 11:19 PM, Phoenix Kiula <[email protected]> wrote:\n> Btw, hardware is not an issue. My db has been working fine for a\n> while. Smaller poorer systems around the web run InnoDB databases. I\n> wouldn't touch that with a barge pole.\n>\n> I have a hardware RAID controller, not \"fake\". It's a good quality\n> battery-backed 3Ware:\n> http://192.19.193.26/products/serial_ata2-9000.asp\n\n(please stop top posting)\n\nAlso, when you run top and hit c what do those various postgres\nprocesses say they're doing? bgwriter, SELECT, VACUMM etc?\n", "msg_date": "Mon, 18 Apr 2011 01:38:10 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Mon, Apr 18, 2011 at 3:38 PM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Apr 17, 2011 at 11:19 PM, Phoenix Kiula <[email protected]> wrote:\n>> Btw, hardware is not an issue. My db has been working fine for a\n>> while. Smaller poorer systems around the web run InnoDB databases. I\n>> wouldn't touch that with a barge pole.\n>>\n>> I have a hardware RAID controller, not \"fake\". It's a good quality\n>> battery-backed 3Ware:\n>> http://192.19.193.26/products/serial_ata2-9000.asp\n>\n> (please stop top posting)\n>\n> Also, when you run top and hit c what do those various postgres\n> processes say they're doing?  bgwriter, SELECT, VACUMM etc?\n>\n\n\n\n\n\nThanks. But let me do the \"top\" stuff later. I think I have a bigger\nproblem now.\n\nWhile doing a PG dump, I seem to get this error:\n\n ERROR: invalid memory alloc request size 4294967293\n\nUpon googling, this seems to be a data corruption issue!\n\nOne of the older messages suggests that I do \"file level backup and\nrestore the data\" -\nhttp://archives.postgresql.org/pgsql-admin/2008-05/msg00191.php\n\nHow does one do this -- should I copy the data folder? 
What are the\nspecific steps to restore from here, would I simply copy the files\nfrom the data folder back to the new install or something? Cant find\nthese steps in the PG documentation.\n\nI'm on PG 8.2.9, CentOS 5, with 8GB of RAM.\n", "msg_date": "Mon, 18 Apr 2011 15:45:27 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Mon, Apr 18, 2011 at 1:45 AM, Phoenix Kiula <[email protected]> wrote:\n> On Mon, Apr 18, 2011 at 3:38 PM, Scott Marlowe <[email protected]> wrote:\n>> On Sun, Apr 17, 2011 at 11:19 PM, Phoenix Kiula <[email protected]> wrote:\n>>> Btw, hardware is not an issue. My db has been working fine for a\n>>> while. Smaller poorer systems around the web run InnoDB databases. I\n>>> wouldn't touch that with a barge pole.\n>>>\n>>> I have a hardware RAID controller, not \"fake\". It's a good quality\n>>> battery-backed 3Ware:\n>>> http://192.19.193.26/products/serial_ata2-9000.asp\n>>\n>> (please stop top posting)\n>>\n>> Also, when you run top and hit c what do those various postgres\n>> processes say they're doing?  bgwriter, SELECT, VACUMM etc?\n>>\n>\n>\n>\n>\n>\n> Thanks. But let me do the \"top\" stuff later. I think I have a bigger\n> problem now.\n>\n> While doing a PG dump, I seem to get this error:\n>\n>    ERROR: invalid memory alloc request size 4294967293\n>\n> Upon googling, this seems to be a data corruption issue!\n>\n> One of the older messages suggests that I do \"file level backup and\n> restore the data\" -\n> http://archives.postgresql.org/pgsql-admin/2008-05/msg00191.php\n>\n> How does one do this -- should I copy the data folder? What are the\n> specific steps to restore from here, would I simply copy the files\n> from the data folder back to the new install or something? Cant find\n> these steps in the PG documentation.\n>\n> I'm on PG 8.2.9, CentOS 5, with 8GB of RAM.\n\nI wonder if you've got a drive going bad (or both of them) what does\nyour RAID card have to say about the drives?\n\nTo do a file level backup, setup another machine on the same network,\nwith enough space on a drive with write access for the account you\nwant to backup to. Shut down the Postgres server (sudo\n/etc/init.d/postgresql stop or something like that) then use rsync\n-avl /data/pgdir remoteserver:/newdatadir/ to back it up. you want to\nstart with that so you can at least get back to where you are now if\nthings go wrong.\n\nAlso, after that, run memtest86+ to make sure you don't have memory errors.\n", "msg_date": "Mon, 18 Apr 2011 01:48:16 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "You mean the maintenance instead of mentioning the recovery? If yes\n\nThe following types of administration commands are not accepted during\nrecovery mode:\n\n -\n\n * Data Definition Language (DDL) - e.g. 
CREATE INDEX*\n -\n\n * Privilege and Ownership - GRANT, REVOKE, REASSIGN*\n -\n\n * Maintenance commands - ANALYZE, VACUUM, CLUSTER, REINDEX*\n\nThanks.\n\n\nOn Sun, Apr 17, 2011 at 5:30 PM, Phoenix Kiula <[email protected]>wrote:\n\n> Sorry, rejuvenating a thread that was basically unanswered.\n>\n> I closed the database for any kinds of access to focus on maintenance\n> operations, killed all earlier processes so that my maintenance is the\n> only stuff going on.\n>\n> REINDEX is still taking 3 hours -- and it is still not finished!\n>\n> Similarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,\n> this too seems to just hang there on my big table.\n>\n> I changed the maintenance_work_men to 2GB for this operation. It's\n> highly worrisome -- the above slow times are with 2GB of my server\n> dedicated to Postgresql!!!!\n>\n> Surely this is not tenable for enterprise environments? I am on a\n> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n> called. Postgres is 8.2.9.\n>\n> How do DB folks do this with small maintenance windows? This is for a\n> very high traffic website so it's beginning to get embarrassing.\n>\n> Would appreciate any thoughts or pointers.\n>\n> Thanks!\n>\n>\n>\n> On Mon, Mar 21, 2011 at 9:28 PM, Merlin Moncure <[email protected]>\n> wrote:\n> > On Fri, Mar 18, 2011 at 10:07 PM, Phoenix Kiula <[email protected]>\n> wrote:\n> >> I have a large table but not as large as the kind of numbers that get\n> >> discussed on this list. It has 125 million rows.\n> >>\n> >> REINDEXing the table takes half a day, and it's still not finished.\n> >>\n> >> To write this post I did \"SELECT COUNT(*)\", and here's the output -- so\n> long!\n> >>\n> >> select count(*) from links;\n> >> count\n> >> -----------\n> >> 125418191\n> >> (1 row)\n> >>\n> >> Time: 1270405.373 ms\n> >>\n> >> That's 1270 seconds!\n> >>\n> >> I suppose the vaccuum analyze is not doing its job? As you can see\n> >> from settings below, I have autovacuum set to ON, and there's also a\n> >> cronjob every 10 hours to do a manual vacuum analyze on this table,\n> >> which is largest.\n> >>\n> >> PG is version 8.2.9.\n> >>\n> >> Any thoughts on what I can do to improve performance!?\n> >>\n> >> Below are my settings.\n> >>\n> >>\n> >>\n> >> max_connections = 300\n> >> shared_buffers = 500MB\n> >> effective_cache_size = 1GB\n> >> max_fsm_relations = 1500\n> >> max_fsm_pages = 950000\n> >>\n> >> work_mem = 100MB\n> >> temp_buffers = 4096\n> >> authentication_timeout = 10s\n> >> ssl = off\n> >> checkpoint_warning = 3600\n> >> random_page_cost = 1\n> >>\n> >> autovacuum = on\n> >> autovacuum_vacuum_cost_delay = 20\n> >>\n> >> vacuum_cost_delay = 20\n> >> vacuum_cost_limit = 600\n> >>\n> >> autovacuum_naptime = 10\n> >> stats_start_collector = on\n> >> stats_row_level = on\n> >> autovacuum_vacuum_threshold = 75\n> >> autovacuum_analyze_threshold = 25\n> >> autovacuum_analyze_scale_factor = 0.02\n> >> autovacuum_vacuum_scale_factor = 0.01\n> >>\n> >> wal_buffers = 64\n> >> checkpoint_segments = 128\n> >> checkpoint_timeout = 900\n> >> fsync = on\n> >> maintenance_work_mem = 512MB\n> >\n> > how much memory do you have? you might want to consider raising\n> > maintenance_work_mem to 1GB. Are other things going on in the\n> > database while you are rebuilding your indexes? Is it possible you\n> > are blocked waiting on a lock for a while?\n> >\n> > How much index data is there? 
Can we see the table definition along\n> > with create index statements?\n> >\n> > merlin\n> >\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYou mean the maintenance instead of mentioning the recovery? If yesThe following types of administration commands are not accepted during recovery mode: Data Definition Language (DDL) - e.g. CREATE INDEX \n Privilege and Ownership - GRANT, REVOKE, REASSIGN Maintenance commands - ANALYZE, VACUUM, CLUSTER, REINDEX\nThanks.On Sun, Apr 17, 2011 at 5:30 PM, Phoenix Kiula <[email protected]> wrote:\nSorry, rejuvenating a thread that was basically unanswered.\n\nI closed the database for any kinds of access to focus on maintenance\noperations, killed all earlier processes so that my maintenance is the\nonly stuff going on.\n\nREINDEX is still taking 3 hours -- and it is still not finished!\n\nSimilarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,\nthis too seems to just hang there on my big table.\n\nI changed the maintenance_work_men to 2GB for this operation. It's\nhighly worrisome -- the above slow times are with 2GB of my server\ndedicated to Postgresql!!!!\n\nSurely this is not tenable for enterprise environments? I am on a\n64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\ncalled. Postgres is 8.2.9.\n\nHow do DB folks do this with small maintenance windows? This is for a\nvery high traffic website so it's beginning to get embarrassing.\n\nWould appreciate any thoughts or pointers.\n\nThanks!\n\n\n\nOn Mon, Mar 21, 2011 at 9:28 PM, Merlin Moncure <[email protected]> wrote:\n> On Fri, Mar 18, 2011 at 10:07 PM, Phoenix Kiula <[email protected]> wrote:\n>> I have a large table but not as large as the kind of numbers that get\n>> discussed on this list. It has 125 million rows.\n>>\n>> REINDEXing the table takes half a day, and it's still not finished.\n>>\n>> To write this post I did \"SELECT COUNT(*)\", and here's the output -- so long!\n>>\n>>    select count(*) from links;\n>>       count\n>>    -----------\n>>     125418191\n>>    (1 row)\n>>\n>>    Time: 1270405.373 ms\n>>\n>> That's 1270 seconds!\n>>\n>> I suppose the vaccuum analyze is not doing its job? 
As you can see\n>> from settings below, I have autovacuum set to ON, and there's also a\n>> cronjob every 10 hours to do a manual vacuum analyze on this table,\n>> which is largest.\n>>\n>> PG is version 8.2.9.\n>>\n>> Any thoughts on what I can do to improve performance!?\n>>\n>> Below are my settings.\n>>\n>>\n>>\n>> max_connections              = 300\n>> shared_buffers               = 500MB\n>> effective_cache_size         = 1GB\n>> max_fsm_relations            = 1500\n>> max_fsm_pages                = 950000\n>>\n>> work_mem                     = 100MB\n>> temp_buffers                 = 4096\n>> authentication_timeout       = 10s\n>> ssl                          = off\n>> checkpoint_warning           = 3600\n>> random_page_cost             = 1\n>>\n>> autovacuum                   = on\n>> autovacuum_vacuum_cost_delay = 20\n>>\n>> vacuum_cost_delay            = 20\n>> vacuum_cost_limit            = 600\n>>\n>> autovacuum_naptime           = 10\n>> stats_start_collector        = on\n>> stats_row_level              = on\n>> autovacuum_vacuum_threshold  = 75\n>> autovacuum_analyze_threshold = 25\n>> autovacuum_analyze_scale_factor  = 0.02\n>> autovacuum_vacuum_scale_factor   = 0.01\n>>\n>> wal_buffers                  = 64\n>> checkpoint_segments          = 128\n>> checkpoint_timeout           = 900\n>> fsync                        = on\n>> maintenance_work_mem         = 512MB\n>\n> how much memory do you have? you might want to consider raising\n> maintenance_work_mem to 1GB.  Are other things going on in the\n> database while you are rebuilding your indexes?  Is it possible you\n> are blocked waiting on a lock for a while?\n>\n> How much index data is there?  Can we see the table definition along\n> with create index statements?\n>\n> merlin\n>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 18 Apr 2011 13:15:09 +0200", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "> Thanks. But let me do the \"top\" stuff later. I think I have a bigger\n> problem now.\n>\n> While doing a PG dump, I seem to get this error:\n>\n> ERROR: invalid memory alloc request size 4294967293\n>\n> Upon googling, this seems to be a data corruption issue!\n>\n> One of the older messages suggests that I do \"file level backup and\n> restore the data\" -\n> http://archives.postgresql.org/pgsql-admin/2008-05/msg00191.php\n>\n> How does one do this -- should I copy the data folder? What are the\n> specific steps to restore from here, would I simply copy the files\n> from the data folder back to the new install or something? Cant find\n> these steps in the PG documentation.\n\nJust stop the database, and copy the 'data' directory somewhere else (to a\ndifferent machine prefferably). 
You can then start the database from this\ndirectory copy (not sure how that works in CentOS, but you can always run\n\"postmaster -D directory\").\n\n>\n> I'm on PG 8.2.9, CentOS 5, with 8GB of RAM.\n>\n\nThis is a massive thread (and part of the important info is in another\nthread other mailing lists), so maybe I've missed something important, but\nit seems like:\n\n1) You're I/O bound (according to the 100% utilization reported by iostat).\n\n2) Well, you're running RAID1 setup, which basically means it's 1 drive\n(and you're doing reindex, which means a lot of read/writes).\n\n3) The raid controller should handle this, unless it's broken, the battery\nis empty (and thus the writes are not cached) or something like that. I'm\nnot that familiar with 3ware - is there any diagnostic tool that you use\nto check the health of the controller / drives?\n\n4) I know you've mentioned there is no bloat (according to off-the-list\ndiscussion with Merlin) - is this true for the table only? Because if the\nindex is not bloated, then there's no point in running reindex ...\n\nBTW what is the size of the database and that big table? I know it's 125\nmillion rows, but how much is that? 1GB, 1TB, ... how much? What does\nthis return\n\n SELECT reltuples FROM pg_class WHERE relname = 'links';\n\nDo you have any pg_dump backups? What size are they, compared to the live\ndatabase? Havou you tried to rebuild the database from these backups? That\nwould give you a fresh indexes, so you could see how a 'perfectly clean'\ndatabase looks (whether the indexes bloated, what speed is expected etc.).\n\nregards\nTomas\n\n", "msg_date": "Mon, 18 Apr 2011 17:15:10 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Apr 17, 2011, at 11:30 AM, Phoenix Kiula <[email protected]> wrote:\n> Sorry, rejuvenating a thread that was basically unanswered.\n> \n> I closed the database for any kinds of access to focus on maintenance\n> operations, killed all earlier processes so that my maintenance is the\n> only stuff going on.\n> \n> REINDEX is still taking 3 hours -- and it is still not finished!\n> \n> Similarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,\n> this too seems to just hang there on my big table.\n> \n> I changed the maintenance_work_men to 2GB for this operation. It's\n> highly worrisome -- the above slow times are with 2GB of my server\n> dedicated to Postgresql!!!!\n> \n> Surely this is not tenable for enterprise environments? I am on a\n> 64bit RedHat server with dual CPU Intel Woodcrest or whatever that was\n> called. Postgres is 8.2.9.\n> \n> How do DB folks do this with small maintenance windows? This is for a\n> very high traffic website so it's beginning to get embarrassing.\n> \n> Would appreciate any thoughts or pointers.\n\nAn upgrade would probably help you a lot, and as others have said it sounds like your hardware is failing, so you probably want to deal with that first.\n\nI am a bit surprised, however, that no one seems to have mentioned using CLUSTER rather than VACUUM or REINDEX. 
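For reference, 8.2 still uses the older CLUSTER syntax with the index name first (the CLUSTER ... USING form only arrived in 8.3). Assuming the primary key index is named links_pkey, it would look like:

    CLUSTER links_pkey ON links;   -- rewrites the table and its indexes; holds an exclusive lock for the duration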
Sometimes that's worth a try...\n\n...Robert", "msg_date": "Sat, 23 Apr 2011 15:44:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On 04/23/2011 03:44 PM, Robert Haas wrote:\n> On Apr 17, 2011, at 11:30 AM, Phoenix Kiula<[email protected]> wrote:\n> \n>> Postgres is 8.2.9.\n>>\n>> \n> An upgrade would probably help you a lot, and as others have said it sounds like your hardware is failing, so you probably want to deal with that first.\n>\n> I am a bit surprised, however, that no one seems to have mentioned using CLUSTER rather than VACUUM or REINDEX. Sometimes that's worth a try...\n> \n\nDon't know if it was for this reason or not for not mentioning it by \nothers, but CLUSTER isn't so great in 8.2. The whole \"not MVCC-safe\" \nbit does not inspire confidence on a production server.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n\n", "msg_date": "Sat, 30 Apr 2011 04:07:13 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sat, Apr 30, 2011 at 4:07 PM, Greg Smith <[email protected]> wrote:\n> On 04/23/2011 03:44 PM, Robert Haas wrote:\n>>\n>> On Apr 17, 2011, at 11:30 AM, Phoenix Kiula<[email protected]>\n>>  wrote:\n>>\n>>>\n>>> Postgres is 8.2.9.\n>>>\n>>>\n>>\n>> An upgrade would probably help you a lot, and as others have said it\n>> sounds like your hardware is failing, so you probably want to deal with that\n>> first.\n>>\n>> I am a bit surprised, however, that no one seems to have mentioned using\n>> CLUSTER rather than VACUUM or REINDEX. Sometimes that's worth a try...\n>>\n>\n> Don't know if it was for this reason or not for not mentioning it by others,\n> but CLUSTER isn't so great in 8.2.  The whole \"not MVCC-safe\" bit does not\n> inspire confidence on a production server.\n\n\n\n\nTo everyone. Thanks so much for everything, truly. We have managed to\nsalvage the data by exporting it in bits and pieces.\n\n1. First the schema only\n2. Then pg_dump of specific small tables\n3. Then pg_dump of timed bits of the big mammoth table\n\nNot to jinx it, but the newer hardware seems to be doing well. I am on\n9.0.4 now and it's pretty fast.\n\nAlso, as has been mentioned in this thread and other discussions on\nthe list, just doing a dump and then fresh reload has compacted the DB\nto nearly 1/3rd of its previously reported size!\n\nI suppose that's what I am going to do on a periodic basis from now\non. There is a lot of DELETE/UPDATE activity. But I wonder if the\nvacuum stuff really should do something that's similar in function?\nWhat do the high-end enterprise folks do -- surely they can't be\ndumping/restoring every quarter or so....or are they?\n\nAnyway, many many thanks to the lovely folks on this list. 
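For anyone hitting the same corruption, the "bits and pieces" export described above boils down to something like this (the database name, column name and cut-off date are made up for illustration; COPY (SELECT ...) works from 8.2 onwards):

    pg_dump -s mydb > schema.sql
    psql -d mydb -c "COPY (SELECT * FROM links WHERE created_at <  '2010-01-01') TO STDOUT" > links_a.copy
    psql -d mydb -c "COPY (SELECT * FROM links WHERE created_at >= '2010-01-01') TO STDOUT" > links_b.copy

A slice that hits the damaged block will fail; narrowing that range isolates the bad rows while everything else copies out cleanly.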
Much appreciated!\n", "msg_date": "Sat, 30 Apr 2011 17:26:36 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sat, Apr 30, 2011 at 05:26:36PM +0800, Phoenix Kiula wrote:\n> On Sat, Apr 30, 2011 at 4:07 PM, Greg Smith <[email protected]> wrote:\n> > On 04/23/2011 03:44 PM, Robert Haas wrote:\n> >>\n> >> On Apr 17, 2011, at 11:30 AM, Phoenix Kiula<[email protected]>\n> >> ?wrote:\n> >>\n> >>>\n> >>> Postgres is 8.2.9.\n> >>>\n> >>>\n> >>\n> >> An upgrade would probably help you a lot, and as others have said it\n> >> sounds like your hardware is failing, so you probably want to deal with that\n> >> first.\n> >>\n> >> I am a bit surprised, however, that no one seems to have mentioned using\n> >> CLUSTER rather than VACUUM or REINDEX. Sometimes that's worth a try...\n> >>\n> >\n> > Don't know if it was for this reason or not for not mentioning it by others,\n> > but CLUSTER isn't so great in 8.2. ?The whole \"not MVCC-safe\" bit does not\n> > inspire confidence on a production server.\n> \n> \n> \n> \n> To everyone. Thanks so much for everything, truly. We have managed to\n> salvage the data by exporting it in bits and pieces.\n> \n> 1. First the schema only\n> 2. Then pg_dump of specific small tables\n> 3. Then pg_dump of timed bits of the big mammoth table\n> \n> Not to jinx it, but the newer hardware seems to be doing well. I am on\n> 9.0.4 now and it's pretty fast.\n> \n> Also, as has been mentioned in this thread and other discussions on\n> the list, just doing a dump and then fresh reload has compacted the DB\n> to nearly 1/3rd of its previously reported size!\n> \n> I suppose that's what I am going to do on a periodic basis from now\n> on. There is a lot of DELETE/UPDATE activity. But I wonder if the\n> vacuum stuff really should do something that's similar in function?\n> What do the high-end enterprise folks do -- surely they can't be\n> dumping/restoring every quarter or so....or are they?\n> \n> Anyway, many many thanks to the lovely folks on this list. Much appreciated!\n> \n\nThe autovacuum and space management in 9.0 is dramatically more effective\nand efficient then that of 8.2. Unless you have an odd corner-case there\nreally should be no reason for a periodic dump/restore. This is not your\ngrandmother's Oldsmobile... :)\n\nRegards,\nKen\n", "msg_date": "Sat, 30 Apr 2011 09:34:21 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Apr 30, 2011, at 9:34 AM, Kenneth Marshall wrote:\n>> I suppose that's what I am going to do on a periodic basis from now\n>> on. There is a lot of DELETE/UPDATE activity. But I wonder if the\n>> vacuum stuff really should do something that's similar in function?\n>> What do the high-end enterprise folks do -- surely they can't be\n>> dumping/restoring every quarter or so....or are they?\n>> \n>> Anyway, many many thanks to the lovely folks on this list. Much appreciated!\n>> \n> \n> The autovacuum and space management in 9.0 is dramatically more effective\n> and efficient then that of 8.2. Unless you have an odd corner-case there\n> really should be no reason for a periodic dump/restore. This is not your\n> grandmother's Oldsmobile... :)\n\nIn 10+ years of using Postgres, I've never come across a case where you actually *need* to dump and restore on a regular basis. 
However, you can certainly run into scenarios where vacuum simply can't keep up. If your restored database is 1/3 the size of the original then this is certainly what was happening on your 8.2 setup.\n\nAs Kenneth mentioned, 9.0 is far better in this regard than 8.2, though it's still possible that you're doing something that will give it fits. I suggest that you run a weekly vacuumdb -av, capture that output and run it through pgFouine. That will give you a ton of useful information about the amount of bloat you have in each table. I would definitely look at anything with over 20% bloat.\n\nBTW, in case you're still questioning using Postgres in an enterprise setting; all of our production OLTP databases run on Postgres. The largest one is ~1.5TB and does over 650TPS on average (with peaks that are much higher). Unplanned downtime on that database would cost us well over $100k/hour, and we're storing financial information, so data quality issues are not an option (data quality was one of the primary reasons we moved away from MySQL in 2006). So yes, you can absolutely run very large Postgres databases in a high-workload environment. BTW, that's also on version 8.3.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Wed, 4 May 2011 08:28:03 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" }, { "msg_contents": "On Sat, Apr 30, 2011 at 4:26 AM, Phoenix Kiula <[email protected]> wrote:\n> I suppose that's what I am going to do on a periodic basis from now\n> on. There is a lot of DELETE/UPDATE activity. But I wonder if the\n> vacuum stuff really should do something that's similar in function?\n> What do the high-end enterprise folks do -- surely they can't be\n> dumping/restoring every quarter or so....or are they?\n\nThe pg_reorg tool (google it) can rebuild a live table rebuilds\nwithout taking major locks. It's better to try an engineer your\ndatabase so that you have enough spare i/o to manage 1-2 continuously\nrunning vacuums, but if things get really out of whack it's there.\n\nmerlin\n", "msg_date": "Fri, 6 May 2011 10:15:38 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX takes half a day (and still not complete!)" } ]
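A minimal sketch of the size check asked for in this thread, assuming the same 'links' table name mentioned above (pg_size_pretty, pg_relation_size and pg_total_relation_size are standard PostgreSQL functions; the table name is only illustrative). Run from psql:

    -- total size including indexes and TOAST, versus the heap alone
    SELECT pg_size_pretty(pg_total_relation_size('links'))  AS total_with_indexes,
           pg_size_pretty(pg_relation_size('links'))         AS heap_only,
           pg_size_pretty(pg_total_relation_size('links')
                          - pg_relation_size('links'))       AS indexes_and_toast;

Comparing these figures against a freshly restored copy of the database (as the poster eventually did) gives a rough but serviceable estimate of how much space vacuum has failed to reclaim.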
[ { "msg_contents": "Hi all,\n\nAt Bull company, we want to answer a call for tender from a large \ncompany. And we are asked for information about PostgreSQL performance \nunder AIX on Power 7 servers.\n\nBy chance, has someone some data about this ?\nHas someone performed a benchmark using AIX quite recently ?\n\nAre there any reasons for having performance level significantly \ndifferent between AIX and, let say, Linux, on a given platform ?\n\nThanks by advance for any help.\n\nPhilippe BEAUDOIN\n\n", "msg_date": "Sat, 19 Mar 2011 10:00:29 +0100", "msg_from": "phb07 <[email protected]>", "msg_from_op": true, "msg_subject": "Performance on AIX " }, { "msg_contents": "On 03/19/2011 04:00 AM, phb07 wrote:\n> Hi all,\n>\n> At Bull company, we want to answer a call for tender from a large company. And we are asked for information about PostgreSQL performance under AIX on Power 7 servers.\n>\n> By chance, has someone some data about this ?\n> Has someone performed a benchmark using AIX quite recently ?\n>\n> Are there any reasons for having performance level significantly different between AIX and, let say, Linux, on a given platform ?\n>\n> Thanks by advance for any help.\n>\n> Philippe BEAUDOIN\n>\n>\nDunno, never gotten to play with AIX or a Power 7... If you sent me one I'd be more than happy to benchmark it and send it back :-)\n\nOr, more seriously, even remote ssh would do.\n\n-Andy\n", "msg_date": "Sat, 19 Mar 2011 08:17:31 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on AIX" }, { "msg_contents": "Phillippe,\n\n> At Bull company, we want to answer a call for tender from a large\n> company. And we are asked for information about PostgreSQL performance\n> under AIX on Power 7 servers.\n\nAfilias runs PostgreSQL on AIX. I don't know the architecture, though.\n Or what they think of it as a platform.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Tue, 22 Mar 2011 14:24:36 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on AIX" } ]
[ { "msg_contents": "I have noticed that SELECT ... = ANY(ARRAY(...)) is about twice as fast as SELECT IN ( ... ).\nCan anyone explain a reason for this? Results are the bottom and are reproducible. I can test with other versions if that is necessary.\n\n./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl\nCentOS release 5.4 (Final)\npsql (PostgreSQL) 8.4.1\n\nprompt2=# select count(*) from nodes;\n count \n--------\n 754734\n(1 row)\n\n\nprompt2=# \\d nodes\n Table \"public.nodes\"\n Column | Type | Modifiers \n--------------+--------------------------+-----------------------------------------------------------\n node_id | integer | not null default nextval(('node_id_seq'::text)::regclass)\n node_type_id | integer | not null\n template_id | integer | not null\n timestamp | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\nIndexes:\n \"nodes_pkey\" PRIMARY KEY, btree (node_id)\n \"n_node_id_index\" btree (node_id)\n \"n_node_type_id_index\" btree (node_type_id)\n \"n_template_id_index\" btree (template_id)\n\nprompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n count \n--------\n 100000\n(1 row)\n\nTime: 404.530 ms\nprompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n count \n--------\n 100000\n(1 row)\n\nTime: 407.316 ms\nprompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n count \n--------\n 100000\n(1 row)\n\nTime: 408.728 ms\nprompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n count \n--------\n 100000\n(1 row)\n\nTime: 793.840 ms\nprompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n count \n--------\n 100000\n(1 row)\n\nTime: 779.137 ms\nprompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n count \n--------\n 100000\n(1 row)\n\nTime: 781.820 ms\n\n", "msg_date": "Sun, 20 Mar 2011 02:47:15 -0400", "msg_from": "Adam Tistler <[email protected]>", "msg_from_op": true, "msg_subject": "Select in subselect vs select = any array" }, { "msg_contents": "Hello\n\n2011/3/20 Adam Tistler <[email protected]>:\n> I have noticed that SELECT ... = ANY(ARRAY(...))  is about twice as fast as SELECT IN ( ... ).\n> Can anyone explain a reason for this?  Results are the bottom and are reproducible.  
I can test with other versions if that is necessary.\n>\n\nsend a result of EXPLAIN ANALYZE SELECT ..., please\n\nThe reasons can be different - less seq scans, indexes\n\nRegards\n\nPavel Stehule\n\n\n\n> ./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl\n> CentOS release 5.4 (Final)\n> psql (PostgreSQL) 8.4.1\n>\n> prompt2=# select count(*) from nodes;\n>  count\n> --------\n>  754734\n> (1 row)\n>\n>\n> prompt2=# \\d nodes\n>                                        Table \"public.nodes\"\n>    Column    |           Type           |                         Modifiers\n> --------------+--------------------------+-----------------------------------------------------------\n>  node_id      | integer                  | not null default nextval(('node_id_seq'::text)::regclass)\n>  node_type_id | integer                  | not null\n>  template_id  | integer                  | not null\n>  timestamp    | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\n> Indexes:\n>    \"nodes_pkey\" PRIMARY KEY, btree (node_id)\n>    \"n_node_id_index\" btree (node_id)\n>    \"n_node_type_id_index\" btree (node_type_id)\n>    \"n_template_id_index\" btree (template_id)\n>\n> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>  count\n> --------\n>  100000\n> (1 row)\n>\n> Time: 404.530 ms\n> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>  count\n> --------\n>  100000\n> (1 row)\n>\n> Time: 407.316 ms\n> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>  count\n> --------\n>  100000\n> (1 row)\n>\n> Time: 408.728 ms\n> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>  count\n> --------\n>  100000\n> (1 row)\n>\n> Time: 793.840 ms\n> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>  count\n> --------\n>  100000\n> (1 row)\n>\n> Time: 779.137 ms\n> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>  count\n> --------\n>  100000\n> (1 row)\n>\n> Time: 781.820 ms\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 20 Mar 2011 07:51:20 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select in subselect vs select = any array" }, { "msg_contents": "logicops2=# explain analyze select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1718.59..1718.60 rows=1 width=0) (actual time=509.126..509.127 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.010..76.604 rows=100000 loops=1)\n -> Seq Scan on nodes (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.008..38.105 rows=100000 loops=1)\n -> Bitmap Heap Scan on nodes (cost=42.67..81.53 rows=10 width=0) (actual time=447.274..484.283 rows=100000 loops=1)\n Recheck Cond: (node_id = ANY ($0))\n -> Bitmap Index Scan on n_node_id_index (cost=0.00..42.67 rows=10 width=0) (actual time=447.074..447.074 rows=100000 loops=1)\n Index Cond: (node_id = 
ANY ($0))\n Total runtime: 509.209 ms\n(9 rows)\n\nTime: 510.009 ms\n\n\nlogicops2=# explain analyze select count(*) from nodes where node_id in (select node_id from nodes limit 100000);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3017.17..3017.18 rows=1 width=0) (actual time=1052.866..1052.866 rows=1 loops=1)\n -> Nested Loop (cost=2887.04..3016.67 rows=200 width=0) (actual time=167.310..1021.540 rows=100000 loops=1)\n -> HashAggregate (cost=2887.04..2889.04 rows=200 width=4) (actual time=167.198..251.205 rows=100000 loops=1)\n -> Limit (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.008..80.090 rows=100000 loops=1)\n -> Seq Scan on nodes (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.007..41.566 rows=100000 loops=1)\n -> Index Scan using n_node_id_index on nodes (cost=0.00..0.63 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=100000)\n Index Cond: (public.nodes.node_id = public.nodes.node_id)\n Total runtime: 1053.523 ms\n(8 rows)\n\nTime: 1054.864 ms\n\n\n\nOn Mar 20, 2011, at 2:51 AM, Pavel Stehule wrote:\n\n> Hello\n> \n> 2011/3/20 Adam Tistler <[email protected]>:\n>> I have noticed that SELECT ... = ANY(ARRAY(...)) is about twice as fast as SELECT IN ( ... ).\n>> Can anyone explain a reason for this? Results are the bottom and are reproducible. I can test with other versions if that is necessary.\n>> \n> \n> send a result of EXPLAIN ANALYZE SELECT ..., please\n> \n> The reasons can be different - less seq scans, indexes\n> \n> Regards\n> \n> Pavel Stehule\n> \n> \n> \n>> ./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl\n>> CentOS release 5.4 (Final)\n>> psql (PostgreSQL) 8.4.1\n>> \n>> prompt2=# select count(*) from nodes;\n>> count\n>> --------\n>> 754734\n>> (1 row)\n>> \n>> \n>> prompt2=# \\d nodes\n>> Table \"public.nodes\"\n>> Column | Type | Modifiers\n>> --------------+--------------------------+-----------------------------------------------------------\n>> node_id | integer | not null default nextval(('node_id_seq'::text)::regclass)\n>> node_type_id | integer | not null\n>> template_id | integer | not null\n>> timestamp | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\n>> Indexes:\n>> \"nodes_pkey\" PRIMARY KEY, btree (node_id)\n>> \"n_node_id_index\" btree (node_id)\n>> \"n_node_type_id_index\" btree (node_type_id)\n>> \"n_template_id_index\" btree (template_id)\n>> \n>> prompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n>> count\n>> --------\n>> 100000\n>> (1 row)\n>> \n>> Time: 404.530 ms\n>> prompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n>> count\n>> --------\n>> 100000\n>> (1 row)\n>> \n>> Time: 407.316 ms\n>> prompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n>> count\n>> --------\n>> 100000\n>> (1 row)\n>> \n>> Time: 408.728 ms\n>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>> count\n>> --------\n>> 100000\n>> (1 row)\n>> \n>> Time: 793.840 ms\n>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>> count\n>> --------\n>> 100000\n>> (1 row)\n>> \n>> Time: 779.137 ms\n>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>> count\n>> --------\n>> 
100000\n>> (1 row)\n>> \n>> Time: 781.820 ms\n>> \n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n\n", "msg_date": "Sun, 20 Mar 2011 23:20:56 -0400", "msg_from": "Adam Tistler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select in subselect vs select = any array" }, { "msg_contents": "Hello\n\nI think so HashAggregate goes out of memory - you can try to increase\na work_mem.\n\nThere are better queries for counting duplicit then cross join\n\nRegards\n\nPavel Stehule\n\n2011/3/21 Adam Tistler <[email protected]>:\n> logicops2=# explain analyze select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>                                                               QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=1718.59..1718.60 rows=1 width=0) (actual time=509.126..509.127 rows=1 loops=1)\n>   InitPlan 1 (returns $0)\n>     ->  Limit  (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.010..76.604 rows=100000 loops=1)\n>           ->  Seq Scan on nodes  (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.008..38.105 rows=100000 loops=1)\n>   ->  Bitmap Heap Scan on nodes  (cost=42.67..81.53 rows=10 width=0) (actual time=447.274..484.283 rows=100000 loops=1)\n>         Recheck Cond: (node_id = ANY ($0))\n>         ->  Bitmap Index Scan on n_node_id_index  (cost=0.00..42.67 rows=10 width=0) (actual time=447.074..447.074 rows=100000 loops=1)\n>               Index Cond: (node_id = ANY ($0))\n>  Total runtime: 509.209 ms\n> (9 rows)\n>\n> Time: 510.009 ms\n>\n>\n> logicops2=# explain analyze select count(*) from nodes where node_id in (select node_id from nodes limit 100000);\n>                                                               QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=3017.17..3017.18 rows=1 width=0) (actual time=1052.866..1052.866 rows=1 loops=1)\n>   ->  Nested Loop  (cost=2887.04..3016.67 rows=200 width=0) (actual time=167.310..1021.540 rows=100000 loops=1)\n>         ->  HashAggregate  (cost=2887.04..2889.04 rows=200 width=4) (actual time=167.198..251.205 rows=100000 loops=1)\n>               ->  Limit  (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.008..80.090 rows=100000 loops=1)\n>                     ->  Seq Scan on nodes  (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.007..41.566 rows=100000 loops=1)\n>         ->  Index Scan using n_node_id_index on nodes  (cost=0.00..0.63 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=100000)\n>               Index Cond: (public.nodes.node_id = public.nodes.node_id)\n>  Total runtime: 1053.523 ms\n> (8 rows)\n>\n> Time: 1054.864 ms\n>\n>\n>\n> On Mar 20, 2011, at 2:51 AM, Pavel Stehule wrote:\n>\n>> Hello\n>>\n>> 2011/3/20 Adam Tistler <[email protected]>:\n>>> I have noticed that SELECT ... = ANY(ARRAY(...))  is about twice as fast as SELECT IN ( ... ).\n>>> Can anyone explain a reason for this?  Results are the bottom and are reproducible.  
I can test with other versions if that is necessary.\n>>>\n>>\n>> send a result of EXPLAIN ANALYZE SELECT ..., please\n>>\n>> The reasons can be different - less seq scans, indexes\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>>\n>>\n>>> ./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl\n>>> CentOS release 5.4 (Final)\n>>> psql (PostgreSQL) 8.4.1\n>>>\n>>> prompt2=# select count(*) from nodes;\n>>>  count\n>>> --------\n>>>  754734\n>>> (1 row)\n>>>\n>>>\n>>> prompt2=# \\d nodes\n>>>                                        Table \"public.nodes\"\n>>>    Column    |           Type           |                         Modifiers\n>>> --------------+--------------------------+-----------------------------------------------------------\n>>>  node_id      | integer                  | not null default nextval(('node_id_seq'::text)::regclass)\n>>>  node_type_id | integer                  | not null\n>>>  template_id  | integer                  | not null\n>>>  timestamp    | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\n>>> Indexes:\n>>>    \"nodes_pkey\" PRIMARY KEY, btree (node_id)\n>>>    \"n_node_id_index\" btree (node_id)\n>>>    \"n_node_type_id_index\" btree (node_type_id)\n>>>    \"n_template_id_index\" btree (template_id)\n>>>\n>>> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>>>  count\n>>> --------\n>>>  100000\n>>> (1 row)\n>>>\n>>> Time: 404.530 ms\n>>> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>>>  count\n>>> --------\n>>>  100000\n>>> (1 row)\n>>>\n>>> Time: 407.316 ms\n>>> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>>>  count\n>>> --------\n>>>  100000\n>>> (1 row)\n>>>\n>>> Time: 408.728 ms\n>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>  count\n>>> --------\n>>>  100000\n>>> (1 row)\n>>>\n>>> Time: 793.840 ms\n>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>  count\n>>> --------\n>>>  100000\n>>> (1 row)\n>>>\n>>> Time: 779.137 ms\n>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>  count\n>>> --------\n>>>  100000\n>>> (1 row)\n>>>\n>>> Time: 781.820 ms\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>\n>\n", "msg_date": "Mon, 21 Mar 2011 06:54:53 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select in subselect vs select = any array" }, { "msg_contents": "Pavel, thanks for the help.\n\nI increased work_mem from 16MB to 64MB, no difference. The queries are really just a test case. My actual queries are actual just large number of primary keys that I am selecting from the db:\n\nFor example:\n select * from nodes where node_id in ( 1, 2, 3 ..... )\n\nI found that even for small queries, the following is faster:\n select * from nodes where node_in = any (array[1,2,3 .... 
])\n\n\nIts not really a big deal to me, I was just wondering if others could reproduce it on other systems/versions and if perhaps this is an issue that I should point out to postgres-dev.\n\n\nResults below:\n\nlogicops2=# explain analyze select count(*) from nodes where node_id in ( select node_id from nodes limit 100000 );\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3017.18..3017.19 rows=1 width=0) (actual time=1017.051..1017.051 rows=1 loops=1)\n -> Nested Loop (cost=2887.05..3016.68 rows=200 width=0) (actual time=157.290..986.329 rows=100000 loops=1)\n -> HashAggregate (cost=2887.05..2889.05 rows=200 width=4) (actual time=157.252..241.995 rows=100000 loops=1)\n -> Limit (cost=0.00..1637.05 rows=100000 width=4) (actual time=0.009..73.942 rows=100000 loops=1)\n -> Seq Scan on nodes (cost=0.00..12355.34 rows=754734 width=4) (actual time=0.008..35.428 rows=100000 loops=1)\n -> Index Scan using n_node_id_index on nodes (cost=0.00..0.63 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=100000)\n Index Cond: (public.nodes.node_id = public.nodes.node_id)\n Total runtime: 1017.794 ms\n(8 rows)\n\nlogicops2=# explain analyze select count(*) from nodes where node_id = any(array ( select node_id from nodes limit 100000 ));\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1718.60..1718.61 rows=1 width=0) (actual time=485.554..485.555 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..1637.05 rows=100000 width=4) (actual time=0.011..73.037 rows=100000 loops=1)\n -> Seq Scan on nodes (cost=0.00..12355.34 rows=754734 width=4) (actual time=0.010..34.462 rows=100000 loops=1)\n -> Bitmap Heap Scan on nodes (cost=42.67..81.53 rows=10 width=0) (actual time=433.003..461.108 rows=100000 loops=1)\n Recheck Cond: (node_id = ANY ($0))\n -> Bitmap Index Scan on n_node_id_index (cost=0.00..42.67 rows=10 width=0) (actual time=432.810..432.810 rows=100000 loops=1)\n Index Cond: (node_id = ANY ($0))\n Total runtime: 485.638 ms\n(9 rows)\n\nOn Mar 21, 2011, at 1:54 AM, Pavel Stehule wrote:\n\n> Hello\n> \n> I think so HashAggregate goes out of memory - you can try to increase\n> a work_mem.\n> \n> There are better queries for counting duplicit then cross join\n> \n> Regards\n> \n> Pavel Stehule\n> \n> 2011/3/21 Adam Tistler <[email protected]>:\n>> logicops2=# explain analyze select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=1718.59..1718.60 rows=1 width=0) (actual time=509.126..509.127 rows=1 loops=1)\n>> InitPlan 1 (returns $0)\n>> -> Limit (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.010..76.604 rows=100000 loops=1)\n>> -> Seq Scan on nodes (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.008..38.105 rows=100000 loops=1)\n>> -> Bitmap Heap Scan on nodes (cost=42.67..81.53 rows=10 width=0) (actual time=447.274..484.283 rows=100000 loops=1)\n>> Recheck Cond: (node_id = ANY ($0))\n>> -> Bitmap Index Scan on n_node_id_index (cost=0.00..42.67 rows=10 width=0) (actual time=447.074..447.074 rows=100000 loops=1)\n>> Index Cond: (node_id = ANY ($0))\n>> Total runtime: 509.209 ms\n>> (9 rows)\n>> \n>> Time: 510.009 ms\n>> 
\n>> \n>> logicops2=# explain analyze select count(*) from nodes where node_id in (select node_id from nodes limit 100000);\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=3017.17..3017.18 rows=1 width=0) (actual time=1052.866..1052.866 rows=1 loops=1)\n>> -> Nested Loop (cost=2887.04..3016.67 rows=200 width=0) (actual time=167.310..1021.540 rows=100000 loops=1)\n>> -> HashAggregate (cost=2887.04..2889.04 rows=200 width=4) (actual time=167.198..251.205 rows=100000 loops=1)\n>> -> Limit (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.008..80.090 rows=100000 loops=1)\n>> -> Seq Scan on nodes (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.007..41.566 rows=100000 loops=1)\n>> -> Index Scan using n_node_id_index on nodes (cost=0.00..0.63 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=100000)\n>> Index Cond: (public.nodes.node_id = public.nodes.node_id)\n>> Total runtime: 1053.523 ms\n>> (8 rows)\n>> \n>> Time: 1054.864 ms\n>> \n>> \n>> \n>> On Mar 20, 2011, at 2:51 AM, Pavel Stehule wrote:\n>> \n>>> Hello\n>>> \n>>> 2011/3/20 Adam Tistler <[email protected]>:\n>>>> I have noticed that SELECT ... = ANY(ARRAY(...)) is about twice as fast as SELECT IN ( ... ).\n>>>> Can anyone explain a reason for this? Results are the bottom and are reproducible. I can test with other versions if that is necessary.\n>>>> \n>>> \n>>> send a result of EXPLAIN ANALYZE SELECT ..., please\n>>> \n>>> The reasons can be different - less seq scans, indexes\n>>> \n>>> Regards\n>>> \n>>> Pavel Stehule\n>>> \n>>> \n>>> \n>>>> ./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl\n>>>> CentOS release 5.4 (Final)\n>>>> psql (PostgreSQL) 8.4.1\n>>>> \n>>>> prompt2=# select count(*) from nodes;\n>>>> count\n>>>> --------\n>>>> 754734\n>>>> (1 row)\n>>>> \n>>>> \n>>>> prompt2=# \\d nodes\n>>>> Table \"public.nodes\"\n>>>> Column | Type | Modifiers\n>>>> --------------+--------------------------+-----------------------------------------------------------\n>>>> node_id | integer | not null default nextval(('node_id_seq'::text)::regclass)\n>>>> node_type_id | integer | not null\n>>>> template_id | integer | not null\n>>>> timestamp | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\n>>>> Indexes:\n>>>> \"nodes_pkey\" PRIMARY KEY, btree (node_id)\n>>>> \"n_node_id_index\" btree (node_id)\n>>>> \"n_node_type_id_index\" btree (node_type_id)\n>>>> \"n_template_id_index\" btree (template_id)\n>>>> \n>>>> prompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n>>>> count\n>>>> --------\n>>>> 100000\n>>>> (1 row)\n>>>> \n>>>> Time: 404.530 ms\n>>>> prompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n>>>> count\n>>>> --------\n>>>> 100000\n>>>> (1 row)\n>>>> \n>>>> Time: 407.316 ms\n>>>> prompt2=# select count(*) from nodes where node_id = any( Array(select node_id from nodes limit 100000) );\n>>>> count\n>>>> --------\n>>>> 100000\n>>>> (1 row)\n>>>> \n>>>> Time: 408.728 ms\n>>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>> count\n>>>> --------\n>>>> 100000\n>>>> (1 row)\n>>>> \n>>>> Time: 793.840 ms\n>>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>> count\n>>>> --------\n>>>> 100000\n>>>> (1 row)\n>>>> \n>>>> Time: 779.137 
ms\n>>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>> count\n>>>> --------\n>>>> 100000\n>>>> (1 row)\n>>>> \n>>>> Time: 781.820 ms\n>>>> \n>>>> \n>>>> --\n>>>> Sent via pgsql-performance mailing list ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>> \n>> \n>> \n\n", "msg_date": "Mon, 21 Mar 2011 02:16:56 -0400", "msg_from": "Adam Tistler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select in subselect vs select = any array" }, { "msg_contents": "2011/3/21 Adam Tistler <[email protected]>:\n> Pavel, thanks for the help.\n>\n> I increased work_mem from 16MB to 64MB, no difference.  The queries are really just a test case.  My actual queries are actual just large number of primary keys that I am selecting from the db:\n>\n> For example:\n>   select * from nodes where node_id in ( 1, 2, 3 ..... )\n>\n> I found that even for small queries, the following is faster:\n>   select * from nodes where node_in = any (array[1,2,3 .... ])\n\nit depends on version. I think so on last postgres, these queries are\nsame, not sure.\n\nRegards\n\nPavel\n\n>\n>\n> Its not really a big deal to me, I was just wondering if others could reproduce it on other systems/versions and if perhaps this is an issue that I should point out to postgres-dev.\n>\n>\n> Results below:\n>\n> logicops2=# explain analyze select count(*) from nodes where node_id in ( select node_id from nodes limit 100000 );\n>                                                               QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=3017.18..3017.19 rows=1 width=0) (actual time=1017.051..1017.051 rows=1 loops=1)\n>   ->  Nested Loop  (cost=2887.05..3016.68 rows=200 width=0) (actual time=157.290..986.329 rows=100000 loops=1)\n>         ->  HashAggregate  (cost=2887.05..2889.05 rows=200 width=4) (actual time=157.252..241.995 rows=100000 loops=1)\n>               ->  Limit  (cost=0.00..1637.05 rows=100000 width=4) (actual time=0.009..73.942 rows=100000 loops=1)\n>                     ->  Seq Scan on nodes  (cost=0.00..12355.34 rows=754734 width=4) (actual time=0.008..35.428 rows=100000 loops=1)\n>         ->  Index Scan using n_node_id_index on nodes  (cost=0.00..0.63 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=100000)\n>               Index Cond: (public.nodes.node_id = public.nodes.node_id)\n>  Total runtime: 1017.794 ms\n> (8 rows)\n>\n> logicops2=# explain analyze select count(*) from nodes where node_id = any(array ( select node_id from nodes limit 100000 ));\n>                                                               QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=1718.60..1718.61 rows=1 width=0) (actual time=485.554..485.555 rows=1 loops=1)\n>   InitPlan 1 (returns $0)\n>     ->  Limit  (cost=0.00..1637.05 rows=100000 width=4) (actual time=0.011..73.037 rows=100000 loops=1)\n>           ->  Seq Scan on nodes  (cost=0.00..12355.34 rows=754734 width=4) (actual time=0.010..34.462 rows=100000 loops=1)\n>   ->  Bitmap Heap Scan on nodes  (cost=42.67..81.53 rows=10 width=0) (actual time=433.003..461.108 rows=100000 loops=1)\n>         Recheck Cond: (node_id = ANY ($0))\n>         ->  Bitmap Index Scan on n_node_id_index  
(cost=0.00..42.67 rows=10 width=0) (actual time=432.810..432.810 rows=100000 loops=1)\n>               Index Cond: (node_id = ANY ($0))\n>  Total runtime: 485.638 ms\n> (9 rows)\n>\n> On Mar 21, 2011, at 1:54 AM, Pavel Stehule wrote:\n>\n>> Hello\n>>\n>> I think so HashAggregate goes out of memory - you can try to increase\n>> a work_mem.\n>>\n>> There are better queries for counting duplicit then cross join\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> 2011/3/21 Adam Tistler <[email protected]>:\n>>> logicops2=# explain analyze select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>>>                                                               QUERY PLAN\n>>> -----------------------------------------------------------------------------------------------------------------------------------------\n>>>  Aggregate  (cost=1718.59..1718.60 rows=1 width=0) (actual time=509.126..509.127 rows=1 loops=1)\n>>>   InitPlan 1 (returns $0)\n>>>     ->  Limit  (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.010..76.604 rows=100000 loops=1)\n>>>           ->  Seq Scan on nodes  (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.008..38.105 rows=100000 loops=1)\n>>>   ->  Bitmap Heap Scan on nodes  (cost=42.67..81.53 rows=10 width=0) (actual time=447.274..484.283 rows=100000 loops=1)\n>>>         Recheck Cond: (node_id = ANY ($0))\n>>>         ->  Bitmap Index Scan on n_node_id_index  (cost=0.00..42.67 rows=10 width=0) (actual time=447.074..447.074 rows=100000 loops=1)\n>>>               Index Cond: (node_id = ANY ($0))\n>>>  Total runtime: 509.209 ms\n>>> (9 rows)\n>>>\n>>> Time: 510.009 ms\n>>>\n>>>\n>>> logicops2=# explain analyze select count(*) from nodes where node_id in (select node_id from nodes limit 100000);\n>>>                                                               QUERY PLAN\n>>> ----------------------------------------------------------------------------------------------------------------------------------------\n>>>  Aggregate  (cost=3017.17..3017.18 rows=1 width=0) (actual time=1052.866..1052.866 rows=1 loops=1)\n>>>   ->  Nested Loop  (cost=2887.04..3016.67 rows=200 width=0) (actual time=167.310..1021.540 rows=100000 loops=1)\n>>>         ->  HashAggregate  (cost=2887.04..2889.04 rows=200 width=4) (actual time=167.198..251.205 rows=100000 loops=1)\n>>>               ->  Limit  (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.008..80.090 rows=100000 loops=1)\n>>>                     ->  Seq Scan on nodes  (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.007..41.566 rows=100000 loops=1)\n>>>         ->  Index Scan using n_node_id_index on nodes  (cost=0.00..0.63 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=100000)\n>>>               Index Cond: (public.nodes.node_id = public.nodes.node_id)\n>>>  Total runtime: 1053.523 ms\n>>> (8 rows)\n>>>\n>>> Time: 1054.864 ms\n>>>\n>>>\n>>>\n>>> On Mar 20, 2011, at 2:51 AM, Pavel Stehule wrote:\n>>>\n>>>> Hello\n>>>>\n>>>> 2011/3/20 Adam Tistler <[email protected]>:\n>>>>> I have noticed that SELECT ... = ANY(ARRAY(...))  is about twice as fast as SELECT IN ( ... ).\n>>>>> Can anyone explain a reason for this?  Results are the bottom and are reproducible.  
I can test with other versions if that is necessary.\n>>>>>\n>>>>\n>>>> send a result of EXPLAIN ANALYZE SELECT ..., please\n>>>>\n>>>> The reasons can be different - less seq scans, indexes\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel Stehule\n>>>>\n>>>>\n>>>>\n>>>>> ./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl\n>>>>> CentOS release 5.4 (Final)\n>>>>> psql (PostgreSQL) 8.4.1\n>>>>>\n>>>>> prompt2=# select count(*) from nodes;\n>>>>>  count\n>>>>> --------\n>>>>>  754734\n>>>>> (1 row)\n>>>>>\n>>>>>\n>>>>> prompt2=# \\d nodes\n>>>>>                                        Table \"public.nodes\"\n>>>>>    Column    |           Type           |                         Modifiers\n>>>>> --------------+--------------------------+-----------------------------------------------------------\n>>>>>  node_id      | integer                  | not null default nextval(('node_id_seq'::text)::regclass)\n>>>>>  node_type_id | integer                  | not null\n>>>>>  template_id  | integer                  | not null\n>>>>>  timestamp    | timestamp with time zone | default ('now'::text)::timestamp(6) with time zone\n>>>>> Indexes:\n>>>>>    \"nodes_pkey\" PRIMARY KEY, btree (node_id)\n>>>>>    \"n_node_id_index\" btree (node_id)\n>>>>>    \"n_node_type_id_index\" btree (node_type_id)\n>>>>>    \"n_template_id_index\" btree (template_id)\n>>>>>\n>>>>> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>>>>>  count\n>>>>> --------\n>>>>>  100000\n>>>>> (1 row)\n>>>>>\n>>>>> Time: 404.530 ms\n>>>>> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>>>>>  count\n>>>>> --------\n>>>>>  100000\n>>>>> (1 row)\n>>>>>\n>>>>> Time: 407.316 ms\n>>>>> prompt2=# select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>>>>>  count\n>>>>> --------\n>>>>>  100000\n>>>>> (1 row)\n>>>>>\n>>>>> Time: 408.728 ms\n>>>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>>>  count\n>>>>> --------\n>>>>>  100000\n>>>>> (1 row)\n>>>>>\n>>>>> Time: 793.840 ms\n>>>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>>>  count\n>>>>> --------\n>>>>>  100000\n>>>>> (1 row)\n>>>>>\n>>>>> Time: 779.137 ms\n>>>>> prompt2=# select count(*) from nodes where node_id in (select node_id from nodes limit 100000 );\n>>>>>  count\n>>>>> --------\n>>>>>  100000\n>>>>> (1 row)\n>>>>>\n>>>>> Time: 781.820 ms\n>>>>>\n>>>>>\n>>>>> --\n>>>>> Sent via pgsql-performance mailing list ([email protected])\n>>>>> To make changes to your subscription:\n>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>>\n>>>\n>>>\n>\n>\n", "msg_date": "Mon, 21 Mar 2011 07:38:31 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select in subselect vs select = any array" }, { "msg_contents": "\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Adam Tistler\n> Sent: Monday, March 21, 2011 12:17 AM\n> To: Pavel Stehule\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Select in subselect vs select = any array\n> \n> Pavel, thanks for the help.\n> \n> I increased work_mem from 16MB to 64MB, no difference. The queries are\n> really just a test case. 
My actual queries are actual just large\n> number of primary keys that I am selecting from the db:\n> \n> For example:\n> select * from nodes where node_id in ( 1, 2, 3 ..... )\n> \n\nWhat does \"large\" number of primary keys mean ?\n\nI have seen some \"odd\" things happen when I passed, carelessly, tens of\nthousands of items to an in list for a generated query, but I don't get the\nfeeling that isn't the case here.\n\n\n\n\n..: Mark\n\n", "msg_date": "Mon, 21 Mar 2011 20:55:11 -0600", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select in subselect vs select = any array" }, { "msg_contents": "On Sun, Mar 20, 2011 at 11:20 PM, Adam Tistler <[email protected]> wrote:\n> logicops2=# explain analyze select count(*) from nodes where node_id = any(  Array(select node_id from nodes limit 100000) );\n>                                                               QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=1718.59..1718.60 rows=1 width=0) (actual time=509.126..509.127 rows=1 loops=1)\n>   InitPlan 1 (returns $0)\n>     ->  Limit  (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.010..76.604 rows=100000 loops=1)\n>           ->  Seq Scan on nodes  (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.008..38.105 rows=100000 loops=1)\n>   ->  Bitmap Heap Scan on nodes  (cost=42.67..81.53 rows=10 width=0) (actual time=447.274..484.283 rows=100000 loops=1)\n>         Recheck Cond: (node_id = ANY ($0))\n>         ->  Bitmap Index Scan on n_node_id_index  (cost=0.00..42.67 rows=10 width=0) (actual time=447.074..447.074 rows=100000 loops=1)\n>               Index Cond: (node_id = ANY ($0))\n>  Total runtime: 509.209 ms\n> (9 rows)\n>\n> Time: 510.009 ms\n>\n>\n> logicops2=# explain analyze select count(*) from nodes where node_id in (select node_id from nodes limit 100000);\n>                                                               QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=3017.17..3017.18 rows=1 width=0) (actual time=1052.866..1052.866 rows=1 loops=1)\n>   ->  Nested Loop  (cost=2887.04..3016.67 rows=200 width=0) (actual time=167.310..1021.540 rows=100000 loops=1)\n>         ->  HashAggregate  (cost=2887.04..2889.04 rows=200 width=4) (actual time=167.198..251.205 rows=100000 loops=1)\n>               ->  Limit  (cost=0.00..1637.04 rows=100000 width=4) (actual time=0.008..80.090 rows=100000 loops=1)\n>                     ->  Seq Scan on nodes  (cost=0.00..12355.41 rows=754741 width=4) (actual time=0.007..41.566 rows=100000 loops=1)\n>         ->  Index Scan using n_node_id_index on nodes  (cost=0.00..0.63 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=100000)\n>               Index Cond: (public.nodes.node_id = public.nodes.node_id)\n>  Total runtime: 1053.523 ms\n> (8 rows)\n>\n> Time: 1054.864 ms\n\nThis is a pretty interesting example. I think this is just an\noptimizer limitation.\n\nWhen trying to build a join tree (in this case, between the copy of\nnodes inside the subselect and the copy outside the subselect), the\nplanner considers three main join strategies: hash join, nested loop,\nmerge join. 
A merge or hash join will have to read the\noutside-the-subselect copy of nodes in its entirety (I think); the\nonly way to avoid that is to compute the subselect first and then use\nthe index probes to pull out just the matching rows. That's what the\nplanner did in both cases, but in the second case it's not smart\nenough to see that it can gather up all the values from the inner side\nof the join and shove them into a bitmap index scan all at once, so it\njust uses a regular index scan to pull 'em out one at a time.\n\nI think this would be pretty tricky to support, since the join node\nwould need to understand all the parameter passing that needs to\nhappen between the inner and outer sides of the loop; it's almost like\na whole new join type.\n\nYou might also want to make the opposite transformation, turning the\nfirst plan into the second one, if (for example) the subselect is\ngoing to return a gigabyte of data. But we're just not that smart.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 18 Apr 2011 14:27:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select in subselect vs select = any array" } ]
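For anyone wanting to reproduce the IN-versus-ANY(ARRAY(...)) comparison without the original data, a self-contained sketch along these lines should work (the nodes_test name and the generate_series row count are made up; only the two query shapes matter):

    -- build a scratch table roughly the size of the one in the thread
    CREATE TEMP TABLE nodes_test AS
        SELECT node_id FROM generate_series(1, 750000) AS g(node_id);
    CREATE INDEX nodes_test_node_id_idx ON nodes_test (node_id);
    ANALYZE nodes_test;

    -- form 1: IN (subquery); in the thread this planned as a HashAggregate plus nested loop
    EXPLAIN ANALYZE
    SELECT count(*) FROM nodes_test
    WHERE node_id IN (SELECT node_id FROM nodes_test LIMIT 100000);

    -- form 2: = ANY(ARRAY(subquery)); the subquery is evaluated once into an array,
    -- then probed with a bitmap index scan, as in the plans quoted above
    EXPLAIN ANALYZE
    SELECT count(*) FROM nodes_test
    WHERE node_id = ANY (ARRAY(SELECT node_id FROM nodes_test LIMIT 100000));

Whether the second form keeps its advantage depends on the planner version, as noted earlier in the thread.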
[ { "msg_contents": "I have a function where in\nIn a cursor loop I\n\n1. create a temp table (on commit drop)\n\n2. insert data into it\n\n3. Run Analyze on the table\n\n\n\nSelect/update outside the loop.\n\nThis has been running fine for a while on multiple setups, large and small volumes. The setups all have the same hardware configuration.\n\nOn one particular setup with about 200k records and this analyze runs for 45min and then times out(statement timeout is set to 45 min). typically this takes a few seconds at best. But when I move the analyze outside the loop everything runs fine.\n\n\nAn section of the code for reference.\n\nCREATE TEMP TABLE tmp_hierarchy_sorted ( sort_id serial, aqu_document_id integer,parent_id integer, ancestor_id integer, object_hierarchy character varying(255), object_hierarchy_array text[], levels integer) ON COMMIT DROP TABLESPACE tblspc_tmp ;\n CREATE UNIQUE INDEX tmp_hierarchy_sorted_aqu_document_id_idx ON tmp_hierarchy_sorted USING btree( aqu_document_id ) TABLESPACE tblspc_index;';\n execute vSQL;\n\n --get min doc number for that collection based on existing promoted collections in the matter\n select coalesce(max(doc_number_max),0) into iMin_Doc_number\n FROM doc_Collection c\n WHERE exists (SELECT 1 FROM doc_collection c1 WHERE c1.id = iCollectionId and c1.matter_id = c.matter_id and c1.doc_number_prefix = c.doc_number_prefix)\n AND status = 'PROMOTED';\n\n --go ancestor by ancestor for ones that are not loose files\n open curAncestor for\n select distinct id FROM aqu_document_hierarchy h where collection_Id = iCollectionId and ancestor_id =-1 and parent_id = -1\n AND EXISTS (select 1 from aqu_document_hierarchy h1 where h1.ancestor_id = h.id ) order by id ;\n LOOP\n FETCH curAncestor into iAncestor_id;\n EXIT WHEN NOT FOUND;\n --insert each ancestor into the table as this is not part in the bulk insert\n vSQL := 'INSERT INTO tmp_hierarchy_sorted( aqu_document_id, parent_id , ancestor_id , object_hierarchy, object_hierarchy_array,levels)\n (select id, -1, -1, object_hierarchy, regexp_split_to_array(object_hierarchy, ''/'') ,0\n from aqu_document_hierarchy where collection_Id =' || iCollectionId || ' AND id = ' || iAncestor_id || ')';\n execute vSQL;\n\n -- insert filtered documents for that ancestor\n vSQL := 'INSERT INTO tmp_hierarchy_sorted (aqu_document_id, parent_id , ancestor_id , object_hierarchy, object_hierarchy_array, levels)\n (\n SELECT id, parent_id, ancestor_id, object_hierarchy, regexp_split_to_array(object_hierarchy, ''/'') as object_hierarchy_array, array_length(regexp_split_to_array(object_hierarchy, ''/'') ,1) as levels\n FROM aqu_document_hierarchy h WHERE EXISTS (SELECT 1 FROM aqu_document_error_details e where e.aqu_document_id = h.id and e.exit_status in (2,3,4,5) ) AND ancestor_id = ' || iAncestor_id ||\n ' ORDER BY regexp_split_to_array(object_hierarchy, ''/'')\n );';\n execute vSQL;\n ANALYZE tmp_hierarchy_sorted;\n\n END LOOP;\n\n\n\nThanks for the help\n-mridula\n\n\n\nThe information contained in this email message and its attachments is intended only for the private and confidential use of the recipient(s) named above, unless the sender expressly agrees otherwise. Transmission of email over the Internet is not a secure communications medium. If you are requesting or have requested the transmittal of personal data, as defined in applicable privacy laws by means of email or in an attachment to email, you must select a more secure alternate means of transmittal that supports your obligations to protect such personal data. 
If the reader of this message is not the intended recipient and/or you have received this email in error, you must take no action based on the information in this email and you are hereby notified that any dissemination, misuse or copying or disclosure of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by email and delete the original message. \n\n\n\n\n\n\n\n\n\nI have a function where in \nIn a cursor loop I \n1.      \ncreate a temp table (on commit drop) \n2.      \ninsert data into it\n3.      \nRun Analyze on the table\n \nSelect/update outside the loop. \n \nThis has been running fine for a while on multiple setups,\nlarge and small volumes. The setups all have the same hardware configuration.\n \nOn one particular setup with about 200k records and this\nanalyze runs for 45min and then times out(statement timeout is set to 45 min).\ntypically this takes a few seconds at best. But when I move the analyze outside\nthe loop everything runs fine.\n \n \nAn section of the code for reference.\n \nCREATE TEMP TABLE tmp_hierarchy_sorted (  sort_id serial, \naqu_document_id integer,parent_id integer,  ancestor_id integer, \nobject_hierarchy character varying(255), object_hierarchy_array text[], levels\ninteger) ON COMMIT DROP TABLESPACE tblspc_tmp               ;\n          CREATE UNIQUE INDEX\ntmp_hierarchy_sorted_aqu_document_id_idx ON tmp_hierarchy_sorted USING btree(\naqu_document_id ) TABLESPACE tblspc_index;';\n    execute vSQL;\n \n    --get min doc number for that collection based on\nexisting promoted collections in the matter\n    select coalesce(max(doc_number_max),0) into\niMin_Doc_number\n    FROM doc_Collection c\n        WHERE exists (SELECT 1 FROM doc_collection c1 WHERE\nc1.id = iCollectionId and c1.matter_id = c.matter_id and c1.doc_number_prefix =\nc.doc_number_prefix)\n        AND status = 'PROMOTED';\n \n    --go ancestor by ancestor for ones that are not loose\nfiles\n    open curAncestor for\n        select distinct id FROM aqu_document_hierarchy h\nwhere collection_Id = iCollectionId and ancestor_id =-1 and parent_id = -1\n        AND EXISTS (select 1 from aqu_document_hierarchy h1\nwhere h1.ancestor_id = h.id ) order by id ;\n    LOOP\n        FETCH curAncestor into iAncestor_id;\n        EXIT WHEN NOT FOUND;\n        --insert each ancestor into the table as this is not\npart in the bulk insert\n        vSQL := 'INSERT INTO tmp_hierarchy_sorted( \naqu_document_id, parent_id ,  ancestor_id ,  object_hierarchy,\nobject_hierarchy_array,levels)\n         (select id, -1, -1, object_hierarchy,\nregexp_split_to_array(object_hierarchy, ''/'') ,0\n         from aqu_document_hierarchy where collection_Id ='\n|| iCollectionId || ' AND id = ' || iAncestor_id || ')';\n        execute vSQL;\n \n        -- insert filtered documents for that ancestor\n        vSQL := 'INSERT INTO tmp_hierarchy_sorted \n(aqu_document_id, parent_id ,  ancestor_id ,  object_hierarchy,\nobject_hierarchy_array, levels)\n         (\n         SELECT id, parent_id, ancestor_id,\nobject_hierarchy, regexp_split_to_array(object_hierarchy, ''/'')  as\nobject_hierarchy_array, array_length(regexp_split_to_array(object_hierarchy,\n''/'')  ,1) as levels\n         FROM aqu_document_hierarchy h WHERE  EXISTS (SELECT\n1 FROM aqu_document_error_details e where e.aqu_document_id = h.id and\ne.exit_status in (2,3,4,5) ) AND ancestor_id = ' || iAncestor_id ||\n             ' ORDER BY\nregexp_split_to_array(object_hierarchy, ''/'')\n        );';\n    
    execute vSQL;\n    ANALYZE tmp_hierarchy_sorted;\n       \n    END LOOP;\n \n \n \nThanks for the help\n-mridula\n\n\n\nThe information contained in this email message and its attachments is intended only for the private and confidential use of the recipient(s) named above, unless the sender expressly agrees otherwise. Transmission of email over the Internet is not a secure communications medium. If you are requesting or have requested the transmittal of personal data, as defined in applicable privacy laws by means of email or in an attachment to email, you must select a more secure alternate means of transmittal that supports your obligations to protect such personal data. If the reader of this message is not the intended recipient and/or you have received this email in error, you must take no action based on the information in this email and you are hereby notified that any dissemination, misuse or copying or disclosure of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by email and delete the original message.", "msg_date": "Tue, 22 Mar 2011 09:13:30 -0700", "msg_from": "\"Mahadevan, Mridula\" <[email protected]>", "msg_from_op": true, "msg_subject": "Analyze on temp table taking very long" }, { "msg_contents": "\"Mahadevan, Mridula\" <[email protected]> writes:\n> This has been running fine for a while on multiple setups, large and small volumes. The setups all have the same hardware configuration.\n\n> On one particular setup with about 200k records and this analyze runs for 45min and then times out(statement timeout is set to 45 min). typically this takes a few seconds at best. But when I move the analyze outside the loop everything runs fine.\n\nIs it actually *running*, as in doing something, or is it just blocked?\nI can't immediately think of any reason for some other process to have\na lock on a temp table that belongs to your process; but it seems\nunlikely that ANALYZE would randomly take much longer than expected\nunless something was preventing it from making progress.\n\nLook into pg_locks and/or watch the backend with strace next time this\nhappens.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Mar 2011 18:56:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyze on temp table taking very long " }, { "msg_contents": "Thanks for the tip. I'll also check in the lock, it's a customer setup and we don't get access to the box very frequently. \nAlso\nThe code was something like this. \n\nloop \n\tinserting data into the tmptbl\n\tanalyze tmptbl\nend loop\n\nif I replace this with \n\nloop \n\tinserting data into the tmptbl\nend loop\nanalyze\n\n\nIt goes through fine. \n\n-mridula\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tom Lane\nSent: Tuesday, March 22, 2011 3:57 PM\nTo: Mahadevan, Mridula\nCc: [email protected]\nSubject: Re: [PERFORM] Analyze on temp table taking very long\n\n\"Mahadevan, Mridula\" <[email protected]> writes:\n> This has been running fine for a while on multiple setups, large and small volumes. The setups all have the same hardware configuration.\n\n> On one particular setup with about 200k records and this analyze runs for 45min and then times out(statement timeout is set to 45 min). typically this takes a few seconds at best. 
But when I move the analyze outside the loop everything runs fine.\n\nIs it actually *running*, as in doing something, or is it just blocked?\nI can't immediately think of any reason for some other process to have\na lock on a temp table that belongs to your process; but it seems\nunlikely that ANALYZE would randomly take much longer than expected\nunless something was preventing it from making progress.\n\nLook into pg_locks and/or watch the backend with strace next time this\nhappens.\n\n\t\t\tregards, tom lane\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\nThe information contained in this email message and its attachments is intended only for the private and confidential use of the recipient(s) named above, unless the sender expressly agrees otherwise. Transmission of email over the Internet is not a secure communications medium. If you are requesting or have requested the transmittal of personal data, as defined in applicable privacy laws by means of email or in an attachment to email, you must select a more secure alternate means of transmittal that supports your obligations to protect such personal data. If the reader of this message is not the intended recipient and/or you have received this email in error, you must take no action based on the information in this email and you are hereby notified that any dissemination, misuse or copying or disclosure of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by email and delete the original message. \n\n", "msg_date": "Fri, 25 Mar 2011 10:32:42 -0700", "msg_from": "\"Mahadevan, Mridula\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyze on temp table taking very long" } ]
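A concrete form of the pg_locks check suggested above, to be run from a second session while the ANALYZE appears stuck (a sketch; the view and its columns are standard, but whether anything shows up depends on what is actually blocking):

    -- lock requests that are waiting rather than granted
    SELECT locktype, relation::regclass AS relation, mode, granted, pid
    FROM pg_locks
    WHERE NOT granted;

If a row for the temp table appears here, the pid column identifies the waiting backend, and the granted locks on the same relation (WHERE granted) show who is holding it; joining against pg_stat_activity then tells you what that session is running. If nothing is waiting, the ANALYZE really is doing work, and watching the backend with strace, as suggested, is the next step.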
[ { "msg_contents": "I posted many weeks ago about a severe problem with a table that was\nobviously bloated and was stunningly slow. Up to 70 seconds just to get a\nrow count on 300k rows.\n\nI removed the text column, so it really was just a few columns of fixed\ndata.\nStill very bloated. Table size was 450M\n\nThe advice I was given was to do CLUSTER, but this did not reduce the table\nsize in the least.\nNor performance.\n\nAlso to resize my free space map (which still does need to be done).\nSince that involves tweaking the kernel settings, taking the site down and\nrebooting postgres and exposing the system to all kinds of risks and\nunknowns and expensive experimentations I was unable to do it and have had\nto hobble along with a slow table in my backend holding up jobs.\n\nMuch swearing that nobody should ever do VACUUM FULL. Manual advises\nagainst it. Only crazy people do that.\n\nFinally I decide to stop taking advice.\n\nns=> explain analyze select count(*) from fastadder_fastadderstatus;\n---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=62602.08..62602.09 rows=1 width=0) (actual\ntime=25320.000..25320.000 rows=1 loops=1)\n -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61815.86\nrows=314486 width=0) (actual time=180.000..25140.000 rows=314493 loops=1)\n Total runtime: *25320.000* ms\n\nns=> vacuum full fastadder_fastadderstatus;\n\ntook about 20 minutes\n\nns=> explain analyze select count(*) from fastadder_fastadderstatus;\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=7478.03..7478.04 rows=1 width=0) (actual\ntime=940.000..940.000 rows=1 loops=1)\n -> Seq Scan on fastadder_fastadderstatus (cost=0.00..6691.82\nrows=314482 width=0) (actual time=0.000..530.000 rows=314493 loops=1)\n Total runtime: *940.000 ms*\n\nmoral of the story: if your table is really bloated, just do VACUUM FULL\n\nCLUSTER will not reduce table bloat in and identical fashion\n\nI posted many weeks ago about a severe problem with a table that was obviously bloated and was stunningly slow. Up to 70 seconds just to get a row count on 300k rows.I removed the text column, so it really was just a few columns of fixed data.\nStill very bloated.  Table size was 450MThe advice I was given was to do CLUSTER, but this did not reduce the table size in the least.Nor performance.Also to resize my free space map (which still does need to be done).\nSince that involves tweaking the kernel settings, taking the site down and rebooting postgres and exposing the system to all kinds of risks and unknowns and expensive experimentations I was unable to do it and have had to hobble along with a slow table in my backend holding up jobs.\nMuch swearing that nobody should ever do VACUUM FULL.  Manual advises against it.  
Only crazy people do that.Finally I decide to stop taking advice.\nns=> explain analyze select count(*) from fastadder_fastadderstatus;---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=62602.08..62602.09 rows=1 width=0) (actual time=25320.000..25320.000 rows=1 loops=1)   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61815.86 rows=314486 width=0) (actual time=180.000..25140.000 rows=314493 loops=1)\n Total runtime: 25320.000 msns=> vacuum full fastadder_fastadderstatus;took about 20 minutesns=> explain analyze select count(*) from fastadder_fastadderstatus;\n---------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=7478.03..7478.04 rows=1 width=0) (actual time=940.000..940.000 rows=1 loops=1)\n   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..6691.82 rows=314482 width=0) (actual time=0.000..530.000 rows=314493 loops=1) Total runtime: 940.000 msmoral of the story:  if your table is really bloated, just do VACUUM FULL\nCLUSTER will not reduce table bloat in and identical fashion", "msg_date": "Wed, 23 Mar 2011 01:52:55 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "good old VACUUM FULL" }, { "msg_contents": "On 23/03/11 11:52, felix wrote:\n> I posted many weeks ago about a severe problem with a table that was\n> obviously bloated and was stunningly slow. Up to 70 seconds just to get\n> a row count on 300k rows.\n>\n> I removed the text column, so it really was just a few columns of fixed\n> data.\n> Still very bloated. Table size was 450M\n>\n> The advice I was given was to do CLUSTER, but this did not reduce the\n> table size in the least.\n> Nor performance.\n>\n> Also to resize my free space map (which still does need to be done).\n> Since that involves tweaking the kernel settings, taking the site down\n> and rebooting postgres and exposing the system to all kinds of risks and\n> unknowns and expensive experimentations I was unable to do it and have\n> had to hobble along with a slow table in my backend holding up jobs.\n>\n> Much swearing that nobody should ever do VACUUM FULL. Manual advises\n> against it. Only crazy people do that.\n\n<snip>\n\n> moral of the story: if your table is really bloated, just do VACUUM FULL\n\nYou'll need to reindex that table now - vacuum full can bloat your \nindexes which will affect your other queries.\n\nreindex table fastadder_fastadderstatus;\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Wed, 23 Mar 2011 15:24:42 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good old VACUUM FULL" }, { "msg_contents": "On Tue, Mar 22, 2011 at 6:52 PM, felix <[email protected]> wrote:\n> I posted many weeks ago about a severe problem with a table that was\n> obviously bloated and was stunningly slow. Up to 70 seconds just to get a\n> row count on 300k rows.\n> I removed the text column, so it really was just a few columns of fixed\n> data.\n> Still very bloated.  Table size was 450M\n> The advice I was given was to do CLUSTER, but this did not reduce the table\n> size in the least.\n\nThen either cluster failed (did you get an error message) or the table\nwas not bloated. 
Given that it looks like it was greatly reduced in\nsize by the vacuum full, I'd guess cluster failed for some reason.\n", "msg_date": "Wed, 23 Mar 2011 00:16:55 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good old VACUUM FULL" }, { "msg_contents": "On 03/23/2011 01:16 AM, Scott Marlowe wrote:\n\n> Then either cluster failed (did you get an error message) or the table\n> was not bloated. Given that it looks like it was greatly reduced in\n> size by the vacuum full, I'd guess cluster failed for some reason.\n\nOr it just bloated again. Remember, he still hasn't changed his \nmax_fsm_pages setting, and that table apparently experiences *very* high \nturnover.\n\nA 25x bloat factor isn't unheard of for such a table. We have one that \nneeds to have autovacuum or be manually vacuumed frequently because it \nexperiences several thousand update/deletes per minute. The daily \nturnover of that particular table is around 110x. If our fsm settings \nwere too low, or we didn't vacuum regularly, I could easily see that \ntable quickly becoming unmanageable. I fear for his django_session table \nfor similar reasons.\n\nFelix, I know you don't want to \"experiment\" with kernel parameters, but \nyou *need* to increase your max_fsm_pages setting.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Wed, 23 Mar 2011 08:13:28 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good old VACUUM FULL" }, { "msg_contents": "On 23/03/2011 12:24 PM, Chris wrote:\n\n> You'll need to reindex that table now - vacuum full can bloat your\n> indexes which will affect your other queries.\n\nIt doesn't seem to matter much for a one-off. Index bloat problems have \nmainly been encountered where people are running VACUUM FULL as part of \nroutine maintenance - for example, from a nightly cron job.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n", "msg_date": "Fri, 01 Apr 2011 11:41:34 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good old VACUUM FULL" } ]
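A rough sketch of the cleanup sequence this thread converges on: the VACUUM FULL felix ran, the REINDEX Chris recommends afterwards, and a size check to see what was recovered. The size functions are standard in 8.3 and later, the table name is the one from the thread, and the exact commands are illustrative rather than a prescription:

-- reclaim heap space, then rebuild indexes that VACUUM FULL may have bloated
VACUUM FULL fastadder_fastadderstatus;
REINDEX TABLE fastadder_fastadderstatus;

-- compare heap size against total size (heap + indexes + toast)
SELECT pg_size_pretty(pg_relation_size('fastadder_fastadderstatus'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('fastadder_fastadderstatus')) AS total_size;

Comparing those two numbers before and after shows whether the bloat sat in the heap or in the indexes; and, as Shaun notes, unless max_fsm_pages is raised the table will simply bloat again.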
[ { "msg_contents": "Dear all,\n\nI have 2 tables in my database name clause2( 4900 MB) & \npage_content(1582 MB).\n\nMy table definations are as :\n\n*page_content :-\n\n*CREATE TABLE page_content\n(\n content_id integer,\n wkb_geometry geometry,\n link_level integer,\n isprocessable integer,\n isvalid integer,\n isanalyzed integer,\n islocked integer,\n content_language character(10),\n url_id integer,\n publishing_date character(40),\n heading character(150),\n category character(150),\n crawled_page_url character(500),\n keywords character(500),\n dt_stamp timestamp with time zone,\n \"content\" character varying,\n crawled_page_id bigint,\n id integer\n)\nWITH (\n OIDS=FALSE\n);\n\n*Indexes on it :-*\nCREATE INDEX idx_page_id ON page_content USING btree (crawled_page_id);\nCREATE INDEX idx_page_id_content ON page_content USING btree \n(crawled_page_id, content_language, publishing_date, isprocessable);\nCREATE INDEX pgweb_idx ON page_content USING gin \n(to_tsvector('english'::regconfig, content::text));\n\n*clause 2:-\n*CREATE TABLE clause2\n(\n id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n source_id integer,\n sentence_id integer,\n clause_id integer,\n tense character varying(30),\n clause text,\n CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n)WITH ( OIDS=FALSE);\n\n*Indexes on it :\n\n*CREATE INDEX idx_clause2_march10\n ON clause2\n USING btree\n (id, source_id);*\n\n*I perform a join query on it as :\n\n* explain analyze select distinct(p.crawled_page_id) from page_content p \n, clause2 c where p.crawled_page_id != c.source_id ;\n\n*What it takes more than 1 hour to complete. As I issue the explain \nanalyze command and cannot able to wait for output but I send my explain \noutput as :\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------\n Unique (cost=927576.16..395122387390.13 rows=382659 width=8)\n -> Nested Loop (cost=927576.16..360949839832.15 rows=13669019023195 \nwidth=8)\n Join Filter: (p.crawled_page_id <> c.source_id)\n -> Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n -> Materialize (cost=927576.16..1370855.12 rows=31876196 width=4)\n -> Seq Scan on clause2 c (cost=0.00..771182.96 \nrows=31876196 width=4)\n(6 rows)\n\n\nPlease guide me how to make the above query run faster as I am not able \nto do that.\n\n\nThanks, Adarsh\n\n*\n\n\n*\n\n\n\n\n\n\n\nDear all,\n\nI have 2 tables in my database name clause2( 4900 MB) &\npage_content(1582 MB).\n\nMy table definations are as :\n\npage_content :-\n\nCREATE TABLE page_content\n(\n  content_id integer,\n  wkb_geometry geometry,\n  link_level integer,\n  isprocessable integer,\n  isvalid integer,\n  isanalyzed integer,\n  islocked integer,\n  content_language character(10),\n  url_id integer,\n  publishing_date character(40),\n  heading character(150),\n  category character(150),\n  crawled_page_url character(500),\n  keywords character(500),\n  dt_stamp timestamp with time zone,\n  \"content\" character varying,\n  crawled_page_id bigint,\n  id integer\n)\nWITH (\n  OIDS=FALSE\n);\n\nIndexes on it :-\nCREATE INDEX idx_page_id  ON page_content  USING btree \n(crawled_page_id);\nCREATE INDEX idx_page_id_content   ON page_content  USING btree \n(crawled_page_id, content_language, publishing_date, isprocessable);\nCREATE INDEX pgweb_idx  ON page_content   USING gin  \n(to_tsvector('english'::regconfig, content::text));\n\nclause 2:-\nCREATE TABLE clause2\n(\n  id bigint NOT NULL DEFAULT 
nextval('clause_id_seq'::regclass),\n  source_id integer,\n  sentence_id integer,\n  clause_id integer,\n  tense character varying(30),\n  clause text,\n  CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n)WITH ( OIDS=FALSE);\n\nIndexes on it :\n\nCREATE INDEX idx_clause2_march10\n  ON clause2\n  USING btree\n  (id, source_id);\n\nI perform a join query on it as :\n\n explain analyze select distinct(p.crawled_page_id) from\npage_content p , clause2  c where p.crawled_page_id != c.source_id ;\n\nWhat it takes more than 1 hour to complete. As I issue the explain\nanalyze command and cannot able to wait for output but I send my\nexplain output as :\n                                             QUERY\nPLAN                                               \n--------------------------------------------------------------------------------------------------------\n Unique  (cost=927576.16..395122387390.13 rows=382659 width=8)\n   ->  Nested Loop  (cost=927576.16..360949839832.15\nrows=13669019023195 width=8)\n         Join Filter: (p.crawled_page_id <> c.source_id)\n         ->  Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n         ->  Materialize  (cost=927576.16..1370855.12 rows=31876196\nwidth=4)\n               ->  Seq Scan on clause2 c  (cost=0.00..771182.96\nrows=31876196 width=4)\n(6 rows)\n\n\nPlease guide me how to make the above query run faster as I am not able\nto do that.\n\n\nThanks, Adarsh", "msg_date": "Wed, 23 Mar 2011 11:58:17 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Reason of Slowness of query" }, { "msg_contents": "On Wed, Mar 23, 2011 at 11:58 AM, Adarsh Sharma <[email protected]>wrote:\n\n> Dear all,\n>\n> I have 2 tables in my database name clause2( 4900 MB) & page_content(1582\n> MB).\n>\n> My table definations are as :\n>\n> *page_content :-\n>\n> *CREATE TABLE page_content\n> (\n> content_id integer,\n> wkb_geometry geometry,\n> link_level integer,\n> isprocessable integer,\n> isvalid integer,\n> isanalyzed integer,\n> islocked integer,\n> content_language character(10),\n> url_id integer,\n> publishing_date character(40),\n> heading character(150),\n> category character(150),\n> crawled_page_url character(500),\n> keywords character(500),\n> dt_stamp timestamp with time zone,\n> \"content\" character varying,\n> crawled_page_id bigint,\n> id integer\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> *Indexes on it :-*\n> CREATE INDEX idx_page_id ON page_content USING btree (crawled_page_id);\n> CREATE INDEX idx_page_id_content ON page_content USING btree\n> (crawled_page_id, content_language, publishing_date, isprocessable);\n> CREATE INDEX pgweb_idx ON page_content USING gin\n> (to_tsvector('english'::regconfig, content::text));\n>\n> *clause 2:-\n> *CREATE TABLE clause2\n> (\n> id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n> source_id integer,\n> sentence_id integer,\n> clause_id integer,\n> tense character varying(30),\n> clause text,\n> CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n> )WITH ( OIDS=FALSE);\n>\n> *Indexes on it :\n>\n> *CREATE INDEX idx_clause2_march10\n> ON clause2\n> USING btree\n> (id, source_id);*\n>\n> *I perform a join query on it as :\n>\n> * explain analyze select distinct(p.crawled_page_id) from page_content p ,\n> clause2 c where p.crawled_page_id != c.source_id ;\n>\n> *What it takes more than 1 hour to complete. 
As I issue the explain\n> analyze command and cannot able to wait for output but I send my explain\n> output as :\n> QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------\n> Unique (cost=927576.16..395122387390.13 rows=382659 width=8)\n> -> Nested Loop (cost=927576.16..360949839832.15 rows=13669019023195\n> width=8)\n> Join Filter: (p.crawled_page_id <> c.source_id)\n> -> Index Scan using idx_page_id on page_content p\n> (cost=0.00..174214.02 rows=428817 width=8)\n> -> Materialize (cost=927576.16..1370855.12 rows=31876196\n> width=4)\n> -> Seq Scan on clause2 c (cost=0.00..771182.96\n> rows=31876196 width=4)\n> (6 rows)\n>\n>\n> Please guide me how to make the above query run faster as I am not able to\n> do that.\n>\n>\n> Thanks, Adarsh\n>\n> *\n>\n> *\n>\n\nCould you try just explaining the below query:\nexplain select distinct(p.crawled_page_id) from page_content p where NOT\nEXISTS (select 1 from clause2 c where c.source_id = p.crawled_page_id);\n\nThe idea here is to avoid directly using NOT operator.\n\n\n\nRegards,\nChetan\n\n-- \nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Wed, Mar 23, 2011 at 11:58 AM, Adarsh Sharma <[email protected]> wrote:\n\nDear all,\n\nI have 2 tables in my database name clause2( 4900 MB) &\npage_content(1582 MB).\n\nMy table definations are as :\n\npage_content :-\n\nCREATE TABLE page_content\n(\n  content_id integer,\n  wkb_geometry geometry,\n  link_level integer,\n  isprocessable integer,\n  isvalid integer,\n  isanalyzed integer,\n  islocked integer,\n  content_language character(10),\n  url_id integer,\n  publishing_date character(40),\n  heading character(150),\n  category character(150),\n  crawled_page_url character(500),\n  keywords character(500),\n  dt_stamp timestamp with time zone,\n  \"content\" character varying,\n  crawled_page_id bigint,\n  id integer\n)\nWITH (\n  OIDS=FALSE\n);\n\nIndexes on it :-\nCREATE INDEX idx_page_id  ON page_content  USING btree \n(crawled_page_id);\nCREATE INDEX idx_page_id_content   ON page_content  USING btree \n(crawled_page_id, content_language, publishing_date, isprocessable);\nCREATE INDEX pgweb_idx  ON page_content   USING gin  \n(to_tsvector('english'::regconfig, content::text));\n\nclause 2:-\nCREATE TABLE clause2\n(\n  id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n  source_id integer,\n  sentence_id integer,\n  clause_id integer,\n  tense character varying(30),\n  clause text,\n  CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n)WITH ( OIDS=FALSE);\n\nIndexes on it :\n\nCREATE INDEX idx_clause2_march10\n  ON clause2\n  USING btree\n  (id, source_id);\n\nI perform a join query on it as :\n\n explain analyze select distinct(p.crawled_page_id) from\npage_content p , clause2  c where p.crawled_page_id != c.source_id ;\n\nWhat it takes more than 1 hour to complete. 
As I issue the explain\nanalyze command and cannot able to wait for output but I send my\nexplain output as :\n                                             QUERY\nPLAN                                               \n--------------------------------------------------------------------------------------------------------\n Unique  (cost=927576.16..395122387390.13 rows=382659 width=8)\n   ->  Nested Loop  (cost=927576.16..360949839832.15\nrows=13669019023195 width=8)\n         Join Filter: (p.crawled_page_id <> c.source_id)\n         ->  Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n         ->  Materialize  (cost=927576.16..1370855.12 rows=31876196\nwidth=4)\n               ->  Seq Scan on clause2 c  (cost=0.00..771182.96\nrows=31876196 width=4)\n(6 rows)\n\n\nPlease guide me how to make the above query run faster as I am not able\nto do that.\n\n\nThanks, Adarsh\n\n\n\nCould you try just explaining the below query:explain  select distinct(p.crawled_page_id) from page_content p where NOT EXISTS (select 1 from  clause2 c where c.source_id = p.crawled_page_id);\nThe idea here is to avoid directly using NOT operator. Regards,Chetan-- Chetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 12:46:55 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reason of Slowness of query" }, { "msg_contents": "On Tue, Mar 22, 2011 at 11:28 PM, Adarsh Sharma <[email protected]>wrote:\n\n> *\n> *I perform a join query on it as :\n>\n> * explain analyze select distinct(p.crawled_page_id) from page_content p ,\n> clause2 c where p.crawled_page_id != c.source_id ;\n>\n> *What it takes more than 1 hour to complete. As I issue the explain\n> analyze command and cannot able to wait for output but I send my explain\n> output as :\n>\n\n\nplease describe what your query is trying to select, as it is possible that\nquery isn't doing what you think it is. joining 2 tables where id1 != id2\nwill create a cross multiple of the two tables such that every row from the\nfirst table is matched with every single row from the second table that\ndoesn't have a matching id. Then you are looking for distinct values on\nthat potentially enormous set of rows.\n\ndb_v2=# select * from table1;\n id | value\n----+-------\n 1 | 1\n 2 | 2\n 3 | 3\n(3 rows)\n\ndb_v2=# select * from table2;\n id | value\n----+-------\n 1 | 4\n 2 | 5\n 3 | 6\n(3 rows)\n\ndb_v2=# select t1.id, t1.value, t2.id, t2.value from table1 t1, table2 t2\nwhere t1.id != t2.id;\n id | value | id | value\n----+-------+----+-------\n 1 | 1 | 2 | 5\n 1 | 1 | 3 | 6\n 2 | 2 | 1 | 4\n 2 | 2 | 3 | 6\n 3 | 3 | 1 | 4\n 3 | 3 | 2 | 5\n\nSo if you have a couple of million rows in each table, you are selecting\ndistinct over a potentially huge set of data. 
If you are actually trying\nto find all ids from one table which have no match at all in the other\ntable, then you need an entirely different query:\n\ndb_v2=# insert into table2 (value) values (7);\nINSERT 0 1\n\ndb_v2=# select * from table2;\n id | value\n----+-------\n 1 | 4\n 2 | 5\n 3 | 6\n 4 | 7\n\ndb_v2=# select t2.id, t2.value from table2 t2 where not exists (select 1\nfrom table1 t1 where t1.id = t2.id);\n id | value\n----+-------\n 4 | 7\n\nOn Tue, Mar 22, 2011 at 11:28 PM, Adarsh Sharma <[email protected]> wrote:\n\nI perform a join query on it as :\n\n explain analyze select distinct(p.crawled_page_id) from\npage_content p , clause2  c where p.crawled_page_id != c.source_id ;\n\nWhat it takes more than 1 hour to complete. As I issue the explain\nanalyze command and cannot able to wait for output but I send my\nexplain output as :please describe what your query is trying to select, as it is possible that query isn't doing what you think it is.  joining 2 tables where id1 != id2 will create a cross multiple of the two tables such that every row from the first table is matched with every single row from the second table that doesn't have a matching id.  Then you are looking for distinct values on that potentially enormous set of rows.\ndb_v2=# select * from table1; id | value ----+-------\n  1 |     1  2 |     2  3 |     3(3 rows)db_v2=# select * from table2;\n id | value ----+-------  1 |     4  2 |     5  3 |     6\n(3 rows)db_v2=# select t1.id, t1.value, t2.id, t2.value from table1 t1, table2 t2 where t1.id != t2.id;\n id | value | id | value ----+-------+----+-------  1 |     1 |  2 |     5  1 |     1 |  3 |     6\n  2 |     2 |  1 |     4  2 |     2 |  3 |     6  3 |     3 |  1 |     4  3 |     3 |  2 |     5\nSo if you have a couple of million rows in each table, you are selecting distinct over a potentially huge set of data.   If you are actually trying to find all ids from one table which have no match at all in the other table, then you need an entirely different query:\ndb_v2=# insert into table2 (value) values (7);INSERT 0 1\ndb_v2=# select * from table2; id | value ----+-------  1 |     4  2 |     5\n  3 |     6  4 |     7db_v2=# select t2.id, t2.value from table2 t2 where not exists (select 1 from table1 t1 where t1.id = t2.id); \n id | value ----+-------  4 |     7", "msg_date": "Wed, 23 Mar 2011 00:20:29 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reason of Slowness of query" }, { "msg_contents": "23.03.11 08:28, Adarsh Sharma ???????(??):\n> *\n> *I perform a join query on it as :\n>\n> * explain analyze select distinct(p.crawled_page_id) from page_content \n> p , clause2 c where p.crawled_page_id != c.source_id ;*\nYour query is wrong. This query will return every *crawled_page_id* if \nclause2 has more then 1 source_id. This is because DB will be able to \nfind clause with source_id different from crawled_page_id. You need to \nuse \"not exists\" or \"not in\".\n\nBest regards, Vitalii Tymchyshyn.\n\n\n\n\n\n\n 23.03.11 08:28, Adarsh Sharma написав(ла):\n \n\nI perform a join query on it as :\n\n explain analyze select distinct(p.crawled_page_id) from\n page_content p , clause2  c where p.crawled_page_id !=\n c.source_id ;\n\n Your query is wrong. This query will return every crawled_page_id\n if clause2 has more then 1 source_id. 
This is because DB will be\n able to find clause with source_id different from crawled_page_id.\n You need to use \"not exists\" or \"not in\".\n\n Best regards, Vitalii Tymchyshyn.", "msg_date": "Wed, 23 Mar 2011 09:25:34 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reason of Slowness of query" }, { "msg_contents": "On Wed, Mar 23, 2011 at 12:50 PM, Samuel Gendler\n<[email protected]>wrote:\n\n> On Tue, Mar 22, 2011 at 11:28 PM, Adarsh Sharma <[email protected]>wrote:\n>\n>> *\n>> *I perform a join query on it as :\n>>\n>> * explain analyze select distinct(p.crawled_page_id) from page_content p\n>> , clause2 c where p.crawled_page_id != c.source_id ;\n>>\n>> *What it takes more than 1 hour to complete. As I issue the explain\n>> analyze command and cannot able to wait for output but I send my explain\n>> output as :\n>>\n>\n>\n> please describe what your query is trying to select, as it is possible that\n> query isn't doing what you think it is. joining 2 tables where id1 != id2\n> will create a cross multiple of the two tables such that every row from the\n> first table is matched with every single row from the second table that\n> doesn't have a matching id. Then you are looking for distinct values on\n> that potentially enormous set of rows.\n>\n> db_v2=# select * from table1;\n> id | value\n> ----+-------\n> 1 | 1\n> 2 | 2\n> 3 | 3\n> (3 rows)\n>\n> db_v2=# select * from table2;\n> id | value\n> ----+-------\n> 1 | 4\n> 2 | 5\n> 3 | 6\n> (3 rows)\n>\n> db_v2=# select t1.id, t1.value, t2.id, t2.value from table1 t1, table2 t2\n> where t1.id != t2.id;\n> id | value | id | value\n> ----+-------+----+-------\n> 1 | 1 | 2 | 5\n> 1 | 1 | 3 | 6\n> 2 | 2 | 1 | 4\n> 2 | 2 | 3 | 6\n> 3 | 3 | 1 | 4\n> 3 | 3 | 2 | 5\n>\n> So if you have a couple of million rows in each table, you are selecting\n> distinct over a potentially huge set of data. 
If you are actually trying\n> to find all ids from one table which have no match at all in the other\n> table, then you need an entirely different query:\n>\n> db_v2=# insert into table2 (value) values (7);\n> INSERT 0 1\n>\n> db_v2=# select * from table2;\n> id | value\n> ----+-------\n> 1 | 4\n> 2 | 5\n> 3 | 6\n> 4 | 7\n>\n> db_v2=# select t2.id, t2.value from table2 t2 where not exists (select 1\n> from table1 t1 where t1.id = t2.id);\n> id | value\n> ----+-------\n> 4 | 7\n>\n>\n\nCheck this setup:\npg=# create table t1(a int, b int);\nCREATE TABLE\npg=# create index t1_b on t1(b);\nCREATE INDEX\npg=# create table t2(c int, d int);\nCREATE TABLE\npg=# create index t2_cd on t2(c,d);\nCREATE INDEX\npg=# explain select distinct(b) from t1,t2 where t1.b !=t2.d;\n QUERY\nPLAN\n-------------------------------------------------------------------------------\n Unique (cost=0.00..80198.86 rows=200 width=4)\n -> Nested Loop (cost=0.00..68807.10 rows=4556702 width=4)\n Join Filter: (t1.b <> t2.d)\n -> Index Scan using t1_b on t1 (cost=0.00..76.35 rows=2140\nwidth=4)\n -> Materialize (cost=0.00..42.10 rows=2140 width=4)\n -> Seq Scan on t2 (cost=0.00..31.40 rows=2140 width=4)\n(6 rows)\n\n\npg=# explain select distinct(b) from t1 where NOT EXISTS (select 1 from t2\nwhere t2.d=t1.b);\n QUERY PLAN\n------------------------------------------------------------------------\n HashAggregate (cost=193.88..193.89 rows=1 width=4)\n -> Hash Anti Join (cost=58.15..193.88 rows=1 width=4)\n Hash Cond: (t1.b = t2.d)\n -> Seq Scan on t1 (cost=0.00..31.40 rows=2140 width=4)\n -> Hash (cost=31.40..31.40 rows=2140 width=4)\n -> Seq Scan on t2 (cost=0.00..31.40 rows=2140 width=4)\n(6 rows)\n\nThe cost seems to be on higher side, but maybe on your system with index\nscan on t2 and t1, the cost might be on lower side.\n\nAnother query which forced index scan was :\npg=# explain select distinct(b) from t1,t2 where t1.b >t2.d union all\nselect distinct(b) from t1,t2 where t1.b <t2.d;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------\n Append (cost=0.00..100496.74 rows=400 width=4)\n -> Unique (cost=0.00..50246.37 rows=200 width=4)\n -> Nested Loop (cost=0.00..46430.04 rows=1526533 width=4)\n -> Index Scan using t1_b on t1 (cost=0.00..76.35 rows=2140\nwidth=4)\n -> Index Scan using t2_d on t2 (cost=0.00..12.75 rows=713\nwidth=4)\n Index Cond: (public.t1.b > public.t2.d)\n -> Unique (cost=0.00..50246.37 rows=200 width=4)\n -> Nested Loop (cost=0.00..46430.04 rows=1526533 width=4)\n -> Index Scan using t1_b on t1 (cost=0.00..76.35 rows=2140\nwidth=4)\n -> Index Scan using t2_d on t2 (cost=0.00..12.75 rows=713\nwidth=4)\n Index Cond: (public.t1.b < public.t2.d)\n(11 rows)\n\n\nThis looks like to a acceptable.\nPlease try this above query with your setup and post the explain output.\n\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Wed, Mar 23, 2011 at 12:50 PM, Samuel Gendler <[email protected]> wrote:\nOn Tue, Mar 22, 2011 at 11:28 PM, Adarsh Sharma <[email protected]> wrote:\n\nI perform a join query on it as :\n\n explain analyze select distinct(p.crawled_page_id) from\npage_content p , clause2  c where p.crawled_page_id != c.source_id ;\n\nWhat it takes more than 1 hour to complete. 
As I issue the explain\nanalyze command and cannot able to wait for output but I send my\nexplain output as :please describe what your query is trying to select, as it is possible that query isn't doing what you think it is.  joining 2 tables where id1 != id2 will create a cross multiple of the two tables such that every row from the first table is matched with every single row from the second table that doesn't have a matching id.  Then you are looking for distinct values on that potentially enormous set of rows.\ndb_v2=# select * from table1; id | value ----+-------\n\n\n  1 |     1  2 |     2  3 |     3(3 rows)db_v2=# select * from table2;\n id | value ----+-------  1 |     4  2 |     5  3 |     6\n\n\n(3 rows)db_v2=# select t1.id, t1.value, t2.id, t2.value from table1 t1, table2 t2 where t1.id != t2.id;\n id | value | id | value ----+-------+----+-------  1 |     1 |  2 |     5  1 |     1 |  3 |     6\n  2 |     2 |  1 |     4  2 |     2 |  3 |     6  3 |     3 |  1 |     4  3 |     3 |  2 |     5\n\nSo if you have a couple of million rows in each table, you are selecting distinct over a potentially huge set of data.   If you are actually trying to find all ids from one table which have no match at all in the other table, then you need an entirely different query:\ndb_v2=# insert into table2 (value) values (7);INSERT 0 1\n\n\ndb_v2=# select * from table2; id | value ----+-------  1 |     4  2 |     5\n\n\n  3 |     6  4 |     7db_v2=# select t2.id, t2.value from table2 t2 where not exists (select 1 from table1 t1 where t1.id = t2.id); \n id | value ----+-------  4 |     7\nCheck this setup:pg=# create table t1(a int, b int);CREATE TABLEpg=# create index t1_b on t1(b);CREATE INDEXpg=# create table t2(c int, d int);CREATE TABLEpg=# create index t2_cd on t2(c,d);\n\nCREATE INDEXpg=# explain select distinct(b) from t1,t2 where t1.b !=t2.d;                                  QUERY PLAN                                   -------------------------------------------------------------------------------\n\n Unique  (cost=0.00..80198.86 rows=200 width=4)   ->  Nested Loop  (cost=0.00..68807.10 rows=4556702 width=4)         Join Filter: (t1.b <> t2.d)         ->  Index Scan using t1_b on t1  (cost=0.00..76.35 rows=2140 width=4)\n\n         ->  Materialize  (cost=0.00..42.10 rows=2140 width=4)               ->  Seq Scan on t2  (cost=0.00..31.40 rows=2140 width=4)(6 rows)pg=# explain select distinct(b) from t1 where NOT EXISTS (select 1 from t2 where t2.d=t1.b);\n\n                               QUERY PLAN                               ------------------------------------------------------------------------ HashAggregate  (cost=193.88..193.89 rows=1 width=4)   ->  Hash Anti Join  (cost=58.15..193.88 rows=1 width=4)\n\n         Hash Cond: (t1.b = t2.d)         ->  Seq Scan on t1  (cost=0.00..31.40 rows=2140 width=4)         ->  Hash  (cost=31.40..31.40 rows=2140 width=4)               ->  Seq Scan on t2  (cost=0.00..31.40 rows=2140 width=4)\n\n(6 rows)The cost seems to be on higher side, but maybe on your system with index scan on t2 and t1, the cost might be on lower side.Another query which forced index scan was :pg=# explain select distinct(b) from t1,t2 where t1.b >t2.d union all  select distinct(b) from t1,t2 where  t1.b <t2.d;\n\n                                     QUERY PLAN                                      ------------------------------------------------------------------------------------- Append  (cost=0.00..100496.74 rows=400 width=4)\n\n   ->  Unique  (cost=0.00..50246.37 rows=200 
width=4)         ->  Nested Loop  (cost=0.00..46430.04 rows=1526533 width=4)               ->  Index Scan using t1_b on t1  (cost=0.00..76.35 rows=2140 width=4)\n\n               ->  Index Scan using t2_d on t2  (cost=0.00..12.75 rows=713 width=4)                     Index Cond: (public.t1.b > public.t2.d)   ->  Unique  (cost=0.00..50246.37 rows=200 width=4)         ->  Nested Loop  (cost=0.00..46430.04 rows=1526533 width=4)\n\n               ->  Index Scan using t1_b on t1  (cost=0.00..76.35 rows=2140 width=4)               ->  Index Scan using t2_d on t2  (cost=0.00..12.75 rows=713 width=4)                     Index Cond: (public.t1.b < public.t2.d)\n\n(11 rows)This looks like to a acceptable.Please try this above query with your setup and post the explain output.-- Regards,Chetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 12:55:35 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reason of Slowness of query" }, { "msg_contents": "Thanks Chetan, here is the output of your updated query :\n\n\n*explain select distinct(p.crawled_page_id) from page_content p where \nNOT EXISTS (select 1 from clause2 c where c.source_id = p.crawled_page_id);\n\n*\n QUERY \nPLAN \n---------------------------------------------------------------------------------------\n HashAggregate (cost=1516749.47..1520576.06 rows=382659 width=8)\n -> Hash Anti Join (cost=1294152.41..1515791.80 rows=383071 width=8)\n Hash Cond: (p.crawled_page_id = c.source_id)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8)\n -> Hash (cost=771182.96..771182.96 rows=31876196 width=4)\n -> Seq Scan on clause2 c (cost=0.00..771182.96 \nrows=31876196 width=4)\n(6 rows)\n\nAnd my explain analyze output is :\n\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1516749.47..1520576.06 rows=382659 width=8) \n(actual time=56666.181..56669.270 rows=72 loops=1)\n -> Hash Anti Join (cost=1294152.41..1515791.80 rows=383071 width=8) \n(actual time=45740.789..56665.816 rows=74 loops=1)\n Hash Cond: (p.crawled_page_id = c.source_id)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.012..715.915 rows=428467 loops=1)\n -> Hash (cost=771182.96..771182.96 rows=31876196 width=4) \n(actual time=45310.524..45310.524 rows=31853083 loops=1)\n -> Seq Scan on clause2 c (cost=0.00..771182.96 \nrows=31876196 width=4) (actual time=0.055..23408.884 rows=31853083 loops=1)\n Total runtime: 56687.660 ms\n(7 rows)\n\nBut Is there is any option to tune it further and one more thing output \nrows varies from 6 to 7.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n\n\n\nChetan Suttraway wrote:\n>\n>\n> On Wed, Mar 23, 2011 at 11:58 AM, Adarsh Sharma \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Dear all,\n>\n> I have 2 tables in my database name clause2( 4900 MB) &\n> page_content(1582 MB).\n>\n> My table definations are as :\n>\n> *page_content :-\n>\n> *CREATE TABLE page_content\n> (\n> content_id integer,\n> wkb_geometry geometry,\n> link_level integer,\n> isprocessable integer,\n> isvalid integer,\n> isanalyzed integer,\n> islocked integer,\n> content_language character(10),\n> url_id integer,\n> publishing_date character(40),\n> heading character(150),\n> category character(150),\n> crawled_page_url character(500),\n> keywords character(500),\n> dt_stamp 
timestamp with time zone,\n> \"content\" character varying,\n> crawled_page_id bigint,\n> id integer\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> *Indexes on it :-*\n> CREATE INDEX idx_page_id ON page_content USING btree \n> (crawled_page_id);\n> CREATE INDEX idx_page_id_content ON page_content USING btree \n> (crawled_page_id, content_language, publishing_date, isprocessable);\n> CREATE INDEX pgweb_idx ON page_content USING gin \n> (to_tsvector('english'::regconfig, content::text));\n>\n> *clause 2:-\n> *CREATE TABLE clause2\n> (\n> id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n> source_id integer,\n> sentence_id integer,\n> clause_id integer,\n> tense character varying(30),\n> clause text,\n> CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n> )WITH ( OIDS=FALSE);\n>\n> *Indexes on it :\n>\n> *CREATE INDEX idx_clause2_march10\n> ON clause2\n> USING btree\n> (id, source_id);*\n>\n> *I perform a join query on it as :\n>\n> * explain analyze select distinct(p.crawled_page_id) from\n> page_content p , clause2 c where p.crawled_page_id != c.source_id ;\n>\n> *What it takes more than 1 hour to complete. As I issue the\n> explain analyze command and cannot able to wait for output but I\n> send my explain output as :\n> QUERY\n> PLAN \n> --------------------------------------------------------------------------------------------------------\n> Unique (cost=927576.16..395122387390.13 rows=382659 width=8)\n> -> Nested Loop (cost=927576.16..360949839832.15\n> rows=13669019023195 width=8)\n> Join Filter: (p.crawled_page_id <> c.source_id)\n> -> Index Scan using idx_page_id on page_content p \n> (cost=0.00..174214.02 rows=428817 width=8)\n> -> Materialize (cost=927576.16..1370855.12\n> rows=31876196 width=4)\n> -> Seq Scan on clause2 c (cost=0.00..771182.96\n> rows=31876196 width=4)\n> (6 rows)\n>\n>\n> Please guide me how to make the above query run faster as I am not\n> able to do that.\n>\n>\n> Thanks, Adarsh\n>\n> *\n>\n> *\n>\n>\n> Could you try just explaining the below query:\n> explain select distinct(p.crawled_page_id) from page_content p where \n> NOT EXISTS (select 1 from clause2 c where c.source_id = \n> p.crawled_page_id);\n>\n> The idea here is to avoid directly using NOT operator.\n>\n>\n>\n> Regards,\n> Chetan\n>\n> -- \n> Chetan Suttraway\n> EnterpriseDB <http://www.enterprisedb.com/>, The Enterprise PostgreSQL \n> <http://www.enterprisedb.com/> company.\n>\n>\n>\n\n\n\n\n\n\n\nThanks Chetan, here is the output of your updated query :\n\n\nexplain  select distinct(p.crawled_page_id) from page_content p\nwhere\nNOT EXISTS (select 1 from  clause2 c where c.source_id =\np.crawled_page_id);\n\n\n                                      QUERY\nPLAN                                       \n---------------------------------------------------------------------------------------\n HashAggregate  (cost=1516749.47..1520576.06 rows=382659 width=8)\n   ->  Hash Anti Join  (cost=1294152.41..1515791.80 rows=383071\nwidth=8)\n         Hash Cond: (p.crawled_page_id = c.source_id)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8)\n         ->  Hash  (cost=771182.96..771182.96 rows=31876196 width=4)\n               ->  Seq Scan on clause2 c  (cost=0.00..771182.96\nrows=31876196 width=4)\n(6 rows)\n\nAnd my explain analyze output is :\n\n                                                      QUERY\nPLAN                                                                 
\n--------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=1516749.47..1520576.06 rows=382659 width=8)\n(actual time=56666.181..56669.270 rows=72 loops=1)\n   ->  Hash Anti Join  (cost=1294152.41..1515791.80 rows=383071\nwidth=8) (actual time=45740.789..56665.816 rows=74 loops=1)\n         Hash Cond: (p.crawled_page_id = c.source_id)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.012..715.915 rows=428467 loops=1)\n         ->  Hash  (cost=771182.96..771182.96 rows=31876196 width=4)\n(actual time=45310.524..45310.524 rows=31853083 loops=1)\n               ->  Seq Scan on clause2 c  (cost=0.00..771182.96\nrows=31876196 width=4) (actual time=0.055..23408.884 rows=31853083\nloops=1)\n Total runtime: 56687.660 ms\n(7 rows)\n\nBut Is there is any option to tune it further and one more thing output\nrows varies from 6 to 7.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n\n\n\nChetan Suttraway wrote:\n\n\nOn Wed, Mar 23, 2011 at 11:58 AM, Adarsh\nSharma <[email protected]>\nwrote:\n\nDear all,\n\nI have 2 tables in my database name clause2( 4900 MB) &\npage_content(1582 MB).\n\nMy table definations are as :\n\npage_content :-\n\nCREATE TABLE page_content\n(\n  content_id integer,\n  wkb_geometry geometry,\n  link_level integer,\n  isprocessable integer,\n  isvalid integer,\n  isanalyzed integer,\n  islocked integer,\n  content_language character(10),\n  url_id integer,\n  publishing_date character(40),\n  heading character(150),\n  category character(150),\n  crawled_page_url character(500),\n  keywords character(500),\n  dt_stamp timestamp with time zone,\n  \"content\" character varying,\n  crawled_page_id bigint,\n  id integer\n)\nWITH (\n  OIDS=FALSE\n);\n\nIndexes on it :-\nCREATE INDEX idx_page_id  ON page_content  USING btree \n(crawled_page_id);\nCREATE INDEX idx_page_id_content   ON page_content  USING btree \n(crawled_page_id, content_language, publishing_date, isprocessable);\nCREATE INDEX pgweb_idx  ON page_content   USING gin  \n(to_tsvector('english'::regconfig, content::text));\n\nclause 2:-\nCREATE TABLE clause2\n(\n  id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n  source_id integer,\n  sentence_id integer,\n  clause_id integer,\n  tense character varying(30),\n  clause text,\n  CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n)WITH ( OIDS=FALSE);\n\nIndexes on it :\n\nCREATE INDEX idx_clause2_march10\n  ON clause2\n  USING btree\n  (id, source_id);\n\nI perform a join query on it as :\n\n explain analyze select distinct(p.crawled_page_id) from\npage_content p , clause2  c where p.crawled_page_id != c.source_id ;\n\nWhat it takes more than 1 hour to complete. 
As I issue the\nexplain\nanalyze command and cannot able to wait for output but I send my\nexplain output as :\n                                             QUERY\nPLAN                                               \n--------------------------------------------------------------------------------------------------------\n Unique  (cost=927576.16..395122387390.13 rows=382659 width=8)\n   ->  Nested Loop  (cost=927576.16..360949839832.15\nrows=13669019023195 width=8)\n         Join Filter: (p.crawled_page_id <> c.source_id)\n         ->  Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n         ->  Materialize  (cost=927576.16..1370855.12 rows=31876196\nwidth=4)\n               ->  Seq Scan on clause2 c  (cost=0.00..771182.96\nrows=31876196 width=4)\n(6 rows)\n\n\nPlease guide me how to make the above query run faster as I am not able\nto do that.\n\n\nThanks, Adarsh\n\n\n\n\n\n\nCould you try just explaining the below query:\nexplain  select distinct(p.crawled_page_id) from page_content p where\nNOT EXISTS (select 1 from  clause2 c where c.source_id =\np.crawled_page_id);\n\nThe idea here is to avoid directly using NOT operator.\n\n\n\n\n\nRegards,\nChetan\n\n-- \nChetan Suttraway\nEnterpriseDB, The Enterprise\nPostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 13:00:26 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reason of Slowness of query" }, { "msg_contents": "23.03.11 09:30, Adarsh Sharma ???????(??):\n> Thanks Chetan, here is the output of your updated query :\n>\n>\n> *explain select distinct(p.crawled_page_id) from page_content p where \n> NOT EXISTS (select 1 from clause2 c where c.source_id = \n> p.crawled_page_id);\n>\n> *\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------\n> HashAggregate (cost=1516749.47..1520576.06 rows=382659 width=8)\n> -> Hash Anti Join (cost=1294152.41..1515791.80 rows=383071 width=8)\n> Hash Cond: (p.crawled_page_id = c.source_id)\n> -> Seq Scan on page_content p (cost=0.00..87132.17 \n> rows=428817 width=8)\n> -> Hash (cost=771182.96..771182.96 rows=31876196 width=4)\n> -> Seq Scan on clause2 c (cost=0.00..771182.96 \n> rows=31876196 width=4)\n> (6 rows)\n>\n> And my explain analyze output is :\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=1516749.47..1520576.06 rows=382659 width=8) \n> (actual time=56666.181..56669.270 rows=72 loops=1)\n> -> Hash Anti Join (cost=1294152.41..1515791.80 rows=383071 \n> width=8) (actual time=45740.789..56665.816 rows=74 loops=1)\n> Hash Cond: (p.crawled_page_id = c.source_id)\n> -> Seq Scan on page_content p (cost=0.00..87132.17 \n> rows=428817 width=8) (actual time=0.012..715.915 rows=428467 loops=1)\n> -> Hash (cost=771182.96..771182.96 rows=31876196 width=4) \n> (actual time=45310.524..45310.524 rows=31853083 loops=1)\n> -> Seq Scan on clause2 c (cost=0.00..771182.96 \n> rows=31876196 width=4) (actual time=0.055..23408.884 rows=31853083 \n> loops=1)\n> Total runtime: 56687.660 ms\n> (7 rows)\n>\n> But Is there is any option to tune it further and one more thing \n> output rows varies from 6 to 7.\nYou need an index on source_id to prevent seq scan, like the next:\nCREATE INDEX idx_clause2_source_id\n ON clause2\n (source_id);*\n\n*Best regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n\n 23.03.11 09:30, Adarsh Sharma написав(ла):\n \n\n Thanks 
Chetan, here is the output of your updated query :\n\n\nexplain  select distinct(p.crawled_page_id) from page_content p\n where\n NOT EXISTS (select 1 from  clause2 c where c.source_id =\n p.crawled_page_id);\n\n\n                                       QUERY\n PLAN                                       \n---------------------------------------------------------------------------------------\n  HashAggregate  (cost=1516749.47..1520576.06 rows=382659 width=8)\n    ->  Hash Anti Join  (cost=1294152.41..1515791.80 rows=383071\n width=8)\n          Hash Cond: (p.crawled_page_id = c.source_id)\n          ->  Seq Scan on page_content p  (cost=0.00..87132.17\n rows=428817 width=8)\n          ->  Hash  (cost=771182.96..771182.96 rows=31876196\n width=4)\n                ->  Seq Scan on clause2 c  (cost=0.00..771182.96\n rows=31876196 width=4)\n (6 rows)\n\n And my explain analyze output is :\n\n                                                       QUERY\n PLAN                                                                \n \n--------------------------------------------------------------------------------------------------------------------------------------------\n  HashAggregate  (cost=1516749.47..1520576.06 rows=382659 width=8)\n (actual time=56666.181..56669.270 rows=72 loops=1)\n    ->  Hash Anti Join  (cost=1294152.41..1515791.80 rows=383071\n width=8) (actual time=45740.789..56665.816 rows=74 loops=1)\n          Hash Cond: (p.crawled_page_id = c.source_id)\n          ->  Seq Scan on page_content p  (cost=0.00..87132.17\n rows=428817 width=8) (actual time=0.012..715.915 rows=428467\n loops=1)\n          ->  Hash  (cost=771182.96..771182.96 rows=31876196\n width=4)\n (actual time=45310.524..45310.524 rows=31853083 loops=1)\n                ->  Seq Scan on clause2 c  (cost=0.00..771182.96\n rows=31876196 width=4) (actual time=0.055..23408.884 rows=31853083\n loops=1)\n  Total runtime: 56687.660 ms\n (7 rows)\n\n But Is there is any option to tune it further and one more thing\n output\n rows varies from 6 to 7.\n\n You need an index on source_id to prevent seq scan, like the next:\n CREATE INDEX idx_clause2_source_id\n   ON clause2\n   (source_id);\n\nBest regards, Vitalii Tymchyshyn", "msg_date": "Wed, 23 Mar 2011 10:51:25 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reason of Slowness of query" } ]
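Putting the thread's advice together, the working combination is Chetan's NOT EXISTS rewrite plus the index on clause2.source_id that Vitalii suggests; a consolidated sketch using the table and column names from the thread:

CREATE INDEX idx_clause2_source_id ON clause2 (source_id);
ANALYZE clause2;

EXPLAIN ANALYZE
SELECT DISTINCT p.crawled_page_id
  FROM page_content p
 WHERE NOT EXISTS (SELECT 1
                     FROM clause2 c
                    WHERE c.source_id = p.crawled_page_id);

This is essentially what the follow-up thread below reports: with the index in place the anti-join runs in a few seconds, instead of the original != predicate's cross-join-style plan that never finished.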
[ { "msg_contents": "Thanks Chetan, After my Lunch Break, I tried the below steps :\n\n*My original query was :\n*explain analyze select distinct(p.crawled_page_id) from page_content p \n, clause2 c where p.crawled_page_id != c.source_id\n\nwhich hangs because it is wrong query to fetch the desired output .\n\n*Next Updated Query be Chetan Suttraway :*\n\nexplain analyze select distinct(p.crawled_page_id) from page_content p\n where NOT EXISTS (select 1 from clause2 c where c.source_id = \np.crawled_page_id);\n\n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual \ntime=7192.843..7195.923 rows=72 loops=1)\n -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8) \n(actual time=0.040..7192.426 rows=74 loops=1)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.009..395.599 rows=428467 loops=1)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.014..0.014 rows=1 \nloops=428467)\n Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 7199.748 ms\n(6 rows)\n\nI think it is very much faster but I don't understand the query :\n\n*explain select distinct(b) from t1,t2 where t1.b >t2.d union all \nselect distinct(b) from t1,t2 where t1.b <t2.d;\n\n*As i transform it into my format as:\n\nexplain select distinct(p.crawled_page_id) from page_content p , clause2 \nc where p.crawled_page_id > c.source_id union all select \ndistinct(p.crawled_page_id) from page_content p,clause2 c where \np.crawled_page_id < c.source_id;\n\n QUERY \nPLAN \n---------------------------------------------------------------------------------------------------------------------\n Append (cost=0.00..296085951076.34 rows=765318 width=8)\n -> Unique (cost=0.00..148042971711.58 rows=382659 width=8)\n -> Nested Loop (cost=0.00..136655213119.84 rows=4555103436696 \nwidth=8)\n -> Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..185898.05 rows=10622488 width=4)\n Index Cond: (p.crawled_page_id > c.source_id)\n -> Unique (cost=0.00..148042971711.58 rows=382659 width=8)\n -> Nested Loop (cost=0.00..136655213119.84 rows=4555103436696 \nwidth=8)\n -> Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..185898.05 rows=10622488 width=4)\n Index Cond: (p.crawled_page_id < c.source_id)\n(11 rows)\n\nI don't think this is correct because it produce 11 rows output.\n\nAny further suggestions, Please guide.\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n\n\n\nThanks Chetan, After my Lunch Break, I tried the below steps :\n\nMy original query was :\nexplain analyze select distinct(p.crawled_page_id) from\npage_content p , clause2  c where p.crawled_page_id != c.source_id \n\nwhich hangs because it is wrong query to fetch the desired output .\n\nNext Updated Query be Chetan Suttraway :\n\nexplain analyze select distinct(p.crawled_page_id) from page_content p\n where NOT EXISTS (select 1 from  clause2 c where c.source_id =\np.crawled_page_id);\n\n                                                                   \nQUERY\nPLAN                                                                     
\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=7192.843..7195.923 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.040..7192.426 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.009..395.599 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.014..0.014 rows=1\nloops=428467)\n               Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 7199.748 ms\n(6 rows)\n\nI think it is very much faster but I don't understand the query :\n\nexplain select distinct(b) from t1,t2 where t1.b >t2.d union all \nselect distinct(b) from t1,t2 where  t1.b <t2.d;\n\nAs i transform it into my format as:\n\nexplain select distinct(p.crawled_page_id) from page_content p ,\nclause2 c where p.crawled_page_id > c.source_id union all  select\ndistinct(p.crawled_page_id) from page_content p,clause2 c where\np.crawled_page_id < c.source_id;\n\n                                                     QUERY\nPLAN                                                      \n---------------------------------------------------------------------------------------------------------------------\n Append  (cost=0.00..296085951076.34 rows=765318 width=8)\n   ->  Unique  (cost=0.00..148042971711.58 rows=382659 width=8)\n         ->  Nested Loop  (cost=0.00..136655213119.84\nrows=4555103436696 width=8)\n               ->  Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n               ->  Index Scan using idx_clause2_source_id on clause2\nc  (cost=0.00..185898.05 rows=10622488 width=4)\n                     Index Cond: (p.crawled_page_id > c.source_id)\n   ->  Unique  (cost=0.00..148042971711.58 rows=382659 width=8)\n         ->  Nested Loop  (cost=0.00..136655213119.84\nrows=4555103436696 width=8)\n               ->  Index Scan using idx_page_id on page_content p \n(cost=0.00..174214.02 rows=428817 width=8)\n               ->  Index Scan using idx_clause2_source_id on clause2\nc  (cost=0.00..185898.05 rows=10622488 width=4)\n                     Index Cond: (p.crawled_page_id < c.source_id)\n(11 rows)\n\nI don't think this is correct because it produce 11 rows output.\n\nAny further suggestions, Please guide.\n\nThanks & best Regards,\nAdarsh Sharma", "msg_date": "Wed, 23 Mar 2011 14:47:51 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re-Reason of Slowness of Query" }, { "msg_contents": "23.03.11 11:17, Adarsh Sharma ???????(??):\n>\n> I think it is very much faster but I don't understand the query :\n>\n> *explain select distinct(b) from t1,t2 where t1.b >t2.d union all \n> select distinct(b) from t1,t2 where t1.b <t2.d;\n> *\nI don't understand it too. What are you trying to get? 
Is it\nselect distinct(b) from t1 where b > (select min(d) from t2)**or b < \n(select max(d) from t2)\n?\n\nCan you explain in words, not SQL, what do you expect do retrieve?\n\nBest regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n 23.03.11 11:17, Adarsh Sharma написав(ла):\n \n\n I think it is very much faster but I don't understand the query :\n\nexplain select distinct(b) from t1,t2 where t1.b >t2.d union\n all \n select distinct(b) from t1,t2 where  t1.b <t2.d;\n\n\n I don't understand it too. What are you trying to get? Is it \n select distinct(b) from t1 where  b > (select min(d) from t2)\nor b  < (select max(d) from t2)\n ?\n\n Can you explain in words, not SQL, what do you expect do retrieve?\n\n Best regards, Vitalii Tymchyshyn", "msg_date": "Wed, 23 Mar 2011 11:26:46 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "I just want to retrieve that id 's from page_content which do not have \nany entry in clause2 table.\n\nThanks , Adarsh\n\nVitalii Tymchyshyn wrote:\n> 23.03.11 11:17, Adarsh Sharma ???????(??):\n>>\n>> I think it is very much faster but I don't understand the query :\n>>\n>> *explain select distinct(b) from t1,t2 where t1.b >t2.d union all \n>> select distinct(b) from t1,t2 where t1.b <t2.d;\n>> *\n> I don't understand it too. What are you trying to get? Is it\n> select distinct(b) from t1 where b > (select min(d) from t2)* *or b \n> < (select max(d) from t2)\n> ?\n>\n> Can you explain in words, not SQL, what do you expect do retrieve?\n>\n> Best regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n\nI just want to retrieve that id 's from page_content which do not have\nany entry in clause2 table.\n\nThanks , Adarsh\n\nVitalii Tymchyshyn wrote:\n\n\n23.03.11 11:17, Adarsh Sharma написав(ла):\n \nI think it is very much faster but I don't understand the query :\n\nexplain select distinct(b) from t1,t2 where t1.b >t2.d union\nall  select distinct(b) from t1,t2 where  t1.b <t2.d;\n\n\nI don't understand it too. What are you trying to get? 
Is it \nselect distinct(b) from t1 where  b > (select min(d) from t2) or\nb  < (select max(d) from t2)\n?\n\nCan you explain in words, not SQL, what do you expect do retrieve?\n\nBest regards, Vitalii Tymchyshyn", "msg_date": "Wed, 23 Mar 2011 15:40:12 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "23.03.11 12:10, Adarsh Sharma ???????(??):\n> I just want to retrieve that id 's from page_content which do not have \n> any entry in clause2 table.\n>\nThen\nselect distinct(p.crawled_page_id) from page_content p\n where NOT EXISTS (select 1 from clause2 c where c.source_id = \np.crawled_page_id);\nis correct query.\n\nBest regards, Vitalii Tymchyshyn.\n\n\n\n\n\n\n 23.03.11 12:10, Adarsh Sharma написав(ла):\n \n\n I just want to retrieve that id 's from page_content which do not\n have\n any entry in clause2 table.\n\n\n Then \n select distinct(p.crawled_page_id) from page_content p\n  where NOT EXISTS (select 1 from  clause2 c where c.source_id =\n p.crawled_page_id);\n is correct query.\n\n Best regards, Vitalii Tymchyshyn.", "msg_date": "Wed, 23 Mar 2011 12:12:08 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "Vitalii Tymchyshyn wrote:\n> 23.03.11 12:10, Adarsh Sharma ???????(??):\n>> I just want to retrieve that id 's from page_content which do not \n>> have any entry in clause2 table.\n>>\n> Then\n> select distinct(p.crawled_page_id) from page_content p\n> where NOT EXISTS (select 1 from clause2 c where c.source_id = \n> p.crawled_page_id);\n> is correct query.\n>\n\nI can't understand how* select 1 from clause2 c where c.source_id = \np.crawled_page_id works too, *i get my output .\n\nWhat is the significance of 1 here.\n\nThanks , Adarsh\n**\n> Best regards, Vitalii Tymchyshyn.\n\n\n\n\n\n\n\nVitalii Tymchyshyn wrote:\n\n\n23.03.11 12:10, Adarsh Sharma написав(ла):\n \n\nI just want to retrieve that id 's from page_content which do not have\nany entry in clause2 table.\n\n\nThen \nselect distinct(p.crawled_page_id) from page_content p\n where NOT EXISTS (select 1 from  clause2 c where c.source_id =\np.crawled_page_id);\nis correct query.\n\n\n\nI can't understand how select 1 from  clause2 c where c.source_id =\np.crawled_page_id works too, i get my output .\n\nWhat is the significance of 1 here.\n\nThanks , Adarsh\n\n Best\nregards, Vitalii Tymchyshyn.", "msg_date": "Wed, 23 Mar 2011 15:49:18 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "23.03.11 12:19, Adarsh Sharma ???????(??):\n> Vitalii Tymchyshyn wrote:\n>> 23.03.11 12:10, Adarsh Sharma ???????(??):\n>>> I just want to retrieve that id 's from page_content which do not \n>>> have any entry in clause2 table.\n>>>\n>> Then\n>> select distinct(p.crawled_page_id) from page_content p\n>> where NOT EXISTS (select 1 from clause2 c where c.source_id = \n>> p.crawled_page_id);\n>> is correct query.\n>>\n>\n> I can't understand how*select 1 from clause2 c where c.source_id = \n> p.crawled_page_id works too, *i get my output .\n>\n> What is the significance of 1 here.\nNo significance. You can put anything there. E.g. \"*\". Simply arbitrary \nconstant. 
Exists checks if there were any rows, it does not matter which \ncolumns are there or what is in this columns.\n\nBest regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n\n 23.03.11 12:19, Adarsh Sharma написав(ла):\n \n\n Vitalii Tymchyshyn wrote:\n \n\n 23.03.11 12:10, Adarsh Sharma написав(ла):\n \n\n I just want to retrieve that id 's from page_content which do\n not have\n any entry in clause2 table.\n\n\n Then \n select distinct(p.crawled_page_id) from page_content p\n  where NOT EXISTS (select 1 from  clause2 c where c.source_id =\n p.crawled_page_id);\n is correct query.\n\n\n\n I can't understand how select 1 from  clause2 c where\n c.source_id =\n p.crawled_page_id works too, i get my output .\n\n What is the significance of 1 here.\n\n No significance. You can put anything there. E.g. \"*\". Simply\n arbitrary constant. Exists checks if there were any rows, it does\n not matter which columns are there or what is in this columns.\n\n Best regards, Vitalii Tymchyshyn", "msg_date": "Wed, 23 Mar 2011 12:37:58 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "> I just want to retrieve that id 's from page_content which do not have\n> any entry in clause2 table.\n\nIn that case the query probably does not work (at least the query you've\nsent in the first post) as it will return even those IDs that have at\nleast one other row in 'clause2' (not matching the != condition). At least\nthat's how I understand it.\n\nSo instead of this\n\nselect distinct(p.crawled_page_id)\nfrom page_content p, clause2 c where p.crawled_page_id != c.source_id ;\n\nyou should probably do this\n\nselect distinct(p.crawled_page_id)\nfrom page_content p left join clause2 c on (p.crawled_page_id =\nc.source_id) where (c.source_id is null);\n\nI guess this will be much more efficient too.\n\nregards\nTomas\n\n", "msg_date": "Wed, 23 Mar 2011 11:38:23 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "On Wed, Mar 23, 2011 at 3:49 PM, Adarsh Sharma <[email protected]>wrote:\n\n> Vitalii Tymchyshyn wrote:\n>\n> 23.03.11 12:10, Adarsh Sharma написав(ла):\n>\n> I just want to retrieve that id 's from page_content which do not have any\n> entry in clause2 table.\n>\n> Then\n> select distinct(p.crawled_page_id) from page_content p\n> where NOT EXISTS (select 1 from clause2 c where c.source_id =\n> p.crawled_page_id);\n> is correct query.\n>\n>\n> I can't understand how* select 1 from clause2 c where c.source_id =\n> p.crawled_page_id works too, *i get my output .\n>\n> What is the significance of 1 here.\n>\n> Thanks , Adarsh\n> **\n>\n> Best regards, Vitalii Tymchyshyn.\n>\n>\n>\nIts the inverted logic for finding crawled_page_id not matching with\nsource_id.\nActually, the idea was to force index scan on clause2 though.\n\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Wed, Mar 23, 2011 at 3:49 PM, Adarsh Sharma <[email protected]> wrote:\n\nVitalii Tymchyshyn wrote:\n\n \n23.03.11 12:10, Adarsh Sharma написав(ла):\n \n \nI just want to retrieve that id 's from page_content which do not have\nany entry in clause2 table.\n\n\nThen \nselect distinct(p.crawled_page_id) from page_content p\n where NOT EXISTS (select 1 from  clause2 c where c.source_id =\np.crawled_page_id);\nis correct query.\n\n\n\nI can't understand how select 1 from 
 clause2 c where c.source_id =\np.crawled_page_id works too, i get my output .\n\nWhat is the significance of 1 here.\n\nThanks , Adarsh\n\n Best\nregards, Vitalii Tymchyshyn.\n\n\n\nIts the inverted logic for finding crawled_page_id not matching with source_id.Actually, the idea was to force index scan on clause2 though.-- Regards,Chetan Suttraway\nEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 16:09:37 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "On Wed, Mar 23, 2011 at 4:08 PM, <[email protected]> wrote:\n\n> > I just want to retrieve that id 's from page_content which do not have\n> > any entry in clause2 table.\n>\n> In that case the query probably does not work (at least the query you've\n> sent in the first post) as it will return even those IDs that have at\n> least one other row in 'clause2' (not matching the != condition). At least\n> that's how I understand it.\n>\n> true.\n\n\n> So instead of this\n>\n> select distinct(p.crawled_page_id)\n> from page_content p, clause2 c where p.crawled_page_id != c.source_id ;\n>\n> you should probably do this\n>\n> select distinct(p.crawled_page_id)\n> from page_content p left join clause2 c on (p.crawled_page_id =\n> c.source_id) where (c.source_id is null);\n>\n> I guess this will be much more efficient too.\n>\n>\nThis looks like to give expected results. Also note that the where clause\n\"is null\" is really required and is not an\noptional predicate.\n\n\n\n> regards\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Wed, Mar 23, 2011 at 4:08 PM, <[email protected]> wrote:\n> I just want to retrieve that id 's from page_content which do not have\n> any entry in clause2 table.\n\nIn that case the query probably does not work (at least the query you've\nsent in the first post) as it will return even those IDs that have at\nleast one other row in 'clause2' (not matching the != condition). At least\nthat's how I understand it.\ntrue. \nSo instead of this\n\nselect distinct(p.crawled_page_id)\nfrom page_content p, clause2 c where p.crawled_page_id != c.source_id ;\n\nyou should probably do this\n\nselect distinct(p.crawled_page_id)\nfrom page_content p left join clause2 c on (p.crawled_page_id =\nc.source_id) where (c.source_id is null);\n\nI guess this will be much more efficient too.\nThis looks like to give expected results. Also note that the where clause \"is null\" is really required and is not anoptional predicate. 
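To see why the "is null" filter is not optional, here is a minimal sketch against the same tables: a left join keeps every row of page_content whether or not it found a match in clause2, so without the filter the query returns every crawled_page_id instead of only the unmatched ones.

select distinct(p.crawled_page_id)
from page_content p left join clause2 c on (p.crawled_page_id = c.source_id);
-- no filter: matched and unmatched ids both come back

select distinct(p.crawled_page_id)
from page_content p left join clause2 c on (p.crawled_page_id = c.source_id)
where (c.source_id is null);
-- only the ids with no row at all in clause2 survive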
\n\n\nregards\nTomas\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Regards,Chetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 16:14:22 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "Thank U all, for U'r Nice Support.\n\nLet me Conclude the results, below results are obtained after finding \nthe needed queries :\n\n*First Option :\n\n*pdc_uima=# explain analyze select distinct(p.crawled_page_id)\npdc_uima-# from page_content p left join clause2 c on (p.crawled_page_id =\npdc_uima(# c.source_id) where (c.source_id is null);\n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual \ntime=87927.000..87930.084 rows=72 loops=1)\n -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8) \n(actual time=0.191..87926.546 rows=74 loops=1)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.027..528.978 rows=428467 loops=1)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202 rows=1 \nloops=428467)\n Index Cond: (p.crawled_page_id = c.source_id)\n Total runtime: 87933.882 ms :-(\n(6 rows)\n\n*Second Option :\n\n*pdc_uima=# explain analyze select distinct(p.crawled_page_id) from \npage_content p\npdc_uima-# where NOT EXISTS (select 1 from clause2 c where c.source_id \n= p.crawled_page_id);\n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual \ntime=7047.259..7050.261 rows=72 loops=1)\n -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8) \n(actual time=0.039..7046.826 rows=74 loops=1)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.008..388.976 rows=428467 loops=1)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013 rows=1 \nloops=428467)\n Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 7054.074 ms :-)\n(6 rows)\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n\nChetan Suttraway wrote:\n>\n>\n> On Wed, Mar 23, 2011 at 4:08 PM, <[email protected] <mailto:[email protected]>> wrote:\n>\n> > I just want to retrieve that id 's from page_content which do\n> not have\n> > any entry in clause2 table.\n>\n> In that case the query probably does not work (at least the query\n> you've\n> sent in the first post) as it will return even those IDs that have at\n> least one other row in 'clause2' (not matching the != condition).\n> At least\n> that's how I understand it.\n>\n> true.\n> \n>\n> So instead of this\n>\n> select distinct(p.crawled_page_id)\n> from page_content p, clause2 c where p.crawled_page_id !=\n> c.source_id ;\n>\n> you should probably do this\n>\n> select distinct(p.crawled_page_id)\n> from page_content p left join clause2 c on (p.crawled_page_id =\n> c.source_id) where (c.source_id is null);\n>\n> I guess this will be much more efficient too.\n>\n>\n> This looks like to give expected results. 
Also note that the where \n> clause \"is null\" is really required and is not an\n> optional predicate.\n>\n> \n>\n> regards\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n>\n> -- \n> Regards,\n> Chetan Suttraway\n> EnterpriseDB <http://www.enterprisedb.com/>, The Enterprise PostgreSQL \n> <http://www.enterprisedb.com/> company.\n>\n>\n>\n\n\n\n\n\n\n\nThank U all, for U'r Nice Support.\n\nLet me Conclude the results, below results are obtained after finding\nthe needed queries :\n\nFirst Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\npdc_uima-# from page_content p left join clause2 c on\n(p.crawled_page_id =\npdc_uima(# c.source_id) where (c.source_id is null);\n                                                                    \nQUERY\nPLAN                                                                     \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=87927.000..87930.084 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.191..87926.546 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.027..528.978 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202 rows=1\nloops=428467)\n               Index Cond: (p.crawled_page_id = c.source_id)\n Total runtime: 87933.882 ms :-( \n(6 rows)\n\nSecond Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id) from\npage_content p\npdc_uima-#  where NOT EXISTS (select 1 from  clause2 c where\nc.source_id = p.crawled_page_id);\n                                                                    \nQUERY\nPLAN                                                                     \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=7047.259..7050.261 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.039..7046.826 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.008..388.976 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013 rows=1\nloops=428467)\n               Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 7054.074 ms :-) \n(6 rows)\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n\nChetan Suttraway wrote:\n\n\nOn Wed, Mar 23, 2011 at 4:08 PM, <[email protected]>\nwrote:\n\n> I just want to retrieve that id 's from\npage_content which do not have\n> any entry in clause2 table.\n\n\nIn that case the query probably does not work (at least the query you've\nsent in the first post) as it will return even those IDs that have at\nleast one other row in 'clause2' (not matching the != condition). 
At\nleast\nthat's how I understand it.\n\n\ntrue.\n \n\nSo\ninstead of this\n\nselect distinct(p.crawled_page_id)\n\nfrom page_content p, clause2 c where p.crawled_page_id != c.source_id ;\n\nyou should probably do this\n\nselect distinct(p.crawled_page_id)\n\nfrom page_content p left join clause2 c on (p.crawled_page_id =\nc.source_id) where (c.source_id is null);\n\nI guess this will be much more efficient too.\n\n\n\nThis looks like to give expected results. Also note that the where\nclause \"is null\" is really required and is not an\noptional predicate.\n\n \n\nregards\nTomas\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB, The Enterprise\nPostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 16:51:17 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "23.03.11 13:21, Adarsh Sharma ???????(??):\n> Thank U all, for U'r Nice Support.\n>\n> Let me Conclude the results, below results are obtained after finding \n> the needed queries :\n>\n> *First Option :\n>\n> *pdc_uima=# explain analyze select distinct(p.crawled_page_id)\n> pdc_uima-# from page_content p left join clause2 c on (p.crawled_page_id =\n> pdc_uima(# c.source_id) where (c.source_id is null);\n> \n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) \n> (actual time=87927.000..87930.084 rows=72 loops=1)\n> -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 \n> width=8) (actual time=0.191..87926.546 rows=74 loops=1)\n> -> Seq Scan on page_content p (cost=0.00..87132.17 \n> rows=428817 width=8) (actual time=0.027..528.978 rows=428467 loops=1)\n> -> Index Scan using idx_clause2_source_id on clause2 c \n> (cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202 rows=1 \n> loops=428467)\n> Index Cond: (p.crawled_page_id = c.source_id)\n> Total runtime: 87933.882 ms:-(\n> (6 rows)\n>\n> *Second Option :\n>\n> *pdc_uima=# explain analyze select distinct(p.crawled_page_id) from \n> page_content p\n> pdc_uima-# where NOT EXISTS (select 1 from clause2 c where \n> c.source_id = p.crawled_page_id);\n> \n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) \n> (actual time=7047.259..7050.261 rows=72 loops=1)\n> -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 \n> width=8) (actual time=0.039..7046.826 rows=74 loops=1)\n> -> Seq Scan on page_content p (cost=0.00..87132.17 \n> rows=428817 width=8) (actual time=0.008..388.976 rows=428467 loops=1)\n> -> Index Scan using idx_clause2_source_id on clause2 c \n> (cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013 rows=1 \n> loops=428467)\n> Index Cond: (c.source_id = p.crawled_page_id)\n> Total runtime: 7054.074 ms :-)\n> (6 rows)\n>\n\nActually the plans are equal, so I suppose it depends on what were run \nfirst :). 
Slow query operates with data mostly on disk, while fast one \nwith data in memory.\n\nBest regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n 23.03.11 13:21, Adarsh Sharma написав(ла):\n \n\n Thank U all, for U'r Nice Support.\n\n Let me Conclude the results, below results are obtained after\n finding\n the needed queries :\n\nFirst Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\n pdc_uima-# from page_content p left join clause2 c on\n (p.crawled_page_id =\n pdc_uima(# c.source_id) where (c.source_id is null);\n                                                                    \n QUERY\nPLAN                                                                     \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n  HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8)\n (actual\n time=87927.000..87930.084 rows=72 loops=1)\n    ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\n width=8) (actual time=0.191..87926.546 rows=74 loops=1)\n          ->  Seq Scan on page_content p  (cost=0.00..87132.17\n rows=428817 width=8) (actual time=0.027..528.978 rows=428467\n loops=1)\n          ->  Index Scan using idx_clause2_source_id on clause2\n c \n (cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202\n rows=1\n loops=428467)\n                Index Cond: (p.crawled_page_id = c.source_id)\n  Total runtime: 87933.882 ms :-(\n \n (6 rows)\n\nSecond Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\n from\n page_content p\n pdc_uima-#  where NOT EXISTS (select 1 from  clause2 c where\n c.source_id = p.crawled_page_id);\n                                                                    \n QUERY\nPLAN                                                                     \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n  HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8)\n (actual\n time=7047.259..7050.261 rows=72 loops=1)\n    ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\n width=8) (actual time=0.039..7046.826 rows=74 loops=1)\n          ->  Seq Scan on page_content p  (cost=0.00..87132.17\n rows=428817 width=8) (actual time=0.008..388.976 rows=428467\n loops=1)\n          ->  Index Scan using idx_clause2_source_id on clause2\n c \n (cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013\n rows=1\n loops=428467)\n                Index Cond: (c.source_id = p.crawled_page_id)\n  Total runtime: 7054.074 ms \n :-) \n (6 rows)\n\n\n\n Actually the plans are equal, so I suppose it depends on what were\n run first :). Slow query operates with data mostly on disk, while\n fast one with data in memory. 
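One way to check this directly is the pg_buffercache contrib view (a sketch only, assuming that module is installed; it shows what sits in shared_buffers, not what the OS page cache holds). The query below is based on the example in the pg_buffercache documentation, restricted to the relations involved here:

select c.relname, count(*) as buffers
  from pg_buffercache b
  join pg_class c
    on b.relfilenode = c.relfilenode
   and b.reldatabase in (0, (select oid from pg_database
                              where datname = current_database()))
 where c.relname in ('page_content', 'clause2', 'idx_clause2_source_id')
 group by c.relname
 order by 2 desc;

Running it before each explain analyze would show how much of page_content and the clause2 index was already cached in shared buffers.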
\n\n Best regards, Vitalii Tymchyshyn", "msg_date": "Wed, 23 Mar 2011 13:21:38 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "On Wed, Mar 23, 2011 at 4:51 PM, Vitalii Tymchyshyn <[email protected]>wrote:\n\n> 23.03.11 13:21, Adarsh Sharma написав(ла):\n>\n> Thank U all, for U'r Nice Support.\n>\n> Let me Conclude the results, below results are obtained after finding the\n> needed queries :\n>\n> *First Option :\n>\n> *pdc_uima=# explain analyze select distinct(p.crawled_page_id)\n> pdc_uima-# from page_content p left join clause2 c on (p.crawled_page_id =\n> pdc_uima(# c.source_id) where (c.source_id is null);\n> QUERY\n> PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual\n> time=87927.000..87930.084 rows=72 loops=1)\n> -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8)\n> (actual time=0.191..87926.546 rows=74 loops=1)\n> -> Seq Scan on page_content p (cost=0.00..87132.17 rows=428817\n> width=8) (actual time=0.027..528.978 rows=428467 loops=1)\n> -> Index Scan using idx_clause2_source_id on clause2 c\n> (cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202 rows=1\n> loops=428467)\n> Index Cond: (p.crawled_page_id = c.source_id)\n> Total runtime: 87933.882 ms :-(\n> (6 rows)\n>\n> *Second Option :\n>\n> *pdc_uima=# explain analyze select distinct(p.crawled_page_id) from\n> page_content p\n> pdc_uima-# where NOT EXISTS (select 1 from clause2 c where c.source_id =\n> p.crawled_page_id);\n> QUERY\n> PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual\n> time=7047.259..7050.261 rows=72 loops=1)\n> -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8)\n> (actual time=0.039..7046.826 rows=74 loops=1)\n> -> Seq Scan on page_content p (cost=0.00..87132.17 rows=428817\n> width=8) (actual time=0.008..388.976 rows=428467 loops=1)\n> -> Index Scan using idx_clause2_source_id on clause2 c\n> (cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013 rows=1\n> loops=428467)\n> Index Cond: (c.source_id = p.crawled_page_id)\n> Total runtime: 7054.074 ms :-)\n> (6 rows)\n>\n>\n> Actually the plans are equal, so I suppose it depends on what were run\n> first :). Slow query operates with data mostly on disk, while fast one with\n> data in memory.\n>\n> yeah. 
maybe the easiest way, is to start a fresh session and fire the\nqueries.\n\n\n> Best regards, Vitalii Tymchyshyn\n>\n\n\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Wed, Mar 23, 2011 at 4:51 PM, Vitalii Tymchyshyn <[email protected]> wrote:\n\n 23.03.11 13:21, Adarsh Sharma написав(ла):\n \n \n Thank U all, for U'r Nice Support.\n\n Let me Conclude the results, below results are obtained after\n finding\n the needed queries :\n\nFirst Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\n pdc_uima-# from page_content p left join clause2 c on\n (p.crawled_page_id =\n pdc_uima(# c.source_id) where (c.source_id is null);\n                                                                    \n QUERY\nPLAN                                                                     \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n  HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8)\n (actual\n time=87927.000..87930.084 rows=72 loops=1)\n    ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\n width=8) (actual time=0.191..87926.546 rows=74 loops=1)\n          ->  Seq Scan on page_content p  (cost=0.00..87132.17\n rows=428817 width=8) (actual time=0.027..528.978 rows=428467\n loops=1)\n          ->  Index Scan using idx_clause2_source_id on clause2\n c \n (cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202\n rows=1\n loops=428467)\n                Index Cond: (p.crawled_page_id = c.source_id)\n  Total runtime: 87933.882 ms :-(\n \n (6 rows)\n\nSecond Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\n from\n page_content p\n pdc_uima-#  where NOT EXISTS (select 1 from  clause2 c where\n c.source_id = p.crawled_page_id);\n                                                                    \n QUERY\nPLAN                                                                     \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n  HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8)\n (actual\n time=7047.259..7050.261 rows=72 loops=1)\n    ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\n width=8) (actual time=0.039..7046.826 rows=74 loops=1)\n          ->  Seq Scan on page_content p  (cost=0.00..87132.17\n rows=428817 width=8) (actual time=0.008..388.976 rows=428467\n loops=1)\n          ->  Index Scan using idx_clause2_source_id on clause2\n c \n (cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013\n rows=1\n loops=428467)\n                Index Cond: (c.source_id = p.crawled_page_id)\n  Total runtime: 7054.074 ms \n :-) \n (6 rows)\n\n\n\n Actually the plans are equal, so I suppose it depends on what were\n run first :). Slow query operates with data mostly on disk, while\n fast one with data in memory. \nyeah. maybe the easiest way, is to start a fresh session and fire the queries. 
\n\n Best regards, Vitalii Tymchyshyn\n\n-- Regards,Chetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 16:54:10 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "Vitalii Tymchyshyn wrote:\n> 23.03.11 13:21, Adarsh Sharma ???????(??):\n>> Thank U all, for U'r Nice Support.\n>>\n>> Let me Conclude the results, below results are obtained after finding \n>> the needed queries :\n>>\n>> *First Option :\n>>\n>> *pdc_uima=# explain analyze select distinct(p.crawled_page_id) from \n>> page_content p left join clause2 c on (p.crawled_page_id = \n>> c.source_id) where (c.source_id is null);\n>> \n>> QUERY \n>> PLAN \n>>\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) \n>> (actual time=87927.000..87930.084 rows=72 loops=1)\n>> -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 \n>> width=8) (actual time=0.191..87926.546 rows=74 loops=1)\n>> -> Seq Scan on page_content p (cost=0.00..87132.17 \n>> rows=428817 width=8) (actual time=0.027..528.978 rows=428467 loops=1)\n>> -> Index Scan using idx_clause2_source_id on clause2 c \n>> (cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202 rows=1 \n>> loops=428467)\n>> Index Cond: (p.crawled_page_id = c.source_id)\n>> Total runtime: 87933.882 ms :-(\n>> (6 rows)\n>>\n>> *Second Option :\n>>\n>> *pdc_uima=# explain analyze select distinct(p.crawled_page_id) from \n>> page_content p\n>> pdc_uima-# where NOT EXISTS (select 1 from clause2 c where \n>> c.source_id = p.crawled_page_id);\n>> \n>> QUERY \n>> PLAN \n>>\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) \n>> (actual time=7047.259..7050.261 rows=72 loops=1)\n>> -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 \n>> width=8) (actual time=0.039..7046.826 rows=74 loops=1)\n>> -> Seq Scan on page_content p (cost=0.00..87132.17 \n>> rows=428817 width=8) (actual time=0.008..388.976 rows=428467 loops=1)\n>> -> Index Scan using idx_clause2_source_id on clause2 c \n>> (cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013 rows=1 \n>> loops=428467)\n>> Index Cond: (c.source_id = p.crawled_page_id)\n>> Total runtime: 7054.074 ms :-)\n>> (6 rows)\n>>\n>\n> Actually the plans are equal, so I suppose it depends on what were run \n> first :). 
Slow query operates with data mostly on disk, while fast one \n> with data in memory.\n\nYes U 'r absolutely right, if I run it again, it display the output as :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id) from \npage_content p left join clause2 c on (p.crawled_page_id = c.source_id) \nwhere (c.source_id is null);\n\n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual \ntime=7618.452..7621.427 rows=72 loops=1)\n -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8) \n(actual time=0.131..7618.043 rows=74 loops=1)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.020..472.811 rows=428467 loops=1)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.015..0.015 rows=1 \nloops=428467)\n Index Cond: (p.crawled_page_id = c.source_id)\n Total runtime: 7637.132 ms\n(6 rows)\n\nI let U know after a fresh start (session ).\nThen the true result comes and if further tuning required can be performed.\n\nBest Regards, Adarsh\n>\n> Best regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n\nVitalii Tymchyshyn wrote:\n\n\n23.03.11 13:21, Adarsh Sharma написав(ла):\n \n\nThank U all, for U'r Nice Support.\n\nLet me Conclude the results, below results are obtained after finding\nthe needed queries :\n\nFirst Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\nfrom page_content p left join clause2 c on (p.crawled_page_id =\nc.source_id) where (c.source_id is null);\n                                                                    \nQUERY\nPLAN                                                                     \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=87927.000..87930.084 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.191..87926.546 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.027..528.978 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.202..0.202 rows=1\nloops=428467)\n               Index Cond: (p.crawled_page_id = c.source_id)\n Total runtime: 87933.882 ms :-( \n(6 rows)\n\nSecond Option :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\nfrom page_content p\npdc_uima-#  where NOT EXISTS (select 1 from  clause2 c where\nc.source_id = p.crawled_page_id);\n                                                                    \nQUERY\nPLAN                                                                     \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=7047.259..7050.261 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.039..7046.826 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.008..388.976 rows=428467 loops=1)\n         ->  Index Scan using 
idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.013..0.013 rows=1\nloops=428467)\n               Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 7054.074 ms :-) \n(6 rows)\n\n\n\nActually the plans are equal, so I suppose it depends on what were run\nfirst :). Slow query operates with data mostly on disk, while fast one\nwith data in memory. \n\n\nYes U 'r absolutely right, if I run it again, it display the output as :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id) from\npage_content p left join clause2 c on (p.crawled_page_id = c.source_id)\nwhere (c.source_id is null);\n\n                                                                   \nQUERY\nPLAN                                                                     \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=7618.452..7621.427 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.131..7618.043 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.020..472.811 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.015..0.015 rows=1\nloops=428467)\n               Index Cond: (p.crawled_page_id = c.source_id)\n Total runtime: 7637.132 ms\n(6 rows)\n\nI let U know after a fresh start (session ).\nThen the true result comes and if further tuning required can be\nperformed.\n\nBest Regards, Adarsh\n \nBest regards, Vitalii Tymchyshyn", "msg_date": "Wed, 23 Mar 2011 17:01:26 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "> Actually the plans are equal, so I suppose it depends on what were\n> run first :). Slow query operates with data mostly on disk, while\n> fast one with data in memory.\n>\n> yeah. 
maybe the easiest way, is to start a fresh session and fire the \n> queries.\n\n\nAfter the fresh start , the results obtained are :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\npdc_uima-# from page_content p left join clause2 c on (p.crawled_page_id =\npdc_uima(# c.source_id) where (c.source_id is null);\n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual \ntime=7725.132..7728.341 rows=72 loops=1)\n -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8) \n(actual time=0.115..7724.713 rows=74 loops=1)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.021..472.199 rows=428467 loops=1)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.015..0.015 rows=1 \nloops=428467)\n Index Cond: (p.crawled_page_id = c.source_id)\n Total runtime: 7731.840 ms\n(6 rows)\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id) \nfrom page_content p\npdc_uima-# where NOT EXISTS (select 1 from clause2 c where \nc.source_id = p.crawled_page_id);\n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual \ntime=6192.249..6195.368 rows=72 loops=1)\n -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8) \n(actual time=0.036..6191.838 rows=74 loops=1)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.008..372.489 rows=428467 loops=1)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.012..0.012 rows=1 \nloops=428467)\n Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 6198.567 ms\n(6 rows)\n\n> This seems a slight upper hand of the second query .\n\nWould it be possible to tune it further.\nMy postgresql.conf parameters are as follows : ( Total RAM = 16 GB )\n\nshared_buffers = 4GB\nmax_connections=700\neffective_cache_size = 6GB\nwork_mem=16MB\nmaintenance_mem=64MB\n\nI think to change\n\nwork_mem=64MB\nmaintenance_mem=256MB\n\nDoes it has some effects now.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n>\n> Best regards, Vitalii Tymchyshyn\n>\n>\n>\n>\n> -- \n> Regards,\n> Chetan Suttraway\n> EnterpriseDB <http://www.enterprisedb.com/>, The Enterprise PostgreSQL \n> <http://www.enterprisedb.com/> company.\n>\n>\n>\n\n\n\n\n\n\n\n\n\n\n\n Actually the plans are\nequal, so I suppose it depends on what were run first :). Slow query\noperates with data mostly on disk, while fast one with data in memory. \n\n\n\nyeah. 
maybe the easiest way, is to start a fresh session and\nfire the queries.\n\n\n\n\n\nAfter the fresh start , the results obtained are :\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id)\npdc_uima-#  from page_content p left join clause2 c on\n(p.crawled_page_id =\npdc_uima(#  c.source_id) where (c.source_id is null);\n                                                                    \nQUERY\nPLAN                                                                     \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=7725.132..7728.341 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.115..7724.713 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.021..472.199 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.015..0.015 rows=1\nloops=428467)\n               Index Cond: (p.crawled_page_id = c.source_id)\n Total runtime: 7731.840 ms\n(6 rows)\n\npdc_uima=#          explain analyze select distinct(p.crawled_page_id)\nfrom page_content p\npdc_uima-#   where NOT EXISTS (select 1 from  clause2 c where\nc.source_id = p.crawled_page_id);\n                                                                    \nQUERY\nPLAN                                                                     \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=6192.249..6195.368 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.036..6191.838 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.008..372.489 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.012..0.012 rows=1\nloops=428467)\n               Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 6198.567 ms\n(6 rows)\n\n\n\nThis seems a slight upper hand of the second query .\n\n\n\nWould it be possible to tune it further.\nMy postgresql.conf parameters are as follows : ( Total RAM = 16 GB )\n\nshared_buffers = 4GB\nmax_connections=700\neffective_cache_size = 6GB\nwork_mem=16MB\nmaintenance_mem=64MB\n\nI think to change \n\nwork_mem=64MB\nmaintenance_mem=256MB\n\nDoes it has some effects now.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n \n\n\n Best regards, Vitalii\nTymchyshyn\n\n\n\n\n\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB, The Enterprise\nPostgreSQL company.", "msg_date": "Wed, 23 Mar 2011 17:09:18 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": ">\n>> Actually the plans are equal, so I suppose it depends on what were\n>> run first :). Slow query operates with data mostly on disk, while\n>> fast one with data in memory.\n>>\n>> yeah. 
maybe the easiest way, is to start a fresh session and fire the\n>> queries.\n>\n>\n> After the fresh start , the results obtained are :\n\nAs Chetan Suttraway already pointed out, the execution plans are exactly\nthe same. And by \"excactly\" I mean there's no difference in evaluating\nthose two queries.\n\nThe difference is due to cached data - not just in shared buffers (which\nwill be lost of postgres restart) but also in filesystem cache (which is\nmanaged by kernel, not postgres).\n\nSo the first execution had to load (some of) the data into shared buffers,\nwhile the second execution already had a lot of data in shared buffers.\nThat's why the first query run in 7.7sec while the second 6.2sec.\n\n>> This seems a slight upper hand of the second query .\n\nAgain, there's no difference between those two queries, they're exactly\nthe same. It's just a matter of which of them is executed first.\n\n> Would it be possible to tune it further.\n\nI don't think so. The only possibility I see is to add a flag into\npage_content table, update it using a trigger (when something is\ninserted/deleted from clause2). Then you don't need to do the join.\n\n> My postgresql.conf parameters are as follows : ( Total RAM = 16 GB )\n>\n> shared_buffers = 4GB\n> max_connections=700\n> effective_cache_size = 6GB\n> work_mem=16MB\n> maintenance_mem=64MB\n>\n> I think to change\n>\n> work_mem=64MB\n> maintenance_mem=256MB\n>\n> Does it has some effects now.\n\nGenerally a good idea, but we don't know if there are other processes\nrunning on the same machine and what kind of system is this (how many\nusers are there, what kind of queries do they run). If there's a lot of\nusers, keep work_mem low. If there's just a few users decrease\nmax_connections and bump up work_mem and consider increasing\nshared_buffers.\n\nMaintenance_work_mem is used for vacuum/create index etc. so it really\ndoes not affect regular queries.\n\nSome of those values (e.g. work_mem/maintenance_work_mem) are dynamic, so\nyou can set them for the current connection and see how it affects the\nqueries.\n\nJust do something like\n\ndb=# SET work_mem='32MB'\ndb=# EXPLAIN ANALYZE SELECT ...\n\nBut I don't think this will improve the query we've been talking about.\n\nregards\nTomas\n\n", "msg_date": "Wed, 23 Mar 2011 12:51:53 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "On 03/23/2011 04:17 AM, Adarsh Sharma wrote:\n\n> explain analyze select distinct(p.crawled_page_id) from page_content\n> p where NOT EXISTS (select 1 from clause2 c where c.source_id =\n> p.crawled_page_id);\n\nYou know... I'm surprised nobody has mentioned this, but DISTINCT is \nvery slow unless you have a fairly recent version of Postgres that \nreplaces it with something faster. Try this:\n\nEXPLAIN ANALYZE\nSELECT p.crawled_page_id\n FROM page_content p\n WHERE NOT EXISTS (\n SELECT 1\n FROM clause2 c\n WHERE c.source_id = p.crawled_page_id\n )\n GROUP BY p.crawled_page_id;\n\nOr if you like the cleaner query without a sub-select:\n\nEXPLAIN ANALYZE\nSELECT p.crawled_page_id\n FROM page_content p\n LEFT JOIN clause2 c ON (c.source_id = p.crawled_page_id)\n WHERE c.source_id IS NULL\n GROUP BY p.crawled_page_id;\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Wed, 23 Mar 2011 08:34:29 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "> On 03/23/2011 04:17 AM, Adarsh Sharma wrote:\n>\n>> explain analyze select distinct(p.crawled_page_id) from page_content\n>> p where NOT EXISTS (select 1 from clause2 c where c.source_id =\n>> p.crawled_page_id);\n>\n> You know... I'm surprised nobody has mentioned this, but DISTINCT is\n> very slow unless you have a fairly recent version of Postgres that\n> replaces it with something faster. Try this:\n\nNobody mentioned that because the explain plan already uses hash aggregate\n(instead of the old sort)\n\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=7047.259..7050.261 rows=72 loops=1)\n\nwhich means this is at least 8.4. Plus the 'distinct' step uses less than\n1% of total time, so even if you improve it the impact will be minimal.\n\nregards\nTomas\n\n", "msg_date": "Wed, 23 Mar 2011 15:16:01 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "On 03/23/2011 09:16 AM, [email protected] wrote:\n\n> which means this is at least 8.4. Plus the 'distinct' step uses less than\n> 1% of total time, so even if you improve it the impact will be minimal.\n\nHaha. Noted. I guess I'm still on my original crusade against DISTINCT. \nI was pulling it out of so much old code it's been fused to my DNA. \nActually, we're still on 8.2 so... :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Wed, 23 Mar 2011 09:19:12 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-Reason of Slowness of Query" }, { "msg_contents": "[email protected] wrote:\n>> On 03/23/2011 04:17 AM, Adarsh Sharma wrote:\n>>\n>> \n>>> explain analyze select distinct(p.crawled_page_id) from page_content\n>>> p where NOT EXISTS (select 1 from clause2 c where c.source_id =\n>>> p.crawled_page_id);\n>>> \n>> You know... I'm surprised nobody has mentioned this, but DISTINCT is\n>> very slow unless you have a fairly recent version of Postgres that\n>> replaces it with something faster. Try this:\n>> \n>\n> Nobody mentioned that because the explain plan already uses hash aggregate\n> (instead of the old sort)\n>\n> HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual\n> time=7047.259..7050.261 rows=72 loops=1)\n>\n> which means this is at least 8.4. 
Plus the 'distinct' step uses less than\n> 1% of total time, so even if you improve it the impact will be minimal.\n>\n> \n\nYes, U\"r absolutely right I am using Version 8.4SS and i am satisfied \nwith the below query results:\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id) from \npage_content p\npdc_uima-# where NOT EXISTS (select 1 from clause2 c where c.source_id \n= p.crawled_page_id);\n \nQUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual \ntime=5149.308..5152.251 rows=72 loops=1)\n -> Nested Loop Anti Join (cost=0.00..99320.46 rows=383079 width=8) \n(actual time=0.119..5148.954 rows=74 loops=1)\n -> Seq Scan on page_content p (cost=0.00..87132.17 \nrows=428817 width=8) (actual time=0.021..444.487 rows=428467 loops=1)\n -> Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.009..0.009 rows=1 \nloops=428467)\n Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 5155.874 ms\n(6 rows)\n\nI don't think that the above results are optimized further.\n\n\nThanks & best Regards,\nAdarsh Sharma\n> regards\n> Tomas\n>\n> \n\n\n\n\n\n\n\[email protected] wrote:\n\n\nOn 03/23/2011 04:17 AM, Adarsh Sharma wrote:\n\n \n\nexplain analyze select distinct(p.crawled_page_id) from page_content\np where NOT EXISTS (select 1 from clause2 c where c.source_id =\np.crawled_page_id);\n \n\nYou know... I'm surprised nobody has mentioned this, but DISTINCT is\nvery slow unless you have a fairly recent version of Postgres that\nreplaces it with something faster. Try this:\n \n\n\nNobody mentioned that because the explain plan already uses hash aggregate\n(instead of the old sort)\n\n HashAggregate (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=7047.259..7050.261 rows=72 loops=1)\n\nwhich means this is at least 8.4. 
Plus the 'distinct' step uses less than\n1% of total time, so even if you improve it the impact will be minimal.\n\n \n\n\nYes, U\"r absolutely right I am using Version 8.4SS and i am satisfied\nwith the below query results:\n\npdc_uima=# explain analyze select distinct(p.crawled_page_id) from\npage_content p\npdc_uima-#  where NOT EXISTS (select 1 from  clause2 c where\nc.source_id = p.crawled_page_id);\n                                                                    \nQUERY\nPLAN                                                                     \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=100278.16..104104.75 rows=382659 width=8) (actual\ntime=5149.308..5152.251 rows=72 loops=1)\n   ->  Nested Loop Anti Join  (cost=0.00..99320.46 rows=383079\nwidth=8) (actual time=0.119..5148.954 rows=74 loops=1)\n         ->  Seq Scan on page_content p  (cost=0.00..87132.17\nrows=428817 width=8) (actual time=0.021..444.487 rows=428467 loops=1)\n         ->  Index Scan using idx_clause2_source_id on clause2 c \n(cost=0.00..18.18 rows=781 width=4) (actual time=0.009..0.009 rows=1\nloops=428467)\n               Index Cond: (c.source_id = p.crawled_page_id)\n Total runtime: 5155.874 ms\n(6 rows)\n\nI don't think that the above results are optimized further.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\nregards\nTomas", "msg_date": "Thu, 24 Mar 2011 10:22:54 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-Reason of Slowness of Query" } ]
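Tomas's earlier suggestion in this thread, keeping a flag on page_content maintained by a trigger on clause2 so the anti-join is not needed at query time, could look roughly like the sketch below. The has_clause column, function and trigger names are invented for illustration; a real version would also have to cover updates to clause2.source_id and think about concurrent writers.

alter table page_content add column has_clause boolean not null default false;

-- one-off backfill of the flag
update page_content p
   set has_clause = true
 where exists (select 1 from clause2 c where c.source_id = p.crawled_page_id);

create or replace function clause2_flag_sync() returns trigger as $$
begin
    if tg_op = 'INSERT' then
        -- a new clause2 row marks its page as covered
        update page_content
           set has_clause = true
         where crawled_page_id = new.source_id
           and not has_clause;
    elsif tg_op = 'DELETE' then
        -- clear the flag only when the last clause2 row for that page is gone
        update page_content
           set has_clause = false
         where crawled_page_id = old.source_id
           and not exists (select 1 from clause2 c
                            where c.source_id = old.source_id);
    end if;
    return null;  -- after-row trigger, the return value is ignored
end;
$$ language plpgsql;

create trigger clause2_flag_sync
    after insert or delete on clause2
    for each row execute procedure clause2_flag_sync();

-- the original question then reduces to
select crawled_page_id from page_content where not has_clause;

A partial index such as create index page_content_no_clause_idx on page_content (crawled_page_id) where not has_clause would keep that last lookup cheap no matter how large the table grows.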
[ { "msg_contents": "Hi,\n\nI have very bad bgwriter statistics on a server which runs since many weeks\nand it is still the same after a recent restart.\nThere are roughly 50% of buffers written by the backend processes and the\nrest by checkpoints.\nThe statistics below are from a server with 140GB RAM, 32GB shared_buffers\nand a runtime of one hour.\n\nAs you can see in the pg_buffercache view that there are most buffers\nwithout usagecount - so they are as free or even virgen as they can be.\nAt the same time I have 53% percent of the dirty buffers written by the\nbackend process.\n\nI want to tune the database to achieve a ratio of max 10% backend writer vs.\n90% checkpoint or bgwriter writes.\nBut I don't understand how postgres is unable to fetch a free buffer.\nDoes any body have an idea?\n\nI'm running postgres 8.4.4 64 Bit on linux.\n\nBest Regards,\nUwe\n\nbackground writer stats\n checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\nmaxwritten_clean | buffers_backend | buffers_alloc\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n 3 | 0 | 99754 | 0\n| 0 | 115307 | 246173\n(1 row)\n\n\nbackground writer relative stats\n checkpoints_timed | minutes_between_checkpoint | buffers_checkpoint |\nbuffers_clean | buffers_backend | total_writes | avg_checkpoint_write\n-------------------+----------------------------+--------------------+---------------+-----------------+--------------+----------------------\n 100% | 10 | 46% |\n0% | 53% | 0.933 MB/s | 259.000 MB\n(1 row)\n\npostgres=# select usagecount,count(*),isdirty from pg_buffercache group by\nisdirty,usagecount order by isdirty,usagecount;\n usagecount | count | isdirty\n------------+---------+---------\n 1 | 31035 | f\n 2 | 13109 | f\n 3 | 184290 | f\n 4 | 6581 | f\n 5 | 912068 | f\n 1 | 6 | t\n 2 | 35 | t\n 3 | 48 | t\n 4 | 53 | t\n 5 | 43066 | t\n | 3004013 |\n(11 rows)\n\nHi,I have very bad bgwriter statistics on a server which runs since many weeks and it is still the same after a recent restart.\nThere are roughly 50% of buffers written by the backend processes and the rest by checkpoints.The statistics below are from a server with 140GB RAM, 32GB shared_buffers and a runtime of one hour.\nAs you can see in the pg_buffercache view that there are most buffers without usagecount - so they are as free or even virgen as they can be.\nAt the same time I have 53% percent of the dirty buffers written by the backend process.I want to tune the database to achieve a ratio of max 10% backend writer vs. 
90% checkpoint or bgwriter writes.But I don't understand how postgres is unable to fetch a free buffer.\nDoes any body have an idea?I'm running postgres 8.4.4 64 Bit on linux.Best Regards,Uwebackground writer stats\n checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n                 3 |               0 |              99754 |             0 |                0 |          115307 |        246173\n(1 row)background writer relative stats\n checkpoints_timed | minutes_between_checkpoint | buffers_checkpoint | buffers_clean | buffers_backend | total_writes | avg_checkpoint_write\n-------------------+----------------------------+--------------------+---------------+-----------------+--------------+----------------------\n 100%              |                         10 | 46%                | 0%            | 53%             | 0.933 MB/s   | 259.000 MB\n(1 row)postgres=# select usagecount,count(*),isdirty from pg_buffercache group by\nisdirty,usagecount order by isdirty,usagecount; usagecount |  count  | isdirty\n------------+---------+---------          1 |   31035 | f\n          2 |   13109 | f          3 |  184290 | f\n          4 |    6581 | f          5 |  912068 | f\n          1 |       6 | t          2 |      35 | t\n          3 |      48 | t          4 |      53 | t\n          5 |   43066 | t            | 3004013 |\n(11 rows)", "msg_date": "Wed, 23 Mar 2011 13:51:31 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "buffercache/bgwriter" }, { "msg_contents": "Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n\n[rearranged for quoting]\n\n> background writer stats\n> checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\n> maxwritten_clean | buffers_backend | buffers_alloc\n> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n> 3 | 0 | 99754 | 0\n> | 0 | 115307 | 246173\n> (1 row)\n\nbuffers_clean = 0 ?!\n\n> But I don't understand how postgres is unable to fetch a free buffer.\n> Does any body have an idea?\n\nSomehow looks like the bgwriter is completely disabled. How are the \nrelevant settings in your postgresql.conf?\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Wed, 23 Mar 2011 14:19:59 +0100", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "Hi Jochen,\n\nyes, I had that impression too.\nBut it is running. ...And has almost no effect. 
I changed all parameter to\nthe most aggressive, but....\nBefore I restarted the server I had a percentage of writes by the bgwriter\nof less that 1 percent.\n\npostgres=# select name,setting from pg_settings where name like 'bgw%';\n name | setting\n-------------------------+---------\n bgwriter_delay | 10\n bgwriter_lru_maxpages | 1000\n bgwriter_lru_multiplier | 10\n\nBest...\nUwe\n\nUwe Bartels\nSystemarchitect - Freelancer\nmailto: [email protected]\ntel: +49 172 3899006\nprofile: https://www.xing.com/profile/Uwe_Bartels\nwebsite: http://www.uwebartels.com\n\n\n\nOn 23 March 2011 14:19, Jochen Erwied <[email protected]>wrote:\n\n> Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n>\n> [rearranged for quoting]\n>\n> > background writer stats\n> > checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean\n> |\n> > maxwritten_clean | buffers_backend | buffers_alloc\n> >\n> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n> > 3 | 0 | 99754 | 0\n> > | 0 | 115307 | 246173\n> > (1 row)\n>\n> buffers_clean = 0 ?!\n>\n> > But I don't understand how postgres is unable to fetch a free buffer.\n> > Does any body have an idea?\n>\n> Somehow looks like the bgwriter is completely disabled. How are the\n> relevant settings in your postgresql.conf?\n>\n>\n> --\n> Jochen Erwied | home: [email protected] +49-208-38800-18, FAX:\n> -19\n> Sauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX:\n> -50\n> D-45470 Muelheim | mobile: [email protected]\n> +49-173-5404164\n>\n>\n\nHi Jochen,yes, I had that impression too.But it is running. ...And has almost no effect. I changed all parameter to the most aggressive, but....Before I restarted the server I had a percentage of writes by the bgwriter of less that 1 percent.\npostgres=# select name,setting from pg_settings where name like 'bgw%';          name           | setting\n-------------------------+--------- bgwriter_delay          | 10\n bgwriter_lru_maxpages   | 1000 bgwriter_lru_multiplier | 10\nBest...UweUwe BartelsSystemarchitect - Freelancermailto: [email protected]: +49 172 3899006profile: https://www.xing.com/profile/Uwe_Bartels\nwebsite: http://www.uwebartels.com\nOn 23 March 2011 14:19, Jochen Erwied <[email protected]> wrote:\nWednesday, March 23, 2011, 1:51:31 PM you wrote:\n\n[rearranged for quoting]\n\n> background writer stats\n>  checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\n> maxwritten_clean | buffers_backend | buffers_alloc\n> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n>                  3 |               0 |              99754 |             0\n> |                0 |          115307 |        246173\n> (1 row)\n\nbuffers_clean = 0 ?!\n\n> But I don't understand how postgres is unable to fetch a free buffer.\n> Does any body have an idea?\n\nSomehow looks like the bgwriter is completely disabled. How are the\nrelevant settings in your postgresql.conf?\n\n\n--\nJochen Erwied     |   home: [email protected]     +49-208-38800-18, FAX: -19\nSauerbruchstr. 
17 |   work: [email protected]  +49-2151-7294-24, FAX: -50\nD-45470 Muelheim  | mobile: [email protected]       +49-173-5404164", "msg_date": "Wed, 23 Mar 2011 14:39:55 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "> Hi,\n>\n> I have very bad bgwriter statistics on a server which runs since many\n> weeks\n> and it is still the same after a recent restart.\n> There are roughly 50% of buffers written by the backend processes and the\n> rest by checkpoints.\n> The statistics below are from a server with 140GB RAM, 32GB shared_buffers\n> and a runtime of one hour.\n>\n> As you can see in the pg_buffercache view that there are most buffers\n> without usagecount - so they are as free or even virgen as they can be.\n> At the same time I have 53% percent of the dirty buffers written by the\n> backend process.\n\nThere are some nice old threads dealing with this - see for example\n\nhttp://postgresql.1045698.n5.nabble.com/Bgwriter-and-pg-stat-bgwriter-buffers-clean-aspects-td2071472.html\n\nhttp://postgresql.1045698.n5.nabble.com/tuning-bgwriter-in-8-4-2-td1926854.html\n\nand there even some nice external links to more detailed explanation\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nregards\nTomas\n\n", "msg_date": "Wed, 23 Mar 2011 15:41:40 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "Hi Thomas,\n\nthanks, but there were no new informations in there for me.\nthis article\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm I know and\nothers on his website.\n\nBest...\nUwe\n\n\nOn 23 March 2011 15:41, <[email protected]> wrote:\n\n> > Hi,\n> >\n> > I have very bad bgwriter statistics on a server which runs since many\n> > weeks\n> > and it is still the same after a recent restart.\n> > There are roughly 50% of buffers written by the backend processes and the\n> > rest by checkpoints.\n> > The statistics below are from a server with 140GB RAM, 32GB\n> shared_buffers\n> > and a runtime of one hour.\n> >\n> > As you can see in the pg_buffercache view that there are most buffers\n> > without usagecount - so they are as free or even virgen as they can be.\n> > At the same time I have 53% percent of the dirty buffers written by the\n> > backend process.\n>\n> There are some nice old threads dealing with this - see for example\n>\n>\n> http://postgresql.1045698.n5.nabble.com/Bgwriter-and-pg-stat-bgwriter-buffers-clean-aspects-td2071472.html\n>\n>\n> http://postgresql.1045698.n5.nabble.com/tuning-bgwriter-in-8-4-2-td1926854.html\n>\n> and there even some nice external links to more detailed explanation\n>\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n>\n> regards\n> Tomas\n>\n>\n\nHi Thomas,thanks, but there were no new informations in there for me.this article http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm I know and others on his website.\nBest...Uwe\nOn 23 March 2011 15:41, <[email protected]> wrote:\n> Hi,\n>\n> I have very bad bgwriter statistics on a server which runs since many\n> weeks\n> and it is still the same after a recent restart.\n> There are roughly 50% of buffers written by the backend processes and the\n> rest by checkpoints.\n> The statistics below are from a server with 140GB RAM, 32GB shared_buffers\n> and a runtime of one hour.\n>\n> As you can see in the pg_buffercache view that there are most buffers\n> without usagecount - 
so they are as free or even virgen as they can be.\n> At the same time I have 53% percent of the dirty buffers written by the\n> backend process.\n\nThere are some nice old threads dealing with this - see for example\n\nhttp://postgresql.1045698.n5.nabble.com/Bgwriter-and-pg-stat-bgwriter-buffers-clean-aspects-td2071472.html\n\nhttp://postgresql.1045698.n5.nabble.com/tuning-bgwriter-in-8-4-2-td1926854.html\n\nand there even some nice external links to more detailed explanation\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nregards\nTomas", "msg_date": "Wed, 23 Mar 2011 15:54:01 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-performance-\r\n> [email protected]] On Behalf Of [email protected]\r\n> Sent: Wednesday, March 23, 2011 10:42 AM\r\n> To: Uwe Bartels\r\n> Cc: [email protected]\r\n> Subject: Re: [PERFORM] buffercache/bgwriter\r\n> \r\n> > Hi,\r\n> >\r\n> > I have very bad bgwriter statistics on a server which runs since many\r\n> > weeks\r\n> > and it is still the same after a recent restart.\r\n> > There are roughly 50% of buffers written by the backend processes and\r\n> the\r\n> > rest by checkpoints.\r\n> > The statistics below are from a server with 140GB RAM, 32GB\r\n> shared_buffers\r\n> > and a runtime of one hour.\r\n> >\r\n> > As you can see in the pg_buffercache view that there are most buffers\r\n> > without usagecount - so they are as free or even virgen as they can\r\n> be.\r\n> > At the same time I have 53% percent of the dirty buffers written by\r\n> the\r\n> > backend process.\r\n> \r\n> There are some nice old threads dealing with this - see for example\r\n> \r\n> http://postgresql.1045698.n5.nabble.com/Bgwriter-and-pg-stat-bgwriter-\r\n> buffers-clean-aspects-td2071472.html\r\n> \r\n> http://postgresql.1045698.n5.nabble.com/tuning-bgwriter-in-8-4-2-\r\n> td1926854.html\r\n> \r\n> and there even some nice external links to more detailed explanation\r\n> \r\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\r\n\r\nThe interesting question here is - with 3 million unallocated buffers, why is the DB evicting buffers (buffers_backend column) instead of allocating the unallocated buffers?\r\n\r\nBrad.\r\n", "msg_date": "Wed, 23 Mar 2011 14:58:19 +0000", "msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "Hi Brad,\n\nyes. that's the question....\nin the source code in freelist.c there is something that I don't understand.\n\nThis is the first try to get a free page. The second try scans used buffers.\nWhat makes me wonder is the why postgres is checking for <<buf->usage_count\n== 0>>\nwhere usage_count is supposed to be NULL initially.\n\n while (StrategyControl->firstFreeBuffer >= 0)\n {\n buf = &BufferDescriptors[StrategyControl->firstFreeBuffer];\n Assert(buf->freeNext != FREENEXT_NOT_IN_LIST);\n\n /* Unconditionally remove buffer from freelist */\n StrategyControl->firstFreeBuffer = buf->freeNext;\n buf->freeNext = FREENEXT_NOT_IN_LIST;\n\n /*\n * If the buffer is pinned or has a nonzero usage_count, we cannot\nuse\n * it; discard it and retry. (This can only happen if VACUUM put a\n * valid buffer in the freelist and then someone else used it before\n * we got to it. 
It's probably impossible altogether as of 8.3, but\n * we'd better check anyway.)\n */\n LockBufHdr(buf);\n if (buf->refcount == 0 && buf->usage_count == 0)\n {\n if (strategy != NULL)\n AddBufferToRing(strategy, buf);\n return buf;\n }\n UnlockBufHdr(buf);\n }\n\n\nBest...\nUwe\n\n\n\nOn 23 March 2011 15:58, Nicholson, Brad (Toronto, ON, CA) <[email protected]\n> wrote:\n\n>\n>\n> > -----Original Message-----\n> > From: [email protected] [mailto:pgsql-performance-\n> > [email protected]] On Behalf Of [email protected]\n> > Sent: Wednesday, March 23, 2011 10:42 AM\n> > To: Uwe Bartels\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] buffercache/bgwriter\n> >\n> > > Hi,\n> > >\n> > > I have very bad bgwriter statistics on a server which runs since many\n> > > weeks\n> > > and it is still the same after a recent restart.\n> > > There are roughly 50% of buffers written by the backend processes and\n> > the\n> > > rest by checkpoints.\n> > > The statistics below are from a server with 140GB RAM, 32GB\n> > shared_buffers\n> > > and a runtime of one hour.\n> > >\n> > > As you can see in the pg_buffercache view that there are most buffers\n> > > without usagecount - so they are as free or even virgen as they can\n> > be.\n> > > At the same time I have 53% percent of the dirty buffers written by\n> > the\n> > > backend process.\n> >\n> > There are some nice old threads dealing with this - see for example\n> >\n> > http://postgresql.1045698.n5.nabble.com/Bgwriter-and-pg-stat-bgwriter-\n> > buffers-clean-aspects-td2071472.html\n> >\n> > http://postgresql.1045698.n5.nabble.com/tuning-bgwriter-in-8-4-2-\n> > td1926854.html\n> >\n> > and there even some nice external links to more detailed explanation\n> >\n> > http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n>\n> The interesting question here is - with 3 million unallocated buffers, why\n> is the DB evicting buffers (buffers_backend column) instead of allocating\n> the unallocated buffers?\n>\n> Brad.\n>\n\nHi Brad,yes. that's the question....in the source code in freelist.c there is something that I don't understand.This is the first try to get a free page. The second try scans used buffers.What makes me wonder is the why postgres is checking for <<buf->usage_count == 0>>\nwhere usage_count is supposed to be NULL initially.    while (StrategyControl->firstFreeBuffer >= 0)    {        buf = &BufferDescriptors[StrategyControl->firstFreeBuffer];        Assert(buf->freeNext != FREENEXT_NOT_IN_LIST);\n        /* Unconditionally remove buffer from freelist */        StrategyControl->firstFreeBuffer = buf->freeNext;        buf->freeNext = FREENEXT_NOT_IN_LIST;        /*         * If the buffer is pinned or has a nonzero usage_count, we cannot use\n         * it; discard it and retry.  (This can only happen if VACUUM put a         * valid buffer in the freelist and then someone else used it before         * we got to it.  It's probably impossible altogether as of 8.3, but\n         * we'd better check anyway.)         
*/        LockBufHdr(buf);        if (buf->refcount == 0 && buf->usage_count == 0)        {            if (strategy != NULL)                AddBufferToRing(strategy, buf);\n            return buf;        }        UnlockBufHdr(buf);    }Best...Uwe\nOn 23 March 2011 15:58, Nicholson, Brad (Toronto, ON, CA) <[email protected]> wrote:\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of [email protected]\n> Sent: Wednesday, March 23, 2011 10:42 AM\n> To: Uwe Bartels\n> Cc: [email protected]\n> Subject: Re: [PERFORM] buffercache/bgwriter\n>\n> > Hi,\n> >\n> > I have very bad bgwriter statistics on a server which runs since many\n> > weeks\n> > and it is still the same after a recent restart.\n> > There are roughly 50% of buffers written by the backend processes and\n> the\n> > rest by checkpoints.\n> > The statistics below are from a server with 140GB RAM, 32GB\n> shared_buffers\n> > and a runtime of one hour.\n> >\n> > As you can see in the pg_buffercache view that there are most buffers\n> > without usagecount - so they are as free or even virgen as they can\n> be.\n> > At the same time I have 53% percent of the dirty buffers written by\n> the\n> > backend process.\n>\n> There are some nice old threads dealing with this - see for example\n>\n> http://postgresql.1045698.n5.nabble.com/Bgwriter-and-pg-stat-bgwriter-\n> buffers-clean-aspects-td2071472.html\n>\n> http://postgresql.1045698.n5.nabble.com/tuning-bgwriter-in-8-4-2-\n> td1926854.html\n>\n> and there even some nice external links to more detailed explanation\n>\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nThe interesting question here is - with 3 million unallocated buffers, why is the DB evicting buffers (buffers_backend column) instead of allocating the unallocated buffers?\n\nBrad.", "msg_date": "Wed, 23 Mar 2011 16:26:04 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "On Wed, Mar 23, 2011 at 6:19 AM, Jochen Erwied\n<[email protected]> wrote:\n> Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n>\n> [rearranged for quoting]\n>\n>> background writer stats\n>>  checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\n>> maxwritten_clean | buffers_backend | buffers_alloc\n>> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n>>                  3 |               0 |              99754 |             0\n>> |                0 |          115307 |        246173\n>> (1 row)\n>\n> buffers_clean = 0 ?!\n>\n>> But I don't understand how postgres is unable to fetch a free buffer.\n>> Does any body have an idea?\n>\n> Somehow looks like the bgwriter is completely disabled. How are the\n> relevant settings in your postgresql.conf?\n\nI suspect the work load is entirely bulk inserts, and is using a\nBuffer Access Strategy. By design, bulk inserts generally write out\ntheir own buffers.\n\nCheers,\n\nJeff\n", "msg_date": "Wed, 23 Mar 2011 08:36:53 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "On Wed, Mar 23, 2011 at 8:26 AM, Uwe Bartels <[email protected]> wrote:\n> Hi Brad,\n>\n> yes. that's the question....\n> in the source code in freelist.c there is something that I don't understand.\n>\n> This is the first try to get a free page. 
The second try scans used buffers.\n> What makes me wonder is the why postgres is checking for <<buf->usage_count\n> == 0>>\n> where usage_count is supposed to be NULL initially.\n\nThe code comment preceding that check seems to explain that it is\nprobably not needed but simply done from an abundance of caution.\n\n>         /*\n>          * If the buffer is pinned or has a nonzero usage_count, we cannot\n> use\n>          * it; discard it and retry.  (This can only happen if VACUUM put a\n>          * valid buffer in the freelist and then someone else used it before\n>          * we got to it.  It's probably impossible altogether as of 8.3, but\n>          * we'd better check anyway.)\n\nSeems like maybe an Assert would be called for.\n\nCheers,\n\nJeff\n", "msg_date": "Wed, 23 Mar 2011 09:01:51 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "On 23 March 2011 16:36, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Mar 23, 2011 at 6:19 AM, Jochen Erwied\n> <[email protected]> wrote:\n> > Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n> >\n> > [rearranged for quoting]\n> >\n> >> background writer stats\n> >> checkpoints_timed | checkpoints_req | buffers_checkpoint |\n> buffers_clean |\n> >> maxwritten_clean | buffers_backend | buffers_alloc\n> >>\n> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n> >> 3 | 0 | 99754 |\n> 0\n> >> | 0 | 115307 | 246173\n> >> (1 row)\n> >\n> > buffers_clean = 0 ?!\n> >\n> >> But I don't understand how postgres is unable to fetch a free buffer.\n> >> Does any body have an idea?\n> >\n> > Somehow looks like the bgwriter is completely disabled. How are the\n> > relevant settings in your postgresql.conf?\n>\n> I suspect the work load is entirely bulk inserts, and is using a\n> Buffer Access Strategy. By design, bulk inserts generally write out\n> their own buffers.\n>\n> Cheers,\n>\n> Jeff\n>\n\nYes. that's true. We are converting databases from one schema into another\nwith a lot of computing in between.\nBut most of the written data is accessed soon for other conversions.\nOK. That sounds very simple and thus trustable ;).\n\nSo everything is fine and there is no need/potential for optimization?\n\nBest...\nUwe\n\nOn 23 March 2011 16:36, Jeff Janes <[email protected]> wrote:\nOn Wed, Mar 23, 2011 at 6:19 AM, Jochen Erwied\n<[email protected]> wrote:\n> Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n>\n> [rearranged for quoting]\n>\n>> background writer stats\n>>  checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\n>> maxwritten_clean | buffers_backend | buffers_alloc\n>> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n>>                  3 |               0 |              99754 |             0\n>> |                0 |          115307 |        246173\n>> (1 row)\n>\n> buffers_clean = 0 ?!\n>\n>> But I don't understand how postgres is unable to fetch a free buffer.\n>> Does any body have an idea?\n>\n> Somehow looks like the bgwriter is completely disabled. How are the\n> relevant settings in your postgresql.conf?\n\nI suspect the work load is entirely bulk inserts, and is using a\nBuffer Access Strategy.  By design, bulk inserts generally write out\ntheir own buffers.\n\nCheers,\n\nJeff\nYes. that's true. 
We are converting databases from one schema into another  with a lot of computing in between.But most of the written data is accessed soon for other conversions.\nOK. That sounds very simple and thus trustable ;).So everything is fine and there is no need/potential for optimization?Best...Uwe", "msg_date": "Wed, 23 Mar 2011 17:16:17 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "2011/3/23 Uwe Bartels <[email protected]>:\n> On 23 March 2011 16:36, Jeff Janes <[email protected]> wrote:\n>>\n>> On Wed, Mar 23, 2011 at 6:19 AM, Jochen Erwied\n>> <[email protected]> wrote:\n>> > Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n>> >\n>> > [rearranged for quoting]\n>> >\n>> >> background writer stats\n>> >>  checkpoints_timed | checkpoints_req | buffers_checkpoint |\n>> >> buffers_clean |\n>> >> maxwritten_clean | buffers_backend | buffers_alloc\n>> >>\n>> >> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n>> >>                  3 |               0 |              99754 |\n>> >> 0\n>> >> |                0 |          115307 |        246173\n>> >> (1 row)\n>> >\n>> > buffers_clean = 0 ?!\n>> >\n>> >> But I don't understand how postgres is unable to fetch a free buffer.\n>> >> Does any body have an idea?\n>> >\n>> > Somehow looks like the bgwriter is completely disabled. How are the\n>> > relevant settings in your postgresql.conf?\n>>\n>> I suspect the work load is entirely bulk inserts, and is using a\n>> Buffer Access Strategy.  By design, bulk inserts generally write out\n>> their own buffers.\n>>\n>> Cheers,\n>>\n>> Jeff\n>\n> Yes. that's true. We are converting databases from one schema into another\n> with a lot of computing in between.\n> But most of the written data is accessed soon for other conversions.\n> OK. That sounds very simple and thus trustable ;).\n\nyes, it is.\n\n>\n> So everything is fine and there is no need/potential for optimization?\n>\n\nThere are probably room for improvements, without more thinking, I\nwould suggest:\n\n * review bufferstrategy to increase the buffer size for the pool when\nthere is a lot of free buffers\n* have a bgwriter working just behind the seqscan (and probably a\nbiger pool of buffers anyway)\n* do not use the special bufferstrategy when the buffer cache has\nmore than X% of free pages\n* add more :)\n\nI believe it should be ok to do good improvement for special case\neasely identifiable like yours.\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Wed, 23 Mar 2011 21:23:46 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "Hi Cédric,\n\nOK, sounds promising. But all of these improvements are for the postgres\ndevelopers.\nFor me as an administrator I can't do a thing right now. OK.\n\nThanks for you suggestions. 
I think for batchjobs other that just COPY they\ncould speed up the process quite well if now the backend process has to do\nall (or 50%) of the writings.\n\nIt would also be good to see how many buffers were written by backend\nprocesses grouped by Buffer Access Strategy - to better distinguish evil\nbackend writes from wanted backend writes.\n\nBest Regards,\nUwe\n\nOn 23 March 2011 21:23, Cédric Villemain\n<[email protected]>wrote:\n\n> 2011/3/23 Uwe Bartels <[email protected]>:\n> > On 23 March 2011 16:36, Jeff Janes <[email protected]> wrote:\n> >>\n> >> On Wed, Mar 23, 2011 at 6:19 AM, Jochen Erwied\n> >> <[email protected]> wrote:\n> >> > Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n> >> >\n> >> > [rearranged for quoting]\n> >> >\n> >> >> background writer stats\n> >> >> checkpoints_timed | checkpoints_req | buffers_checkpoint |\n> >> >> buffers_clean |\n> >> >> maxwritten_clean | buffers_backend | buffers_alloc\n> >> >>\n> >> >>\n> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n> >> >> 3 | 0 | 99754 |\n> >> >> 0\n> >> >> | 0 | 115307 | 246173\n> >> >> (1 row)\n> >> >\n> >> > buffers_clean = 0 ?!\n> >> >\n> >> >> But I don't understand how postgres is unable to fetch a free buffer.\n> >> >> Does any body have an idea?\n> >> >\n> >> > Somehow looks like the bgwriter is completely disabled. How are the\n> >> > relevant settings in your postgresql.conf?\n> >>\n> >> I suspect the work load is entirely bulk inserts, and is using a\n> >> Buffer Access Strategy. By design, bulk inserts generally write out\n> >> their own buffers.\n> >>\n> >> Cheers,\n> >>\n> >> Jeff\n> >\n> > Yes. that's true. We are converting databases from one schema into\n> another\n> > with a lot of computing in between.\n> > But most of the written data is accessed soon for other conversions.\n> > OK. That sounds very simple and thus trustable ;).\n>\n> yes, it is.\n>\n> >\n> > So everything is fine and there is no need/potential for optimization?\n> >\n>\n> There are probably room for improvements, without more thinking, I\n> would suggest:\n>\n> * review bufferstrategy to increase the buffer size for the pool when\n> there is a lot of free buffers\n> * have a bgwriter working just behind the seqscan (and probably a\n> biger pool of buffers anyway)\n> * do not use the special bufferstrategy when the buffer cache has\n> more than X% of free pages\n> * add more :)\n>\n> I believe it should be ok to do good improvement for special case\n> easely identifiable like yours.\n>\n> --\n> Cédric Villemain 2ndQuadrant\n> http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n>\n\nHi Cédric,OK, sounds promising. But all of these improvements are for the postgres developers.For me as an administrator I can't do a thing right now. OK.Thanks for you suggestions. 
I think for batchjobs other that just COPY they could speed up the process quite well if now the backend process has to do all (or 50%)  of the writings.\nIt would also be good to see how many buffers were written by backend processes grouped by Buffer Access Strategy - to better distinguish evil backend writes from wanted backend writes.\nBest Regards,UweOn 23 March 2011 21:23, Cédric Villemain <[email protected]> wrote:\n2011/3/23 Uwe Bartels <[email protected]>:\n> On 23 March 2011 16:36, Jeff Janes <[email protected]> wrote:\n>>\n>> On Wed, Mar 23, 2011 at 6:19 AM, Jochen Erwied\n>> <[email protected]> wrote:\n>> > Wednesday, March 23, 2011, 1:51:31 PM you wrote:\n>> >\n>> > [rearranged for quoting]\n>> >\n>> >> background writer stats\n>> >>  checkpoints_timed | checkpoints_req | buffers_checkpoint |\n>> >> buffers_clean |\n>> >> maxwritten_clean | buffers_backend | buffers_alloc\n>> >>\n>> >> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n>> >>                  3 |               0 |              99754 |\n>> >> 0\n>> >> |                0 |          115307 |        246173\n>> >> (1 row)\n>> >\n>> > buffers_clean = 0 ?!\n>> >\n>> >> But I don't understand how postgres is unable to fetch a free buffer.\n>> >> Does any body have an idea?\n>> >\n>> > Somehow looks like the bgwriter is completely disabled. How are the\n>> > relevant settings in your postgresql.conf?\n>>\n>> I suspect the work load is entirely bulk inserts, and is using a\n>> Buffer Access Strategy.  By design, bulk inserts generally write out\n>> their own buffers.\n>>\n>> Cheers,\n>>\n>> Jeff\n>\n> Yes. that's true. We are converting databases from one schema into another\n> with a lot of computing in between.\n> But most of the written data is accessed soon for other conversions.\n> OK. That sounds very simple and thus trustable ;).\n\nyes, it is.\n\n>\n> So everything is fine and there is no need/potential for optimization?\n>\n\nThere are probably room for improvements, without more thinking, I\nwould suggest:\n\n * review bufferstrategy to increase the buffer size for the pool when\nthere is a lot of free buffers\n* have a bgwriter working just behind the seqscan (and probably a\nbiger pool of buffers anyway)\n* do not use  the special bufferstrategy when  the buffer cache has\nmore than X% of free pages\n* add more :)\n\nI believe it should be ok to do good improvement for special case\neasely identifiable like yours.\n\n--\nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support", "msg_date": "Thu, 24 Mar 2011 10:19:04 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "On 03/24/2011 05:19 AM, Uwe Bartels wrote:\n> It would also be good to see how many buffers were written by backend \n> processes grouped by Buffer Access Strategy - to better distinguish \n> evil backend writes from wanted backend writes.\n\nSince all these writes are being cached by the operating system, which \nstrategy writes them out isn't that useful to track. The only really \n\"evil\" type of writes are ones where the background writer doesn't \nabsorb the fsync calls and the backends have to do that themselves. 
And \nas of V9.1, that is something you can distinguish in pg_stat_bgwriter \n(and it's also less likely to happen, too)\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 28 Mar 2011 02:02:42 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: buffercache/bgwriter" }, { "msg_contents": "OK. Thanks.\n\nUwe\n\n\nOn 28 March 2011 08:02, Greg Smith <[email protected]> wrote:\n\n> On 03/24/2011 05:19 AM, Uwe Bartels wrote:\n>\n>> It would also be good to see how many buffers were written by backend\n>> processes grouped by Buffer Access Strategy - to better distinguish evil\n>> backend writes from wanted backend writes.\n>>\n>\n> Since all these writes are being cached by the operating system, which\n> strategy writes them out isn't that useful to track. The only really \"evil\"\n> type of writes are ones where the background writer doesn't absorb the fsync\n> calls and the backends have to do that themselves. And as of V9.1, that is\n> something you can distinguish in pg_stat_bgwriter (and it's also less likely\n> to happen, too)\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOK. Thanks.Uwe\nOn 28 March 2011 08:02, Greg Smith <[email protected]> wrote:\nOn 03/24/2011 05:19 AM, Uwe Bartels wrote:\n\nIt would also be good to see how many buffers were written by backend processes grouped by Buffer Access Strategy - to better distinguish evil backend writes from wanted backend writes.\n\n\nSince all these writes are being cached by the operating system, which strategy writes them out isn't that useful to track.  The only really \"evil\" type of writes are ones where the background writer doesn't absorb the fsync calls and the backends have to do that themselves.  And as of V9.1, that is something you can distinguish in pg_stat_bgwriter (and it's also less likely to happen, too)\n\n-- \nGreg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 28 Mar 2011 22:23:25 +0200", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: buffercache/bgwriter" } ]
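A note on the "background writer relative stats" Uwe pastes earlier in this thread: it is not a built-in view, but the same split can be derived from the stock pg_stat_bgwriter view (column names as of 8.4). The sketch below is only an illustration; the 10% backend-write target is Uwe's own goal, not a server setting, and the percentage column names are chosen here for readability.

-- Share of buffer writes done at checkpoint time, by the bgwriter's
-- LRU scan, and by ordinary backends, since statistics were last reset.
SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * buffers_checkpoint / total, 1) AS pct_checkpoint,
       round(100.0 * buffers_clean      / total, 1) AS pct_bgwriter,
       round(100.0 * buffers_backend    / total, 1) AS pct_backend
FROM (SELECT *,
             nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0) AS total
      FROM pg_stat_bgwriter) s;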
[ { "msg_contents": "Folks,\n\nYet more evidence that we need some way to assess query plans which are\nhigh-risk and avoid them (or have Yet Another GUC):\n\n Merge Join (cost=29.16..1648.00 rows=382 width=78) (actual\ntime=57215.167..57215.216 rows=1 loops=1)\n Merge Cond: (rn.node_id = device_nodes.node_id)\n -> Nested Loop (cost=0.00..11301882.40 rows=6998 width=62) (actual\ntime=57209.291..57215.030 rows=112 loops=1)\n Join Filter: (node_ep.node_id = rn.node_id)\n -> Nested Loop (cost=0.00..11003966.85 rows=90276 width=46)\n(actual time=0.027..52792.422 rows=90195 loops=1)\n -> Index Scan using ix_ne_ns on node_ep\n(cost=0.00..1545943.45 rows=32606992 width=26) (actual\ntime=0.010..7787.043 rows=32606903 loops=1)\n -> Index Scan using ix_nefp_eid on ep_fp\n(cost=0.00..0.28 rows=1 width=20) (actual time=0.001..0.001 rows=0\nloops=32606903)\n Index Cond: (ep_fp.ep_id = node_ep.ep_id)\n -> Materialize (cost=0.00..5.30 rows=220 width=16) (actual\ntime=0.000..0.019 rows=220 loops=90195)\n -> Seq Scan on mytable rn (cost=0.00..4.20 rows=220\nwidth=16) (actual time=0.008..0.043 rows=220 loops=1)\n -> Sort (cost=28.18..28.21 rows=12 width=16) (actual\ntime=0.164..0.165 rows=10 loops=1)\n Sort Key: device_nodes.node_id\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using ix_dn_did on device_nodes\n(cost=0.00..27.96 rows=12 width=16) (actual time=0.086..0.134 rows=10\nloops=1)\n Index Cond: (dev_id = 18165)\n Total runtime: 57215.329 ms\n\n\nAFAICT, what's happening in this query is that PostgreSQL's statistics\non the device_nodes and several other tables are slightly out of date\n(as in 5% of the table). Thus it thinks that nothing will match the\nlist of node_ids in \"mytable\", and that it can exit the merge join early\nand ignore the whole huge cost of the join plan. This particular form\nof out-of-dateness will be fixed in 9.1 (it's due to values being higher\nthan the highest histogram bucket in pg_stat), but not all forms will be.\n\nIt really seems like we should be able to detect an obvious high-risk\nsituation like this one. Or maybe we're just being too optimistic about\ndiscarding subplans?\n\nBTW, the optimal plan for this query (post-analyze) is this one:\n\n Nested Loop (cost=0.00..213068.26 rows=12 width=78) (actual\ntime=0.374..0.514 rows=1 loops=1)\n Join Filter: (device_nodes.node_id = rn.node_id)\n -> Seq Scan on mytable rn (cost=0.00..4.20 rows=220 width=16)\n(actual time=0.013..0.050 rows=220 loops=1)\n -> Materialize (cost=0.00..213024.49 rows=12 width=62) (actual\ntime=0.001..0.002 rows=1 loops=220)\n -> Nested Loop (cost=0.00..213024.43 rows=12 width=62)\n(actual time=0.077..0.278 rows=1 loops=1)\n -> Nested Loop (cost=0.00..211740.04 rows=4428\nwidth=42) (actual time=0.070..0.269 rows=1 loops=1)\n -> Index Scan using ix_dn_did on device_nodes\n(cost=0.00..51.92 rows=13 width=16) (actual time=0.058..0.115 rows=10\nloops=1)\n Index Cond: (dev_id = 18165)\n -> Index Scan using ix_ne_ns on node_ep\n(cost=0.00..16137.45 rows=11700 width=26) (actual time=0.014..0.014\nrows=0 loops=10)\n Index Cond: (node_ep.node_id =\ndevice_nodes.node_id)\n -> Index Scan using ix_nefp_eid on ep_fp\n(cost=0.00..0.28 rows=1 width=20) (actual time=0.006..0.007 rows=1 loops=1)\n Index Cond: (ep_fp.ep_id = node_ep.ep_id);\n\n\n-- -- Josh Berkus PostgreSQL Experts Inc. http://www.pgexperts.com\n", "msg_date": "Wed, 23 Mar 2011 10:12:18 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Shouldn't we have a way to avoid \"risky\" plans?" 
}, { "msg_contents": "On Wed, Mar 23, 2011 at 2:12 PM, Josh Berkus <[email protected]> wrote:\n> Folks,\n>\n>...\n> It really seems like we should be able to detect an obvious high-risk\n> situation like this one.  Or maybe we're just being too optimistic about\n> discarding subplans?\n\nWhy not letting the GEQO learn from past mistakes?\n\nIf somehow a post-mortem analysis of queries can be done and accounted\nfor, then these kinds of mistakes would be a one-time occurrence.\n\nIdeas:\n * you estimate cost IFF there's no past experience.\n * if rowcount estimates miss by much, a correction cache could be\npopulated with extra (volatile - ie in shared memory) statistics\n * or, if rowcount estimates miss by much, autoanalyze could be scheduled\n * consider plan bailout: execute a tempting plan, if it takes too\nlong or its effective cost raises well above the expected cost, bail\nto a safer plan\n * account for worst-case performance when evaluating plans\n", "msg_date": "Wed, 23 Mar 2011 14:35:55 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On Wed, Mar 23, 2011 at 1:12 PM, Josh Berkus <[email protected]> wrote:\n> AFAICT, what's happening in this query is that PostgreSQL's statistics\n> on the device_nodes and several other tables are slightly out of date\n> (as in 5% of the table).\n\nWhat about some manner of query feedback mechanism ( along the lines\nof what explain analyze yields ) to track \"stats staleness\" in\ngeneral?\n\nProbably, I misunderstand the costs of executing explain analyze.\n", "msg_date": "Wed, 23 Mar 2011 14:02:14 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On 3/23/11 10:35 AM, Claudio Freire wrote:\n> * consider plan bailout: execute a tempting plan, if it takes too\n> long or its effective cost raises well above the expected cost, bail\n> to a safer plan\n\nThat would actually solve this particular case. It would still require\nus to have some definition of \"safer\" though.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 23 Mar 2011 13:29:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus <[email protected]> wrote:\n> On 3/23/11 10:35 AM, Claudio Freire wrote:\n>>  *  consider plan bailout: execute a tempting plan, if it takes too\n>> long or its effective cost raises well above the expected cost, bail\n>> to a safer plan\n>\n> That would actually solve this particular case.  It would still require\n> us to have some definition of \"safer\" though.\n\nIn my head, safer = better worst-case performance.\n", "msg_date": "Wed, 23 Mar 2011 17:46:19 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus <[email protected]> wrote:\n>> On 3/23/11 10:35 AM, Claudio Freire wrote:\n>>> �* �consider plan bailout: execute a tempting plan, if it takes too\n>>> long or its effective cost raises well above the expected cost, bail\n>>> to a safer plan\n\n>> That would actually solve this particular case. 
�It would still require\n>> us to have some definition of \"safer\" though.\n\n> In my head, safer = better worst-case performance.\n\nIf the planner starts operating on the basis of worst case rather than\nexpected-case performance, the complaints will be far more numerous than\nthey are today.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Mar 2011 17:00:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans? " }, { "msg_contents": "On Wed, Mar 23, 2011 at 6:00 PM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>> In my head, safer = better worst-case performance.\n>\n> If the planner starts operating on the basis of worst case rather than\n> expected-case performance, the complaints will be far more numerous than\n> they are today.\n\nI imagine, that's why, if you put my comment in context, I was talking\nabout picking a safer plan only when the \"better on average one\" fails\nmiserably.\n", "msg_date": "Wed, 23 Mar 2011 18:08:15 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "\n> If the planner starts operating on the basis of worst case rather than\n> expected-case performance, the complaints will be far more numerous than\n> they are today.\n\nYeah, I don't think that's the way to go. The other thought I had was\nto accumulate a \"risk\" stat the same as we accumulate a \"cost\" stat.\n\nHowever, I'm thinking that I'm overengineering what seems to be a fairly\nisolated problem, in that we might simply need to adjust the costing on\nthis kind of a plan.\n\nAlso, can I say that the cost figures in this plan are extremely\nconfusing? Is it really necessary to show them the way we do?\n\nMerge Join (cost=29.16..1648.00 rows=382 width=78) (actual\ntime=57215.167..57215.216 rows=1 loops=1)\n Merge Cond: (rn.node_id = device_nodes.node_id)\n -> Nested Loop (cost=0.00..11301882.40 rows=6998 width=62) (actual\ntime=57209.291..57215.030 rows=112 loops=1)\n Join Filter: (node_ep.node_id = rn.node_id)\n -> Nested Loop (cost=0.00..11003966.85 rows=90276 width=46)\n(actual time=0.027..52792.422 rows=90195 loops=1)\n\nThe first time I saw the above, I thought we had some kind of glibc math\nbug on the host system. Costs are supposed to accumulate upwards.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 23 Mar 2011 17:05:12 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "2011/3/23 Tom Lane <[email protected]>\n\n> Claudio Freire <[email protected]> writes:\n> > On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus <[email protected]> wrote:\n> >> On 3/23/11 10:35 AM, Claudio Freire wrote:\n> >>> * consider plan bailout: execute a tempting plan, if it takes too\n> >>> long or its effective cost raises well above the expected cost, bail\n> >>> to a safer plan\n>\n> >> That would actually solve this particular case. It would still require\n> >> us to have some definition of \"safer\" though.\n>\n> > In my head, safer = better worst-case performance.\n>\n> If the planner starts operating on the basis of worst case rather than\n> expected-case performance, the complaints will be far more numerous than\n> they are today.\n>\n> This can se GUC-controllable. 
Like plan_safety=0..1 with low default value.\nThis can influence costs of plans where cost changes dramatically with small\ntable changes and/or statistics is uncertain. Also this can be used as\ndirect \"hint\" for such dangerous queries by changing GUC for session/single\nquery.\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2011/3/23 Tom Lane <[email protected]>\nClaudio Freire <[email protected]> writes:\n> On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus <[email protected]> wrote:\n>> On 3/23/11 10:35 AM, Claudio Freire wrote:\n>>>  *  consider plan bailout: execute a tempting plan, if it takes too\n>>> long or its effective cost raises well above the expected cost, bail\n>>> to a safer plan\n\n>> That would actually solve this particular case.  It would still require\n>> us to have some definition of \"safer\" though.\n\n> In my head, safer = better worst-case performance.\n\nIf the planner starts operating on the basis of worst case rather than\nexpected-case performance, the complaints will be far more numerous than\nthey are today.This can se GUC-controllable. Like plan_safety=0..1 with low default value. This can influence costs of plans where cost changes dramatically with small table changes and/or statistics is uncertain. Also this can be used as direct \"hint\" for such dangerous queries by changing GUC for session/single query. \n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Thu, 24 Mar 2011 10:44:33 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "2011/3/24 Віталій Тимчишин <[email protected]>:\n> 2011/3/23 Tom Lane <[email protected]>\n>>\n>> Claudio Freire <[email protected]> writes:\n>> > On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus <[email protected]> wrote:\n>> >> On 3/23/11 10:35 AM, Claudio Freire wrote:\n>> >>>  *  consider plan bailout: execute a tempting plan, if it takes too\n>> >>> long or its effective cost raises well above the expected cost, bail\n>> >>> to a safer plan\n>>\n>> >> That would actually solve this particular case.  It would still require\n>> >> us to have some definition of \"safer\" though.\n>>\n>> > In my head, safer = better worst-case performance.\n>>\n>> If the planner starts operating on the basis of worst case rather than\n>> expected-case performance, the complaints will be far more numerous than\n>> they are today.\n>>\n> This can se GUC-controllable. Like plan_safety=0..1 with low default value.\n> This can influence costs of plans where cost changes dramatically with small\n> table changes and/or statistics is uncertain. Also this can be used as\n> direct \"hint\" for such dangerous queries by changing GUC for session/single\n> query.\n\nISTM if you add statistics miss and 'risk margin' to the things the\nplanner would have to consider while generating a plan, you are\ngreatly increasing the number of plan paths that would have to be\nconsidered for any non trivial query.\n\nmerlin\n\nmerlin\n", "msg_date": "Thu, 24 Mar 2011 13:41:45 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": ">> This can se GUC-controllable. Like plan_safety=0..1 with low default value.\n>> This can influence costs of plans where cost changes dramatically with small\n>> table changes and/or statistics is uncertain. 
Also this can be used as\n>> direct \"hint\" for such dangerous queries by changing GUC for session/single\n>> query.\n>\n> ISTM if you add statistics miss and 'risk margin' to the things the\n> planner would have to consider while generating a plan, you are\n> greatly increasing the number of plan paths that would have to be\n> considered for any non trivial query.\n\n\nFWIW, these ideas are well established in the statistical community.\n\nCurrently, we are essentially using \"maximum likelihood estimators\".\nWe estimate a bunch of selectivities by choosing what is most likely,\nplug them in to our objective function, and then minimize based upon\nthe plugged in values. In a wide variety of cases, MLE's can be shown\nto be \"asymptotically\" optimal. That is, as our sample distribution\napproaches the true distribution, the best we can possibly do is to\nuse the MLE. This is pretty sensible - if we actually knew all of the\nselectivities then the results aren't really random anymore. However,\nthey often perform very poorly with small sample sizes - particularly\nif the loss function is very sensitive to relatively small\nfluctuations in the parameter estimates ( which postgres certainly is\n- switching from a hash join to a nest-loop can be disastrous ).\n\nUsing the estimate that minimizes the \"worst-case\" performance is\nprecisely a minimax estimator. There, the goal is to minimize the risk\nfunction ( iow, plan cost ) under the worst possible conditions.\nWikipedia has a pretty good treatment - just think \"plan cost\"\nwhenever you see \"risk\".\n\nAnother approach, that hasn't been suggested yet, is some Bayesian\nupdate method. There, rather than calculating a specific parameter\nvalue ( like ndistinct ), you try to store the entire distribution and\nchoose the plan that minimizes cost averaged over all of the possible\nparameter values.\n\nExample: ( please excuse the unrealistic numbers )\n\nFor instance, rather than estimate the selectivity of the join (\nrelation1.col1 = relation2.col1 ) to be 0.01, we would say it is 0.1\nw/ probability 0.2 and 0.001 with probability 0.8. So, here is how we\nwould choose the plan now:\n\ncost( nestloop | selectivity = 0.01 ) = 1\ncost( hashjoin | selectivity = 0.01 ) = 2\ncost( mergejoin | selectivity = 0.01 ) = 50\n\nHere would be the bayesian approach:\n\ncost( nestloop | selectivity = 0.001 ) = 0.1\ncost( hashjoin | selectivity = 0.001 ) = 1\ncost( mergejoin | selectivity = 0.001 ) = 50\n\ncost( nestloop | selectivity = 0.1 ) = 10\ncost( hashjoin | selectivity = 0.1 ) = 3\ncost( mergejoin | selectivity = 0.1 ) = 50\n\nSo, the bayesian costs are:\n\nnestloop: 0.1*0.8 + 10*0.2 = 2.08\nhashjoin: 1.0*0.8 + 3*0.2 = 1.4\nnestloop: 50*0.8 + 50*0.2 = 50\n\nso the hashjoin would be chosen.\n\nFor completeness, the minimax costs would be:\n\nnestloop: max( 0.1, 10 )\nhashjoin: max( 1, 3 )\nnestloop: max( 50, 50 )\n\nSo, again, the hashjoin is chosen.\n\nI obviously have a bias towards the Bayesian approach, but it's not\nbecause I expect it to necessarily perform a whole lot better but,\nrather, it reduces to the other two approaches. If we want the current\nbehavior, then simply store the MLE selectivity with probability 1. If\nwe want the minimax estimate, choose the worst possible value. Or\nanything in between.\n\nAlso, ( not that I have even close to the experience / expertise to\nmake this claim - so take this with a grain of salt ) it seems that\nthe code changes would be substantial but pretty straightforward and\neasy to localize. 
Rather than passing a selectivity, pass a pair of\narrays with selectivities and probabilities. Initially, we could keep\nthe current estimates ( every passed array would be of length one )\nand then make changes as problems appear ( like Josh's )\n\nI hope my little estimation procedure tutorial has been a little\nhelpful, please feel free to contact me off list if you have\nquestions/want references.\n\nBest,\nNathan Boley\n", "msg_date": "Thu, 24 Mar 2011 13:30:42 -0700", "msg_from": "Nathan Boley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On Thu, Mar 24, 2011 at 5:30 PM, Nathan Boley <[email protected]> wrote:\n> Another approach, that hasn't been suggested yet, is some Bayesian\n> update method. There, rather than calculating a specific parameter\n> value ( like ndistinct ), you try to store the entire distribution and\n> choose the plan that minimizes cost averaged over all of the possible\n> parameter values.\n\nI've done similar stuff for work, you don't have to go all the way to\nstoring complete probability distributions, usually a simple\nlikelihood range is enough.\n\nIn essence, instead of having a scalar MLE for plan cost, you\nimplement a \"ranged\" estimator, that estimates the most-likely range\nof plan costs, with mean and standard deviation from mean.\n\nThis essentially gives a risk value, since risky plans will have very\nlarge standard deviations from the mean.\n\n> Also, ( not that I have even close to the experience / expertise to\n> make this claim - so take this with a grain of salt ) it seems that\n> the code changes would be substantial but pretty straightforward and\n> easy to localize. Rather than passing a selectivity, pass a pair of\n> arrays with selectivities and probabilities.\n\nIf you approximage the probability distributions as I outlined above,\nit's even simpler. Approximate, but simpler - and since you retain the\noriginal cost estimations in the form of mean cost values, you can\neasily tune the GEQO to perform as it currently does (ignore variance)\nor with a safety margin (account for variance).\n\n\nAbout issues like these being uncommon - I disagree.\n\nI routinely have to work around query inefficiencies because GEQO does\nsomething odd - and since postgres gives me too few tools to tweak\nplans (increase statistics, use subqueries, rephrase joins, no direct\ntool before CTEs which are rather new), it becomes an art form, and it\nbecomes very unpredictable and an administrative burden. Out of the\nblue, statistics change, queries that worked fine start to perform\npoorly, and sites go down.\n\nIf GEQO could detect unsafe plans and work around them automatically,\nit would be a major improvement.\n\nGranted, though, this should be approached with care.\n", "msg_date": "Thu, 24 Mar 2011 19:23:15 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "24.03.11 20:41, Merlin Moncure О©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©Ґ(О©ҐО©Ґ):\n> 2011/3/24 О©ҐО©ҐО©ҐО©ҐліО©Ґ О©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©Ґ<[email protected]>:\n>>\n>> This can se GUC-controllable. Like plan_safety=0..1 with low default value.\n>> This can influence costs of plans where cost changes dramatically with small\n>> table changes and/or statistics is uncertain. 
Also this can be used as\n>> direct \"hint\" for such dangerous queries by changing GUC for session/single\n>> query.\n> ISTM if you add statistics miss and 'risk margin' to the things the\n> planner would have to consider while generating a plan, you are\n> greatly increasing the number of plan paths that would have to be\n> considered for any non trivial query.\nWhy so? I simply change cost estimation functions. This won't change \nnumber of pathes.\n\nBest regards, Vitalii Tymchyshyn.\n", "msg_date": "Fri, 25 Mar 2011 11:43:14 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "Vitalii Tymchyshyn <[email protected]> writes:\n> 24.03.11 20:41, Merlin Moncure �������(��):\n>> ISTM if you add statistics miss and 'risk margin' to the things the\n>> planner would have to consider while generating a plan, you are\n>> greatly increasing the number of plan paths that would have to be\n>> considered for any non trivial query.\n\n> Why so? I simply change cost estimation functions. This won't change \n> number of pathes.\n\nIf you have multiple figures of merit, that means you have to keep more\npaths, with consequent slowdown when it comes to choosing which path to\nuse at higher join levels.\n\nAs an example, we used to keep only the paths with best total cost.\nWhen we started to optimize LIMIT, we had to keep around paths with best\nstartup cost too, in case that made for the best combined result at a\nhigher join level. If you're going to consider \"risk\" while choosing\npaths, that means you'll start keeping paths you would have discarded\nbefore, while not necessarily getting rid of any other paths. The only\nway to avoid that would be to have a completely brain-dead notion of\nrisk that wasn't affected by how the path is used at a higher join\nlevel, and I'm pretty sure that that wouldn't solve anybody's problem.\n\nAny significant expansion of the planner's fundamental cost model *will*\nmake it slower. By a lot. Rather than going into this with fantasies\nof \"it won't cost anything\", you should be worrying about how to keep\nthe speed penalty to factor-of-two rather than factor-of-ten.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2011 10:12:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans? " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> If the planner starts operating on the basis of worst case rather than\n>> expected-case performance, the complaints will be far more numerous than\n>> they are today.\n\n> Yeah, I don't think that's the way to go. The other thought I had was\n> to accumulate a \"risk\" stat the same as we accumulate a \"cost\" stat.\n\n> However, I'm thinking that I'm overengineering what seems to be a fairly\n> isolated problem, in that we might simply need to adjust the costing on\n> this kind of a plan.\n\nmergejoinscansel doesn't currently try to fix up the histogram bounds by\nconsulting indexes. 
At the time I was afraid of the costs of doing\nthat, and I still am; but it would be a way to address this issue.\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL9_0_BR [40608e7f9] 2010-01-04 02:44:40 +0000\n\n When estimating the selectivity of an inequality \"column > constant\" or\n \"column < constant\", and the comparison value is in the first or last\n histogram bin or outside the histogram entirely, try to fetch the actual\n column min or max value using an index scan (if there is an index on the\n column). If successful, replace the lower or upper histogram bound with\n that value before carrying on with the estimate. This limits the\n estimation error caused by moving min/max values when the comparison\n value is close to the min or max. Per a complaint from Josh Berkus.\n \n It is tempting to consider using this mechanism for mergejoinscansel as well,\n but that would inject index fetches into main-line join estimation not just\n endpoint cases. I'm refraining from that until we can get a better handle\n on the costs of doing this type of lookup.\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Mar 2011 10:24:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans? " }, { "msg_contents": "25.03.11 16:12, Tom Lane написав(ла):\n> Vitalii Tymchyshyn<[email protected]> writes:\n>\n>> Why so? I simply change cost estimation functions. This won't change\n>> number of pathes.\n> If you have multiple figures of merit, that means you have to keep more\n> paths, with consequent slowdown when it comes to choosing which path to\n> use at higher join levels.\n>\n> As an example, we used to keep only the paths with best total cost.\n> When we started to optimize LIMIT, we had to keep around paths with best\n> startup cost too, in case that made for the best combined result at a\n> higher join level. If you're going to consider \"risk\" while choosing\n> paths, that means you'll start keeping paths you would have discarded\n> before, while not necessarily getting rid of any other paths. The only\n> way to avoid that would be to have a completely brain-dead notion of\n> risk that wasn't affected by how the path is used at a higher join\n> level, and I'm pretty sure that that wouldn't solve anybody's problem.\n>\n> Any significant expansion of the planner's fundamental cost model *will*\n> make it slower. By a lot. Rather than going into this with fantasies\n> of \"it won't cost anything\", you should be worrying about how to keep\n> the speed penalty to factor-of-two rather than factor-of-ten.\nBut I am not talking about model change, it's more like formula change. \nIntroducing limit added one variable where outer plan could influence \ninner plan selection.\nBut I am talking simply about cost calculation for given node. Now cost \nis based on statistical expected value, the proposal is (something like) \nto take maximum cost on n% probability range near expected value.\nThis, of course, will make calculations slower, but won't add any degree \nof freedom to calculations.\n\nBest regards, Vitalii Tymchyshyn\n\n\n", "msg_date": "Fri, 25 Mar 2011 16:41:13 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" 
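Tom's point about histogram endpoints is easy to observe from SQL: the planner's idea of a column's upper bound lives in pg_stats, and on a fast-growing table it falls behind the true maximum between analyzes. A sketch, reusing the column from Josh's plan upthread; the gap between the two results is exactly what the endpoint fix-up (or more frequent ANALYZE) is meant to close.

-- The last entry of histogram_bounds is the planner's notion of the column maximum.
SELECT tablename, attname, histogram_bounds
FROM pg_stats
WHERE tablename = 'device_nodes' AND attname = 'node_id';

-- The actual maximum, which may already be well past the stored bound.
SELECT max(node_id) FROM device_nodes;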
}, { "msg_contents": "\n\nOn 3/23/11 2:08 PM, \"Claudio Freire\" <[email protected]> wrote:\n\n>On Wed, Mar 23, 2011 at 6:00 PM, Tom Lane <[email protected]> wrote:\n>> Claudio Freire <[email protected]> writes:\n>>> In my head, safer = better worst-case performance.\n>>\n>> If the planner starts operating on the basis of worst case rather than\n>> expected-case performance, the complaints will be far more numerous than\n>> they are today.\n>\n>I imagine, that's why, if you put my comment in context, I was talking\n>about picking a safer plan only when the \"better on average one\" fails\n>miserably.\n\nPostgres' assumption about what is 'better on average' is wrong in the\npresence of nonlinear relationships between various statistics and\nexecution time anyway.\n\nAVG(f(x)) != f(AVG(x))\n\nIn english, the fastest plan for the average (most likely) case is not\nalways the fastest plan on average. It works very well for many cases,\nbut falls down in others.\n\nMany of the 'why is this query slow' and 'I wish there were hints'\nproblems I see here that are not user error seem related to this. The\napproaches discussed by Nathan Boley and Claudio Freire in this thread\ncould significantly mitigate many of the issues I have seen when wrestling\nwith the planner.\n\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 25 Mar 2011 10:55:08 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "> mergejoinscansel doesn't currently try to fix up the histogram bounds by\n> consulting indexes.  At the time I was afraid of the costs of doing\n> that, and I still am; but it would be a way to address this issue.\n>\n\nAnother cheaper but less accurate way to deal with this is to note\nthat we are trying to estimate the max of the population by using the\nmax of the sample, which obviously has a negative bias. If we could\ncorrect the bias ( though the bootstrap, or an analytical correction\nunder some parametric assumptions ( ie, the distribution is uniform in\nthe last bucket ) ) , then we should get better estimates at the cost\nof some analyze time. But this wouldn't even deal with Josh's\nparticular problem, since it's due to out of date stats rather than\nsampling error...\n\n-Nathan\n", "msg_date": "Fri, 25 Mar 2011 15:43:04 -0700", "msg_from": "Nathan Boley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "\n> mergejoinscansel doesn't currently try to fix up the histogram bounds\n> by\n> consulting indexes. At the time I was afraid of the costs of doing\n> that, and I still am; but it would be a way to address this issue.\n\nOh? Hmmm. I have a ready-made test case for the benefit case on this. However, I'm not clear on how we would test the costs.\n\nBut this type of query plan is clearly pathological, and is experienced by users as a performance regression since 8.3. I now have the user doing analyzes of fairly large tables 2/hour to avoid the problem. 
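(As an aside, the drift being described here is easy to see directly. A rough sketch -- table and column names are hypothetical -- compares the upper histogram bound recorded by the last ANALYZE with what the index actually contains:

-- what ANALYZE last recorded for the column
SELECT tablename, attname, histogram_bounds
  FROM pg_stats
 WHERE tablename = 'orders' AND attname = 'created_at';

-- the actual current maximum, fetched with a single btree descent
SELECT created_at FROM orders ORDER BY created_at DESC LIMIT 1;

When the second value has moved well past the last histogram bound, estimates near the end of the range go wrong until the next ANALYZE, which is exactly what the frequent manual ANALYZEs mentioned above are papering over.)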
So I don't think we can leave it alone.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\nSan Francisco\n", "msg_date": "Fri, 25 Mar 2011 20:03:17 -0500 (CDT)", "msg_from": "Joshua Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On Thu, Mar 24, 2011 at 4:30 PM, Nathan Boley <[email protected]> wrote:\n> Another approach, that hasn't been suggested yet, is some Bayesian\n> update method. There, rather than calculating a specific parameter\n> value ( like ndistinct ), you try to store the entire distribution and\n> choose the plan that minimizes cost averaged over all of the possible\n> parameter values.\n>\n> Example: ( please excuse the unrealistic numbers )\n>\n> For instance, rather than estimate the selectivity of the join (\n> relation1.col1 = relation2.col1 ) to be 0.01, we would say it is 0.1\n> w/ probability 0.2 and 0.001 with probability 0.8. So, here is how we\n> would choose the plan now:\n>\n> cost( nestloop | selectivity = 0.01 ) = 1\n> cost( hashjoin | selectivity = 0.01 ) = 2\n> cost( mergejoin | selectivity = 0.01 ) = 50\n>\n> Here would be the bayesian approach:\n>\n> cost( nestloop | selectivity = 0.001 ) = 0.1\n> cost( hashjoin | selectivity = 0.001 ) = 1\n> cost( mergejoin | selectivity = 0.001 ) = 50\n>\n> cost( nestloop | selectivity = 0.1 ) = 10\n> cost( hashjoin | selectivity = 0.1 ) = 3\n> cost( mergejoin | selectivity = 0.1 ) = 50\n>\n> So, the bayesian costs are:\n>\n> nestloop: 0.1*0.8 + 10*0.2 = 2.08\n> hashjoin: 1.0*0.8 + 3*0.2 = 1.4\n> nestloop: 50*0.8 + 50*0.2 = 50\n>\n> so the hashjoin would be chosen.\n>\n> For completeness, the minimax costs would be:\n>\n> nestloop: max( 0.1, 10 )\n> hashjoin: max( 1, 3   )\n> nestloop: max( 50, 50 )\n>\n> So, again, the hashjoin is chosen.\n>\n> I obviously have a bias towards the Bayesian approach, but it's not\n> because I expect it to necessarily perform a whole lot better but,\n> rather, it reduces to the other two approaches. If we want the current\n> behavior, then simply store the MLE selectivity with probability 1. If\n> we want the minimax estimate, choose the worst possible value. Or\n> anything in between.\n\nThis is a very elegant suggestion to this problem, and I really like\nit. It elegantly models the concept we're trying to capture here,\nwhich is that we're sometimes just guessing how things really are, and\nif it turns out that we're way off, we may be stuck in a\npathologically bad plan.\n\nOne limitation of this method is that it is difficult to apply more\nthan locally. Consider:\n\nSELECT * FROM foo, bar WHERE foo.id = bar.id AND some_crazy_function(foo.id)\n\nThe best method of joining foo and bar is likely going to depend on\nthe percentage of rows in foo for which some_crazy_function(foo.id)\nreturns true, and we have no idea what that is. We could represent\nthat by kicking out a range of probabilistic *cost* estimates for each\npath over foo, but that adds a lot of code complexity. 
And compute\ntime - because (I think) now we've made it so that more paths have to\npercolate all the way up through the join tree.\n\nIt's perhaps also worth looking at our old nemesis:\n\nSELECT * FROM foo WHERE a = 1 ORDER BY b LIMIT 1\n\nWhat I really want to do here is have the planner be able to reflect\nthe fact that an index scan over b may be very expensive or very cheap\ndepending on how lucky we get applying the a = 1 predicate, but I'm\nnot quite sure how to make that work.\n\nIt seems like the time when this would help the most without costing\ntoo much or requiring excessively invasive surgery is the case where\nthe join selectivity itself is uncertain. We can estimate that fairly\naccurately as far as MCVs go, but after that it does get very murky.\nStill, my gut feeling is that many (not all) of the worst problems\nactually bubble up from under the join, rather than happening at that\nlevel.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 19 Apr 2011 10:22:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On Fri, Mar 25, 2011 at 10:24 AM, Tom Lane <[email protected]> wrote:\n> Josh Berkus <[email protected]> writes:\n>>> If the planner starts operating on the basis of worst case rather than\n>>> expected-case performance, the complaints will be far more numerous than\n>>> they are today.\n>\n>> Yeah, I don't think that's the way to go.  The other thought I had was\n>> to accumulate a \"risk\" stat the same as we accumulate a \"cost\" stat.\n>\n>> However, I'm thinking that I'm overengineering what seems to be a fairly\n>> isolated problem, in that we might simply need to adjust the costing on\n>> this kind of a plan.\n>\n> mergejoinscansel doesn't currently try to fix up the histogram bounds by\n> consulting indexes.  At the time I was afraid of the costs of doing\n> that, and I still am; but it would be a way to address this issue.\n\nApparently, this is a pain point for the MySQL query planner - not so\nmuch for merge joins, which I don't think are supported in any of the\nmajor forks anyway - but the planner's desire to go estimate things by\nprobing the indexes. IIRC, the MariaDB guys are looking into adding\npersistent statistics to address this problem. That doesn't\nnecessarily mean that we shouldn't do this, but it probably does mean\nthat we should be awfully careful about it.\n\nAnother thought is that we might want to consider reducing\nautovacuum_analyze_scale_factor. The root of the original problem\nseems to be that the table had some data churn but not enough to cause\nan ANALYZE. Now, if the data churn is random, auto-analyzing after\n10% churn might be reasonable, but a lot of data churn is non-random,\nand ANALYZE is fairly cheap. I'm just shooting in the dark here; I\nmight be all wet. I think part of the problem is that the AV launcher\nisn't very smart about looking at the overall picture. 
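For reference, reducing autovacuum_analyze_scale_factor does not have to be done globally; the per-table form is just a pair of reloptions. The table name and numbers below are only illustrative:

ALTER TABLE some_big_table
  SET (autovacuum_analyze_scale_factor = 0.01, -- analyze after ~1% churn instead of the default 10%
       autovacuum_analyze_threshold = 5000);   -- fixed part of the trigger (threshold + scale_factor * reltuples)

-- and to return to the global defaults later:
ALTER TABLE some_big_table
  RESET (autovacuum_analyze_scale_factor, autovacuum_analyze_threshold);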
It'd be nice,\nfor example, to be able to be more aggressive when the system is quiet\nand to be a bit more careful when the system is saturated, but it's a\nbit tricky to think about how to make that work, or exactly what the\nheuristics should be.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 19 Apr 2011 10:29:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On 4/19/11 7:29 AM, Robert Haas wrote:\n> Another thought is that we might want to consider reducing\n> autovacuum_analyze_scale_factor. The root of the original problem\n> seems to be that the table had some data churn but not enough to cause\n> an ANALYZE. Now, if the data churn is random, auto-analyzing after\n> 10% churn might be reasonable, but a lot of data churn is non-random,\n> and ANALYZE is fairly cheap.\n\nI wouldn't reduce the defaults for PostgreSQL; this is something you do\non specific tables.\n\nFor example, on very large tables I've been known to set\nanalyze_scale_factor to 0 and analyze_threshold to 5000.\n\nAnd don't assume that analyzing is always cheap. If you have an 800GB\ntable, most of which is very cold data, and have statistics set to 5000\nfor some columns, accessing many of the older blocks could take a while.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Tue, 19 Apr 2011 18:50:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On Apr 19, 2011, at 9:50 PM, Josh Berkus <[email protected]> wrote:\n> For example, on very large tables I've been known to set\n> analyze_scale_factor to 0 and analyze_threshold to 5000.\n\nHuh? Why?\n> \n\n...Robert\n", "msg_date": "Sat, 23 Apr 2011 12:11:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" }, { "msg_contents": "On Mar 24, 2011, at 5:23 PM, Claudio Freire wrote:\n> I routinely have to work around query inefficiencies because GEQO does\n> something odd - and since postgres gives me too few tools to tweak\n> plans (increase statistics, use subqueries, rephrase joins, no direct\n> tool before CTEs which are rather new), it becomes an art form, and it\n> becomes very unpredictable and an administrative burden. Out of the\n> blue, statistics change, queries that worked fine start to perform\n> poorly, and sites go down.\n> \n> If GEQO could detect unsafe plans and work around them automatically,\n> it would be a major improvement.\n\nThis isn't limited to GEQO queries either. Every few months we'll have what should be a very fast query suddenly become far slower. Still on the order of seconds, but when you're running several of those a second and they normally take fractions of a second, this kind of performance degradation can easily bring a server to it's knees. Every time this has happened the solution has been to re-analyze a fairly large table; even with default stats target of 1000 it's very easy for one bad analyze to ruin your day. \n--\nJim C. 
Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Wed, 4 May 2011 10:40:25 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't we have a way to avoid \"risky\" plans?" } ]
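A footnote to the thread above: the expected-cost versus worst-case arithmetic in Nathan Boley's toy example can be re-derived with a few lines of SQL. The figures are only the illustrative values from his message (with the third plan read as the merge join), not real planner costs:

SELECT plan,
       sum(prob * cost) AS expected_cost,  -- the Bayesian / weighted-average criterion
       max(cost)        AS worst_case_cost -- the minimax criterion
  FROM (VALUES ('nestloop',  0.8,  0.1), ('nestloop',  0.2, 10.0),
               ('hashjoin',  0.8,  1.0), ('hashjoin',  0.2,  3.0),
               ('mergejoin', 0.8, 50.0), ('mergejoin', 0.2, 50.0)
       ) AS t(plan, prob, cost)
 GROUP BY plan
 ORDER BY expected_cost;

-- expected_cost comes out 1.4 (hashjoin), 2.08 (nestloop), 50 (mergejoin),
-- so the hash join is preferred under either criterion, as in the example.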
[ { "msg_contents": "Given two tables:\n\nCREATE TABLE product_price_history\n(\n hid bigint NOT NULL,\n hdate timestamp without time zone NOT NULL,\n id bigint NOT NULL,\n product_id bigint NOT NULL,\n.... more columns here\n CONSTRAINT pk_product_price_history PRIMARY KEY (hid);\n\nCREATE INDEX idx_product_price_history_id_hdate\n ON product_price_history\n USING btree\n (id, hdate);\n\n\nCREATE TABLE product_price_offer_history\n(\n hid bigint NOT NULL,\n product_price_id bigint NOT NULL,\n isfeatured smallint NOT NULL,\n price double precision NOT NULL,\n shipping double precision NOT NULL,\n .... some more coumns here\n CONSTRAINT pk_product_price_offer_history PRIMARY KEY (hid, offerno)\n);\n\nStats:\n\nproduct_price_history - tablesize=23GB, indexes size=4GB, row count = 87 \nmillion\nproduct_price_offer_history - tablesize=24GB, indexes size=7GB, row \ncount = 235 million\n\n\nThese tables store historical data of some million products from the \nlast year.\nThe following commands are executed on them daily:\n\nCLUSTER idx_product_price_history_id_hdate on product_price_history;\nCLUSTER pk_product_price_offer_history on product_price_offer_history;\n\nHere is a query:\n\nselect\n date_part('epoch', min(pph.hdate) ) as hdate_ticks,\n min(ppoh.price+ppoh.shipping) as price_plus_shipping\nfrom\n product_price_history pph\n inner join product_price_offer_history ppoh on ppoh.hid = pph.hid\nwhere pph.id = 37632081\n and ppoh.isfeatured=1\ngroup by ppoh.merchantid,pph.hid,pph.hdate\norder by pph.hid asc\n\n\nI think that the query plan is correct:\n\n\n\"GroupAggregate (cost=5553554.25..5644888.17 rows=2283348 width=50)\"\n\" -> Sort (cost=5553554.25..5559262.62 rows=2283348 width=50)\"\n\" Sort Key: pph.hid, ppoh.merchantid, pph.hdate\"\n\" -> Nested Loop (cost=0.00..5312401.66 rows=2283348 width=50)\"\n\" -> Index Scan using idx_product_price_history_id_hdate \non product_price_history pph (cost=0.00..8279.80 rows=4588 width=16)\"\n\" Index Cond: (id = 37632081)\"\n\" -> Index Scan using pk_product_price_offer_history on \nproduct_price_offer_history ppoh (cost=0.00..1149.86 rows=498 width=42)\"\n\" Index Cond: (ppoh.hid = pph.hid)\"\n\" Filter: (ppoh.isfeatured = 1)\"\n\nSo it uses two index scans on the indexes we CLUSTER the tables on. \nNumber of rows returned is usually between 100 and 20 000.\n\n\nHere is the problem. When I first open this query for a given \nidentifier, it runs for 100 seconds. When I try to run it again for the \nsame identifier it returns the same rows within one second!\n\nThe indexes are very well conditioned: from the 235 million rows, any id \ngiven occurs at most 20 000 times. It is a btree index, so it should \nalready be stored sorted, and the 20 000 rows to be returned should fit \ninto a few database pages. Even if they are not in the cache, PostgreSQL \nshould be able to read the required pages within a second.\n\nI understand that for an index scan, PostgreSQL also needs to read the \nrows from the table. But since these tables are CLUSTER-ed on those \nspecific indexes, all the data needed shoud fit on a few database pages \nand PostgreSQL should be able to read them within a second.\n\nThen why it is taking 100 seconds to do the query for the first time and \nwhy it is just one sec for the second time? Probably my thinking is \nwrong, but I suppose it means that the data is spread on thousands of \npages on the disk.\n\nHow is that possible? 
What am I doing wrong?\n\nThanks,\n\n Laszlo\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n", "msg_date": "Wed, 23 Mar 2011 21:29:16 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query on CLUTER -ed tables" }, { "msg_contents": "2011/3/23 Laszlo Nagy <[email protected]>:\n> \"GroupAggregate  (cost=5553554.25..5644888.17 rows=2283348 width=50)\"\n> \"  ->  Sort  (cost=5553554.25..5559262.62 rows=2283348 width=50)\"\n> \"        Sort Key: pph.hid, ppoh.merchantid, pph.hdate\"\n> \"        ->  Nested Loop  (cost=0.00..5312401.66 rows=2283348 width=50)\"\n> \"              ->  Index Scan using idx_product_price_history_id_hdate on\n> product_price_history pph  (cost=0.00..8279.80 rows=4588 width=16)\"\n> \"                    Index Cond: (id = 37632081)\"\n> \"              ->  Index Scan using pk_product_price_offer_history on\n> product_price_offer_history ppoh  (cost=0.00..1149.86 rows=498 width=42)\"\n> \"                    Index Cond: (ppoh.hid = pph.hid)\"\n> \"                    Filter: (ppoh.isfeatured = 1)\"\n\nI suspect that, since the matched hid's probably aren't sequential,\nmany of those ~500 product_price_offer_history rows will be far apart\non disk.\n\nPlease show the EXPLAIN ANALYZE output in the slow case, not just\nEXPLAIN. Also, PostgreSQL version? What configuration options have you\nchanged? (http://wiki.postgresql.org/wiki/SlowQueryQuestions)\n\nRegards,\nMarti\n", "msg_date": "Wed, 23 Mar 2011 23:56:16 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query on CLUTER -ed tables" }, { "msg_contents": "\n> I suspect that, since the matched hid's probably aren't sequential,\n> many of those ~500 product_price_offer_history rows will be far apart\n> on disk.\nOMG I was a fool! I'll CLUSTER on a different index and it will be fast, \nI'm sure.\n\nThanks!\n\n L\n\n", "msg_date": "Fri, 25 Mar 2011 12:56:14 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query on CLUTER -ed tables" } ]
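A small diagnostic footnote to the thread above: pg_stats exposes a per-column correlation figure (1.0 or -1.0 means the column's values follow the physical row order, values near 0 mean they are scattered), which is a quick way to see what each CLUSTER actually bought. The table names are the ones from the thread; the interpretation is only a sketch:

SELECT tablename, attname, correlation
  FROM pg_stats
 WHERE tablename IN ('product_price_history', 'product_price_offer_history')
   AND attname IN ('id', 'hid')
 ORDER BY tablename, attname;

-- Even with product_price_offer_history clustered on (hid, offerno), the ~500 rows
-- fetched per hid are only near each other on disk if the hid values coming from
-- the outer index scan are themselves close together, which is the effect Marti
-- describes; reclustering so that the rows a single query touches are adjacent
-- is what fixes the slow first run.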
[ { "msg_contents": "Hi All,\n\npg9.0.3 explain analyze running very slow compared to old box with much less\nconfiguration.\n\nBut actual query is performing much better than the old server.\n\n============old Server===============\nOS: CentOS release 5.4 (Final)\nLinux Server 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST 2009 x86_64\nx86_64 x86_64 GNU/Linux\n\nRAM - 16GB\nCPU - 8 Core\ndisk - 300GB\nRAID10 on the disk\n\nPostgresql 9.0.3\n\nPostgres Config:\nshared_buffers = 6GB\nwork_mem = 32MB\nmaintenance_work_mem = 512MB\neffective_cache_size = 12GB\n\n#explain analyze select * from photo;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on photo (cost=0.00..8326849.24 rows=395405824 width=168) (actual\ntime=5.632..157757.284 rows=395785382 loops=1)\n Total runtime: 187443.850 ms\n(2 rows)\n\n============newServer===============\n\nCentOS release 5.4 (Final)\nLinux Server 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST 2009 x86_64\nx86_64 x86_64 GNU/Linux\n\nRAM - 64GB\nCPU - 12 Core\ndisk - 1TB\nRAID10 on the disk\n\nPostgresql 9.0.3\nPostgres Config:\nshared_buffers = 16GB\nwork_mem = 32MB\nmaintenance_work_mem = 1024MB\neffective_cache_size = 12GB\n\n\n# explain analyze select * from photo;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on photo (cost=0.00..8326810.24 rows=395579424 width=165) (actual\ntime=0.051..316879.928 rows=395648020 loops=1)\n Total runtime: 605703.206 ms\n(2 rows)\n\n\nI read other articles about the same issue but could not find the exact\nsolution.\n\n\nI ran gettimeofday() on both machines and got the below results:\n\nResults:\n\n*[Old Server]# time /tmp/gtod*\n\nreal 0m0.915s\n\nuser 0m0.914s\n\nsys 0m0.001s\n\n*[New Server]# time /tmp/gtod*\n\nreal 0m7.542s\n\nuser 0m7.540s\n\nsys 0m0.001s\n\n\nI am not sure how to fix this issue, any help would be in great assistance.\n\n\nThanks\n\nDeepak\n
", "msg_date": "Wed, 23 Mar 2011 19:04:21 -0700", "msg_from": "DM <[email protected]>", "msg_from_op": true, "msg_subject": "pg9.0.3 explain analyze running very slow compared to a different box\n\twith much less configuration" }, { "msg_contents": "You might take a look here:\nhttp://archives.postgresql.org/pgsql-admin/2011-01/msg00050.php\nMy problem had to do with the speed of gettimeofday. 
You might want to do some special setting regarding\nyour box's way of reading time for the hw clock.\n\nΣτις Thursday 24 March 2011 04:04:21 ο/η DM έγραψε:\n> Hi All,\n> \n> pg9.0.3 explain analyze running very slow compared to old box with much less\n> configuration.\n> \n> But actual query is performing much better than the old server.\n> \n> ============old Server===============\n> OS: CentOS release 5.4 (Final)\n> Linux Server 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST 2009 x86_64\n> x86_64 x86_64 GNU/Linux\n> \n> RAM - 16GB\n> CPU - 8 Core\n> disk - 300GB\n> RAID10 on the disk\n> \n> Postgresql 9.0.3\n> \n> Postgres Config:\n> shared_buffers = 6GB\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> effective_cache_size = 12GB\n> \n> #explain analyze select * from photo;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on photo (cost=0.00..8326849.24 rows=395405824 width=168) (actual\n> time=5.632..157757.284 rows=395785382 loops=1)\n> Total runtime: 187443.850 ms\n> (2 rows)\n> \n> ============newServer===============\n> \n> CentOS release 5.4 (Final)\n> Linux Server 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST 2009 x86_64\n> x86_64 x86_64 GNU/Linux\n> \n> RAM - 64GB\n> CPU - 12 Core\n> disk - 1TB\n> RAID10 on the disk\n> \n> Postgresql 9.0.3\n> Postgres Config:\n> shared_buffers = 16GB\n> work_mem = 32MB\n> maintenance_work_mem = 1024MB\n> effective_cache_size = 12GB\n> \n> \n> # explain analyze select * from photo;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on photo (cost=0.00..8326810.24 rows=395579424 width=165) (actual\n> time=0.051..316879.928 rows=395648020 loops=1)\n> Total runtime: 605703.206 ms\n> (2 rows)\n> \n> \n> I read other articles about the same issue but could not find the exact\n> solution.\n> \n> \n> I ran gettimeofday() on both machines and got the below results:\n> \n> Results:\n> \n> *[Old Server]# time /tmp/gtod*\n> \n> real 0m0.915s\n> \n> user 0m0.914s\n> \n> sys 0m0.001s\n> \n> *[New Server]# time /tmp/gtod*\n> \n> real 0m7.542s\n> \n> user 0m7.540s\n> \n> sys 0m0.001s\n> \n> \n> I am not sure how to fix this issue, any help would be in great assistance.\n> \n> \n> Thanks\n> \n> Deepak\n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Thu, 24 Mar 2011 11:11:03 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg9.0.3 explain analyze running very slow compared to a different\n\tbox with much less configuration" }, { "msg_contents": "On Thu, Mar 24, 2011 at 11:11, Achilleas Mantzios\n<[email protected]> wrote:\n> My problem had to do with the speed of gettimeofday. You might want to do some special setting regarding\n> your box's way of reading time for the hw clock.\n\nJust for extra info, on x86, TSC is usually the \"fast\" timeofday\nimplementation. On recent CPUs in single-socket configurations, TSC\nshould always be available, regardless of any power management. I\ndon't know about multi-socket. If you want to know whether your kernel\nis using tsc, run:\n\ncat /sys/devices/system/clocksource/clocksource0/current_clocksource\n\nOn older CPUs, you often had to disable some sort of power management\nin order to get a stable TSC -- the \"ondemand\" scaling governor is the\ntop suspect. Disabling this is distro-specific. You have to reboot to\nget the kernel to re-test TSC. 
Unfortunately disabling power\nmanagement later at boot doesn't help you, you have to prevent it from\nactivating at all.\n\nFor debugging, grepping dmesg for tsc or clocksource is often helpful.\nOn machines with unstable TSC you'll see output like this:\n\n[ 0.000000] Fast TSC calibration using PIT\n[ 0.164068] checking TSC synchronization [CPU#0 -> CPU#1]: passed.\n[ 0.196730] Switching to clocksource tsc\n[ 0.261347] Marking TSC unstable due to TSC halts in idle\n[ 0.261536] Switching to clocksource acpi_pm\n\nIf you just want to get repeatable timings, you can force both\nmachines to use the hpet clocksource:\necho hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource\n\nMarti\n", "msg_date": "Thu, 24 Mar 2011 13:39:19 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg9.0.3 explain analyze running very slow compared to a\n\tdifferent box with much less configuration" }, { "msg_contents": "Στις Thursday 24 March 2011 13:39:19 ο/η Marti Raudsepp έγραψε:\n> On Thu, Mar 24, 2011 at 11:11, Achilleas Mantzios\n> <[email protected]> wrote:\n> > My problem had to do with the speed of gettimeofday. You might want to do some special setting regarding\n> > your box's way of reading time for the hw clock.\n> \n> Just for extra info, on x86, TSC is usually the \"fast\" timeofday\n> implementation. On recent CPUs in single-socket configurations, TSC\n> should always be available, regardless of any power management. I\n> don't know about multi-socket. If you want to know whether your kernel\n> is using tsc, run:\n> \n\nThat's what i am experiencing as well, in two of my FreeBSD boxes (work/home) i get:\n\nphenom ii X4 :\n==========\n% sysctl -a | grep -i timecounter\nkern.timecounter.tick: 1\nkern.timecounter.choice: TSC(-100) HPET(900) ACPI-fast(1000) i8254(0) dummy(-1000000)\nkern.timecounter.hardware: TSC\nkern.timecounter.stepwarnings: 0\nkern.timecounter.tc.i8254.mask: 65535\nkern.timecounter.tc.i8254.counter: 1960\nkern.timecounter.tc.i8254.frequency: 1193182\nkern.timecounter.tc.i8254.quality: 0\nkern.timecounter.tc.ACPI-fast.mask: 4294967295\nkern.timecounter.tc.ACPI-fast.counter: 3642319843\nkern.timecounter.tc.ACPI-fast.frequency: 3579545\nkern.timecounter.tc.ACPI-fast.quality: 1000\nkern.timecounter.tc.HPET.mask: 4294967295\nkern.timecounter.tc.HPET.counter: 1160619197\nkern.timecounter.tc.HPET.frequency: 14318180\nkern.timecounter.tc.HPET.quality: 900\nkern.timecounter.tc.TSC.mask: 4294967295\nkern.timecounter.tc.TSC.counter: 2788277817\nkern.timecounter.tc.TSC.frequency: 3400155810\nkern.timecounter.tc.TSC.quality: -100\nkern.timecounter.smp_tsc: 0\nkern.timecounter.invariant_tsc: 1\n\nPentium 4\n======\n% sysctl -a | grep -i timecounter\nkern.timecounter.tick: 1\nkern.timecounter.choice: TSC(800) ACPI-fast(1000) i8254(0) dummy(-1000000)\nkern.timecounter.hardware: ACPI-fast\nkern.timecounter.stepwarnings: 0\nkern.timecounter.tc.i8254.mask: 65535\nkern.timecounter.tc.i8254.counter: 13682\nkern.timecounter.tc.i8254.frequency: 1193182\nkern.timecounter.tc.i8254.quality: 0\nkern.timecounter.tc.ACPI-fast.mask: 16777215\nkern.timecounter.tc.ACPI-fast.counter: 6708142\nkern.timecounter.tc.ACPI-fast.frequency: 3579545\nkern.timecounter.tc.ACPI-fast.quality: 1000\nkern.timecounter.tc.TSC.mask: 4294967295\nkern.timecounter.tc.TSC.counter: 3109326068\nkern.timecounter.tc.TSC.frequency: 2663194296\nkern.timecounter.tc.TSC.quality: 800\nkern.timecounter.smp_tsc: 0\nkern.timecounter.invariant_tsc: 0\n\nTSC, it seems, outperform 
the rest of clocks in terms of frequency.\n\n> cat /sys/devices/system/clocksource/clocksource0/current_clocksource\n> \n> On older CPUs, you often had to disable some sort of power management\n> in order to get a stable TSC -- the \"ondemand\" scaling governor is the\n> top suspect. Disabling this is distro-specific. You have to reboot to\n> get the kernel to re-test TSC. Unfortunately disabling power\n> management later at boot doesn't help you, you have to prevent it from\n> activating at all.\n> \n> For debugging, grepping dmesg for tsc or clocksource is often helpful.\n> On machines with unstable TSC you'll see output like this:\n> \n> [ 0.000000] Fast TSC calibration using PIT\n> [ 0.164068] checking TSC synchronization [CPU#0 -> CPU#1]: passed.\n> [ 0.196730] Switching to clocksource tsc\n> [ 0.261347] Marking TSC unstable due to TSC halts in idle\n> [ 0.261536] Switching to clocksource acpi_pm\n> \n> If you just want to get repeatable timings, you can force both\n> machines to use the hpet clocksource:\n> echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource\n> \n> Marti\n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Thu, 24 Mar 2011 14:07:29 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg9.0.3 explain analyze running very slow compared to a different\n\tbox with much less configuration" }, { "msg_contents": "Thank you for your research on and posting on it, when I first encountered\nthis issue I saw your posting/research on this issue, this gave me a great\ninsight.\n\ngettimeofday() on my new box is slow, after further research we found that,\nwhen we set ACPI=Off, we got a good clock performance even the explain\nanalyze gave approximately gave the right values, but the hyperthreading is\noff.\n\ncould you guide me how to set, the parameter current_clocksource to TSC,\n\n\nThanks\nDeepak\n\nOn Thu, Mar 24, 2011 at 5:07 AM, Achilleas Mantzios <\[email protected]> wrote:\n\n> Στις Thursday 24 March 2011 13:39:19 ο/η Marti Raudsepp έγραψε:\n> > On Thu, Mar 24, 2011 at 11:11, Achilleas Mantzios\n> > <[email protected]> wrote:\n> > > My problem had to do with the speed of gettimeofday. You might want to\n> do some special setting regarding\n> > > your box's way of reading time for the hw clock.\n> >\n> > Just for extra info, on x86, TSC is usually the \"fast\" timeofday\n> > implementation. On recent CPUs in single-socket configurations, TSC\n> > should always be available, regardless of any power management. I\n> > don't know about multi-socket. 
If you want to know whether your kernel\n> > is using tsc, run:\n> >\n>\n> That's what i am experiencing as well, in two of my FreeBSD boxes\n> (work/home) i get:\n>\n> phenom ii X4 :\n> ==========\n> % sysctl -a | grep -i timecounter\n> kern.timecounter.tick: 1\n> kern.timecounter.choice: TSC(-100) HPET(900) ACPI-fast(1000) i8254(0)\n> dummy(-1000000)\n> kern.timecounter.hardware: TSC\n> kern.timecounter.stepwarnings: 0\n> kern.timecounter.tc.i8254.mask: 65535\n> kern.timecounter.tc.i8254.counter: 1960\n> kern.timecounter.tc.i8254.frequency: 1193182\n> kern.timecounter.tc.i8254.quality: 0\n> kern.timecounter.tc.ACPI-fast.mask: 4294967295\n> kern.timecounter.tc.ACPI-fast.counter: 3642319843\n> kern.timecounter.tc.ACPI-fast.frequency: 3579545\n> kern.timecounter.tc.ACPI-fast.quality: 1000\n> kern.timecounter.tc.HPET.mask: 4294967295\n> kern.timecounter.tc.HPET.counter: 1160619197\n> kern.timecounter.tc.HPET.frequency: 14318180\n> kern.timecounter.tc.HPET.quality: 900\n> kern.timecounter.tc.TSC.mask: 4294967295\n> kern.timecounter.tc.TSC.counter: 2788277817\n> kern.timecounter.tc.TSC.frequency: 3400155810\n> kern.timecounter.tc.TSC.quality: -100\n> kern.timecounter.smp_tsc: 0\n> kern.timecounter.invariant_tsc: 1\n>\n> Pentium 4\n> ======\n> % sysctl -a | grep -i timecounter\n> kern.timecounter.tick: 1\n> kern.timecounter.choice: TSC(800) ACPI-fast(1000) i8254(0) dummy(-1000000)\n> kern.timecounter.hardware: ACPI-fast\n> kern.timecounter.stepwarnings: 0\n> kern.timecounter.tc.i8254.mask: 65535\n> kern.timecounter.tc.i8254.counter: 13682\n> kern.timecounter.tc.i8254.frequency: 1193182\n> kern.timecounter.tc.i8254.quality: 0\n> kern.timecounter.tc.ACPI-fast.mask: 16777215\n> kern.timecounter.tc.ACPI-fast.counter: 6708142\n> kern.timecounter.tc.ACPI-fast.frequency: 3579545\n> kern.timecounter.tc.ACPI-fast.quality: 1000\n> kern.timecounter.tc.TSC.mask: 4294967295\n> kern.timecounter.tc.TSC.counter: 3109326068\n> kern.timecounter.tc.TSC.frequency: 2663194296\n> kern.timecounter.tc.TSC.quality: 800\n> kern.timecounter.smp_tsc: 0\n> kern.timecounter.invariant_tsc: 0\n>\n> TSC, it seems, outperform the rest of clocks in terms of frequency.\n>\n> > cat /sys/devices/system/clocksource/clocksource0/current_clocksource\n> >\n> > On older CPUs, you often had to disable some sort of power management\n> > in order to get a stable TSC -- the \"ondemand\" scaling governor is the\n> > top suspect. Disabling this is distro-specific. You have to reboot to\n> > get the kernel to re-test TSC. 
Unfortunately disabling power\n> > management later at boot doesn't help you, you have to prevent it from\n> > activating at all.\n> >\n> > For debugging, grepping dmesg for tsc or clocksource is often helpful.\n> > On machines with unstable TSC you'll see output like this:\n> >\n> > [ 0.000000] Fast TSC calibration using PIT\n> > [ 0.164068] checking TSC synchronization [CPU#0 -> CPU#1]: passed.\n> > [ 0.196730] Switching to clocksource tsc\n> > [ 0.261347] Marking TSC unstable due to TSC halts in idle\n> > [ 0.261536] Switching to clocksource acpi_pm\n> >\n> > If you just want to get repeatable timings, you can force both\n> > machines to use the hpet clocksource:\n> > echo hpet >\n> /sys/devices/system/clocksource/clocksource0/current_clocksource\n> >\n> > Marti\n> >\n>\n>\n>\n> --\n> Achilleas Mantzios\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThank you for your research on and posting on it, when I first encountered this issue I saw your posting/research on this issue, this gave me a great insight. gettimeofday() on my new box is slow, after further research we found that, when we set ACPI=Off, we got a good clock performance even the explain analyze gave approximately gave the right values, but the hyperthreading is off.\ncould you guide me how to set, the parameter current_clocksource to TSC, \n\nThanksDeepakOn Thu, Mar 24, 2011 at 5:07 AM, Achilleas Mantzios <[email protected]> wrote:\nΣτις Thursday 24 March 2011 13:39:19 ο/η Marti Raudsepp έγραψε:\n> On Thu, Mar 24, 2011 at 11:11, Achilleas Mantzios\n> <[email protected]> wrote:\n> > My problem had to do with the speed of gettimeofday. You might want to do some special setting regarding\n> > your box's way of reading time for the hw clock.\n>\n> Just for extra info, on x86, TSC is usually the \"fast\" timeofday\n> implementation. On recent CPUs in single-socket configurations, TSC\n> should always be available, regardless of any power management. I\n> don't know about multi-socket. 
If you want to know whether your kernel\n> is using tsc, run:\n>\n\nThat's what i am experiencing as well, in two of my FreeBSD boxes (work/home) i get:\n\nphenom ii X4 :\n==========\n% sysctl -a | grep -i timecounter\nkern.timecounter.tick: 1\nkern.timecounter.choice: TSC(-100) HPET(900) ACPI-fast(1000) i8254(0) dummy(-1000000)\nkern.timecounter.hardware: TSC\nkern.timecounter.stepwarnings: 0\nkern.timecounter.tc.i8254.mask: 65535\nkern.timecounter.tc.i8254.counter: 1960\nkern.timecounter.tc.i8254.frequency: 1193182\nkern.timecounter.tc.i8254.quality: 0\nkern.timecounter.tc.ACPI-fast.mask: 4294967295\nkern.timecounter.tc.ACPI-fast.counter: 3642319843\nkern.timecounter.tc.ACPI-fast.frequency: 3579545\nkern.timecounter.tc.ACPI-fast.quality: 1000\nkern.timecounter.tc.HPET.mask: 4294967295\nkern.timecounter.tc.HPET.counter: 1160619197\nkern.timecounter.tc.HPET.frequency: 14318180\nkern.timecounter.tc.HPET.quality: 900\nkern.timecounter.tc.TSC.mask: 4294967295\nkern.timecounter.tc.TSC.counter: 2788277817\nkern.timecounter.tc.TSC.frequency: 3400155810\nkern.timecounter.tc.TSC.quality: -100\nkern.timecounter.smp_tsc: 0\nkern.timecounter.invariant_tsc: 1\n\nPentium 4\n======\n% sysctl -a | grep -i timecounter\nkern.timecounter.tick: 1\nkern.timecounter.choice: TSC(800) ACPI-fast(1000) i8254(0) dummy(-1000000)\nkern.timecounter.hardware: ACPI-fast\nkern.timecounter.stepwarnings: 0\nkern.timecounter.tc.i8254.mask: 65535\nkern.timecounter.tc.i8254.counter: 13682\nkern.timecounter.tc.i8254.frequency: 1193182\nkern.timecounter.tc.i8254.quality: 0\nkern.timecounter.tc.ACPI-fast.mask: 16777215\nkern.timecounter.tc.ACPI-fast.counter: 6708142\nkern.timecounter.tc.ACPI-fast.frequency: 3579545\nkern.timecounter.tc.ACPI-fast.quality: 1000\nkern.timecounter.tc.TSC.mask: 4294967295\nkern.timecounter.tc.TSC.counter: 3109326068\nkern.timecounter.tc.TSC.frequency: 2663194296\nkern.timecounter.tc.TSC.quality: 800\nkern.timecounter.smp_tsc: 0\nkern.timecounter.invariant_tsc: 0\n\nTSC, it seems, outperform the rest of clocks in terms of frequency.\n\n> cat /sys/devices/system/clocksource/clocksource0/current_clocksource\n>\n> On older CPUs, you often had to disable some sort of power management\n> in order to get a stable TSC -- the \"ondemand\" scaling governor is the\n> top suspect. Disabling this is distro-specific. You have to reboot to\n> get the kernel to re-test TSC. 
Unfortunately disabling power\n> management later at boot doesn't help you, you have to prevent it from\n> activating at all.\n>\n> For debugging, grepping dmesg for tsc or clocksource is often helpful.\n> On machines with unstable TSC you'll see output like this:\n>\n> [    0.000000] Fast TSC calibration using PIT\n> [    0.164068] checking TSC synchronization [CPU#0 -> CPU#1]: passed.\n> [    0.196730] Switching to clocksource tsc\n> [    0.261347] Marking TSC unstable due to TSC halts in idle\n> [    0.261536] Switching to clocksource acpi_pm\n>\n> If you just want to get repeatable timings, you can force both\n> machines to use the hpet clocksource:\n> echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource\n>\n> Marti\n>\n\n\n\n--\nAchilleas Mantzios\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 24 Mar 2011 18:12:11 -0700", "msg_from": "DM <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg9.0.3 explain analyze running very slow compared to a\n\tdifferent box with much less configuration" }, { "msg_contents": "2011/3/25 DM <[email protected]>:\n> gettimeofday() on my new box is slow, after further research we found that,\n> when we set ACPI=Off, we got a good clock performance even the explain\n> analyze gave approximately gave the right values, but the hyperthreading is\n> off.\n\nDisabling ACPI also disables most CPU power management, so that\nexplains why you get a stable TSC that way. But that's not a real fix.\n\n> could you guide me how to set, the parameter current_clocksource to TSC,\n\nYou can't \"set\" it, the kernel will automatically choose TSC, if it's\nstable, at boot time; see messages in dmesg.\n\nA better way to disable power management on CentOS is to disable the\n'cpuspeed' service.\n\nNote that this is not necessary for newer CPUs; Intel Nehalem and AMD\nPhenom series have a stable TSC even with power management enabled.\n\nRegards,\nMarti\n", "msg_date": "Fri, 25 Mar 2011 10:30:00 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg9.0.3 explain analyze running very slow compared to a\n\tdifferent box with much less configuration" }, { "msg_contents": "If it's a HP box you can also turn this off via the bios via your RBSU:\r\n\r\nStarting with HP ProLiant G6 servers that utilize Intel® Xeon® processors, setting the HP Power Profile \r\nOption in RBSU to Maximum Performance Mode sets these recommended additional low-latency options \r\nfor minimum BIOS latenc\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Marti Raudsepp\r\nSent: Friday, March 25, 2011 3:30 AM\r\nTo: DM\r\nCc: Achilleas Mantzios; [email protected]\r\nSubject: Re: [PERFORM] pg9.0.3 explain analyze running very slow compared to a different box with much less configuration\r\n\r\n2011/3/25 DM <[email protected]>:\r\n> gettimeofday() on my new box is slow, after further research we found \r\n> that, when we set ACPI=Off, we got a good clock performance even the \r\n> explain analyze gave approximately gave the right values, but the \r\n> hyperthreading is off.\r\n\r\nDisabling ACPI also disables most CPU power management, so that explains why you get a stable TSC that way. 
But that's not a real fix.\r\n\r\n> could you guide me how to set, the parameter current_clocksource to \r\n> TSC,\r\n\r\nYou can't \"set\" it, the kernel will automatically choose TSC, if it's stable, at boot time; see messages in dmesg.\r\n\r\nA better way to disable power management on CentOS is to disable the 'cpuspeed' service.\r\n\r\nNote that this is not necessary for newer CPUs; Intel Nehalem and AMD Phenom series have a stable TSC even with power management enabled.\r\n\r\nRegards,\r\nMarti\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Fri, 25 Mar 2011 10:25:45 -0400", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg9.0.3 explain analyze running very slow compared to\n\ta different box with much less configuration" } ]
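A practical footnote to the thread above: the instrumentation overhead can be measured from psql by timing the bare query against the instrumented one, since EXPLAIN ANALYZE takes a pair of clock readings around every row of every plan node. A sketch (count(*) is used only to avoid shipping ~400M rows to the client, so the plan is not identical to the one in the thread):

\timing on
SELECT count(*) FROM photo;                  -- plain execution, no per-row clock reads
EXPLAIN ANALYZE SELECT count(*) FROM photo;  -- same scan plus per-row timing calls in each node

-- From 9.2 onwards, EXPLAIN (ANALYZE, TIMING OFF) keeps the row counts but skips
-- the per-row clock reads, which sidesteps slow-clocksource machines entirely.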
[ { "msg_contents": "Hi,\n\nI see my application creating temporary files while creating an index.\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp7076.0\", size 779853824\nSTATEMENT: CREATE INDEX IDX_LPA_LINKID ON NNDB.LPA (LINK_ID);\n\nSo I checked this again and raised afterwards maintenance_work_mem step by\nstep up 64GB.\nI logged in via psql, run the following statements\nset maintenance_work_mem = '64GB';\nCREATE INDEX IDX_LPA_LINKID ON NNDB.LPA (LINK_ID);\n\nBut still I get that evil message in the log file about creating a temporary\nfile.\nI also raised work_mem in my session up to 32GB - again without changing the\nbehavior.\n\nAccording to the postgres docs\nhttp://www.postgresql.org/docs/8.4/static/populate.html#POPULATE-WORK-MEMthis\nis supposed to help.\nAny ideas?\n\nI'm running postgres 8.4 64 bit on Linux from an enterprisedb package.\n# file /var/lib/pgsql/bin/postgres\n/var/lib/pgsql/bin/postgres: ELF 64-bit LSB executable, x86-64, version 1\n(SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), not\nstripped\n\n\nBest Regards,\nUwe\n\nHi,I see my application creating temporary files while creating an index.LOG:  temporary file: path \"base/pgsql_tmp/pgsql_tmp7076.0\", size 779853824STATEMENT:  CREATE INDEX IDX_LPA_LINKID ON NNDB.LPA (LINK_ID);\nSo I checked this again and raised afterwards maintenance_work_mem step by step up 64GB.I logged in via psql, run the following statementsset maintenance_work_mem = '64GB';CREATE INDEX IDX_LPA_LINKID ON NNDB.LPA (LINK_ID);\nBut still I get that evil message in the log file about creating a temporary file.I also raised work_mem in my session up to 32GB - again without changing the behavior.According to the postgres docs http://www.postgresql.org/docs/8.4/static/populate.html#POPULATE-WORK-MEM this is supposed to help.\nAny ideas?I'm running postgres 8.4 64 bit on Linux from an enterprisedb package.# file /var/lib/pgsql/bin/postgres/var/lib/pgsql/bin/postgres: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), not stripped\nBest Regards,Uwe", "msg_date": "Thu, 24 Mar 2011 14:56:35 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "maintenance_work_mem + create index" }, { "msg_contents": "Uwe,\n\n* Uwe Bartels ([email protected]) wrote:\n> So I checked this again and raised afterwards maintenance_work_mem step by\n> step up 64GB.\n> I logged in via psql, run the following statements\n> set maintenance_work_mem = '64GB';\n\nI believe maintenance_work_mem suffers from the same problem that\nwork_mem has, specifically that PG still won't allocate more than\n1GB of memory for any single operation.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 24 Mar 2011 10:13:03 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem + create index" }, { "msg_contents": "OK. I didn't now that. 
Thanks for sharing that information.\nCan anybody tell if we have this limitation on maintenance_work_mem as well?\n\nDoes anybody know of a solution out of that on Linux?\nOr is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without\nblocking it completely like a ram disk?\n\nBest Regards,\nUwe\n\nOn 24 March 2011 15:13, Stephen Frost <[email protected]> wrote:\n\n> Uwe,\n>\n> * Uwe Bartels ([email protected]) wrote:\n> > So I checked this again and raised afterwards maintenance_work_mem step\n> by\n> > step up 64GB.\n> > I logged in via psql, run the following statements\n> > set maintenance_work_mem = '64GB';\n>\n> I believe maintenance_work_mem suffers from the same problem that\n> work_mem has, specifically that PG still won't allocate more than\n> 1GB of memory for any single operation.\n>\n> Thanks,\n>\n> Stephen\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.10 (GNU/Linux)\n>\n> iEYEARECAAYFAk2LUW8ACgkQrzgMPqB3kigZMwCfUVL/5nSdK5xiV+/SjWB6BG9B\n> Fm0An2V5Tald8PUYXc5VIuKL/C1WNYTp\n> =MSxh\n> -----END PGP SIGNATURE-----\n>\n>\n\nOK. I didn't now that. Thanks for sharing that information.Can anybody tell if we have this limitation on maintenance_work_mem as well?Does anybody know of a solution out of that on Linux?Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without blocking it completely like a ram disk?\nBest Regards,UweOn 24 March 2011 15:13, Stephen Frost <[email protected]> wrote:\nUwe,\n\n* Uwe Bartels ([email protected]) wrote:\n> So I checked this again and raised afterwards maintenance_work_mem step by\n> step up 64GB.\n> I logged in via psql, run the following statements\n> set maintenance_work_mem = '64GB';\n\nI believe maintenance_work_mem suffers from the same problem that\nwork_mem has, specifically that PG still won't allocate more than\n1GB of memory for any single operation.\n\n        Thanks,\n\n                Stephen\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.10 (GNU/Linux)\n\niEYEARECAAYFAk2LUW8ACgkQrzgMPqB3kigZMwCfUVL/5nSdK5xiV+/SjWB6BG9B\nFm0An2V5Tald8PUYXc5VIuKL/C1WNYTp\n=MSxh\n-----END PGP SIGNATURE-----", "msg_date": "Thu, 24 Mar 2011 15:40:33 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem + create index" }, { "msg_contents": "On 03/24/2011 09:40 AM, Uwe Bartels wrote:\n\n> Does anybody know of a solution out of that on Linux?\n> Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without\n> blocking it completely like a ram disk?\n\nWe put this in our startup script just before starting the actual database:\n\nfor x in $(find ${PGDATA}/base -mindepth 1 -maxdepth 1 -type d); do\n nDBNum=${x##*/}\n sDir=${DBSHM}/${nDBNum}\n\n if [ ! -d \"$sDir\" ]; then\n su -c \"mkdir $sDir\" - $PGUSER\n fi\ndone\n\nWhere PGDATA, DBSHM, and PGUSER are all set in \n/etc/sysconfig/postgresql. But DBSHM defaults to /dev/shm/pgsql_tmp on \nour Linux box.\n\nBasically what this does is ensures a directory exists for each of your \ndatabases in shared memory. Then all we did was symlink the pgsql_tmp \nfolder to point to those shared-memory directories. Many systems default \nso that up to half of total RAM can be used this way, so we're not at \nany risk with 64GB on our main nodes.\n\nWe already run a custom init.d script anyway because we needed something \nLSB compatible for Pacemaker. I highly recommend it. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 24 Mar 2011 10:14:01 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem + create index" }, { "msg_contents": "OK. sounds promising. On my machine this looks similar.\nI'll try this.\n\nThanks,\nUwe\n\n\nOn 24 March 2011 16:14, Shaun Thomas <[email protected]> wrote:\n\n> On 03/24/2011 09:40 AM, Uwe Bartels wrote:\n>\n> Does anybody know of a solution out of that on Linux?\n>> Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without\n>> blocking it completely like a ram disk?\n>>\n>\n> We put this in our startup script just before starting the actual database:\n>\n> for x in $(find ${PGDATA}/base -mindepth 1 -maxdepth 1 -type d); do\n> nDBNum=${x##*/}\n> sDir=${DBSHM}/${nDBNum}\n>\n> if [ ! -d \"$sDir\" ]; then\n> su -c \"mkdir $sDir\" - $PGUSER\n> fi\n> done\n>\n> Where PGDATA, DBSHM, and PGUSER are all set in /etc/sysconfig/postgresql.\n> But DBSHM defaults to /dev/shm/pgsql_tmp on our Linux box.\n>\n> Basically what this does is ensures a directory exists for each of your\n> databases in shared memory. Then all we did was symlink the pgsql_tmp folder\n> to point to those shared-memory directories. Many systems default so that up\n> to half of total RAM can be used this way, so we're not at any risk with\n> 64GB on our main nodes.\n>\n> We already run a custom init.d script anyway because we needed something\n> LSB compatible for Pacemaker. I highly recommend it. :)\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n\nOK. sounds promising. On my machine this looks similar. I'll try this.Thanks,Uwe\nOn 24 March 2011 16:14, Shaun Thomas <[email protected]> wrote:\nOn 03/24/2011 09:40 AM, Uwe Bartels wrote:\n\n\nDoes anybody know of a solution out of that on Linux?\nOr is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without\nblocking it completely like a ram disk?\n\n\nWe put this in our startup script just before starting the actual database:\n\nfor x in $(find ${PGDATA}/base -mindepth 1 -maxdepth 1 -type d); do\n  nDBNum=${x##*/}\n  sDir=${DBSHM}/${nDBNum}\n\n  if [ ! -d \"$sDir\" ]; then\n    su -c \"mkdir $sDir\" - $PGUSER\n  fi\ndone\n\nWhere PGDATA, DBSHM, and PGUSER are all set in /etc/sysconfig/postgresql. But DBSHM defaults to /dev/shm/pgsql_tmp on our Linux box.\n\nBasically what this does is ensures a directory exists for each of your databases in shared memory. Then all we did was symlink the pgsql_tmp folder to point to those shared-memory directories. Many systems default so that up to half of total RAM can be used this way, so we're not at any risk with 64GB on our main nodes.\n\nWe already run a custom init.d script anyway because we needed something LSB compatible for Pacemaker. I highly recommend it. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee  http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email", "msg_date": "Thu, 24 Mar 2011 16:28:54 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem + create index" }, { "msg_contents": "On 03/24/2011 10:28 AM, Uwe Bartels wrote:\n\n> OK. sounds promising. On my machine this looks similar.\n> I'll try this.\n\nI just realized I may have implied that DBSHM automatically defaults to \n/db/shm/pgsql_tmp. It dosen't. I also have this at the very top of our \n/etc/init.d/postgresql script:\n\nif [ -f /etc/sysconfig/postgresql ]; then\n source /etc/sysconfig/postgresql\nfi\n\nDBSHM=${DBSHM:-/dev/shm/pgsql_tmp}\nPGDATA=${PGDATA:-\"/db/data/pgdata\"}\nPGUSER=${PGUSER:-postgres}\n\nDBSHM doesn't exist, and the other vars will probably be empty unless \nyou set them in the sysconfig file. What I meant was that /dev/shm \nautomatically exists on our Linux box and we make use of it. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 24 Mar 2011 10:38:30 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem + create index" }, { "msg_contents": "Em 24-03-2011 11:40, Uwe Bartels escreveu:\n> Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without\n> blocking it completely like a ram disk?\n>\nCreate a tablespace in a ram disk and set temp_tablespaces.\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Thu, 24 Mar 2011 14:35:02 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem + create index" } ]
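To make the last suggestion concrete, a minimal sketch -- the tablespace name and tmpfs path are hypothetical, the directory must exist, be empty, and be owned by the postgres OS user, and anything on tmpfs vanishes at reboot, so the directory has to be recreated before the server starts (which is what the init-script approach earlier in the thread automates):

-- once, as a superuser, after the tmpfs directory has been created:
CREATE TABLESPACE ram_temp LOCATION '/dev/shm/pgsql_tmp/ts';

-- then, in the session running the build:
SET temp_tablespaces = ram_temp;
SET maintenance_work_mem = '1GB';
CREATE INDEX idx_lpa_linkid ON nndb.lpa (link_id);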
[ { "msg_contents": "Dear all,\n\nToday I got to run a query internally from my application by more than \n10 connections.\n\nBut The query performed very badly. A the data size of tables are as :\n\npdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));\n pg_size_pretty\n----------------\n 5858 MB\n(1 row)\n\npdc_uima=# select pg_size_pretty(pg_total_relation_size('svo2')); \n pg_size_pretty\n----------------\n 4719 MB\n(1 row)\n\n\nI explain the query as after making the indexes as :\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where \nc.clause_id=s.clause_id and s.doc_id=c.source_id and c.\npdc_uima-# sentence_id=s.sentence_id ;\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------------\n Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) \nAND (s.sentence_id = c.sentence_id))\n -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65 \nrows=27471560 width=1993)\n -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72)\n -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72)\n Sort Key: c.clause_id, c.source_id, c.sentence_id\n -> Seq Scan on clause2 c (cost=0.00..770951.84 \nrows=31853084 width=72)\n\n\n\nIndexes are :\n\nCREATE INDEX idx_clause ON clause2 USING btree (clause_id, source_id, \nsentence_id);\nCREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id, \nsentence_id);\n\nI don't know why it not uses the index scan for clause2 table.\n\nAny suggestions to tune the query.\n\n\nThanks & best Regards,\nAdarsh Sharma\n", "msg_date": "Fri, 25 Mar 2011 12:05:54 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Why Index is not used" }, { "msg_contents": "Adarsh Sharma <[email protected]> wrote:\n\n> Dear all,\n>\n> Today I got to run a query internally from my application by more than \n> 10 connections.\n>\n> But The query performed very badly. 
A the data size of tables are as :\n>\n> pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));\n> pg_size_pretty\n> ----------------\n> 5858 MB\n> (1 row)\n>\n> pdc_uima=# select pg_size_pretty(pg_total_relation_size('svo2')); \n> pg_size_pretty\n> ----------------\n> 4719 MB\n> (1 row)\n>\n>\n> I explain the query as after making the indexes as :\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where \n> c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\n> pdc_uima-# sentence_id=s.sentence_id ;\n> QUERY PLAN \n> \n> --------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n> Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) \n> AND (s.sentence_id = c.sentence_id))\n> -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65 \n> rows=27471560 width=1993)\n> -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72)\n> -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72)\n> Sort Key: c.clause_id, c.source_id, c.sentence_id\n> -> Seq Scan on clause2 c (cost=0.00..770951.84 \n> rows=31853084 width=72)\n>\n>\n>\n> Indexes are :\n>\n> CREATE INDEX idx_clause ON clause2 USING btree (clause_id, source_id, \n> sentence_id);\n> CREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id, \n> sentence_id);\n>\n> I don't know why it not uses the index scan for clause2 table.\n\nHow many rows contains clause2? The planner expected 167324179 returning\nrows, can you run the same explain with ANALYSE to see the real amount\nof returning rows?\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. 
N 51.05082�, E 13.56889�\n", "msg_date": "Fri, 25 Mar 2011 07:44:27 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "Thanks Andreas, I was about print the output but it takes too much time.\n\nBelow is the output of explain analyze command :\npdc_uima=# explain analyze select c.clause, s.* from clause2 c, svo2 s \nwhere c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\npdc_uima-# sentence_id=s.sentence_id ;\n \nQUERY \nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053) \n(actual time=216281.162..630721.636 rows=30473117 loops=1)\n Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) \nAND (s.sentence_id = c.sentence_id))\n -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65 \nrows=27471560 width=1993) (actual time=0.130..177599.310 rows=27471560 \nloops=1)\n -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72) \n(actual time=216280.596..370507.452 rows=52037763 loops=1)\n -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72) \n(actual time=216280.591..324707.956 rows=31853083 loops=1)\n Sort Key: c.clause_id, c.source_id, c.sentence_id\n Sort Method: external merge Disk: 2616520kB\n -> Seq Scan on clause2 c (cost=0.00..770951.84 \nrows=31853084 width=72) (actual time=0.025..25018.665 rows=31853083 loops=1)\n Total runtime: 647804.037 ms\n(9 rows)\n\n\nThanks , Adarsh\n\nAndreas Kretschmer wrote:\n> Adarsh Sharma <[email protected]> wrote:\n>\n> \n>> Dear all,\n>>\n>> Today I got to run a query internally from my application by more than \n>> 10 connections.\n>>\n>> But The query performed very badly. A the data size of tables are as :\n>>\n>> pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));\n>> pg_size_pretty\n>> ----------------\n>> 5858 MB\n>> (1 row)\n>>\n>> pdc_uima=# select pg_size_pretty(pg_total_relation_size('svo2')); \n>> pg_size_pretty\n>> ----------------\n>> 4719 MB\n>> (1 row)\n>>\n>>\n>> I explain the query as after making the indexes as :\n>>\n>> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where \n>> c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\n>> pdc_uima-# sentence_id=s.sentence_id ;\n>> QUERY PLAN \n>> \n>> --------------------------------------------------------------------------------------------------------------\n>> Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n>> Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) \n>> AND (s.sentence_id = c.sentence_id))\n>> -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65 \n>> rows=27471560 width=1993)\n>> -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72)\n>> -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72)\n>> Sort Key: c.clause_id, c.source_id, c.sentence_id\n>> -> Seq Scan on clause2 c (cost=0.00..770951.84 \n>> rows=31853084 width=72)\n>>\n>>\n>>\n>> Indexes are :\n>>\n>> CREATE INDEX idx_clause ON clause2 USING btree (clause_id, source_id, \n>> sentence_id);\n>> CREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id, \n>> sentence_id);\n>>\n>> I don't know why it not uses the index scan for clause2 table.\n>> \n>\n> How many rows contains clause2? 
The planner expected 167324179 returning\n> rows, can you run the same explain with ANALYSE to see the real amount\n> of returning rows?\n>\n>\n> Andreas\n> \n\n\n\n\n\n\n\n\nThanks Andreas, I was about print the output but it takes too much time.\n\nBelow is the output of explain analyze command :\npdc_uima=# explain analyze select c.clause, s.* from clause2 c, svo2 s\nwhere c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\npdc_uima-# sentence_id=s.sentence_id ;\n                                                                    \nQUERY\nPLAN                                                                    \n\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join  (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n(actual time=216281.162..630721.636 rows=30473117 loops=1)\n   Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id =\nc.source_id) AND (s.sentence_id = c.sentence_id))\n   ->  Index Scan using idx_svo2 on svo2 s  (cost=0.00..24489343.65\nrows=27471560 width=1993) (actual time=0.130..177599.310 rows=27471560\nloops=1)\n   ->  Materialize  (cost=5673828.74..6071992.29 rows=31853084\nwidth=72) (actual time=216280.596..370507.452 rows=52037763 loops=1)\n         ->  Sort  (cost=5673828.74..5753461.45 rows=31853084\nwidth=72) (actual time=216280.591..324707.956 rows=31853083 loops=1)\n               Sort Key: c.clause_id, c.source_id, c.sentence_id\n               Sort Method:  external merge  Disk: 2616520kB\n               ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=72) (actual time=0.025..25018.665 rows=31853083\nloops=1)\n Total runtime: 647804.037 ms\n(9 rows)\n\n\nThanks , Adarsh\n\nAndreas Kretschmer wrote:\n\nAdarsh Sharma <[email protected]> wrote:\n\n \n\nDear all,\n\nToday I got to run a query internally from my application by more than \n10 connections.\n\nBut The query performed very badly. A the data size of tables are as :\n\npdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));\npg_size_pretty\n----------------\n5858 MB\n(1 row)\n\npdc_uima=# select pg_size_pretty(pg_total_relation_size('svo2')); \npg_size_pretty\n----------------\n4719 MB\n(1 row)\n\n\nI explain the query as after making the indexes as :\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where \nc.clause_id=s.clause_id and s.doc_id=c.source_id and c.\npdc_uima-# sentence_id=s.sentence_id ;\n QUERY PLAN \n \n--------------------------------------------------------------------------------------------------------------\nMerge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) \nAND (s.sentence_id = c.sentence_id))\n -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65 \nrows=27471560 width=1993)\n -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72)\n -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72)\n Sort Key: c.clause_id, c.source_id, c.sentence_id\n -> Seq Scan on clause2 c (cost=0.00..770951.84 \nrows=31853084 width=72)\n\n\n\nIndexes are :\n\nCREATE INDEX idx_clause ON clause2 USING btree (clause_id, source_id, \nsentence_id);\nCREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id, \nsentence_id);\n\nI don't know why it not uses the index scan for clause2 table.\n \n\n\nHow many rows contains clause2? 
The planner expected 167324179 returning\nrows, can you run the same explain with ANALYSE to see the real amount\nof returning rows?\n\n\nAndreas", "msg_date": "Fri, 25 Mar 2011 12:21:50 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "On Fri, Mar 25, 2011 at 12:05 PM, Adarsh Sharma <[email protected]>wrote:\n\n> Dear all,\n>\n> Today I got to run a query internally from my application by more than 10\n> connections.\n>\n> But The query performed very badly. A the data size of tables are as :\n>\n> pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));\n> pg_size_pretty\n> ----------------\n> 5858 MB\n> (1 row)\n>\n> pdc_uima=# select pg_size_pretty(pg_total_relation_size('svo2'));\n> pg_size_pretty\n> ----------------\n> 4719 MB\n> (1 row)\n>\n>\n> I explain the query as after making the indexes as :\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\n> c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\n> pdc_uima-# sentence_id=s.sentence_id ;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n> Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) AND\n> (s.sentence_id = c.sentence_id))\n> -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65\n> rows=27471560 width=1993)\n> -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72)\n> -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72)\n> Sort Key: c.clause_id, c.source_id, c.sentence_id\n> -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084\n> width=72)\n>\n>\n>\n> Indexes are :\n>\n> CREATE INDEX idx_clause ON clause2 USING btree (clause_id, source_id,\n> sentence_id);\n> CREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id,\n> sentence_id);\n>\n> I don't know why it not uses the index scan for clause2 table.\n>\n>\nIn this case, there are no predicates or filters on individual table. (maybe\nsomething like c.source_id=10)\nso either of the 2 tables will have to go for simple scan.\n\nAre you expecting seq. scan on svo2 and index scan on clause2?\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Fri, Mar 25, 2011 at 12:05 PM, Adarsh Sharma <[email protected]> wrote:\n\nDear all,\n\nToday I got to run a query internally from my application by more than 10 connections.\n\nBut The query performed very badly. 
A the data size of tables are as :\n\npdc_uima=#  select pg_size_pretty(pg_total_relation_size('clause2'));\npg_size_pretty\n----------------\n5858 MB\n(1 row)\n\npdc_uima=#  select pg_size_pretty(pg_total_relation_size('svo2'));  pg_size_pretty\n----------------\n4719 MB\n(1 row)\n\n\nI explain the query as after making the  indexes as :\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\npdc_uima-# sentence_id=s.sentence_id ;\n                                                 QUERY PLAN                                                 --------------------------------------------------------------------------------------------------------------\n\n\nMerge Join  (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n  Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) AND (s.sentence_id = c.sentence_id))\n  ->  Index Scan using idx_svo2 on svo2 s  (cost=0.00..24489343.65 rows=27471560 width=1993)\n  ->  Materialize  (cost=5673828.74..6071992.29 rows=31853084 width=72)\n        ->  Sort  (cost=5673828.74..5753461.45 rows=31853084 width=72)\n              Sort Key: c.clause_id, c.source_id, c.sentence_id\n              ->  Seq Scan on clause2 c  (cost=0.00..770951.84 rows=31853084 width=72)\n\n\n\nIndexes are :\n\nCREATE INDEX idx_clause  ON clause2  USING btree  (clause_id, source_id, sentence_id);\nCREATE INDEX idx_svo2  ON svo2  USING btree (clause_id, doc_id, sentence_id);\n\nI don't know why it not uses the index scan for clause2 table.\nIn this case, there are no predicates or filters on individual table. (maybe something like c.source_id=10)so either of the 2 tables will have to go for simple scan.Are you expecting seq. scan on svo2 and index scan on clause2? \n-- Regards,Chetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Fri, 25 Mar 2011 12:26:09 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "Chetan Suttraway wrote:\n>\n>\n> On Fri, Mar 25, 2011 at 12:05 PM, Adarsh Sharma \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Dear all,\n>\n> Today I got to run a query internally from my application by more\n> than 10 connections.\n>\n> But The query performed very badly. 
A the data size of tables are as :\n>\n> pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));\n> pg_size_pretty\n> ----------------\n> 5858 MB\n> (1 row)\n>\n> pdc_uima=# select pg_size_pretty(pg_total_relation_size('svo2'));\n> pg_size_pretty\n> ----------------\n> 4719 MB\n> (1 row)\n>\n>\n> I explain the query as after making the indexes as :\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s\n> where c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\n> pdc_uima-# sentence_id=s.sentence_id ;\n> QUERY PLAN \n> \n> --------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n> Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id =\n> c.source_id) AND (s.sentence_id = c.sentence_id))\n> -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65\n> rows=27471560 width=1993)\n> -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72)\n> -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72)\n> Sort Key: c.clause_id, c.source_id, c.sentence_id\n> -> Seq Scan on clause2 c (cost=0.00..770951.84\n> rows=31853084 width=72)\n>\n>\n>\n> Indexes are :\n>\n> CREATE INDEX idx_clause ON clause2 USING btree (clause_id,\n> source_id, sentence_id);\n> CREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id,\n> sentence_id);\n>\n> I don't know why it not uses the index scan for clause2 table.\n>\n>\n> In this case, there are no predicates or filters on individual table. \n> (maybe something like c.source_id=10)\n> so either of the 2 tables will have to go for simple scan.\n>\n> Are you expecting seq. scan on svo2 and index scan on clause2?\n>\n\nAs per the size consideration and the number of rows, I think index scan \non clause2 is better.\n\nYour constraint is valid but I need to perform this query faster. \nWhat is the reason behind the seq scan of clause2.\n\n\n\nRegards,\nAdarsh\n>\n>\n\n\n\n\n\n\n\nChetan Suttraway wrote:\n\n\nOn Fri, Mar 25, 2011 at 12:05 PM, Adarsh\nSharma <[email protected]>\nwrote:\nDear\nall,\n\nToday I got to run a query internally from my application by more than\n10 connections.\n\nBut The query performed very badly. 
A the data size of tables are as :\n\npdc_uima=#  select pg_size_pretty(pg_total_relation_size('clause2'));\npg_size_pretty\n----------------\n5858 MB\n(1 row)\n\npdc_uima=#  select pg_size_pretty(pg_total_relation_size('svo2'));\n pg_size_pretty\n----------------\n4719 MB\n(1 row)\n\n\nI explain the query as after making the  indexes as :\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id and s.doc_id=c.source_id and c.\npdc_uima-# sentence_id=s.sentence_id ;\n                                                QUERY PLAN            \n                                   \n--------------------------------------------------------------------------------------------------------------\nMerge Join  (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id)\nAND (s.sentence_id = c.sentence_id))\n ->  Index Scan using idx_svo2 on svo2 s  (cost=0.00..24489343.65\nrows=27471560 width=1993)\n ->  Materialize  (cost=5673828.74..6071992.29 rows=31853084\nwidth=72)\n       ->  Sort  (cost=5673828.74..5753461.45 rows=31853084 width=72)\n             Sort Key: c.clause_id, c.source_id, c.sentence_id\n             ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=72)\n\n\n\nIndexes are :\n\nCREATE INDEX idx_clause  ON clause2  USING btree  (clause_id,\nsource_id, sentence_id);\nCREATE INDEX idx_svo2  ON svo2  USING btree (clause_id, doc_id,\nsentence_id);\n\nI don't know why it not uses the index scan for clause2 table.\n\n\n\n\nIn this case, there are no predicates or filters on individual table.\n(maybe something like c.source_id=10)\nso either of the 2 tables will have to go for simple scan.\n\nAre you expecting seq. scan on svo2 and index scan on clause2? \n\n\n\nAs per the size consideration and the number of rows, I think index\nscan on clause2 is better.\n\nYour constraint is valid  but  I need to perform  this query faster.  \nWhat is the reason behind the seq scan of clause2. 
\n\n\n\nRegards,\nAdarsh", "msg_date": "Fri, 25 Mar 2011 12:39:31 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "Adarsh Sharma, 25.03.2011 07:51:\n>\n> Thanks Andreas, I was about print the output but it takes too much time.\n>\n> Below is the output of explain analyze command :\n> pdc_uima=# explain analyze select c.clause, s.* from clause2 c, svo2 s where c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\n> pdc_uima-# sentence_id=s.sentence_id ;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053) (actual time=216281.162..630721.636 rows=30473117 loops=1)\n> Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) AND (s.sentence_id = c.sentence_id))\n> -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65 rows=27471560 width=1993) (actual time=0.130..177599.310 rows=27471560 loops=1)\n> -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72) (actual time=216280.596..370507.452 rows=52037763 loops=1)\n> -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72) (actual time=216280.591..324707.956 rows=31853083 loops=1)\n> Sort Key: c.clause_id, c.source_id, c.sentence_id\n> Sort Method: external merge Disk: 2616520kB\n> -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084 width=72) (actual time=0.025..25018.665 rows=31853083 loops=1)\n> Total runtime: 647804.037 ms\n> (9 rows)\n>\n>\nHow many rows are there in clause2 in total?\n\n31853084 rows are returned from that table which sounds like the whole table qualifies for the join condition.\n\nRegards\nThomas\n\n", "msg_date": "Fri, 25 Mar 2011 08:24:33 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "On Fri, Mar 25, 2011 at 12:39 PM, Adarsh Sharma <[email protected]>wrote:\n\n> Chetan Suttraway wrote:\n>\n>\n>\n> On Fri, Mar 25, 2011 at 12:05 PM, Adarsh Sharma <[email protected]>wrote:\n>\n>> Dear all,\n>>\n>> Today I got to run a query internally from my application by more than 10\n>> connections.\n>>\n>> But The query performed very badly. 
A the data size of tables are as :\n>>\n>> pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));\n>> pg_size_pretty\n>> ----------------\n>> 5858 MB\n>> (1 row)\n>>\n>> pdc_uima=# select pg_size_pretty(pg_total_relation_size('svo2'));\n>> pg_size_pretty\n>> ----------------\n>> 4719 MB\n>> (1 row)\n>>\n>>\n>> I explain the query as after making the indexes as :\n>>\n>> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\n>> c.clause_id=s.clause_id and s.doc_id=c.source_id and c.\n>> pdc_uima-# sentence_id=s.sentence_id ;\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------------------------\n>> Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n>> Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id) AND\n>> (s.sentence_id = c.sentence_id))\n>> -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65\n>> rows=27471560 width=1993)\n>> -> Materialize (cost=5673828.74..6071992.29 rows=31853084 width=72)\n>> -> Sort (cost=5673828.74..5753461.45 rows=31853084 width=72)\n>> Sort Key: c.clause_id, c.source_id, c.sentence_id\n>> -> Seq Scan on clause2 c (cost=0.00..770951.84\n>> rows=31853084 width=72)\n>>\n>>\n>>\n>> Indexes are :\n>>\n>> CREATE INDEX idx_clause ON clause2 USING btree (clause_id, source_id,\n>> sentence_id);\n>> CREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id,\n>> sentence_id);\n>>\n>> I don't know why it not uses the index scan for clause2 table.\n>>\n>>\n> In this case, there are no predicates or filters on individual table.\n> (maybe something like c.source_id=10)\n> so either of the 2 tables will have to go for simple scan.\n>\n> Are you expecting seq. scan on svo2 and index scan on clause2?\n>\n>\n> As per the size consideration and the number of rows, I think index scan on\n> clause2 is better.\n>\n> Your constraint is valid but I need to perform this query faster.\n> What is the reason behind the seq scan of clause2.\n>\n>\n>\n> Regards,\n> Adarsh\n>\n>\n>\n>\n>\nCould you please post output of below queries:\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\ns.doc_id=c.source_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.sentence_id=s.sentence_id ;\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Fri, Mar 25, 2011 at 12:39 PM, Adarsh Sharma <[email protected]> wrote:\n\nChetan Suttraway wrote:\n\n\nOn Fri, Mar 25, 2011 at 12:05 PM, Adarsh\nSharma <[email protected]>\nwrote:\nDear\nall,\n\nToday I got to run a query internally from my application by more than\n10 connections.\n\nBut The query performed very badly. 
A the data size of tables are as :\n\npdc_uima=#  select pg_size_pretty(pg_total_relation_size('clause2'));\npg_size_pretty\n----------------\n5858 MB\n(1 row)\n\npdc_uima=#  select pg_size_pretty(pg_total_relation_size('svo2'));\n pg_size_pretty\n----------------\n4719 MB\n(1 row)\n\n\nI explain the query as after making the  indexes as :\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id and s.doc_id=c.source_id and c.\npdc_uima-# sentence_id=s.sentence_id ;\n                                                QUERY PLAN            \n                                   \n--------------------------------------------------------------------------------------------------------------\nMerge Join  (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id = c.source_id)\nAND (s.sentence_id = c.sentence_id))\n ->  Index Scan using idx_svo2 on svo2 s  (cost=0.00..24489343.65\nrows=27471560 width=1993)\n ->  Materialize  (cost=5673828.74..6071992.29 rows=31853084\nwidth=72)\n       ->  Sort  (cost=5673828.74..5753461.45 rows=31853084 width=72)\n             Sort Key: c.clause_id, c.source_id, c.sentence_id\n             ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=72)\n\n\n\nIndexes are :\n\nCREATE INDEX idx_clause  ON clause2  USING btree  (clause_id,\nsource_id, sentence_id);\nCREATE INDEX idx_svo2  ON svo2  USING btree (clause_id, doc_id,\nsentence_id);\n\nI don't know why it not uses the index scan for clause2 table.\n\n\n\n\nIn this case, there are no predicates or filters on individual table.\n(maybe something like c.source_id=10)\nso either of the 2 tables will have to go for simple scan.\n\nAre you expecting seq. scan on svo2 and index scan on clause2? \n\n\n\nAs per the size consideration and the number of rows, I think index\nscan on clause2 is better.\n\nYour constraint is valid  but  I need to perform  this query faster.  \nWhat is the reason behind the seq scan of clause2. 
\n\n\n\nRegards,\nAdarsh\n\n\n\n\n\nCould you please post output of below queries: explain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;explain select c.clause, s.* from clause2 c, svo2 s where s.doc_id=c.source_id;explain select c.clause, s.* from clause2 c, svo2 s where c.sentence_id=s.sentence_id ;-- Regards,\n\n\nChetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Fri, 25 Mar 2011 13:44:27 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": ">\n>\n>\n> Could you please post output of below queries:\n> explain select c.clause, s.* from clause2 c, svo2 s where \n> c.clause_id=s.clause_id;\n> explain select c.clause, s.* from clause2 c, svo2 s where \n> s.doc_id=c.source_id;\n> explain select c.clause, s.* from clause2 c, svo2 s where \n> c.sentence_id=s.sentence_id ;\n\n\nAs per your instructions, Please check the below output :-\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where \nc.clause_id=s.clause_id;\n QUERY \nPLAN \n---------------------------------------------------------------------------------\n Hash Join (cost=7828339.10..4349603998133.96 rows=379772050555842 \nwidth=2053)\n Hash Cond: (c.clause_id = s.clause_id)\n -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084 width=64)\n -> Hash (cost=697537.60..697537.60 rows=27471560 width=1993)\n -> Seq Scan on svo2 s (cost=0.00..697537.60 rows=27471560 \nwidth=1993)\n(5 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where \ns.doc_id=c.source_id;\n QUERY \nPLAN \n---------------------------------------------------------------------------------------\n Merge Join (cost=43635232.12..358368926.66 rows=20954686217 width=2053)\n Merge Cond: (c.source_id = s.doc_id)\n -> Sort (cost=5596061.24..5675693.95 rows=31853084 width=64)\n Sort Key: c.source_id\n -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084 \nwidth=64)\n -> Materialize (cost=38028881.02..38372275.52 rows=27471560 width=1993)\n -> Sort (cost=38028881.02..38097559.92 rows=27471560 width=1993)\n Sort Key: s.doc_id\n -> Seq Scan on svo2 s (cost=0.00..697537.60 \nrows=27471560 width=1993)\n(9 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where \nc.sentence_id=s.sentence_id ;\n QUERY \nPLAN \n---------------------------------------------------------------------------------------\n Merge Join (cost=43711844.03..241541026048.10 rows=PLeaswidth=2053)\n Merge Cond: (c.sentence_id = s.sentence_id)\n -> Sort (cost=5596061.24..5675693.95 rows=31853084 width=64)\n Sort Key: c.sentence_id\n -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084 \nwidth=64)\n -> Materialize (cost=38028881.02..38372275.52 rows=27471560 width=1993)\n -> Sort (cost=38028881.02..38097559.92 rows=27471560 width=1993)\n Sort Key: s.sentence_id\n -> Seq Scan on svo2 s (cost=0.00..697537.60 \nrows=27471560 width=1993)\n(9 rows)\n\nPlease let me know if any other information is required.\n\n\n\n\n\n>\n> -- \n> Best Regards,\n> Adarsh Sharma\n>\n>\n\n\n\n\n\n\n\n\n\n\n\n\nCould you please post output of below queries:\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\ns.doc_id=c.source_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.sentence_id=s.sentence_id ;\n\n\n\nAs per your instructions, Please  check the below output :-\n\npdc_uima=# explain select c.clause, 
s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;\n                                   QUERY\nPLAN                                    \n---------------------------------------------------------------------------------\n Hash Join  (cost=7828339.10..4349603998133.96 rows=379772050555842\nwidth=2053)\n   Hash Cond: (c.clause_id = s.clause_id)\n   ->  Seq Scan on clause2 c  (cost=0.00..770951.84 rows=31853084\nwidth=64)\n   ->  Hash  (cost=697537.60..697537.60 rows=27471560 width=1993)\n         ->  Seq Scan on svo2 s  (cost=0.00..697537.60 rows=27471560\nwidth=1993)\n(5 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\ns.doc_id=c.source_id;\n                                      QUERY\nPLAN                                       \n---------------------------------------------------------------------------------------\n Merge Join  (cost=43635232.12..358368926.66 rows=20954686217\nwidth=2053)\n   Merge Cond: (c.source_id = s.doc_id)\n   ->  Sort  (cost=5596061.24..5675693.95 rows=31853084 width=64)\n         Sort Key: c.source_id\n         ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=64)\n   ->  Materialize  (cost=38028881.02..38372275.52 rows=27471560\nwidth=1993)\n         ->  Sort  (cost=38028881.02..38097559.92 rows=27471560\nwidth=1993)\n               Sort Key: s.doc_id\n               ->  Seq Scan on svo2 s  (cost=0.00..697537.60\nrows=27471560 width=1993)\n(9 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\nc.sentence_id=s.sentence_id ;\n                                      QUERY\nPLAN                                       \n---------------------------------------------------------------------------------------\n Merge Join  (cost=43711844.03..241541026048.10 rows=PLeaswidth=2053)\n   Merge Cond: (c.sentence_id = s.sentence_id)\n   ->  Sort  (cost=5596061.24..5675693.95 rows=31853084 width=64)\n         Sort Key: c.sentence_id\n         ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=64)\n   ->  Materialize  (cost=38028881.02..38372275.52 rows=27471560\nwidth=1993)\n         ->  Sort  (cost=38028881.02..38097559.92 rows=27471560\nwidth=1993)\n               Sort Key: s.sentence_id\n               ->  Seq Scan on svo2 s  (cost=0.00..697537.60\nrows=27471560 width=1993)\n(9 rows)\n\nPlease  let me know if any other information is required.\n\n\n\n\n\n\n-- \nBest Regards,\nAdarsh Sharma", "msg_date": "Fri, 25 Mar 2011 14:25:29 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "On Fri, Mar 25, 2011 at 2:25 PM, Adarsh Sharma <[email protected]>wrote:\n\n>\n> Could you please post output of below queries:\n> explain select c.clause, s.* from clause2 c, svo2 s where\n> c.clause_id=s.clause_id;\n> explain select c.clause, s.* from clause2 c, svo2 s where\n> s.doc_id=c.source_id;\n> explain select c.clause, s.* from clause2 c, svo2 s where\n> c.sentence_id=s.sentence_id ;\n>\n>\n>\n> As per your instructions, Please check the below output :-\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\n> c.clause_id=s.clause_id;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------\n> Hash Join (cost=7828339.10..4349603998133.96 rows=379772050555842\n> width=2053)\n> Hash Cond: (c.clause_id = s.clause_id)\n> -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084 width=64)\n> -> Hash (cost=697537.60..697537.60 
rows=27471560 width=1993)\n> -> Seq Scan on svo2 s (cost=0.00..697537.60 rows=27471560\n> width=1993)\n> (5 rows)\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\n> s.doc_id=c.source_id;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------\n> Merge Join (cost=43635232.12..358368926.66 rows=20954686217 width=2053)\n> Merge Cond: (c.source_id = s.doc_id)\n> -> Sort (cost=5596061.24..5675693.95 rows=31853084 width=64)\n> Sort Key: c.source_id\n> -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084\n> width=64)\n> -> Materialize (cost=38028881.02..38372275.52 rows=27471560\n> width=1993)\n> -> Sort (cost=38028881.02..38097559.92 rows=27471560 width=1993)\n> Sort Key: s.doc_id\n> -> Seq Scan on svo2 s (cost=0.00..697537.60 rows=27471560\n> width=1993)\n> (9 rows)\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\n> c.sentence_id=s.sentence_id ;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------\n> Merge Join (cost=43711844.03..241541026048.10 rows=PLeaswidth=2053)\n> Merge Cond: (c.sentence_id = s.sentence_id)\n> -> Sort (cost=5596061.24..5675693.95 rows=31853084 width=64)\n> Sort Key: c.sentence_id\n> -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084\n> width=64)\n> -> Materialize (cost=38028881.02..38372275.52 rows=27471560\n> width=1993)\n> -> Sort (cost=38028881.02..38097559.92 rows=27471560 width=1993)\n> Sort Key: s.sentence_id\n> -> Seq Scan on svo2 s (cost=0.00..697537.60 rows=27471560\n> width=1993)\n> (9 rows)\n>\n> Please let me know if any other information is required.\n>\n>\n>\n>\n>\n>\n> --\n> Best Regards,\n> Adarsh Sharma\n>\n>\n>\n> The ideas is to have maximum filtering occuring on leading column of\nindex.\nthe first plan with only the predicates on clause_id is returning\n379772050555842 rows whereas\nin the second plan with doc_id predicates is returning only 20954686217.\n\nSo maybe you should consider re-ordering of the index on clause2.\n\nI am thinking that you created the indexes by looking at the columns used in\nthe where clause.\nBut its not always helpful to create indexes based on exact order of\npredicates specified in query.\nInstead the idea should be consider the predicate which is going to do\nfilter out the results.\nLikewise we should consider all possible uses of index columns across all\nqueries and then decide on the\norder of columns for the composite index to be created.\n\nWhats your take on this?\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Fri, Mar 25, 2011 at 2:25 PM, Adarsh Sharma <[email protected]> wrote:\n\n\n\n\n\n\nCould you please post output of below queries:\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\ns.doc_id=c.source_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.sentence_id=s.sentence_id ;\n\n\n\nAs per your instructions, Please  check the below output :-\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;\n                                   QUERY\nPLAN                                    \n---------------------------------------------------------------------------------\n Hash Join  (cost=7828339.10..4349603998133.96 rows=379772050555842\nwidth=2053)\n   Hash Cond: (c.clause_id = 
s.clause_id)\n   ->  Seq Scan on clause2 c  (cost=0.00..770951.84 rows=31853084\nwidth=64)\n   ->  Hash  (cost=697537.60..697537.60 rows=27471560 width=1993)\n         ->  Seq Scan on svo2 s  (cost=0.00..697537.60 rows=27471560\nwidth=1993)\n(5 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\ns.doc_id=c.source_id;\n                                      QUERY\nPLAN                                       \n---------------------------------------------------------------------------------------\n Merge Join  (cost=43635232.12..358368926.66 rows=20954686217\nwidth=2053)\n   Merge Cond: (c.source_id = s.doc_id)\n   ->  Sort  (cost=5596061.24..5675693.95 rows=31853084 width=64)\n         Sort Key: c.source_id\n         ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=64)\n   ->  Materialize  (cost=38028881.02..38372275.52 rows=27471560\nwidth=1993)\n         ->  Sort  (cost=38028881.02..38097559.92 rows=27471560\nwidth=1993)\n               Sort Key: s.doc_id\n               ->  Seq Scan on svo2 s  (cost=0.00..697537.60\nrows=27471560 width=1993)\n(9 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\nc.sentence_id=s.sentence_id ;\n                                      QUERY\nPLAN                                       \n---------------------------------------------------------------------------------------\n Merge Join  (cost=43711844.03..241541026048.10 rows=PLeaswidth=2053)\n   Merge Cond: (c.sentence_id = s.sentence_id)\n   ->  Sort  (cost=5596061.24..5675693.95 rows=31853084 width=64)\n         Sort Key: c.sentence_id\n         ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=64)\n   ->  Materialize  (cost=38028881.02..38372275.52 rows=27471560\nwidth=1993)\n         ->  Sort  (cost=38028881.02..38097559.92 rows=27471560\nwidth=1993)\n               Sort Key: s.sentence_id\n               ->  Seq Scan on svo2 s  (cost=0.00..697537.60\nrows=27471560 width=1993)\n(9 rows)\n\nPlease  let me know if any other information is required.\n\n\n\n\n\n\n-- \nBest Regards,\nAdarsh Sharma\n\n\n\n\n\nThe ideas is to have maximum filtering occuring on leading column of index.the first plan with only the predicates on clause_id is returning 379772050555842 rows whereasin the second plan with doc_id predicates is returning only 20954686217.\nSo maybe you should consider re-ordering of the index on clause2.I am thinking that you created the indexes by looking at the columns used in the where clause.But its not always helpful to create  indexes based on exact order of predicates specified in query.\n\nInstead the idea should be consider the predicate which is going to do filter out the results. 
Likewise we should consider all possible uses of index columns across all queries and then decide on the order of columns for the composite index to be created.\nWhats your take on this?-- Regards,Chetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Fri, 25 Mar 2011 14:37:36 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": ">> Merge Join (cost=5673831.05..34033959.87 rows=167324179 width=2053)\n>> Merge Cond: ((s.clause_id = c.clause_id) AND (s.doc_id =\n>> c.source_id) AND (s.sentence_id = c.sentence_id))\n>> -> Index Scan using idx_svo2 on svo2 s (cost=0.00..24489343.65\n>> rows=27471560 width=1993)\n>> -> Materialize (cost=5673828.74..6071992.29 rows=31853084\n>> width=72)\n>> -> Sort (cost=5673828.74..5753461.45 rows=31853084\n>> width=72)\n>> Sort Key: c.clause_id, c.source_id, c.sentence_id\n>> -> Seq Scan on clause2 c (cost=0.00..770951.84\n>> rows=31853084 width=72)\n\n>>\n>\n> As per the size consideration and the number of rows, I think index scan\n> on clause2 is better.\n\nI really doubt that - using index usually involves a lot of random I/O and\nthat makes slow with a lot of rows. And that's exactly this case, as there\nare 27471560 rows in the first table.\n\nYou can force the planner to use different plan by disabling merge join,\njust set\n\n set enable_mergejoin = false\n\nand see what happens. There are other similar options:\n\n http://www.postgresql.org/docs/8.4/static/runtime-config-query.html\n\nAnd yet another option - you can try to mangle with the cost constants,\nnamely seq_page_cost and random_page_cost. Decreasing random_page_cost\n(default is 4) makes index scans cheaper, so it's more likely the planner\nwill choose them.\n\nTomas\n\n", "msg_date": "Fri, 25 Mar 2011 10:30:13 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "Chetan Suttraway wrote:\n>\n>\n> On Fri, Mar 25, 2011 at 2:25 PM, Adarsh Sharma \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n>>\n>> Could you please post output of below queries:\n>> explain select c.clause, s.* from clause2 c, svo2 s where\n>> c.clause_id=s.clause_id;\n>> explain select c.clause, s.* from clause2 c, svo2 s where\n>> s.doc_id=c.source_id;\n>> explain select c.clause, s.* from clause2 c, svo2 s where\n>> c.sentence_id=s.sentence_id ;\n>\n>\n> As per your instructions, Please check the below output :-\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s\n> where c.clause_id=s.clause_id;\n> QUERY\n> PLAN \n> ---------------------------------------------------------------------------------\n> Hash Join (cost=7828339.10..4349603998133.96\n> rows=379772050555842 width=2053)\n> Hash Cond: (c.clause_id = s.clause_id)\n> -> Seq Scan on clause2 c (cost=0.00..770951.84 rows=31853084\n> width=64)\n> -> Hash (cost=697537.60..697537.60 rows=27471560 width=1993)\n> -> Seq Scan on svo2 s (cost=0.00..697537.60\n> rows=27471560 width=1993)\n> (5 rows)\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s\n> where s.doc_id=c.source_id;\n> QUERY\n> PLAN \n> ---------------------------------------------------------------------------------------\n> Merge Join (cost=43635232.12..358368926.66 rows=20954686217\n> width=2053)\n> Merge Cond: (c.source_id = s.doc_id)\n> -> Sort (cost=5596061.24..5675693.95 rows=31853084 width=64)\n> Sort Key: c.source_id\n> -> Seq Scan on clause2 c (cost=0.00..770951.84\n> 
rows=31853084 width=64)\n> -> Materialize (cost=38028881.02..38372275.52 rows=27471560\n> width=1993)\n> -> Sort (cost=38028881.02..38097559.92 rows=27471560\n> width=1993)\n> Sort Key: s.doc_id\n> -> Seq Scan on svo2 s (cost=0.00..697537.60\n> rows=27471560 width=1993)\n> (9 rows)\n>\n> pdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s\n> where c.sentence_id=s.sentence_id ;\n> QUERY\n> PLAN \n> ---------------------------------------------------------------------------------------\n> Merge Join (cost=43711844.03..241541026048.10 rows=PLeaswidth=2053)\n> Merge Cond: (c.sentence_id = s.sentence_id)\n> -> Sort (cost=5596061.24..5675693.95 rows=31853084 width=64)\n> Sort Key: c.sentence_id\n> -> Seq Scan on clause2 c (cost=0.00..770951.84\n> rows=31853084 width=64)\n> -> Materialize (cost=38028881.02..38372275.52 rows=27471560\n> width=1993)\n> -> Sort (cost=38028881.02..38097559.92 rows=27471560\n> width=1993)\n> Sort Key: s.sentence_id\n> -> Seq Scan on svo2 s (cost=0.00..697537.60\n> rows=27471560 width=1993)\n> (9 rows)\n>\n> Please let me know if any other information is required.\n>\n>\n>\n>\n>\n>>\n>> -- \n>> Best Regards,\n>> Adarsh Sharma\n>>\n>>\n>\n> The ideas is to have maximum filtering occuring on leading column of \n> index.\n> the first plan with only the predicates on clause_id is returning \n> 379772050555842 rows whereas\n> in the second plan with doc_id predicates is returning only 20954686217.\n>\n> So maybe you should consider re-ordering of the index on clause2.\n>\n> I am thinking that you created the indexes by looking at the columns \n> used in the where clause.\n> But its not always helpful to create indexes based on exact order of \n> predicates specified in query.\n> Instead the idea should be consider the predicate which is going to do \n> filter out the results.\n> Likewise we should consider all possible uses of index columns across \n> all queries and then decide on the\n> order of columns for the composite index to be created.\n>\n> Whats your take on this?\n\nI am sorry but I am not able to got your points completely.\n\nMy table definitions are as :\n\n*Clause2 Table :\n\n*CREATE TABLE clause2\n(\n id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n source_id integer,\n sentence_id integer,\n clause_id integer,\n tense character varying(30),\n clause text,\n CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n)\nWITH (\n OIDS=FALSE\n);\nCREATE INDEX idx_clause ON clause2 USING btree (clause_id, source_id, \nsentence_id);\n\n*svo2 table :*--\n\nCREATE TABLE svo2\n(\n svo_id bigint NOT NULL DEFAULT nextval('svo_svo_id_seq'::regclass),\n doc_id integer,\n sentence_id integer,\n clause_id integer,\n negation integer,\n subject character varying(3000),\n verb character varying(3000),\n \"object\" character varying(3000),\n preposition character varying(3000),\n subject_type character varying(3000),\n object_type character varying(3000),\n subject_attribute character varying(3000),\n object_attribute character varying(3000),\n verb_attribute character varying(3000),\n subject_concept character varying(100),\n object_concept character varying(100),\n subject_sense character varying(100),\n object_sense character varying(100),\n subject_chain character varying(5000),\n object_chain character varying(5000),\n sub_type_id integer,\n obj_type_id integer,\n CONSTRAINT pk_svo_demo_id PRIMARY KEY (svo_id)\n)\nWITH (\n OIDS=FALSE\n);\nCREATE INDEX idx_svo2 ON svo2 USING btree (clause_id, doc_id, \nsentence_id);\n\nPlease correct me if I m wrong.\n\nI 
need to change the order of columns in indexes according to the filter \nconditions but in this query .\n\nAfter making\n\nset enable_mergejoin = false\nand random_page_cost =2.0\n\nThe problem remains the same.\n\n\n\n\n\n\nWhat is your recommendations for the new index so that the query runs \neven faster.\n\n\nI can change my original query to :\n\nexplain analyze select \nc.clause,s.doc_id,s.subject,s.verb,s.object,s.subject_type,s.object_type \nfrom clause2 c, svo2 s where c.clause_id=s.clause_id and \ns.doc_id=c.source_id and c.sentence_id=s.sentence_id ;\n\nAnd the output is :\n\n QUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..128419720.68 rows=167324179 width=105) (actual \ntime=11.179..285708.966 rows=30473117 loops=1)\n -> Seq Scan on svo2 s (cost=0.00..697537.60 rows=27471560 width=53) \n(actual time=0.013..19554.222 rows=27471560 loops=1)\n -> Index Scan using idx_clause on clause2 c (cost=0.00..4.63 rows=1 \nwidth=72) (actual time=0.006..0.007 rows=1 loops=27471560)\n Index Cond: ((c.clause_id = s.clause_id) AND (c.source_id = \ns.doc_id) AND (c.sentence_id = s.sentence_id))\n Total runtime: 301599.274 ms\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n> Regards,\n> Chetan Suttraway\n> EnterpriseDB <http://www.enterprisedb.com/>, The Enterprise PostgreSQL \n> <http://www.enterprisedb.com/> company.\n>\n>\n>\n\n\n\n\n\n\n\nChetan Suttraway wrote:\n\n\nOn Fri, Mar 25, 2011 at 2:25 PM, Adarsh\nSharma <[email protected]>\nwrote:\n\n\n\n\n \n\nCould you please post output of below queries:\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\ns.doc_id=c.source_id;\nexplain select c.clause, s.* from clause2 c, svo2 s where\nc.sentence_id=s.sentence_id ;\n\n\n\n\nAs per your instructions, Please  check the below output :-\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\nc.clause_id=s.clause_id;\n                                   QUERY\nPLAN                                    \n---------------------------------------------------------------------------------\n Hash Join  (cost=7828339.10..4349603998133.96 rows=379772050555842\nwidth=2053)\n   Hash Cond: (c.clause_id = s.clause_id)\n   ->  Seq Scan on clause2 c  (cost=0.00..770951.84 rows=31853084\nwidth=64)\n   ->  Hash  (cost=697537.60..697537.60 rows=27471560 width=1993)\n         ->  Seq Scan on svo2 s  (cost=0.00..697537.60 rows=27471560\nwidth=1993)\n(5 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s where\ns.doc_id=c.source_id;\n                                      QUERY\nPLAN                                       \n---------------------------------------------------------------------------------------\n Merge Join  (cost=43635232.12..358368926.66 rows=20954686217\nwidth=2053)\n   Merge Cond: (c.source_id = s.doc_id)\n   ->  Sort  (cost=5596061.24..5675693.95 rows=31853084 width=64)\n         Sort Key: c.source_id\n         ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=64)\n   ->  Materialize  (cost=38028881.02..38372275.52 rows=27471560\nwidth=1993)\n         ->  Sort  (cost=38028881.02..38097559.92 rows=27471560\nwidth=1993)\n               Sort Key: s.doc_id\n               ->  Seq Scan on svo2 s  (cost=0.00..697537.60\nrows=27471560 width=1993)\n(9 rows)\n\npdc_uima=# explain select c.clause, s.* from clause2 c, svo2 s 
where\nc.sentence_id=s.sentence_id ;\n                                      QUERY\nPLAN                                       \n---------------------------------------------------------------------------------------\n Merge Join  (cost=43711844.03..241541026048.10 rows=PLeaswidth=2053)\n   Merge Cond: (c.sentence_id = s.sentence_id)\n   ->  Sort  (cost=5596061.24..5675693.95 rows=31853084 width=64)\n         Sort Key: c.sentence_id\n         ->  Seq Scan on clause2 c  (cost=0.00..770951.84\nrows=31853084 width=64)\n   ->  Materialize  (cost=38028881.02..38372275.52 rows=27471560\nwidth=1993)\n         ->  Sort  (cost=38028881.02..38097559.92 rows=27471560\nwidth=1993)\n               Sort Key: s.sentence_id\n               ->  Seq Scan on svo2 s  (cost=0.00..697537.60\nrows=27471560 width=1993)\n(9 rows)\n\nPlease  let me know if any other information is required.\n\n\n\n\n\n\n-- \nBest Regards,\nAdarsh Sharma\n\n\n\n\n\n\n\nThe ideas is to have maximum filtering occuring on leading column of\nindex.\nthe first plan with only the predicates on clause_id is returning\n379772050555842 rows whereas\nin the second plan with doc_id predicates is returning only 20954686217.\n\nSo maybe you should consider re-ordering of the index on clause2.\n\nI am thinking that you created the indexes by looking at the columns\nused in the where clause.\nBut its not always helpful to create  indexes based on exact order of\npredicates specified in query.\nInstead the idea should be consider the predicate which is going to do\nfilter out the results. \nLikewise we should consider all possible uses of index columns across\nall queries and then decide on the \norder of columns for the composite index to be created.\n\nWhats your take on this?\n\n\nI am sorry but I am not able to got your points completely.\n\nMy table definitions are as :\n\nClause2 Table :\n\nCREATE TABLE clause2\n(\n  id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n  source_id integer,\n  sentence_id integer,\n  clause_id integer,\n  tense character varying(30),\n  clause text,\n  CONSTRAINT pk_clause_demo_id PRIMARY KEY (id)\n)\nWITH (\n  OIDS=FALSE\n);\nCREATE INDEX idx_clause  ON clause2  USING btree  (clause_id,\nsource_id, sentence_id);\n\nsvo2 table :-- \n\nCREATE TABLE svo2\n(\n  svo_id bigint NOT NULL DEFAULT nextval('svo_svo_id_seq'::regclass),\n  doc_id integer,\n  sentence_id integer,\n  clause_id integer,\n  negation integer,\n  subject character varying(3000),\n  verb character varying(3000),\n  \"object\" character varying(3000),\n  preposition character varying(3000),\n  subject_type character varying(3000),\n  object_type character varying(3000),\n  subject_attribute character varying(3000),\n  object_attribute character varying(3000),\n  verb_attribute character varying(3000),\n  subject_concept character varying(100),\n  object_concept character varying(100),\n  subject_sense character varying(100),\n  object_sense character varying(100),\n  subject_chain character varying(5000),\n  object_chain character varying(5000),\n  sub_type_id integer,\n  obj_type_id integer,\n  CONSTRAINT pk_svo_demo_id PRIMARY KEY (svo_id)\n)\nWITH (\n  OIDS=FALSE\n);\nCREATE INDEX idx_svo2  ON svo2  USING btree  (clause_id, doc_id,\nsentence_id);\n\nPlease correct me if I m wrong.\n\nI need to change the order of columns in indexes according to the\nfilter conditions but in this query .\n\nAfter making \nset enable_mergejoin = false\nand random_page_cost =2.0\n\nThe problem remains the same.\n\n\n\n\n\n\nWhat is your recommendations 
for the new index so that the query runs\neven faster.\n\n\nI can change my original query to :\n\nexplain analyze select\nc.clause,s.doc_id,s.subject,s.verb,s.object,s.subject_type,s.object_type\nfrom clause2 c, svo2 s where c.clause_id=s.clause_id and\ns.doc_id=c.source_id and c.sentence_id=s.sentence_id ;\n\nAnd the output is :\n\n                                                             QUERY\nPLAN                                                             \n------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..128419720.68 rows=167324179 width=105)\n(actual time=11.179..285708.966 rows=30473117 loops=1)\n   ->  Seq Scan on svo2 s  (cost=0.00..697537.60 rows=27471560\nwidth=53) (actual time=0.013..19554.222 rows=27471560 loops=1)\n   ->  Index Scan using idx_clause on clause2 c  (cost=0.00..4.63\nrows=1 width=72) (actual time=0.006..0.007 rows=1 loops=27471560)\n         Index Cond: ((c.clause_id = s.clause_id) AND (c.source_id =\ns.doc_id) AND (c.sentence_id = s.sentence_id))\n Total runtime: 301599.274 ms\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\nRegards,\nChetan Suttraway\nEnterpriseDB, The Enterprise\nPostgreSQL company.", "msg_date": "Fri, 25 Mar 2011 15:23:24 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "On 03/25/2011 04:07 AM, Chetan Suttraway wrote:\n\n> The ideas is to have maximum filtering occuring on leading column of index.\n> the first plan with only the predicates on clause_id is returning\n> 379772050555842 rows whereas\n> in the second plan with doc_id predicates is returning only 20954686217.\n>\n> So maybe you should consider re-ordering of the index on clause2.\n\nThat won't really help him. He's joining a 27M row table against a 31M \nrow table with basically no WHERE clause. We can see that because he's \ngetting 30M rows back in the EXPLAIN ANALYZE. At that point, it doesn't \nreally matter which table gets index scanned. This query will *always* \ntake several minutes to execute.\n\nIt would be completely different if he only wanted to get the results \nfor *one* source. Or *one* sentence. But getting all of them ever stored \nwill just take forever.\n\n> I am sorry but I am not able to got your points completely.\n\nHe just means that indexes work better if they're placed in order of \nselectivity. In your case, it seems sentence_id restricts the result set \nbetter than clause_id. So Chetan suggested remaking your indexes to be \nthis instead:\n\nCREATE INDEX idx_clause ON clause2\n USING btree (sentence_id, clause_id, source_id);\n\nCREATE INDEX idx_svo2 ON svo2\n USING btree (sentence_id, clause_id, doc_id);\n\nThis *might* help. But your fundamental problem is that you're joining \ntwo giant tables with no clause to limit the result set. If you were \nonly getting back 10,000 rows, or even a million rows, your query could \nexecute in a fraction of the time. But joining every row in both tables \nand returning a 30-million row result set isn't going to be fun for \nanyone. Are you actually processing all 30-million rows you get back? \nStoring them somewhere?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 25 Mar 2011 08:24:55 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "To expand on what Shaun said:\n\n> But your fundamental problem is that you're joining two\n> giant tables with no clause to limit the result set. If you were only\n> getting back 10,000 rows, or even a million rows, your query could execute\n> in a fraction of the time. But joining every row in both tables and\n> returning a 30-million row result set isn't going to be fun for anyone.\n\nIndexes aren't a magical performance fairy dust. An index gives you a\nway to look up a single row directly (you can't do that with a scan),\nbut it's a terrible way to look up 90% (or even 50%) of the rows in a\ntable, because the per-row cost of lookup is actually higher than in a\nscan. That is, once you need to look up more than a certain percentage\nof rows in a table, it's actually cheaper to scan it and ignore what\nyou don't care about rather than going through the index for each row.\nIt looks like your query is hitting this situation.\n\nTry turning off the merge join, as Tomas suggested, to validate the\nassumption that using the index would actually be worse.\n\nTo resolve your problem, you shouldn't be trying to make the planner\npick a better plan, you should optimize your settings to get this plan\nto perform better or (ideally) optimize your application so you don't\nneed such an expensive query (because the fundamental problem is that\nthis query is inherently expensive).\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Fri, 25 Mar 2011 09:49:31 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" }, { "msg_contents": "On 03/25/2011 12:49 PM, Maciek Sakrejda wrote:\n> Indexes aren't a magical performance fairy dust.\n\n\nOne day I intend to use this line for the title of a presentation \nslide. Maybe the title of the whole talk.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 30 Mar 2011 03:55:12 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why Index is not used" } ]
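To make the advice in the replies above concrete: the join only becomes cheap when something restricts it to a small slice of the data, and the restricting column needs to lead the index. Below is a sketch against the clause2/svo2 definitions quoted in the thread; the index name idx_svo2_doc and the literal doc_id value 42 are invented for illustration only and do not come from the original posts.

-- Hypothetical index led by the filtering column (doc_id), per the
-- column-ordering suggestion in the thread.
CREATE INDEX idx_svo2_doc ON svo2 USING btree (doc_id, clause_id, sentence_id);

-- Same join as in the thread, but limited to one document instead of
-- the whole 27M x 31M row join.
EXPLAIN ANALYZE
SELECT c.clause, s.doc_id, s.subject, s.verb, s.object,
       s.subject_type, s.object_type
  FROM clause2 c
  JOIN svo2 s
    ON s.clause_id   = c.clause_id
   AND s.doc_id      = c.source_id
   AND s.sentence_id = c.sentence_id
 WHERE s.doc_id = 42;

With a filter like this the planner can walk idx_svo2_doc to fetch only the rows for that one document and then probe the existing idx_clause (clause_id, source_id, sentence_id) for the matching clauses, rather than scanning and joining all ~27M and ~31M rows as the unfiltered query must.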
[ { "msg_contents": "Today is the launch of Intel's 3rd generation SSD line, the 320 series. \nAnd they've finally produced a cheap consumer product that may be useful \nfor databases, too! They've put 6 small capacitors onto the board and \nadded logic to flush the write cache if the power drops. The cache on \nthese was never very big, so they were able to avoid needing one of the \nbig super-capacitors instead. Having 6 little ones is probably a net \nreliability win over the single point of failure, too.\n\nPerformance is only a little better than earlier generation designs, \nwhich means they're still behind the OCZ Vertex controllers that have \nbeen recommended on this list. I haven't really been hearing good \nthings about long-term reliability of OCZ's designs anyway, so glad to \nhave an alternative. *Important*: don't buy SSD for important data \nwithout also having a good redundancy/backup plan. As relatively new \ntechnology they do still have a pretty high failure rate. Make sure you \nbudget for two drives and make multiple copies of your data.\n\nAnyway, the new Intel drivers fast enough for most things, though, and \nare going to be very inexpensive. See \nhttp://www.storagereview.com/intel_ssd_320_review_300gb for some \nsimulated database tests. There's more about the internals at \nhttp://www.anandtech.com/show/4244/intel-ssd-320-review and the white \npaper about the capacitors is at \nhttp://newsroom.intel.com/servlet/JiveServlet/download/38-4324/Intel_SSD_320_Series_Enhance_Power_Loss_Technology_Brief.pdf\n\nSome may still find these two cheap for enterprise use, given the use of \nMLC limits how much activity these drives can handle. But it's great to \nhave a new option for lower budget system that can tolerate some risk there.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 28 Mar 2011 16:21:10 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Intel SSDs that may not suck" }, { "msg_contents": "This might be a bit too little too late though. As you mentioned there really isn't any real performance improvement for the Intel SSD. Meanwhile, SandForce (the controller that OCZ Vertex is based on) is releasing its next generation controller at a reportedly huge performance increase.\n\nIs there any benchmark measuring the performance of these SSD's (the new Intel vs. the new SandForce) running database workloads? The benchmarks I've seen so far are for desktop applications.\n\nAndy\n\n--- On Mon, 3/28/11, Greg Smith <[email protected]> wrote:\n\n> From: Greg Smith <[email protected]>\n> Subject: [PERFORM] Intel SSDs that may not suck\n> To: \"[email protected]\" <[email protected]>\n> Date: Monday, March 28, 2011, 4:21 PM\n> Today is the launch of Intel's 3rd\n> generation SSD line, the 320 series.  And they've\n> finally produced a cheap consumer product that may be useful\n> for databases, too!  They've put 6 small capacitors\n> onto the board and added logic to flush the write cache if\n> the power drops.  The cache on these was never very\n> big, so they were able to avoid needing one of the big\n> super-capacitors instead.  
Having 6 little ones is\n> probably a net reliability win over the single point of\n> failure, too.\n> \n> Performance is only a little better than earlier generation\n> designs, which means they're still behind the OCZ Vertex\n> controllers that have been recommended on this list.  I\n> haven't really been hearing good things about long-term\n> reliability of OCZ's designs anyway, so glad to have an\n> alternative.  *Important*:  don't buy SSD for\n> important data without also having a good redundancy/backup\n> plan.  As relatively new technology they do still have\n> a pretty high failure rate.  Make sure you budget for\n> two drives and make multiple copies of your data.\n> \n> Anyway, the new Intel drivers fast enough for most things,\n> though, and are going to be very inexpensive.  See http://www.storagereview.com/intel_ssd_320_review_300gb\n> for some simulated database tests.  There's more about\n> the internals at http://www.anandtech.com/show/4244/intel-ssd-320-review\n> and the white paper about the capacitors is at http://newsroom.intel.com/servlet/JiveServlet/download/38-4324/Intel_SSD_320_Series_Enhance_Power_Loss_Technology_Brief.pdf\n> \n> Some may still find these two cheap for enterprise use,\n> given the use of MLC limits how much activity these drives\n> can handle.  But it's great to have a new option for\n> lower budget system that can tolerate some risk there.\n> \n> -- Greg Smith   2ndQuadrant US   \n> [email protected]   Baltimore,\n> MD\n> PostgreSQL Training, Services, and 24x7 Support \n> www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n> \n> \n> -- Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Mon, 28 Mar 2011 16:54:50 -0700 (PDT)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "The potential breakthrough here with the 320 is consumer grade SSD\nperformance and price paired with high reliability.\n\nOn Mon, Mar 28, 2011 at 7:54 PM, Andy <[email protected]> wrote:\n> This might be a bit too little too late though. As you mentioned there really isn't any real performance improvement for the Intel SSD. Meanwhile, SandForce (the controller that OCZ Vertex is based on) is releasing its next generation controller at a reportedly huge performance increase.\n>\n> Is there any benchmark measuring the performance of these SSD's (the new Intel vs. the new SandForce) running database workloads? The benchmarks I've seen so far are for desktop applications.\n>\n> Andy\n>\n> --- On Mon, 3/28/11, Greg Smith <[email protected]> wrote:\n>\n>> From: Greg Smith <[email protected]>\n>> Subject: [PERFORM] Intel SSDs that may not suck\n>> To: \"[email protected]\" <[email protected]>\n>> Date: Monday, March 28, 2011, 4:21 PM\n>> Today is the launch of Intel's 3rd\n>> generation SSD line, the 320 series.  And they've\n>> finally produced a cheap consumer product that may be useful\n>> for databases, too!  They've put 6 small capacitors\n>> onto the board and added logic to flush the write cache if\n>> the power drops.  The cache on these was never very\n>> big, so they were able to avoid needing one of the big\n>> super-capacitors instead.  
Having 6 little ones is\n>> probably a net reliability win over the single point of\n>> failure, too.\n>>\n>> Performance is only a little better than earlier generation\n>> designs, which means they're still behind the OCZ Vertex\n>> controllers that have been recommended on this list.  I\n>> haven't really been hearing good things about long-term\n>> reliability of OCZ's designs anyway, so glad to have an\n>> alternative.  *Important*:  don't buy SSD for\n>> important data without also having a good redundancy/backup\n>> plan.  As relatively new technology they do still have\n>> a pretty high failure rate.  Make sure you budget for\n>> two drives and make multiple copies of your data.\n>>\n>> Anyway, the new Intel drivers fast enough for most things,\n>> though, and are going to be very inexpensive.  See http://www.storagereview.com/intel_ssd_320_review_300gb\n>> for some simulated database tests.  There's more about\n>> the internals at http://www.anandtech.com/show/4244/intel-ssd-320-review\n>> and the white paper about the capacitors is at http://newsroom.intel.com/servlet/JiveServlet/download/38-4324/Intel_SSD_320_Series_Enhance_Power_Loss_Technology_Brief.pdf\n>>\n>> Some may still find these two cheap for enterprise use,\n>> given the use of MLC limits how much activity these drives\n>> can handle.  But it's great to have a new option for\n>> lower budget system that can tolerate some risk there.\n>>\n>> -- Greg Smith   2ndQuadrant US\n>> [email protected]   Baltimore,\n>> MD\n>> PostgreSQL Training, Services, and 24x7 Support\n>> www.2ndQuadrant.us\n>> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>>\n>>\n>> -- Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 28 Mar 2011 21:42:23 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On Mon, Mar 28, 2011 at 7:54 PM, Andy <[email protected]> wrote:\n> This might be a bit too little too late though. As you mentioned there really isn't any real performance improvement for the Intel SSD. Meanwhile, SandForce (the controller that OCZ Vertex is based on) is releasing its next generation controller at a reportedly huge performance increase.\n>\n> Is there any benchmark measuring the performance of these SSD's (the new Intel vs. the new SandForce) running database workloads? The benchmarks I've seen so far are for desktop applications.\n\nThe random performance data is usually a rough benchmark. The\nsequential numbers are mostly useless and always have been. The\nperformance of either the ocz or intel drive is so disgustingly fast\ncompared to a hard drives that the main stumbling block is life span\nand write endurance now that they are starting to get capactiors.\n\nMy own experience with MLC drives is that write cycle expectations are\nmore or less as advertised. They do go down (hard), and have to be\nmonitored. If you are writing a lot of data this can get pretty\nexpensive although the cost dynamics are getting better and better for\nflash. 
I have no idea what would be precisely prudent, but maybe some\ngood monitoring tools and phased obsolescence at around 80% duty cycle\nmight not be a bad starting point. With hard drives, you can kinda\nwait for em to pop and swap em in -- this is NOT a good idea for flash\nraid volumes.\n\nmerlin\n", "msg_date": "Tue, 29 Mar 2011 00:13:45 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 2011-03-29 06:13, Merlin Moncure wrote:\n> My own experience with MLC drives is that write cycle expectations are\n> more or less as advertised. They do go down (hard), and have to be\n> monitored. If you are writing a lot of data this can get pretty\n> expensive although the cost dynamics are getting better and better for\n> flash. I have no idea what would be precisely prudent, but maybe some\n> good monitoring tools and phased obsolescence at around 80% duty cycle\n> might not be a bad starting point. With hard drives, you can kinda\n> wait for em to pop and swap em in -- this is NOT a good idea for flash\n> raid volumes.\nWhat do you mean by \"hard\", I have some in our setup, but\nhavent seen anyting \"hard\" just yet. Based on report on the net\nthey seem to slow down writes to \"next to nothing\" when they\nget used but that seems to be more gracefully than old\nrotating drives.. can you elaborate a bit more?\n\nJesper\n\n-- \nJesper\n", "msg_date": "Tue, 29 Mar 2011 06:55:37 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On Mon, Mar 28, 2011 at 10:55 PM, Jesper Krogh <[email protected]> wrote:\n> On 2011-03-29 06:13, Merlin Moncure wrote:\n>>\n>> My own experience with MLC drives is that write cycle expectations are\n>> more or less as advertised. They do go down (hard), and have to be\n>> monitored. If you are writing a lot of data this can get pretty\n>> expensive although the cost dynamics are getting better and better for\n>> flash. I have no idea what would be precisely prudent, but maybe some\n>> good monitoring tools and phased obsolescence at around 80% duty cycle\n>> might not be a bad starting point.  With hard drives, you can kinda\n>> wait for em to pop and swap em in -- this is NOT a good idea for flash\n>> raid volumes.\n>\n> What do you mean by \"hard\", I have some in our setup, but\n> havent seen anyting \"hard\" just yet. Based on report on the net\n> they seem to slow down writes to \"next to nothing\" when they\n> get used but that seems to be more gracefully than old\n> rotating drives..  can you elaborate a bit more?\n\nMy understanding is that without running trim commands and such, they\nbecome fragmented and slower. But, when they start running out of\nwrite cycles they just die. I.e. they go down hard.\n", "msg_date": "Mon, 28 Mar 2011 23:02:01 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "Hello Greg, list,\n\nOn 2011-03-28 22:21, Greg Smith wrote:\n> Today is the launch of Intel's 3rd generation SSD line, the 320 \n> series. And they've finally produced a cheap consumer product that \n> may be useful for databases, too! They've put 6 small capacitors onto \n> the board and added logic to flush the write cache if the power \n> drops. The cache on these was never very big, so they were able to \n> avoid needing one of the big super-capacitors instead. 
Having 6 \n> little ones is probably a net reliability win over the single point of \n> failure, too.\n>\n> Performance is only a little better than earlier generation designs, \n> which means they're still behind the OCZ Vertex controllers that have \n> been recommended on this list. I haven't really been hearing good \n> things about long-term reliability of OCZ's designs anyway, so glad to \n> have an alternative. *Important*: don't buy SSD for important data \n> without also having a good redundancy/backup plan. As relatively new \n> technology they do still have a pretty high failure rate. Make sure \n> you budget for two drives and make multiple copies of your data.\n>\n> Anyway, the new Intel drivers fast enough for most things, though, and \n> are going to be very inexpensive. See \n> http://www.storagereview.com/intel_ssd_320_review_300gb for some \n> simulated database tests. There's more about the internals at \n> http://www.anandtech.com/show/4244/intel-ssd-320-review and the white \n> paper about the capacitors is at \n> http://newsroom.intel.com/servlet/JiveServlet/download/38-4324/Intel_SSD_320_Series_Enhance_Power_Loss_Technology_Brief.pdf\n>\n> Some may still find these two cheap for enterprise use, given the use \n> of MLC limits how much activity these drives can handle. But it's \n> great to have a new option for lower budget system that can tolerate \n> some risk there.\n>\nWhile I appreciate the heads up about these new drives, your posting \nsuggests (though you formulated in a way that you do not actually say \nit) that OCZ products do not have a long term reliability. No factual \ndata. If you have knowledge of sandforce based OCZ drives fail, that'd \nbe interesting because that's the product line what the new Intel SSD \nought to be compared with. From my POV I've verified that the sandforce \nbased OCZ drives operate as they should (w.r.t. barriers/write through) \nand I've reported what and how that testing was done (where I really \nappreciated your help with) - \nhttp://archives.postgresql.org/pgsql-performance/2010-07/msg00449.php.\n\nThe three drives we're using in a development environment right now \nreport (with recent SSD firmwares and smartmontools) their health status \nincluding the supercap status as well as reserved blocks and a lot more \ninfo, that can be used to monitor when it's about to be dead. Since none \nof the drives have failed yet, or are in the vicinity of their end of \nlife predictions, it is currently unknown if this health status is \nreliable. It may be, but may as well not be. Therefore I'm very \ninterested in hearing hard facts about failures and the smart readings \nright before that.\n\nBelow are smart readings from two Vertex 2 Pro's, the first is the same \nI did the testing with earlier. You can see it's lifetime reads/writes \nas well as unexpected power loss count is larger than the other, newer \none. The FAILING_NOW of available reserved space is an artefact of \nsmartmontools db that has its threshold wrong: it should be read as Gb's \nreserved space, and I suspect for a new drive it might be in the order \nof 18 or 20.\n\nIt's hard to compare with spindles: I've seen them fail in all sorts of \nways, but as of yet I've seen no SSD failure yet. I'm inclined to start \na perpetual pgbench on one ssd with monitoring of smart stats to see if \nwhat they report is really a good indicator of their lifetime. 
If that \nis so I'm beginning to believe then this technology is better in failure \npredictability than spindles, which pretty much seems at random when you \nhave large arrays.\n\nModel I tested with earlier:\n\n=== START OF INFORMATION SECTION ===\nModel Family: SandForce Driven SSDs\nDevice Model: OCZ VERTEX2-PRO\nSerial Number: OCZ-BVW101PBN8Q8H8M5\nLU WWN Device Id: 5 e83a97 f88e46007\nFirmware Version: 1.32\nUser Capacity: 50,020,540,416 bytes\nDevice is: In smartctl database [for details use: -P show]\nATA Version is: 8\nATA Standard is: ATA-8-ACS revision 6\nLocal Time is: Tue Mar 29 11:25:04 2011 CEST\nSMART support is: Available - device has SMART capability.\nSMART support is: Enabled\n\n=== START OF READ SMART DATA SECTION ===\nSMART overall-health self-assessment test result: PASSED\nSee vendor-specific Attribute list for marginal Attributes.\n\nGeneral SMART Values:\nOffline data collection status: (0x00) Offline data collection activity\n was never started.\n Auto Offline Data Collection: \nDisabled.\nSelf-test execution status: ( 0) The previous self-test routine \ncompleted\n without error or no self-test \nhas ever\n been run.\nTotal time to complete Offline\ndata collection: ( 0) seconds.\nOffline data collection\ncapabilities: (0x7f) SMART execute Offline immediate.\n Auto Offline data collection \non/off support.\n Abort Offline collection upon new\n command.\n Offline surface scan supported.\n Self-test supported.\n Conveyance Self-test supported.\n Selective Self-test supported.\nSMART capabilities: (0x0003) Saves SMART data before entering\n power-saving mode.\n Supports SMART auto save timer.\nError logging capability: (0x01) Error logging supported.\n General Purpose Logging supported.\nShort self-test routine\nrecommended polling time: ( 1) minutes.\nExtended self-test routine\nrecommended polling time: ( 5) minutes.\nConveyance self-test routine\nrecommended polling time: ( 2) minutes.\nSCT capabilities: (0x003d) SCT Status supported.\n SCT Error Recovery Control \nsupported.\n SCT Feature Control supported.\n SCT Data Table supported.\n\nSMART Attributes Data Structure revision number: 10\nVendor Specific SMART Attributes with Thresholds:\nID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE \nUPDATED WHEN_FAILED RAW_VALUE\n 1 Raw_Read_Error_Rate 0x000f 120 120 050 Pre-fail \nAlways - 0/0\n 5 Retired_Block_Count 0x0033 100 100 003 Pre-fail \nAlways - 0\n 9 Power_On_Hours_and_Msec 0x0032 100 100 000 Old_age \nAlways - 965h+05m+20.870s\n 12 Power_Cycle_Count 0x0032 100 100 000 Old_age \nAlways - 234\n 13 Soft_Read_Error_Rate 0x000a 120 120 000 Old_age \nAlways - 752/0\n100 Gigabytes_Erased 0x0032 000 000 000 Old_age \nAlways - 1152\n170 Reserve_Block_Count 0x0032 000 000 000 Old_age \nAlways - 17024\n171 Program_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n172 Erase_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n174 Unexpect_Power_Loss_Ct 0x0030 000 000 000 Old_age \nOffline - 50\n177 Wear_Range_Delta 0x0000 000 000 --- Old_age \nOffline - 0\n181 Program_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n182 Erase_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n184 IO_Error_Detect_Code_Ct 0x0032 100 100 090 Old_age \nAlways - 0\n187 Reported_Uncorrect 0x0032 100 100 000 Old_age \nAlways - 0\n194 Temperature_Celsius 0x0022 032 031 000 Old_age \nAlways - 32 (0 0 0 31)\n195 ECC_Uncorr_Error_Count 0x001c 120 120 000 Old_age \nOffline - 0/0\n196 Reallocated_Event_Count 0x0033 100 100 003 Pre-fail \nAlways - 0\n198 Uncorrectable_Sector_Ct 0x0010 120 120 
000 Old_age \nOffline - 0x000000000000\n199 SATA_CRC_Error_Count 0x003e 200 200 000 Old_age \nAlways - 0\n201 Unc_Soft_Read_Err_Rate 0x001c 120 120 000 Old_age \nOffline - 0/0\n204 Soft_ECC_Correct_Rate 0x001c 120 120 000 Old_age \nOffline - 0/0\n230 Life_Curve_Status 0x0013 100 100 000 Pre-fail \nAlways - 100\n231 SSD_Life_Left 0x0013 100 100 010 Pre-fail \nAlways - 0\n232 Available_Reservd_Space 0x0000 000 000 010 Old_age \nOffline FAILING_NOW 16\n233 SandForce_Internal 0x0000 000 000 000 Old_age \nOffline - 1088\n234 SandForce_Internal 0x0032 000 000 000 Old_age \nAlways - 6592\n235 SuperCap_Health 0x0033 100 100 001 Pre-fail \nAlways - 0\n241 Lifetime_Writes_GiB 0x0032 000 000 000 Old_age \nAlways - 6592\n242 Lifetime_Reads_GiB 0x0032 000 000 000 Old_age \nAlways - 3200\n\nSMART Error Log not supported\nSMART Self-test Log not supported\nSMART Selective self-test log data structure revision number 1\n SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS\n 1 0 0 Not_testing\n 2 0 0 Not_testing\n 3 0 0 Not_testing\n 4 0 0 Not_testing\n 5 0 0 Not_testing\nSelective self-test flags (0x0):\n After scanning selected spans, do NOT read-scan remainder of disk.\nIf Selective self-test is pending on power-up, resume after 0 minute delay.\n\n\nRelatively new model:\n\n=== START OF INFORMATION SECTION ===\nModel Family: SandForce Driven SSDs\nDevice Model: OCZ-VERTEX2 PRO\nSerial Number: OCZ-7AVL07UM37FP45U1\nLU WWN Device Id: 5 e83a97 f83e6388d\nFirmware Version: 1.32\nUser Capacity: 50,020,540,416 bytes\nDevice is: In smartctl database [for details use: -P show]\nATA Version is: 8\nATA Standard is: ATA-8-ACS revision 6\nLocal Time is: Tue Mar 29 11:34:28 2011 CEST\nSMART support is: Available - device has SMART capability.\nSMART support is: Enabled\n\n=== START OF READ SMART DATA SECTION ===\nSMART overall-health self-assessment test result: PASSED\nSee vendor-specific Attribute list for marginal Attributes.\n\nGeneral SMART Values:\nOffline data collection status: (0x00) Offline data collection activity\n was never started.\n Auto Offline Data Collection: \nDisabled.\nSelf-test execution status: ( 0) The previous self-test routine \ncompleted\n without error or no self-test \nhas ever\n been run.\nTotal time to complete Offline\ndata collection: ( 0) seconds.\nOffline data collection\ncapabilities: (0x7f) SMART execute Offline immediate.\n Auto Offline data collection \non/off support.\n Abort Offline collection upon new\n command.\n Offline surface scan supported.\n Self-test supported.\n Conveyance Self-test supported.\n Selective Self-test supported.\nSMART capabilities: (0x0003) Saves SMART data before entering\n power-saving mode.\n Supports SMART auto save timer.\nError logging capability: (0x01) Error logging supported.\n General Purpose Logging supported.\nShort self-test routine\nrecommended polling time: ( 1) minutes.\nExtended self-test routine\nrecommended polling time: ( 5) minutes.\nConveyance self-test routine\nrecommended polling time: ( 2) minutes.\nSCT capabilities: (0x003d) SCT Status supported.\n SCT Error Recovery Control \nsupported.\n SCT Feature Control supported.\n SCT Data Table supported.\n\nSMART Attributes Data Structure revision number: 10\nVendor Specific SMART Attributes with Thresholds:\nID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE \nUPDATED WHEN_FAILED RAW_VALUE\n 1 Raw_Read_Error_Rate 0x000f 120 120 050 Pre-fail \nAlways - 0/0\n 5 Retired_Block_Count 0x0033 100 100 003 Pre-fail \nAlways - 0\n 9 Power_On_Hours_and_Msec 0x0032 100 100 000 Old_age \nAlways - 
452h+19m+31.020s\n 12 Power_Cycle_Count 0x0032 100 100 000 Old_age \nAlways - 64\n 13 Soft_Read_Error_Rate 0x000a 120 120 000 Old_age \nAlways - 3067/0\n100 Gigabytes_Erased 0x0032 000 000 000 Old_age \nAlways - 128\n170 Reserve_Block_Count 0x0032 000 000 000 Old_age \nAlways - 17440\n171 Program_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n172 Erase_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n174 Unexpect_Power_Loss_Ct 0x0030 000 000 000 Old_age \nOffline - 16\n177 Wear_Range_Delta 0x0000 000 000 --- Old_age \nOffline - 0\n181 Program_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n182 Erase_Fail_Count 0x0032 000 000 000 Old_age \nAlways - 0\n184 IO_Error_Detect_Code_Ct 0x0032 100 100 090 Old_age \nAlways - 0\n187 Reported_Uncorrect 0x0032 100 100 000 Old_age \nAlways - 0\n194 Temperature_Celsius 0x0022 032 032 000 Old_age \nAlways - 32 (Min/Max 0/32)\n195 ECC_Uncorr_Error_Count 0x001c 120 120 000 Old_age \nOffline - 0/0\n196 Reallocated_Event_Count 0x0033 100 100 003 Pre-fail \nAlways - 0\n198 Uncorrectable_Sector_Ct 0x0010 120 120 000 Old_age \nOffline - 0x000000000000\n199 SATA_CRC_Error_Count 0x003e 200 200 000 Old_age \nAlways - 0\n201 Unc_Soft_Read_Err_Rate 0x001c 120 120 000 Old_age \nOffline - 0/0\n204 Soft_ECC_Correct_Rate 0x001c 120 120 000 Old_age \nOffline - 0/0\n230 Life_Curve_Status 0x0013 100 100 000 Pre-fail \nAlways - 100\n231 SSD_Life_Left 0x0013 100 100 010 Pre-fail \nAlways - 0\n232 Available_Reservd_Space 0x0000 000 000 010 Old_age \nOffline FAILING_NOW 17\n233 SandForce_Internal 0x0000 000 000 000 Old_age \nOffline - 128\n234 SandForce_Internal 0x0032 000 000 000 Old_age \nAlways - 448\n235 SuperCap_Health 0x0033 100 100 010 Pre-fail \nAlways - 0\n241 Lifetime_Writes_GiB 0x0032 000 000 000 Old_age \nAlways - 448\n242 Lifetime_Reads_GiB 0x0032 000 000 000 Old_age \nAlways - 192\n\nSMART Error Log not supported\nSMART Self-test Log not supported\nSMART Selective self-test log data structure revision number 1\n SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS\n 1 0 0 Not_testing\n 2 0 0 Not_testing\n 3 0 0 Not_testing\n 4 0 0 Not_testing\n 5 0 0 Not_testing\nSelective self-test flags (0x0):\n After scanning selected spans, do NOT read-scan remainder of disk.\nIf Selective self-test is pending on power-up, resume after 0 minute delay.\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n", "msg_date": "Tue, 29 Mar 2011 12:34:08 +0200", "msg_from": "Yeb Havinga <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\nOn Mar 29, 2011, at 12:13 AM, Merlin Moncure wrote:\n\n>\n> My own experience with MLC drives is that write cycle expectations are\n> more or less as advertised. They do go down (hard), and have to be\n> monitored. If you are writing a lot of data this can get pretty\n> expensive although the cost dynamics are getting better and better for\n> flash. I have no idea what would be precisely prudent, but maybe some\n> good monitoring tools and phased obsolescence at around 80% duty cycle\n> might not be a bad starting point. With hard drives, you can kinda\n> wait for em to pop and swap em in -- this is NOT a good idea for flash\n> raid volumes.\n\n\n\nwe've been running some of our DB's on SSD's (x25m's, we also have a \npair of x25e's in another box we use for some super hot tables). 
They \nhave been in production for well over a year (in some cases, nearly a \ncouple years) under heavy load.\n\nWe're currently being bit in the ass by performance degradation and \nwe're working out plans to remedy the situation. One box has 8 x25m's \nin a R10 behind a P400 controller. First, the p400 is not that \npowerful and we've run experiments with newer (p812) controllers that \nhave been generally positive. The main symptom we've been seeing is \nwrite stalls. Writing will go, then come to a complete halt for 0.5-2 \nseconds, then resume. The fix we're going to do is replace each \ndrive in order with the rebuild occuring between each. Then we do a \nsecurity erase to reset the drive back to completely empty (including \nthe \"spare\" blocks kept around for writes).\n\nNow that all sounds awful and horrible until you get to overall \nperformance, especially with reads - you are looking at 20k random \nreads per second with a few disks. Adding in writes does kick it down \na noch, but you're still looking at 10k+ iops. That is the current \ntrade off.\n\nIn general, i wouldn't recommend the cciss stuff with SSD's at this \ntime because it makes some things such as security erase, smart and \nother things near impossible. (performance seems ok though) We've got \nsome tests planned seeing what we can do with an Areca controller and \nsome ssds to see how it goes.\n\nAlso note that there is a funky interaction with an MSA70 and SSDs. \nthey do not work together. (I'm not sure if HP's official branded \nssd's have the same issue).\n\nThe write degradation could probably be monitored looking at svctime \nfrom sar. We may be implementing that in the near future to detect \nwhen this creeps up again.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Tue, 29 Mar 2011 10:16:51 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "2011/3/29 Jeff <[email protected]>:\n>\n> On Mar 29, 2011, at 12:13 AM, Merlin Moncure wrote:\n>\n>>\n>> My own experience with MLC drives is that write cycle expectations are\n>> more or less as advertised. They do go down (hard), and have to be\n>> monitored. If you are writing a lot of data this can get pretty\n>> expensive although the cost dynamics are getting better and better for\n>> flash. I have no idea what would be precisely prudent, but maybe some\n>> good monitoring tools and phased obsolescence at around 80% duty cycle\n>> might not be a bad starting point.  With hard drives, you can kinda\n>> wait for em to pop and swap em in -- this is NOT a good idea for flash\n>> raid volumes.\n>\n>\n>\n> we've been running some of our DB's on SSD's (x25m's, we also have a pair of\n> x25e's in another box we use for some super hot tables).  They have been in\n> production for well over a year (in some cases, nearly a couple years) under\n> heavy load.\n>\n> We're currently being bit in the ass by performance degradation and we're\n> working out plans to remedy the situation.  One box has 8 x25m's in a R10\n> behind a P400 controller.  First, the p400 is not that powerful and we've\n> run experiments with newer (p812) controllers that have been generally\n> positive.   The main symptom we've been seeing is write stalls.  Writing\n> will go, then come to a complete halt for 0.5-2 seconds, then resume.   
The\n> fix we're going to do is replace each drive in order with the rebuild\n> occuring between each.  Then we do a security erase to reset the drive back\n> to completely empty (including the \"spare\" blocks kept around for writes).\n>\n> Now that all sounds awful and horrible until you get to overall performance,\n> especially with reads - you are looking at 20k random reads per second with\n> a few disks.  Adding in writes does kick it down a noch, but you're still\n> looking at 10k+ iops. That is the current trade off.\n>\n> In general, i wouldn't recommend the cciss stuff with SSD's at this time\n> because it makes some things such as security erase, smart and other things\n> near impossible. (performance seems ok though) We've got some tests planned\n> seeing what we can do with an Areca controller and some ssds to see how it\n> goes.\n>\n> Also note that there is a funky interaction with an MSA70 and SSDs. they do\n> not work together. (I'm not sure if HP's official branded ssd's have the\n> same issue).\n>\n> The write degradation could probably be monitored looking at svctime from\n> sar. We may be implementing that in the near future to detect when this\n> creeps up again.\n\nsvctime is untrustable. From the systat author, this field will be\nremoved in a future version.\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Tue, 29 Mar 2011 16:30:59 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\nOn Mar 29, 2011, at 10:16 AM, Jeff wrote:\n\n> Now that all sounds awful and horrible until you get to overall \n> performance, especially with reads - you are looking at 20k random \n> reads per second with a few disks. Adding in writes does kick it \n> down a noch, but you're still looking at 10k+ iops. That is the \n> current trade off.\n>\n\nWe've been doing a burn in for about 4 days now on an array of 8 \nx25m's behind a p812 controller: here's a sample of what it is \ncurrently doing (I have 10 threads randomly seeking, reading, and 10% \nof the time writing (then fsync'ing) out, using my pgiosim tool which \nI need to update on pgfoundry)\n\n10:25:24 AM dev104-2 7652.21 109734.51 12375.22 15.96 \n8.22 1.07 0.12 88.32\n10:25:25 AM dev104-2 7318.52 104948.15 11696.30 15.94 \n8.62 1.17 0.13 92.50\n10:25:26 AM dev104-2 7871.56 112572.48 13034.86 15.96 \n8.60 1.09 0.12 91.38\n10:25:27 AM dev104-2 7869.72 111955.96 13592.66 15.95 \n8.65 1.10 0.12 91.65\n10:25:28 AM dev104-2 7859.41 111920.79 13560.40 15.97 \n9.32 1.19 0.13 98.91\n10:25:29 AM dev104-2 7285.19 104133.33 12000.00 15.94 \n8.08 1.11 0.13 92.59\n10:25:30 AM dev104-2 8017.27 114581.82 13250.91 15.94 \n8.48 1.06 0.11 90.36\n10:25:31 AM dev104-2 8392.45 120030.19 13924.53 15.96 \n8.90 1.06 0.11 94.34\n10:25:32 AM dev104-2 10173.86 145836.36 16409.09 15.95 \n10.72 1.05 0.11 113.52\n10:25:33 AM dev104-2 7007.14 100107.94 11688.89 15.95 \n7.39 1.06 0.11 79.29\n10:25:34 AM dev104-2 8043.27 115076.92 13192.31 15.95 \n9.09 1.13 0.12 96.15\n10:25:35 AM dev104-2 7409.09 104290.91 13774.55 15.94 \n8.62 1.16 0.12 90.55\n\nthe 2nd to last column is svctime. first column after dev104-2 is \nTPS. 
if I kill the writes off, tps rises quite a bit:\n10:26:34 AM dev104-2 22659.41 361528.71 0.00 15.95 \n10.57 0.42 0.04 99.01\n10:26:35 AM dev104-2 22479.41 359184.31 7.84 15.98 \n9.61 0.52 0.04 98.04\n10:26:36 AM dev104-2 21734.29 347230.48 0.00 15.98 \n9.30 0.43 0.04 95.33\n10:26:37 AM dev104-2 21551.46 344023.30 116.50 15.97 \n9.56 0.44 0.05 97.09\n10:26:38 AM dev104-2 21964.42 350592.31 0.00 15.96 \n10.25 0.42 0.04 96.15\n10:26:39 AM dev104-2 22512.75 359294.12 7.84 15.96 \n10.23 0.50 0.04 98.04\n10:26:40 AM dev104-2 22373.53 357725.49 0.00 15.99 \n9.52 0.43 0.04 98.04\n10:26:41 AM dev104-2 21436.79 342596.23 0.00 15.98 \n9.17 0.43 0.04 94.34\n10:26:42 AM dev104-2 22525.49 359749.02 39.22 15.97 \n10.18 0.45 0.04 98.04\n\n\nnow to demonstrate \"write stalls\" on the problemtic box:\n10:30:49 AM dev104-3 0.00 0.00 0.00 0.00 \n0.38 0.00 0.00 35.85\n10:30:50 AM dev104-3 3.03 8.08 258.59 88.00 \n2.43 635.00 333.33 101.01\n10:30:51 AM dev104-3 4.00 0.00 128.00 32.00 \n0.67 391.75 92.75 37.10\n10:30:52 AM dev104-3 10.89 0.00 95.05 8.73 \n1.45 133.55 12.27 13.37\n10:30:53 AM dev104-3 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00\n10:30:54 AM dev104-3 155.00 0.00 1488.00 9.60 \n10.88 70.23 2.92 45.20\n10:30:55 AM dev104-3 10.00 0.00 536.00 53.60 \n1.66 100.20 45.80 45.80\n10:30:56 AM dev104-3 46.53 0.00 411.88 8.85 \n3.01 78.51 4.30 20.00\n10:30:57 AM dev104-3 11.00 0.00 96.00 8.73 \n0.79 72.91 27.00 29.70\n10:30:58 AM dev104-3 12.00 0.00 96.00 8.00 \n0.79 65.42 11.17 13.40\n10:30:59 AM dev104-3 7.84 7.84 62.75 9.00 \n0.67 85.38 32.00 25.10\n10:31:00 AM dev104-3 8.00 0.00 224.00 28.00 \n0.82 102.00 47.12 37.70\n10:31:01 AM dev104-3 20.00 0.00 184.00 9.20 \n0.24 11.80 1.10 2.20\n10:31:02 AM dev104-3 4.95 0.00 39.60 8.00 \n0.23 46.00 13.00 6.44\n10:31:03 AM dev104-3 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00\n\nthat was from a simple dd, not random writes. (since it is in \nproduction, I can't really do the random write test as easily)\n\ntheoretically, a nice rotation of disks would remove that problem. \nannoying, but it is the price you need to pay\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Tue, 29 Mar 2011 10:32:32 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "This can be resolved by partitioning the disk with a larger write spare area so that the cells don't have to by recycled so often. There is a lot of \"misinformation\" about SSD's, there are some great articles on anandtech that really explain how the technology works and some of the differences between the controllers as well. If you do the reading you can find a solution that will work for you, SSD's are probably one of the best technologies to come along for us in a long time that gives us such a performance jump in the IO world. 
We have gone from completely IO bound to CPU bound, it's really worth spending the time to investigate and understand how this can impact your system.\r\n\r\nhttp://www.anandtech.com/show/2614\r\nhttp://www.anandtech.com/show/2738\r\nhttp://www.anandtech.com/show/4244/intel-ssd-320-review\r\nhttp://www.anandtech.com/tag/storage\r\nhttp://www.anandtech.com/show/3849/micron-announces-realssd-p300-slc-ssd-for-enterprise\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Jeff\r\nSent: Tuesday, March 29, 2011 9:33 AM\r\nTo: Jeff\r\nCc: Merlin Moncure; Andy; [email protected]; Greg Smith; Brian Ristuccia\r\nSubject: Re: [PERFORM] Intel SSDs that may not suck\r\n\r\n\r\nOn Mar 29, 2011, at 10:16 AM, Jeff wrote:\r\n\r\n> Now that all sounds awful and horrible until you get to overall \r\n> performance, especially with reads - you are looking at 20k random \r\n> reads per second with a few disks. Adding in writes does kick it down \r\n> a noch, but you're still looking at 10k+ iops. That is the current \r\n> trade off.\r\n>\r\n\r\nWe've been doing a burn in for about 4 days now on an array of 8 x25m's behind a p812 controller: here's a sample of what it is currently doing (I have 10 threads randomly seeking, reading, and 10% of the time writing (then fsync'ing) out, using my pgiosim tool which I need to update on pgfoundry)\r\n\r\n10:25:24 AM dev104-2 7652.21 109734.51 12375.22 15.96 \r\n8.22 1.07 0.12 88.32\r\n10:25:25 AM dev104-2 7318.52 104948.15 11696.30 15.94 \r\n8.62 1.17 0.13 92.50\r\n10:25:26 AM dev104-2 7871.56 112572.48 13034.86 15.96 \r\n8.60 1.09 0.12 91.38\r\n10:25:27 AM dev104-2 7869.72 111955.96 13592.66 15.95 \r\n8.65 1.10 0.12 91.65\r\n10:25:28 AM dev104-2 7859.41 111920.79 13560.40 15.97 \r\n9.32 1.19 0.13 98.91\r\n10:25:29 AM dev104-2 7285.19 104133.33 12000.00 15.94 \r\n8.08 1.11 0.13 92.59\r\n10:25:30 AM dev104-2 8017.27 114581.82 13250.91 15.94 \r\n8.48 1.06 0.11 90.36\r\n10:25:31 AM dev104-2 8392.45 120030.19 13924.53 15.96 \r\n8.90 1.06 0.11 94.34\r\n10:25:32 AM dev104-2 10173.86 145836.36 16409.09 15.95 \r\n10.72 1.05 0.11 113.52\r\n10:25:33 AM dev104-2 7007.14 100107.94 11688.89 15.95 \r\n7.39 1.06 0.11 79.29\r\n10:25:34 AM dev104-2 8043.27 115076.92 13192.31 15.95 \r\n9.09 1.13 0.12 96.15\r\n10:25:35 AM dev104-2 7409.09 104290.91 13774.55 15.94 \r\n8.62 1.16 0.12 90.55\r\n\r\nthe 2nd to last column is svctime. first column after dev104-2 is TPS. 
if I kill the writes off, tps rises quite a bit:\r\n10:26:34 AM dev104-2 22659.41 361528.71 0.00 15.95 \r\n10.57 0.42 0.04 99.01\r\n10:26:35 AM dev104-2 22479.41 359184.31 7.84 15.98 \r\n9.61 0.52 0.04 98.04\r\n10:26:36 AM dev104-2 21734.29 347230.48 0.00 15.98 \r\n9.30 0.43 0.04 95.33\r\n10:26:37 AM dev104-2 21551.46 344023.30 116.50 15.97 \r\n9.56 0.44 0.05 97.09\r\n10:26:38 AM dev104-2 21964.42 350592.31 0.00 15.96 \r\n10.25 0.42 0.04 96.15\r\n10:26:39 AM dev104-2 22512.75 359294.12 7.84 15.96 \r\n10.23 0.50 0.04 98.04\r\n10:26:40 AM dev104-2 22373.53 357725.49 0.00 15.99 \r\n9.52 0.43 0.04 98.04\r\n10:26:41 AM dev104-2 21436.79 342596.23 0.00 15.98 \r\n9.17 0.43 0.04 94.34\r\n10:26:42 AM dev104-2 22525.49 359749.02 39.22 15.97 \r\n10.18 0.45 0.04 98.04\r\n\r\n\r\nnow to demonstrate \"write stalls\" on the problemtic box:\r\n10:30:49 AM dev104-3 0.00 0.00 0.00 0.00 \r\n0.38 0.00 0.00 35.85\r\n10:30:50 AM dev104-3 3.03 8.08 258.59 88.00 \r\n2.43 635.00 333.33 101.01\r\n10:30:51 AM dev104-3 4.00 0.00 128.00 32.00 \r\n0.67 391.75 92.75 37.10\r\n10:30:52 AM dev104-3 10.89 0.00 95.05 8.73 \r\n1.45 133.55 12.27 13.37\r\n10:30:53 AM dev104-3 0.00 0.00 0.00 0.00 \r\n0.00 0.00 0.00 0.00\r\n10:30:54 AM dev104-3 155.00 0.00 1488.00 9.60 \r\n10.88 70.23 2.92 45.20\r\n10:30:55 AM dev104-3 10.00 0.00 536.00 53.60 \r\n1.66 100.20 45.80 45.80\r\n10:30:56 AM dev104-3 46.53 0.00 411.88 8.85 \r\n3.01 78.51 4.30 20.00\r\n10:30:57 AM dev104-3 11.00 0.00 96.00 8.73 \r\n0.79 72.91 27.00 29.70\r\n10:30:58 AM dev104-3 12.00 0.00 96.00 8.00 \r\n0.79 65.42 11.17 13.40\r\n10:30:59 AM dev104-3 7.84 7.84 62.75 9.00 \r\n0.67 85.38 32.00 25.10\r\n10:31:00 AM dev104-3 8.00 0.00 224.00 28.00 \r\n0.82 102.00 47.12 37.70\r\n10:31:01 AM dev104-3 20.00 0.00 184.00 9.20 \r\n0.24 11.80 1.10 2.20\r\n10:31:02 AM dev104-3 4.95 0.00 39.60 8.00 \r\n0.23 46.00 13.00 6.44\r\n10:31:03 AM dev104-3 0.00 0.00 0.00 0.00 \r\n0.00 0.00 0.00 0.00\r\n\r\nthat was from a simple dd, not random writes. (since it is in production, I can't really do the random write test as easily)\r\n\r\ntheoretically, a nice rotation of disks would remove that problem. \r\nannoying, but it is the price you need to pay\r\n\r\n--\r\nJeff Trout <[email protected]>\r\nhttp://www.stuarthamm.net/\r\nhttp://www.dellsmartexitin.com/\r\n\r\n\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. 
Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Tue, 29 Mar 2011 11:32:16 -0400", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 2011-03-29 16:16, Jeff wrote:\n> halt for 0.5-2 seconds, then resume. The fix we're going to do is\n> replace each drive in order with the rebuild occuring between each.\n> Then we do a security erase to reset the drive back to completely\n> empty (including the \"spare\" blocks kept around for writes).\n\nAre you replacing the drives with new once, or just secure-erase and \nback in?\nWhat kind of numbers are you drawing out of smartmontools in usage figures?\n(Also seeing some write-stalls here, on 24 Raid50 volumes of x25m's, and\nhave been planning to cycle drives for quite some time, without actually\ngetting to it.\n\n> Now that all sounds awful and horrible until you get to overall\n> performance, especially with reads - you are looking at 20k random\n> reads per second with a few disks. Adding in writes does kick it\n> down a noch, but you're still looking at 10k+ iops. That is the\n> current trade off.\n\nThats also my experience.\n-- \nJesper\n\n\n\n\n\n\n On 2011-03-29 16:16, Jeff wrote:\n> halt for 0.5-2 seconds, then\n resume. The fix we're going to do is\n > replace each drive in order with the rebuild occuring between\n each.\n > Then we do a security erase to reset the drive back to\n completely\n > empty (including the \"spare\" blocks kept around for writes).\n\n Are you replacing the drives with new once, or just secure-erase and\n back in? \n What kind of numbers are you drawing out of smartmontools in usage\n figures? \n (Also seeing some write-stalls here, on 24 Raid50 volumes of x25m's,\n and \n have been planning to cycle drives for quite some time, without\n actually \n getting to it. \n\n> Now that all sounds awful and\n horrible until you get to overall\n > performance, especially with reads - you are looking at 20k\n random\n > reads per second with a few disks. Adding in writes does kick\n it\n > down a noch, but you're still looking at 10k+ iops. That is\n the\n > current trade off.\n\nThats also my experience. \n -- \n Jesper", "msg_date": "Tue, 29 Mar 2011 18:12:25 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "Both the X25-M and the parts that AnandTech reviews (and a pretty thorough one they do) are, on a good day, prosumer. Getting review material for truly Enterprise parts, the kind that STEC, Violin, and Texas Memory will spend a year to get qualified at HP or IBM or Oracle is really hard to come by.\n\nZsolt does keep track of what's going on in the space, although he doesn't test himself, that I've seen. 
Still, a useful site to visit on occasion:\n\nhttp://www.storagesearch.com/\n\nregards\n\n---- Original message ----\n>Date: Tue, 29 Mar 2011 11:32:16 -0400\n>From: [email protected] (on behalf of \"Strange, John W\" <[email protected]>)\n>Subject: Re: [PERFORM] Intel SSDs that may not suck \n>To: Jeff <[email protected]>\n>Cc: Merlin Moncure <[email protected]>,Andy <[email protected]>,\"[email protected]\" <[email protected]>,Greg Smith <[email protected]>,Brian Ristuccia <[email protected]>\n>\n>This can be resolved by partitioning the disk with a larger write spare area so that the cells don't have to by recycled so often. There is a lot of \"misinformation\" about SSD's, there are some great articles on anandtech that really explain how the technology works and some of the differences between the controllers as well. If you do the reading you can find a solution that will work for you, SSD's are probably one of the best technologies to come along for us in a long time that gives us such a performance jump in the IO world. We have gone from completely IO bound to CPU bound, it's really worth spending the time to investigate and understand how this can impact your system.\n\n>\n\n>http://www.anandtech.com/show/2614\n\n>http://www.anandtech.com/show/2738\n\n>http://www.anandtech.com/show/4244/intel-ssd-320-review\n\n>http://www.anandtech.com/tag/storage\n\n>http://www.anandtech.com/show/3849/micron-announces-realssd-p300-slc-ssd-for-enterprise\n\n>\n\n>\n\n>-----Original Message-----\n\n>From: [email protected] [mailto:[email protected]] On Behalf Of Jeff\n\n>Sent: Tuesday, March 29, 2011 9:33 AM\n\n>To: Jeff\n\n>Cc: Merlin Moncure; Andy; [email protected]; Greg Smith; Brian Ristuccia\n\n>Subject: Re: [PERFORM] Intel SSDs that may not suck\n\n>\n\n>\n\n>On Mar 29, 2011, at 10:16 AM, Jeff wrote:\n\n>\n\n>> Now that all sounds awful and horrible until you get to overall \n\n>> performance, especially with reads - you are looking at 20k random \n\n>> reads per second with a few disks. Adding in writes does kick it down \n\n>> a noch, but you're still looking at 10k+ iops. That is the current \n\n>> trade off.\n\n>>\n\n>\n\n>We've been doing a burn in for about 4 days now on an array of 8 x25m's behind a p812 controller: here's a sample of what it is currently doing (I have 10 threads randomly seeking, reading, and 10% of the time writing (then fsync'ing) out, using my pgiosim tool which I need to update on pgfoundry)\n\n>\n\n>10:25:24 AM dev104-2 7652.21 109734.51 12375.22 15.96 \n\n>8.22 1.07 0.12 88.32\n\n>10:25:25 AM dev104-2 7318.52 104948.15 11696.30 15.94 \n\n>8.62 1.17 0.13 92.50\n\n>10:25:26 AM dev104-2 7871.56 112572.48 13034.86 15.96 \n\n>8.60 1.09 0.12 91.38\n\n>10:25:27 AM dev104-2 7869.72 111955.96 13592.66 15.95 \n\n>8.65 1.10 0.12 91.65\n\n>10:25:28 AM dev104-2 7859.41 111920.79 13560.40 15.97 \n\n>9.32 1.19 0.13 98.91\n\n>10:25:29 AM dev104-2 7285.19 104133.33 12000.00 15.94 \n\n>8.08 1.11 0.13 92.59\n\n>10:25:30 AM dev104-2 8017.27 114581.82 13250.91 15.94 \n\n>8.48 1.06 0.11 90.36\n\n>10:25:31 AM dev104-2 8392.45 120030.19 13924.53 15.96 \n\n>8.90 1.06 0.11 94.34\n\n>10:25:32 AM dev104-2 10173.86 145836.36 16409.09 15.95 \n\n>10.72 1.05 0.11 113.52\n\n>10:25:33 AM dev104-2 7007.14 100107.94 11688.89 15.95 \n\n>7.39 1.06 0.11 79.29\n\n>10:25:34 AM dev104-2 8043.27 115076.92 13192.31 15.95 \n\n>9.09 1.13 0.12 96.15\n\n>10:25:35 AM dev104-2 7409.09 104290.91 13774.55 15.94 \n\n>8.62 1.16 0.12 90.55\n\n>\n\n>the 2nd to last column is svctime. 
first column after dev104-2 is TPS. if I kill the writes off, tps rises quite a bit:\n\n>10:26:34 AM dev104-2 22659.41 361528.71 0.00 15.95 \n\n>10.57 0.42 0.04 99.01\n\n>10:26:35 AM dev104-2 22479.41 359184.31 7.84 15.98 \n\n>9.61 0.52 0.04 98.04\n\n>10:26:36 AM dev104-2 21734.29 347230.48 0.00 15.98 \n\n>9.30 0.43 0.04 95.33\n\n>10:26:37 AM dev104-2 21551.46 344023.30 116.50 15.97 \n\n>9.56 0.44 0.05 97.09\n\n>10:26:38 AM dev104-2 21964.42 350592.31 0.00 15.96 \n\n>10.25 0.42 0.04 96.15\n\n>10:26:39 AM dev104-2 22512.75 359294.12 7.84 15.96 \n\n>10.23 0.50 0.04 98.04\n\n>10:26:40 AM dev104-2 22373.53 357725.49 0.00 15.99 \n\n>9.52 0.43 0.04 98.04\n\n>10:26:41 AM dev104-2 21436.79 342596.23 0.00 15.98 \n\n>9.17 0.43 0.04 94.34\n\n>10:26:42 AM dev104-2 22525.49 359749.02 39.22 15.97 \n\n>10.18 0.45 0.04 98.04\n\n>\n\n>\n\n>now to demonstrate \"write stalls\" on the problemtic box:\n\n>10:30:49 AM dev104-3 0.00 0.00 0.00 0.00 \n\n>0.38 0.00 0.00 35.85\n\n>10:30:50 AM dev104-3 3.03 8.08 258.59 88.00 \n\n>2.43 635.00 333.33 101.01\n\n>10:30:51 AM dev104-3 4.00 0.00 128.00 32.00 \n\n>0.67 391.75 92.75 37.10\n\n>10:30:52 AM dev104-3 10.89 0.00 95.05 8.73 \n\n>1.45 133.55 12.27 13.37\n\n>10:30:53 AM dev104-3 0.00 0.00 0.00 0.00 \n\n>0.00 0.00 0.00 0.00\n\n>10:30:54 AM dev104-3 155.00 0.00 1488.00 9.60 \n\n>10.88 70.23 2.92 45.20\n\n>10:30:55 AM dev104-3 10.00 0.00 536.00 53.60 \n\n>1.66 100.20 45.80 45.80\n\n>10:30:56 AM dev104-3 46.53 0.00 411.88 8.85 \n\n>3.01 78.51 4.30 20.00\n\n>10:30:57 AM dev104-3 11.00 0.00 96.00 8.73 \n\n>0.79 72.91 27.00 29.70\n\n>10:30:58 AM dev104-3 12.00 0.00 96.00 8.00 \n\n>0.79 65.42 11.17 13.40\n\n>10:30:59 AM dev104-3 7.84 7.84 62.75 9.00 \n\n>0.67 85.38 32.00 25.10\n\n>10:31:00 AM dev104-3 8.00 0.00 224.00 28.00 \n\n>0.82 102.00 47.12 37.70\n\n>10:31:01 AM dev104-3 20.00 0.00 184.00 9.20 \n\n>0.24 11.80 1.10 2.20\n\n>10:31:02 AM dev104-3 4.95 0.00 39.60 8.00 \n\n>0.23 46.00 13.00 6.44\n\n>10:31:03 AM dev104-3 0.00 0.00 0.00 0.00 \n\n>0.00 0.00 0.00 0.00\n\n>\n\n>that was from a simple dd, not random writes. (since it is in production, I can't really do the random write test as easily)\n\n>\n\n>theoretically, a nice rotation of disks would remove that problem. \n\n>annoying, but it is the price you need to pay\n\n>\n\n>--\n\n>Jeff Trout <[email protected]>\n\n>http://www.stuarthamm.net/\n\n>http://www.dellsmartexitin.com/\n\n>\n\n>\n\n>\n\n>\n\n>--\n\n>Sent via pgsql-performance mailing list ([email protected])\n\n>To make changes to your subscription:\n\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n>This communication is for informational purposes only. It is not\n>intended as an offer or solicitation for the purchase or sale of\n>any financial instrument or as an official confirmation of any\n>transaction. All market prices, data and other information are not\n>warranted as to completeness or accuracy and are subject to change\n>without notice. Any comments or statements made herein do not\n>necessarily reflect those of JPMorgan Chase & Co., its subsidiaries\n>and affiliates.\n\n>\n\n>This transmission may contain information that is privileged,\n>confidential, legally privileged, and/or exempt from disclosure\n>under applicable law. If you are not the intended recipient, you\n>are hereby notified that any disclosure, copying, distribution, or\n>use of the information contained herein (including any reliance\n>thereon) is STRICTLY PROHIBITED. 
Although this transmission and any\n>attachments are believed to be free of any virus or other defect\n>that might affect any computer system into which it is received and\n>opened, it is the responsibility of the recipient to ensure that it\n>is virus free and no responsibility is accepted by JPMorgan Chase &\n>Co., its subsidiaries and affiliates, as applicable, for any loss\n>or damage arising in any way from its use. If you received this\n>transmission in error, please immediately contact the sender and\n>destroy the material in its entirety, whether in electronic or hard\n>copy format. Thank you.\n\n>\n\n>Please refer to http://www.jpmorgan.com/pages/disclosures for\n>disclosures relating to European legal entities.\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Mar 2011 12:48:04 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\nOn Mar 29, 2011, at 12:12 PM, Jesper Krogh wrote:\n\n>\n> Are you replacing the drives with new once, or just secure-erase and \n> back in?\n> What kind of numbers are you drawing out of smartmontools in usage \n> figures?\n> (Also seeing some write-stalls here, on 24 Raid50 volumes of x25m's, \n> and\n> have been planning to cycle drives for quite some time, without \n> actually\n> getting to it.\n>\n\nwe have some new drives that we are going to use initially, but \neventually it'll be a secure-erase'd one we replace it with (which \nshould perform identical to a new one)\n\nWhat enclosure & controller are you using on the 24 disk beast?\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Tue, 29 Mar 2011 12:50:58 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 03/29/2011 06:34 AM, Yeb Havinga wrote:\n> While I appreciate the heads up about these new drives, your posting \n> suggests (though you formulated in a way that you do not actually say \n> it) that OCZ products do not have a long term reliability. No factual \n> data. If you have knowledge of sandforce based OCZ drives fail, that'd \n> be interesting because that's the product line what the new Intel SSD \n> ought to be compared with.\n\nI didn't want to say anything too strong until I got to the bottom of \nthe reports I'd been sorting through. It turns out that there is a very \nwide incompatibility between OCZ drives and some popular Gigabyte \nmotherboards: \nhttp://www.ocztechnologyforum.com/forum/showthread.php?76177-do-you-own-a-Gigabyte-motherboard-and-have-the-SMART-error-with-FW1.11...look-inside\n\n(I'm typing this message on a system with one of the impacted \ncombinations, one reason why I don't own a Vertex 2 Pro yet. That I \nwould have to run a \"Beta BIOS\" does not inspire confidence.)\n\nWhat happens on the models impacted is that you can't get SMART data \nfrom the drive. That means no monitoring for the sort of expected \nfailures we all know can happen with any drive. So far that looks to be \nat the bottom of all the anecdotal failure reports I'd found: the \ndrives may have been throwing bad sectors or some other early failure, \nand the owners had no idea because they thought SMART would warn \nthem--but it wasn't working at all. 
Thus, don't find out there's a \nproblem until the drive just dies altogether one day.\n\nMore popular doesn't always mean more reliable, but for stuff like this \nit helps. Intel ships so many more drives than OCZ that I'd be shocked \nif Gigabyte themselves didn't have reference samples of them for \ntesting. This really looks like more of a warning about why you should \nbe particularly aggressive with checking SMART when running recently \nintroduced drives, which it sounds like you are already doing.\n\nReliability in this area is so strange...a diversion to older drives \ngives an idea how annoyed I am about all this. Last year, I gave up on \nWestern Digital's consumer drives (again). Not because the failure \nrates were bad, but because the one failure I did run into was so \nterrible from a SMART perspective. The drive just lied about the whole \nproblem so aggressively I couldn't manage the process. I couldn't get \nthe drive to admit it had a problem such that it could turn into an RMA \ncandidate, despite failing every time I ran an aggressive SMART error \ncheck. It would reallocate a few sectors, say \"good as new!\", and then \nfail at the next block when I re-tested. Did that at least a dozen \ntimes before throwing it in the \"pathological drives\" pile I keep around \nfor torture testing.\n\nMeanwhile, the Seagate drives I switched back to are terrible, from a \nfailure percentage perspective. I just had two start to go bad last \nweek, both halves of an array which is always fun. But, the failure \nstarted with very clearly labeled increases in reallocated sectors, and \nthe drive that eventually went really bad (making the bad noises) was \nkicked back for RMA. If you've got redundancy, I'll take components \nthat fail cleanly over ones that hide what's going on, even if the one \nthat fails cleanly is actually more likely to fail. With a rebuild \nalways a drive swap away, having accurate data makes even a higher \nfailure rate manageable.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 29 Mar 2011 14:19:37 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 2011-03-29 18:50, Jeff wrote:\n>\n> we have some new drives that we are going to use initially, but \n> eventually it'll be a secure-erase'd one we replace it with (which \n> should perform identical to a new one)\n>\n> What enclosure & controller are you using on the 24 disk beast?\n>\nLSI 8888ELP and a HP D2700 enclosure.\n\nWorks flawlessly, the only bad thing (which actually is pretty grave)\nis that the controller mis-numbers the slots in the enclosure, so\nyou'll have to have the \"mapping\" drawn on paper next to the\nenclosure to replace the correct disk.\n\n-- \nJesper\n", "msg_date": "Tue, 29 Mar 2011 21:47:21 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 03/28/2011 04:21 PM, Greg Smith wrote:\n> Today is the launch of Intel's 3rd generation SSD line, the 320 \n> series. And they've finally produced a cheap consumer product that \n> may be useful for databases, too! 
They've put 6 small capacitors onto \n> the board and added logic to flush the write cache if the power drops.\n\nI decided a while ago that I wasn't going to buy a personal SSD until I \ncould get one without a volatile write cache for less than what a \nbattery-backed caching controller costs. That seemed the really \ndisruptive technology point for the sort of database use I worry about. \nAccording to \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16820167050 that \npoint was today, with the new 120GB drives now selling for $240. UPS \nwilling, later this week I should have one of those here for testing.\n\nA pair of those mirrored with software RAID-1 runs $480 for 120GB. LSI \nMegaRAID 9260-4i with 512MB cache is $330, ditto 3ware 9750-4i. Battery \nbackup runs $135 to $180 depending on model; let's call it $150. Decent \n\"enterprise\" hard drive without RAID-incompatible firmware, $90 for \n500GB, need two of them. That's $660 total for 500GB of storage.\n\nIf you really don't need more than 120GB of storage, but do care about \nrandom I/O speed, this is a pretty easy decision now--presuming the \ndrive holds up to claims. As the claims are reasonable relative to the \nengineering that went into the drive now, that may actually be the case.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 04 Apr 2011 21:26:14 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On Mon, Apr 4, 2011 at 8:26 PM, Greg Smith <[email protected]> wrote:\n> On 03/28/2011 04:21 PM, Greg Smith wrote:\n>>\n>> Today is the launch of Intel's 3rd generation SSD line, the 320 series.\n>>  And they've finally produced a cheap consumer product that may be useful\n>> for databases, too!  They've put 6 small capacitors onto the board and added\n>> logic to flush the write cache if the power drops.\n>\n> I decided a while ago that I wasn't going to buy a personal SSD until I\n> could get one without a volatile write cache for less than what a\n> battery-backed caching controller costs.  That seemed the really disruptive\n> technology point for the sort of database use I worry about.  According to\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16820167050 that point\n> was today, with the new 120GB drives now selling for $240.  UPS willing,\n> later this week I should have one of those here for testing.\n>\n> A pair of those mirrored with software RAID-1 runs $480 for 120GB.  LSI\n> MegaRAID 9260-4i with 512MB cache is $330, ditto 3ware 9750-4i.  Battery\n> backup runs $135 to $180 depending on model; let's call it $150.  Decent\n> \"enterprise\" hard drive without RAID-incompatible firmware, $90 for 500GB,\n> need two of them.  That's $660 total for 500GB of storage.\n>\n> If you really don't need more than 120GB of storage, but do care about\n> random I/O speed, this is a pretty easy decision now--presuming the drive\n> holds up to claims.  As the claims are reasonable relative to the\n> engineering that went into the drive now, that may actually be the case.\n\nOne thing about MLC flash drives (which the industry seems to be\nmoving towards) is that you have to factor drive lifespan into the\ntotal system balance of costs. Data point: had an ocz vertex 2 that\nburned out in ~ 18 months. 
In the post mortem, it was determined that\nthe drive met and exceeded its 10k write limit -- this was a busy\nproduction box.\n\nmerlin\n", "msg_date": "Tue, 5 Apr 2011 09:07:40 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "I have generation 1 and 2 Intel MLC drives in production (~150+). Some\nhave been around for 2 years.\n\nNone have died. None have hit the write cycle limit. We do ~ 75GB of\nwrites a day.\n\nThe data and writes on these are not transactional (if one dies, we have\ncopies). But the reliability has been excellent. We had the performance\ndegradation issues in the G1's that required a firmware update, and have\nhad to do a secure-erase a on some to get write performance back to\nacceptable levels on a few.\n\nI could care less about the 'fast' sandforce drives. They fail at a high\nrate and the performance improvement is BECAUSE they are using a large,\nvolatile write cache. If I need higher sequential transfer rate, I'll\nRAID some of these together. A RAID-10 of 6 of these will make a simple\nselect count(1) query be CPU bound anyway.\n\nI have some G3 SSD's I'll be doing power-fail testing on soon for database\nuse (currently, we only use the old ones for indexes in databases or\nunimportant clone db's).\n\nI have had more raid cards fail in the last 3 years (out of a couple\ndozen) than Intel SSD's fail (out of ~150). I do not trust the Intel 510\nseries yet -- its based on a non-Intel controller and has worse\nrandom-write performance anyway.\n\n\n\nOn 3/28/11 9:13 PM, \"Merlin Moncure\" <[email protected]> wrote:\n\n>On Mon, Mar 28, 2011 at 7:54 PM, Andy <[email protected]> wrote:\n>> This might be a bit too little too late though. As you mentioned there\n>>really isn't any real performance improvement for the Intel SSD.\n>>Meanwhile, SandForce (the controller that OCZ Vertex is based on) is\n>>releasing its next generation controller at a reportedly huge\n>>performance increase.\n>>\n>> Is there any benchmark measuring the performance of these SSD's (the\n>>new Intel vs. the new SandForce) running database workloads? The\n>>benchmarks I've seen so far are for desktop applications.\n>\n>The random performance data is usually a rough benchmark. The\n>sequential numbers are mostly useless and always have been. The\n>performance of either the ocz or intel drive is so disgustingly fast\n>compared to a hard drives that the main stumbling block is life span\n>and write endurance now that they are starting to get capactiors.\n>\n>My own experience with MLC drives is that write cycle expectations are\n>more or less as advertised. They do go down (hard), and have to be\n>monitored. If you are writing a lot of data this can get pretty\n>expensive although the cost dynamics are getting better and better for\n>flash. I have no idea what would be precisely prudent, but maybe some\n>good monitoring tools and phased obsolescence at around 80% duty cycle\n>might not be a bad starting point. 
With hard drives, you can kinda\n>wait for em to pop and swap em in -- this is NOT a good idea for flash\n>raid volumes.\n>\n>merlin\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 6 Apr 2011 13:52:04 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\n--- On Wed, 4/6/11, Scott Carey <[email protected]> wrote:\n\n\n> I could care less about the 'fast' sandforce drives. \n> They fail at a high\n> rate and the performance improvement is BECAUSE they are\n> using a large,\n> volatile write cache.  \n\nThe G1 and G2 Intel MLC also use volatile write cache, just like most SandForce drives do. \n", "msg_date": "Wed, 6 Apr 2011 14:11:10 -0700 (PDT)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "Not for user data, only controller data.\n\n\n\n---- Original message ----\n>Date: Wed, 6 Apr 2011 14:11:10 -0700 (PDT)\n>From: [email protected] (on behalf of Andy <[email protected]>)\n>Subject: Re: [PERFORM] Intel SSDs that may not suck \n>To: Merlin Moncure <[email protected]>,Scott Carey <[email protected]>\n>Cc: \"[email protected]\" <[email protected]>,Greg Smith <[email protected]>\n>\n>\n>--- On Wed, 4/6/11, Scott Carey <[email protected]> wrote:\n>\n>\n>> I could care less about the 'fast' sandforce drives. \n>> They fail at a high\n>> rate and the performance improvement is BECAUSE they are\n>> using a large,\n>> volatile write cache.  \n>\n>The G1 and G2 Intel MLC also use volatile write cache, just like most SandForce drives do.\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Apr 2011 19:03:22 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\n\nOn 3/29/11 7:16 AM, \"Jeff\" <[email protected]> wrote:\n\n>\n>The write degradation could probably be monitored looking at svctime\n>from sar. We may be implementing that in the near future to detect\n>when this creeps up again.\n\n\nFor the X25-M's, overcommit. Do a secure erase, then only partition and\nuse 85% or so of the drive (~7% is already hidden). This helps a lot with\nthe write performance over time. 
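A rough sketch of that secure-erase-plus-overcommit procedure on Linux, using hdparm and parted -- the device name and password are placeholders, the exact incantation varies by drive, and it destroys everything on the device, so treat it as an outline rather than a recipe:

$ sudo hdparm -I /dev/sdX | grep -i frozen    # the erase only works while the drive is not frozen
$ sudo hdparm --user-master u --security-set-pass p /dev/sdX
$ sudo hdparm --user-master u --security-erase p /dev/sdX
$ sudo parted -s /dev/sdX mklabel gpt
$ sudo parted -s /dev/sdX mkpart primary 1MiB 85%

The unpartitioned tail never gets written by the OS, so the controller is free to treat it as extra spare area for wear-leveling and garbage collection.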
The Intel rep claimed that the new G3's\nare much better at limiting the occasional write latency, by splitting\nlonger delays into slightly more frequent smaller delays.\n\nSome of the benchmark reviews have histograms that demonstrate this\n(although the authors of the review only note average latency or\nthroughput, the deviations have clearly gone down in this generation).\n\nI'll know more for sure after some benchmarking myself.\n\n\n>\n>\n>--\n>Jeff Trout <[email protected]>\n>http://www.stuarthamm.net/\n>http://www.dellsmartexitin.com/\n>\n>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 6 Apr 2011 17:05:28 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\n\nOn 3/29/11 7:32 AM, \"Jeff\" <[email protected]> wrote:\n\n>\n>On Mar 29, 2011, at 10:16 AM, Jeff wrote:\n>\n>> Now that all sounds awful and horrible until you get to overall\n>> performance, especially with reads - you are looking at 20k random\n>> reads per second with a few disks. Adding in writes does kick it\n>> down a noch, but you're still looking at 10k+ iops. That is the\n>> current trade off.\n>>\n>\n>We've been doing a burn in for about 4 days now on an array of 8\n>x25m's behind a p812 controller: here's a sample of what it is\n>currently doing (I have 10 threads randomly seeking, reading, and 10%\n>of the time writing (then fsync'ing) out, using my pgiosim tool which\n>I need to update on pgfoundry)\n\nYour RAID card is probably disabling the write cache on those. If not, it\nisn't power failure safe.\n\nWhen the write cache is disabled, the negative effects of random writes on\nlongevity and performance are significantly amplified.\n\nFor the G3 drives, you can force the write caches on and remain power\nfailure safe. This will significantly decrease the effects of the below.\nYou can also use a newer linux version with a file system that supports\nTRIM/DISCARD which will help as long as your raid controller passes that\nthrough. It might end up that for many workloads with these drives, it is\nfaster to use software raid than hardware raid + raid controller.\n\n\n>\n>that was from a simple dd, not random writes. (since it is in\n>production, I can't really do the random write test as easily)\n>\n>theoretically, a nice rotation of disks would remove that problem.\n>annoying, but it is the price you need to pay\n>\n>--\n>Jeff Trout <[email protected]>\n>http://www.stuarthamm.net/\n>http://www.dellsmartexitin.com/\n>\n>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 6 Apr 2011 17:10:31 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\n\nOn 4/6/11 2:11 PM, \"Andy\" <[email protected]> wrote:\n\n>\n>--- On Wed, 4/6/11, Scott Carey <[email protected]> wrote:\n>\n>\n>> I could care less about the 'fast' sandforce drives.\n>> They fail at a high\n>> rate and the performance improvement is BECAUSE they are\n>> using a large,\n>> volatile write cache.\n>\n>The G1 and G2 Intel MLC also use volatile write cache, just like most\n>SandForce drives do.\n\n1. 
People are complaining that the Intel G3's aren't as fast as the\nSandForce drives (they are faster than the 1st gen SandForce, but not the\nyet-to-be-released ones like Vertex 3). From a database perspective, this\nis complete BS.\n\n2. 256K versus 64MB write cache. Power + time to flush a cache matters.\n\n3. None of the performance benchmarks of drives are comparing the\nperformance with the cache _disabled_ which is required when not power\nsafe. If the SandForce drives are still that much faster with it\ndisabled, I'd be shocked. Disabling a 256K write cache will affect\nperformance less than disabling a 64MB one.\n\n", "msg_date": "Wed, 6 Apr 2011 17:20:21 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\n\nOn 4/6/11 4:03 PM, \"[email protected]\" <[email protected]> wrote:\n\n>Not for user data, only controller data.\n>\n\nFalse. I used to think so, but there is volatile write cache for user\ndata -- its on the 256K chip SRAM not the DRAM though.\n\nSimple power failure tests demonstrate that you lose data with these\ndrives unless you disable the cache. Disabling the cache roughly drops\nwrite performance by a factor of 3 to 4 on G1 drives and significantly\nhurts wear-leveling and longevity (I haven't tried G2's).\n\n>\n>\n>---- Original message ----\n>>Date: Wed, 6 Apr 2011 14:11:10 -0700 (PDT)\n>>From: [email protected] (on behalf of Andy\n>><[email protected]>)\n>>Subject: Re: [PERFORM] Intel SSDs that may not suck\n>>To: Merlin Moncure <[email protected]>,Scott Carey\n>><[email protected]>\n>>Cc: \"[email protected]\"\n>><[email protected]>,Greg Smith <[email protected]>\n>>\n>>\n>>--- On Wed, 4/6/11, Scott Carey <[email protected]> wrote:\n>>\n>>\n>>> I could care less about the 'fast' sandforce drives.\n>>> They fail at a high\n>>> rate and the performance improvement is BECAUSE they are\n>>> using a large,\n>>> volatile write cache.\n>>\n>>The G1 and G2 Intel MLC also use volatile write cache, just like most\n>>SandForce drives do.\n>>\n>>-- \n>>Sent via pgsql-performance mailing list\n>>([email protected])\n>>To make changes to your subscription:\n>>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 6 Apr 2011 17:22:01 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\nOn 4/5/11 7:07 AM, \"Merlin Moncure\" <[email protected]> wrote:\n\n>On Mon, Apr 4, 2011 at 8:26 PM, Greg Smith <[email protected]> wrote:\n>>\n>> If you really don't need more than 120GB of storage, but do care about\n>> random I/O speed, this is a pretty easy decision now--presuming the\n>>drive\n>> holds up to claims. As the claims are reasonable relative to the\n>> engineering that went into the drive now, that may actually be the case.\n>\n>One thing about MLC flash drives (which the industry seems to be\n>moving towards) is that you have to factor drive lifespan into the\n>total system balance of costs. Data point: had an ocz vertex 2 that\n>burned out in ~ 18 months. In the post mortem, it was determined that\n>the drive met and exceeded its 10k write limit -- this was a busy\n>production box.\n\nWhat OCZ Drive? What controller? Indilinx? SandForce? 
Wear-leveling on\nthese vary quite a bit.\n\nIntel claims write lifetimes in the single digit PB sizes for these 310's.\n They are due to have an update to the X25-E line too at some point.\nPublic roadmaps say this will be using \"enterprise\" MLC. This stuff\ntrades off write endurance for data longevity -- if left without power for\ntoo long the data will be lost. This is a tradeoff for all flash -- but\nthe stuff that is optimized for USB sticks is quite different than the\nstuff optimized for servers.\n\n>\n>merlin\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 6 Apr 2011 17:42:17 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On Wed, Apr 6, 2011 at 5:42 PM, Scott Carey <[email protected]> wrote:\n> On 4/5/11 7:07 AM, \"Merlin Moncure\" <[email protected]> wrote:\n>>One thing about MLC flash drives (which the industry seems to be\n>>moving towards) is that you have to factor drive lifespan into the\n>>total system balance of costs. Data point: had an ocz vertex 2 that\n>>burned out in ~ 18 months.  In the post mortem, it was determined that\n>>the drive met and exceeded its 10k write limit -- this was a busy\n>>production box.\n>\n> What OCZ Drive?  What controller?  Indilinx? SandForce?  Wear-leveling on\n> these vary quite a bit.\n\nSandForce SF-1200\n\n-Dave\n", "msg_date": "Wed, 6 Apr 2011 18:07:02 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 04/06/2011 08:22 PM, Scott Carey wrote:\n> Simple power failure tests demonstrate that you lose data with these\n> drives unless you disable the cache. Disabling the cache roughly drops\n> write performance by a factor of 3 to 4 on G1 drives and significantly\n> hurts wear-leveling and longevity (I haven't tried G2's).\n> \n\nYup. I have a customer running a busy system with Intel X25-Es, and \nanother with X25-Ms, and every time there is a power failure at either \nplace their database gets corrupted. That those drives are worthless \nfor a reliable database setup has been clear for two years now: \nhttp://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/ \nand sometimes I even hear reports about those drives getting corrupted \neven when the write cache is turned off. If you aggressively replicate \nthe data to another location on a different power grid, you can survive \nwith Intel's older drives. 
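For anyone who wants to reproduce that configuration, the volatile write cache on a SATA drive is normally toggled with hdparm; the device name is an example, and as noted earlier in the thread turning the cache off costs a large fraction of the drive's write performance:

$ sudo hdparm -W /dev/sdX      # report the current write-caching setting
$ sudo hdparm -W 0 /dev/sdX    # disable the volatile write cache
$ sudo hdparm -W 1 /dev/sdX    # re-enable it

The setting does not reliably survive a power cycle on every drive, so it generally has to be reapplied from a boot script.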
But odds are you're going to lose at least \nsome transactions no matter what you do, and the risk of \"database won't \nstart\" levels of corruption is always lingering.\n\nThe fact that Intel is making so much noise over the improved write \nintegrity features on the new drives gives you an idea how much these \nproblems have hurt their reputation in the enterprise storage space.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 06 Apr 2011 21:32:27 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "Here's the new Intel 3rd generation 320 series drive:\n\n$ sudo smartctl -i /dev/sdc\nDevice Model: INTEL SSDSA2CW120G3\nFirmware Version: 4PC10302\nUser Capacity: 120,034,123,776 bytes\nATA Version is: 8\nATA Standard is: ATA-8-ACS revision 4\n\nSince I have to go chant at the unbelievers next week (MySQL Con), don't \nhave time for a really thorough look here. But I made a first pass \nthrough my usual benchmarks without any surprises.\n\nbonnie++ meets expectations with 253MB/s reads, 147MB/s writes, and 3935 \nseeks/second:\n\nVersion 1.03e ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n/sec %CP\ntoy 32144M 147180 7 77644 3 253893 5 \n3935 15\n\nUsing sysbench to generate a 100GB file and randomly seek around it \ngives a similar figure:\n\nExtra file open flags: 0\n100 files, 1Gb each\n100Gb total file size\nBlock size 8Kb\nNumber of random requests for random IO: 10000\nRead/Write ratio for combined random IO test: 1.50\nUsing synchronous I/O mode\nDoing random read test\nThreads started!\nDone.\n\nOperations performed: 10000 reads, 0 writes, 0 Other = 10000 Total\nRead 78.125Mb Written 0b Total transferred 78.125Mb (26.698Mb/sec)\n 3417.37 Requests/sec executed\n\nSo that's the basic range of performance: up to 250MB/s on reads, but \npotentially as low as 3400 IOPS = 27MB/s on really random workloads. I \ncan make it do worse than that as you'll see in a minute.\n\nAt a database scale of 500, I can get 2357 TPS:\n\npostgres@toy:~$ /usr/lib/postgresql/8.4/bin/pgbench -c 64 -T 300 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 500\nquery mode: simple\nnumber of clients: 64\nduration: 300 s\nnumber of transactions actually processed: 707793\ntps = 2357.497195 (including connections establishing)\ntps = 2357.943894 (excluding connections establishing)\n\nThis is basically the same performance as the 4-disk setup with 256MB \nbattery-backed write controller I profiled at \nhttp://www.2ndquadrant.us/pgbench-results/index.htm ; there XFS got as \nhigh as 2332 TPS, albeit with a PostgreSQL patched for better \nperformance than I used here. This system has 16GB of RAM, so this is \nexercising write speed only without needing to read anything from disk; \nnot too hard for regular drives to do. 
Performance holds at a scale of \n1000 however:\n\npostgres@toy:~$ /usr/lib/postgresql/8.4/bin/pgbench -c 64 -T 300 -l pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1000\nquery mode: simple\nnumber of clients: 64\nduration: 300 s\nnumber of transactions actually processed: 586043\ntps = 1953.006031 (including connections establishing)\ntps = 1953.399065 (excluding connections establishing)\n\nWhereas my regular drives are lucky to hit 350 TPS here. So this is the \ntypical sweet spot for SSD: workload is bigger than RAM, but not so \nmuch bigger than RAM that reads & writes become completely random.\n\nIf I crank the scale way up, to 4000 = 58GB, now I'm solidly in \nseek-bound behavior, which does about twice as fast as my regular drive \narray here (that's around 200 TPS on this test):\n\npostgres@toy:~$ /usr/lib/postgresql/8.4/bin/pgbench -T 1800 -c 64 -l pgbench\nstarting vacuum...end.\n\ntransaction type: TPC-B (sort of)\nscaling factor: 4000\nquery mode: simple\nnumber of clients: 64\nduration: 1800 s\nnumber of transactions actually processed: 731568\ntps = 406.417254 (including connections establishing)\ntps = 406.430713 (excluding connections establishing)\n\nHere's a snapshot of typical drive activity when running this:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 2.29 0.00 1.30 54.80 0.00 41.61\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsdc 0.00 676.67 443.63 884.00 7.90 12.25 \n31.09 41.77 31.45 0.75 99.93\n\nSo we're down to around 20MB/s, just as sysbench predicted a seek-bound \nworkload would be on these drives.\n\nI can still see checkpoint spikes here where sync times go upward:\n\n2011-04-06 20:40:58.969 EDT: LOG: checkpoint complete: wrote 2959 \nbuffers (9.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; \nwrite=147.300 s, sync=32.885 s, total=181.758 s\n\nBut the drive seems to never become unresponsive for longer than a second:\n\npostgres@toy:~$ cat pgbench_log.4585 | cut -d\" \" -f 6 | sort -n | tail\n999941\n999952\n999956\n999959\n999960\n999970\n999977\n999984\n999992\n999994\n\nPower-plug pull tests with diskchecker.pl and a write-heavy database \nload didn't notice anything funny about the write cache:\n\n[witness]\n$ wget http://code.sixapart.com/svn/tools/trunk/diskchecker.pl\n$ chmod +x ./diskchecker.pl\n$ ./diskchecker.pl -l\n\n[server with SSD]\n$ wget http://code.sixapart.com/svn/tools/trunk/diskchecker.pl\n$ chmod +x ./diskchecker.pl\n$ diskchecker.pl -s grace create test_file 500\n\n diskchecker: running 20 sec, 69.67% coverage of 500 MB (38456 writes; \n1922/s)\n diskchecker: running 21 sec, 71.59% coverage of 500 MB (40551 writes; \n1931/s)\n diskchecker: running 22 sec, 73.52% coverage of 500 MB (42771 writes; \n1944/s)\n diskchecker: running 23 sec, 75.17% coverage of 500 MB (44925 writes; \n1953/s)\n[pull plug]\n\n/home/gsmith/diskchecker.pl -s grace verify test_file\n verifying: 0.00%\n verifying: 0.73%\n verifying: 7.83%\n verifying: 14.98%\n verifying: 22.10%\n verifying: 29.23%\n verifying: 36.39%\n verifying: 43.50%\n verifying: 50.65%\n verifying: 57.70%\n verifying: 64.81%\n verifying: 71.86%\n verifying: 79.02%\n verifying: 86.11%\n verifying: 93.15%\n verifying: 100.00%\nTotal errors: 0\n\n2011-04-06 21:43:09.377 EDT: LOG: database system was interrupted; last \nknown up at 2011-04-06 21:30:27 EDT\n2011-04-06 21:43:09.392 EDT: LOG: database system was not properly shut \ndown; automatic recovery in progress\n2011-04-06 21:43:09.394 EDT: 
LOG: redo starts at 6/BF7B2880\n2011-04-06 21:43:10.687 EDT: LOG: unexpected pageaddr 5/C2786000 in log \nfile 6, segment 205, offset 7888896\n2011-04-06 21:43:10.687 EDT: LOG: redo done at 6/CD784400\n2011-04-06 21:43:10.687 EDT: LOG: last completed transaction was at log \ntime 2011-04-06 21:39:00.551065-04\n2011-04-06 21:43:10.705 EDT: LOG: checkpoint starting: end-of-recovery \nimmediate\n2011-04-06 21:43:14.766 EDT: LOG: checkpoint complete: wrote 29915 \nbuffers (91.3%); 0 transaction log file(s) added, 0 removed, 106 \nrecycled; write=0.146 s, sync=3.904 s, total=4.078 s\n2011-04-06 21:43:14.777 EDT: LOG: database system is ready to accept \nconnections\n\nSo far, this drive is living up to expectations, without doing anything \nunexpected good or bad. When doing the things that SSD has the biggest \nadvantage over mechanical drives, it's more than 5X as fast as a 4-disk \narray (3 disk DB + wal) with a BBWC. But on really huge workloads, \nwhere the worst-cast behavior of the drive is being hit, that falls to \ncloser to a 2X advantage. And if you're doing work that isn't random \nmuch at all, the drive only matches regular disk.\n\nI like not having surprises in this sort of thing though. Intel 320 \nseries gets a preliminary thumbs-up from me. I'll be happy when these \nare mainstream enough that I can finally exit the anti-Intel SSD pulpit \nI've been standing on the last two years.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 06 Apr 2011 22:21:55 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "Had to say a quick thanks to Greg and the others who have posted \ndetailed test results on SSDs here.\nFor those of us watching for the inflection point where we can begin the \ntransition from mechanical to solid state storage, this data and \nexperience is invaluable. Thanks for sharing it.\n\nA short story while I'm posting : my Dad taught electronics engineering \nand would often visit the local factories with groups of students. I \nremember in particular after a visit to a disk drive manufacturer \n(Burroughs), in 1977 he came home telling me that he'd asked the plant \nmanager what their plan was once solid state storage made their products \nobsolete. The manager looked at him like he was form another planet...\n\nSo I've been waiting patiently 34 years for this hopefully \nsoon-to-arrive moment ;)\n\n\n", "msg_date": "Wed, 06 Apr 2011 20:56:16 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "SSDs have been around for quite some time. The first that I've found is Texas Memory. Not quite 1977, but not flash either, although they've been doing so for a couple of years. 
\n\nhttp://www.ramsan.com/company/history\n\n---- Original message ----\n>Date: Wed, 06 Apr 2011 20:56:16 -0600\n>From: [email protected] (on behalf of David Boreham <[email protected]>)\n>Subject: Re: [PERFORM] Intel SSDs that may not suck \n>To: [email protected]\n>\n>Had to say a quick thanks to Greg and the others who have posted \n>detailed test results on SSDs here.\n>For those of us watching for the inflection point where we can begin the \n>transition from mechanical to solid state storage, this data and \n>experience is invaluable. Thanks for sharing it.\n>\n>A short story while I'm posting : my Dad taught electronics engineering \n>and would often visit the local factories with groups of students. I \n>remember in particular after a visit to a disk drive manufacturer \n>(Burroughs), in 1977 he came home telling me that he'd asked the plant \n>manager what their plan was once solid state storage made their products \n>obsolete. The manager looked at him like he was form another planet...\n>\n>So I've been waiting patiently 34 years for this hopefully \n>soon-to-arrive moment ;)\n>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Apr 2011 23:19:50 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 4/6/2011 9:19 PM, [email protected] wrote:\n> SSDs have been around for quite some time. The first that I've found is Texas Memory. Not quite 1977, but not flash either, although they've been doing so for a couple of years.\nWell, I built my first ram disk (which of course I thought I had \ninvented, at the time) in 1982.\nBut today we're seeing solid state storage seriously challenging \nrotating media across all applications, except at the TB and beyond \nscale. That's what's new.\n\n\n", "msg_date": "Wed, 06 Apr 2011 21:52:03 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 2011-03-28 22:21, Greg Smith wrote:\n> Some may still find these two cheap for enterprise use, given the use \n> of MLC limits how much activity these drives can handle. But it's \n> great to have a new option for lower budget system that can tolerate \n> some risk there.\n>\nDrifting of the topic slightly.. Has anyone opinions/experience with:\nhttp://www.ocztechnology.com/ocz-z-drive-r2-p88-pci-express-ssd.html\n\nThey seem to be \"like\" the FusionIO drives just quite a lot cheaper,\nwonder what the state of those 512MB is in case of a power-loss.\n\n\n-- \nJesper\n", "msg_date": "Thu, 07 Apr 2011 06:27:57 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "On 04/07/2011 12:27 AM, Jesper Krogh wrote:\n> On 2011-03-28 22:21, Greg Smith wrote:\n>> Some may still find these two cheap for enterprise use, given the use \n>> of MLC limits how much activity these drives can handle. But it's \n>> great to have a new option for lower budget system that can tolerate \n>> some risk there.\n>>\n> Drifting of the topic slightly.. 
Has anyone opinions/experience with:\n> http://www.ocztechnology.com/ocz-z-drive-r2-p88-pci-express-ssd.html\n>\n> They seem to be \"like\" the FusionIO drives just quite a lot cheaper,\n> wonder what the state of those 512MB is in case of a power-loss.\n\nWhat I do is assume that if the vendor doesn't say outright how the \ncache is preserved, that means it isn't, and the card is garbage for \ndatabase use. That rule is rarely wrong. The available soon Z-Drive R3 \nincludes a Sandforce controller and supercap for preserving writes: \nhttp://hothardware.com/News/OCZ-Unveils-RevoDrive-X3-Vertex-3-and-Other-SSD-Goodness/\n\nSince they're bragging about it there, the safe bet is that the older R2 \nunit had no such facility.\n\nI note that the Z-Drive R2 is basically some flash packed on top of an \nLSI 1068e controller, mapped as a RAID0 volume. It's possible they left \nthe battery-backup unit on that card exposed, so it may be possible to \ndo better with it. The way they just stack those card layers together, \nthe thing is practically held together with duct tape though. That's \nnot a confidence inspiring design to me. The R3 drives are much more \ncleanly integrated.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 07 Apr 2011 01:48:26 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intel SSDs that may not suck" }, { "msg_contents": "\n\nOn 4/6/11 10:48 PM, \"Greg Smith\" <[email protected]> wrote:\n>Since they're bragging about it there, the safe bet is that the older R2\n>unit had no such facility.\n>\n>I note that the Z-Drive R2 is basically some flash packed on top of an\n>LSI 1068e controller, mapped as a RAID0 volume.\n\nIn Linux, you can expose it as a set of 4 JBOD drives, use software RAID\nof any kind on that,\nand have access to TRIM. Still useless for (most) databases but may be\nuseful for other applications, if the reliability level is OK otherwise.\n\nI wonder if the R3 will also be configurable as direct JBOD.\n\n\n>It's possible they left\n>the battery-backup unit on that card exposed, so it may be possible to\n>do better with it. The way they just stack those card layers together,\n>the thing is practically held together with duct tape though. That's\n>not a confidence inspiring design to me. The R3 drives are much more\n>cleanly integrated.\n>\n>-- \n>Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n>PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 7 Apr 2011 09:25:18 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intel SSDs that may not suck" } ]
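As a footnote to the TRIM discussion above, a quick way to confirm that a device advertises TRIM and to enable it at the filesystem level on Linux -- the device and mount point are examples, and whether discard actually makes it through an md or LVM layer depends on the kernel version:

$ sudo hdparm -I /dev/sdX | grep -i trim       # look for "Data Set Management TRIM supported"
$ sudo mkfs.ext4 /dev/sdX1
$ sudo mount -o noatime,discard /dev/sdX1 /srv/pgdata

If the RAID or volume-manager layer does not pass discard through, periodic offline trimming or simply leaving unpartitioned spare area, as described earlier in the thread, is the fallback.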
[ { "msg_contents": "Dear list,\n\nwe have got a web application and when people log in some information is \nwritten to the user tables. We have around 500 active users, but at the \nmost 5 people are logged in at the same time. From times to times people \nlog in and then the application is not responsive any more.\n\nWhat we see in the postgres server logs is that processes are waiting \nfor other transactions to finish though not because of a deadlock.\n\nThe log tells me that certain update statements take sometimes about \n3-10 minutes. But we are talking about updates on tables with 1000 to \n10000 rows and updates that are supposed to update 1 row.\n\nWe are running under windows 2008 and postgres 8.4.7. ( Sorry for the \nwindows, it was not MY first choice )\n\nMy only explanation at the moment would be, that there must be any kind \nof windows process that stops all other processes until it is finished \nor something like that. ( Could it even be autovaccuum? ). Is there a \nway to find out how long autovaccum took ? Has anyone seen anything \nsimiliar? Or could it really be that we need a bigger machine with more \nio? But the one disk in the system still seems not very busy and \nresponse times in windows resource monitor are not higher than 28 ms.\n\nFollowing is an excerpt of our server log.\n\nLOG: process 1660 acquired ShareLock on transaction 74652 after \n533354.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: process 4984 acquired ShareLock on transaction 74652 after \n1523530.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: process 956 acquired ExclusiveLock on tuple (4,188) of relation \n16412 of database 16384 after 383055.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: process 4312 acquired ExclusiveLock on tuple (9,112) of relation \n16412 of database 16384 after 1422677.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: duration: 1523567.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '1362'\nLOG: duration: 533391.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '31'\nLOG: process 5504 acquired ExclusiveLock on tuple (9,112) of relation \n16412 of database 16384 after 183216.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: process 1524 acquired ExclusiveLock on tuple (4,188) of relation \n16412 of database 16384 after 376370.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: duration: 1422688.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '1362'\nLOG: duration: 383067.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 'f', $2 = '31'\nLOG: process 4532 acquired ExclusiveLock on tuple (9,112) of relation \n16412 of database 16384 after 118851.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: process 4448 acquired ExclusiveLock on tuple (4,188) of relation \n16412 of database 16384 after 366304.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: duration: 183241.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 
= '1362'\nLOG: duration: 376395.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '31'\nLOG: process 4204 acquired ExclusiveLock on tuple (4,188) of relation \n16412 of database 16384 after 339893.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: duration: 366342.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '31'\nLOG: process 4760 acquired ExclusiveLock on tuple (4,188) of relation \n16412 of database 16384 after 205943.000 ms\nSTATEMENT: UPDATE extjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nLOG: duration: 339923.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '31'\nLOG: duration: 205963.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '31'\nLOG: duration: 124654.000 ms execute <unnamed>: UPDATE \nextjs_recentlist SET visible=$1 WHERE recentlist_id=$2\nDETAIL: parameters: $1 = 't', $2 = '1362'\nLOG: process 3844 still waiting for ShareLock on transaction 74839 \nafter 8000.000 ms\n\nThanx in advance.\n\nLars\n-- \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLars Feistner\n\nKompetenzzentrum für Prüfungen in der Medizin\nMedizinische Fakultät Heidelberg,\nIm Neuenheimer Feld 346, Raum 013\n69120 Heidelberg\n\nE-Mail: [email protected]\nFon: +49-6221-56-8269\nFax: +49-6221-56-7175\n\nWWW: http://www.ims-m.de\n http://www.kompmed.de\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "msg_date": "Tue, 29 Mar 2011 16:38:50 +0200", "msg_from": "Lars Feistner <[email protected]>", "msg_from_op": true, "msg_subject": "very long updates very small tables" }, { "msg_contents": "Lars Feistner <[email protected]> wrote:\n \n> The log tells me that certain update statements take sometimes\n> about 3-10 minutes. But we are talking about updates on tables\n> with 1000 to 10000 rows and updates that are supposed to update 1\n> row.\n \nThe top possibilities that come to my mind are:\n \n(1) The tables are horribly bloated. If autovacuum is off or not\naggressive enough, things can degenerate to this level.\n \n(2) Memory is over-committed and your machine is thrashing.\n \n(3) There are explicit LOCK commands in the software which is\ncontributing to the blocking.\n \n(4) There is some external delay within the transaction, such as\nwaiting for user input while the transaction is open.\n \nMaybe there's a combination of the above at play. Can you rule any\nof these out?\n \n-Kevin\n", "msg_date": "Tue, 29 Mar 2011 14:28:14 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very long updates very small tables" }, { "msg_contents": "Hello Kevin,\n\n\nOn 03/29/2011 09:28 PM, Kevin Grittner wrote:\n> Lars Feistner<[email protected]> wrote:\n>\n>> The log tells me that certain update statements take sometimes\n>> about 3-10 minutes. But we are talking about updates on tables\n>> with 1000 to 10000 rows and updates that are supposed to update 1\n>> row.\n>\n> The top possibilities that come to my mind are:\n>\n> (1) The tables are horribly bloated. If autovacuum is off or not\n> aggressive enough, things can degenerate to this level.\n>\nSome tables are auto vacuumed regularly others are not. The specific \ntable extjs_recentlist was never autovacuumed. 
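A concrete way to check that, and to make autovacuum more aggressive for one hot table on 8.4 -- the table name is taken from the thread, the threshold numbers are only examples:

SELECT relname, last_autovacuum, last_autoanalyze, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'extjs_recentlist';

-- postgresql.conf: log every autovacuum run together with how long it took
log_autovacuum_min_duration = 0

ALTER TABLE extjs_recentlist
  SET (autovacuum_vacuum_scale_factor = 0.02, autovacuum_vacuum_threshold = 50);

A large n_dead_tup relative to n_live_tup on a table this small is the table bloat listed as possibility (1) earlier in the thread.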
So i would think that \nupdates on this table should be always very slow, but they are not. Only \nevery 4 or 5th day for maybe half an hour and then everything is fine \nagain. And;-) there is no anti virus installed.\n> (2) Memory is over-committed and your machine is thrashing.\n>\nWe can rule this out. There is enough memory installed and the database \nis less than 500MB.\n> (3) There are explicit LOCK commands in the software which is\n> contributing to the blocking.\nWe use the the jdbc driver. The jdbc driver might do some locking but we \ndon't.\n>\n> (4) There is some external delay within the transaction, such as\n> waiting for user input while the transaction is open.\n>\nNo, no user interaction within a transaction.\n> Maybe there's a combination of the above at play. Can you rule any\n> of these out?\n>\n> -Kevin\n>\nSo, i will try to get the autovacuum to be more aggressive and will \nreport again if nothing changes.\n\nThanks a lot.\nLars\n\n-- \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLars Feistner\n\nKompetenzzentrum für Prüfungen in der Medizin\nMedizinische Fakultät Heidelberg,\nIm Neuenheimer Feld 346, Raum 013\n69120 Heidelberg\n\nE-Mail: [email protected]\nFon: +49-6221-56-8269\nFax: +49-6221-56-7175\n\nWWW: http://www.ims-m.de\n http://www.kompmed.de\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "msg_date": "Wed, 30 Mar 2011 09:35:08 +0200", "msg_from": "Lars Feistner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very long updates very small tables" }, { "msg_contents": "2011/3/30, Lars Feistner <[email protected]>:\n> Hello Kevin,\n>\n>\n> On 03/29/2011 09:28 PM, Kevin Grittner wrote:\n>> Lars Feistner<[email protected]> wrote:\n>>\n>>> The log tells me that certain update statements take sometimes\n>>> about 3-10 minutes. But we are talking about updates on tables\n>>> with 1000 to 10000 rows and updates that are supposed to update 1\n>>> row.\n>>\n>> The top possibilities that come to my mind are:\n>>\n>> (1) The tables are horribly bloated. If autovacuum is off or not\n>> aggressive enough, things can degenerate to this level.\n>>\n> Some tables are auto vacuumed regularly others are not. The specific\n> table extjs_recentlist was never autovacuumed. So i would think that\n> updates on this table should be always very slow, but they are not. Only\n> every 4 or 5th day for maybe half an hour and then everything is fine\n> again. And;-) there is no anti virus installed.\n>> (2) Memory is over-committed and your machine is thrashing.\n>>\n> We can rule this out. There is enough memory installed and the database\n> is less than 500MB.\n>> (3) There are explicit LOCK commands in the software which is\n>> contributing to the blocking.\n> We use the the jdbc driver. The jdbc driver might do some locking but we\n> don't.\n>>\n>> (4) There is some external delay within the transaction, such as\n>> waiting for user input while the transaction is open.\n>>\n> No, no user interaction within a transaction.\n>> Maybe there's a combination of the above at play. 
Can you rule any\n>> of these out?\n>>\n>> -Kevin\n>>\n> So, i will try to get the autovacuum to be more aggressive and will\n> report again if nothing changes.\n>\n> Thanks a lot.\n> Lars\n>\n> --\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> Lars Feistner\n>\n> Kompetenzzentrum für Prüfungen in der Medizin\n> Medizinische Fakultät Heidelberg,\n> Im Neuenheimer Feld 346, Raum 013\n> 69120 Heidelberg\n>\n> E-Mail: [email protected]\n> Fon: +49-6221-56-8269\n> Fax: +49-6221-56-7175\n>\n> WWW: http://www.ims-m.de\n> http://www.kompmed.de\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi. try to log all statements for an hour and show us it. And Postgresql.conf .\n\n\n\n------------\npasman\n", "msg_date": "Wed, 30 Mar 2011 17:24:18 +0200", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very long updates very small tables" }, { "msg_contents": "Lars Feistner <[email protected]> wrote:\n> On 03/29/2011 09:28 PM, Kevin Grittner wrote:\n>> Lars Feistner<[email protected]> wrote:\n>>\n>>> The log tells me that certain update statements take sometimes\n>>> about 3-10 minutes. But we are talking about updates on tables\n>>> with 1000 to 10000 rows and updates that are supposed to update\n>>> 1 row.\n>>\n>> The top possibilities that come to my mind are:\n \n> [all eliminated as possibilities]\n \nIf you haven't already done so, you should probably turn on\ncheckpoint logging to see if this corresponds to checkpoint\nactivity. If it does, you can try cranking up how aggressive your\nbackground writer is, and perhaps limiting your shared_buffers to\nsomething around the size of your RAID controller's BBU cache. (I\nhope you have a RAID controller with BBU cache configured for\nwrite-back, anyway.)\n \n-Kevin\n", "msg_date": "Wed, 30 Mar 2011 11:54:52 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very long updates very small tables" }, { "msg_contents": "\n\nOn 03/30/2011 06:54 PM, Kevin Grittner wrote:\n> Lars Feistner<[email protected]> wrote:\n>> On 03/29/2011 09:28 PM, Kevin Grittner wrote:\n>>> Lars Feistner<[email protected]> wrote:\n>>>\n>>>> The log tells me that certain update statements take sometimes\n>>>> about 3-10 minutes. But we are talking about updates on tables\n>>>> with 1000 to 10000 rows and updates that are supposed to update\n>>>> 1 row.\n>>>\n>>> The top possibilities that come to my mind are:\n>\n>> [all eliminated as possibilities]\n>\n> If you haven't already done so, you should probably turn on\n> checkpoint logging to see if this corresponds to checkpoint\n> activity. If it does, you can try cranking up how aggressive your\n> background writer is, and perhaps limiting your shared_buffers to\n> something around the size of your RAID controller's BBU cache. (I\n> hope you have a RAID controller with BBU cache configured for\n> write-back, anyway.)\n>\n> -Kevin\n>\n\nHello Kevin,\n\ni am sorry to disappoint you here. As I said in my first E-Mail we don't \nhave much traffic and the database fits easily into memory. The traffic \nmight increase, at least it was increasing the last 12 months. The \ndatabase will always fit into memory.\nNo, we don't have a raid and thus we don't have a bbu. 
Actually we \nstarted off with a big SAN that our data centre offered. But sometimes \nthis SAN was a bit slow and when we first encountered the very long \nupdates i thought there was a connection between the long running \nupdates and the slowliness of the SAN, so i started to use the local \ndisk (we are talking about one disk not disks) for the database. I am \nstill seeing the long running inserts and updates. I am still following \nthe auto vacuum trail, it does still not run frequently enough. Thanks a \nlot for the replies so far. I will keep you guys informed about my next \nsteps and the results.\n\nThanx a lot\nLars\n\n-- \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLars Feistner\n\nKompetenzzentrum für Prüfungen in der Medizin\nMedizinische Fakultät Heidelberg,\nIm Neuenheimer Feld 346, Raum 013\n69120 Heidelberg\n\nE-Mail: [email protected]\nFon: +49-6221-56-8269\nFax: +49-6221-56-7175\n\nWWW: http://www.ims-m.de\n http://www.kompmed.de\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "msg_date": "Mon, 04 Apr 2011 10:11:41 +0200", "msg_from": "Lars Feistner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very long updates very small tables" }, { "msg_contents": "Lars Feistner <[email protected]> wrote:\n> On 03/30/2011 06:54 PM, Kevin Grittner wrote:\n \n>> If you haven't already done so, you should probably turn on\n>> checkpoint logging to see if this corresponds to checkpoint\n>> activity. If it does, you can try cranking up how aggressive\n>> your background writer is, and perhaps limiting your\n>> shared_buffers to something around the size of your RAID\n>> controller's BBU cache. (I hope you have a RAID controller with\n>> BBU cache configured for write-back, anyway.)\n \n> i am sorry to disappoint you here. As I said in my first E-Mail we\n> don't have much traffic and the database fits easily into memory.\n> The traffic might increase, at least it was increasing the last 12\n> months. The database will always fit into memory.\n> No, we don't have a raid and thus we don't have a bbu. Actually\n> we started off with a big SAN that our data centre offered. But\n> sometimes this SAN was a bit slow and when we first encountered\n> the very long updates i thought there was a connection between the\n> long running updates and the slowliness of the SAN, so i started\n> to use the local disk (we are talking about one disk not disks)\n> for the database. I am still seeing the long running inserts and\n> updates. I am still following the auto vacuum trail, it does still\n> not run frequently enough. Thanks a lot for the replies so far. I\n> will keep you guys informed about my next steps and the results.\n \nNothing there makes a write glut on checkpoint less likely to be the\ncause. Without a BBU write-back cache it is actually *more* likely,\nand having enough RAM to hold the whole database makes it *more*\nlikely. If you haven't placed your pg_xlog directory on a separate\nfile system, it is also more likely.\n \nTurning on logging of checkpoint activity and checking whether that\ncorrelates with your problem times is strongly indicated.\n \n-Kevin\n", "msg_date": "Mon, 04 Apr 2011 09:32:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very long updates very small tables" }, { "msg_contents": "Dne 4.4.2011 16:32, Kevin Grittner napsal(a):\n> Nothing there makes a write glut on checkpoint less likely to be the\n> cause. 
Without a BBU write-back cache it is actually *more* likely,\n> and having enough RAM to hold the whole database makes it *more*\n> likely. If you haven't placed your pg_xlog directory on a separate\n> file system, it is also more likely.\n> \n> Turning on logging of checkpoint activity and checking whether that\n> correlates with your problem times is strongly indicated.\n> \n> -Kevin\n\nCheckpoints would be my first guess too, but the whole database is just\n500MB. Lars, how did you get this number? Did you measure the amount of\ndisk space occupied or somehow else?\n\nBTW how much memory is there (total RAM and dedicated to shared\nbuffers)? How many checkpoint segments are there?\n\nHave you monitored the overall behavior of the system (what processes\nare running etc.) when the problems occur? I don't have much experience\nwith Windows but tools from sysinternals are reasonable.\n\nAnd yet another idea - have you tried to use the stats collected by\nPostgreSQL? I mean the pg_stat_ tables, especially pg_stat_bgwriter and\nmaybe pg_stat_all_tables. Those numbers are cummulative, so do two\nsnapshot when the problems are happening and subtract them to get an\nidea of what's going on.\n\nregards\nTomas\n\n", "msg_date": "Wed, 06 Apr 2011 20:30:39 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very long updates very small tables" } ]
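A concrete version of the two suggestions made above (checkpoint logging plus statistics snapshots), assuming the 8.4 column set:

# postgresql.conf
log_checkpoints = on

-- take one snapshot now and another while the stalls are happening, then subtract
SELECT now() AS snapped_at,
       checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;

SELECT relname, n_tup_upd, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_all_tables
WHERE relname = 'extjs_recentlist';

A jump in checkpoints_req and buffers_checkpoint that lines up with the slow updates would point at a checkpoint write glut; flat numbers would push the investigation back toward the tuple-level lock waits visible in the original log excerpt.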
[ { "msg_contents": "I've got some functionality that necessarily must scan a relatively large\ntable. Even worse, the total workload is actually 3 similar, but different\nqueries, each of which requires a table scan. They all have a resultset\nthat has the same structure, and all get inserted into a temp table. Is\nthere any performance benefit to revamping the workload such that it issues\na single:\n\ninsert into (...) select ... UNION select ... UNION select\n\nas opposed to 3 separate \"insert into () select ...\" statements.\n\nI could figure it out empirically, but the queries are really slow on my dev\nlaptop and I don't have access to the staging system at the moment. Also,\nit requires revamping a fair bit of code, so I figured it never hurts to\nask. I don't have a sense of whether postgres is able to parallelize\nmultiple subqueries via a single scan\n\nI've got some functionality that necessarily must scan a relatively large table.  Even worse, the total workload is actually 3 similar, but different queries, each of which requires a table scan.  They all have a resultset that has the same structure, and all get inserted into a temp table.  Is there any performance benefit to revamping the workload such that it issues a single:\ninsert into (...) select ... UNION select ... UNION selectas opposed to 3 separate \"insert into () select ...\" statements.I could figure it out empirically, but the queries are really slow on my dev laptop and I don't have access to the staging system at the moment.  Also, it requires revamping a fair bit of code, so I figured it never hurts to ask.  I don't have a sense of whether postgres is able to parallelize multiple subqueries via a single scan", "msg_date": "Tue, 29 Mar 2011 15:16:19 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "multiple table scan performance" }, { "msg_contents": "On Tue, Mar 29, 2011 at 7:16 PM, Samuel Gendler\n<[email protected]> wrote:\n> Is there any performance benefit to revamping the workload such that it issues\n> a single:\n> insert into (...) select ... UNION select ... UNION select\n> as opposed to 3 separate \"insert into () select ...\" statements.\n\nI wouldn't expect any difference - if you used UNION ALL (union will\nbe equivalent to insert into () select DISTINCT ...)\n", "msg_date": "Tue, 29 Mar 2011 19:28:07 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple table scan performance" }, { "msg_contents": "On 3/29/11 3:16 PM, Samuel Gendler wrote:\n> I've got some functionality that necessarily must scan a relatively large table. Even worse, the total workload is actually 3 similar, but different queries, each of which requires a table scan. They all have a resultset that has the same structure, and all get inserted into a temp table. Is there any performance benefit to revamping the workload such that it issues a single:\n>\n> insert into (...) select ... UNION select ... UNION select\n>\n> as opposed to 3 separate \"insert into () select ...\" statements.\n>\n> I could figure it out empirically, but the queries are really slow on my dev laptop and I don't have access to the staging system at the moment. Also, it requires revamping a fair bit of code, so I figured it never hurts to ask. I don't have a sense of whether postgres is able to parallelize multiple subqueries via a single scan\nYou don't indicate how complex your queries are. 
If it's just a single table and the conditions are relatively simple, could you do something like this?\n\n insert into (...) select ... where (...) OR (...) OR (...)\n\nCraig\n", "msg_date": "Tue, 29 Mar 2011 16:31:26 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple table scan performance" }, { "msg_contents": "On Wed, Mar 30, 2011 at 01:16, Samuel Gendler <[email protected]> wrote:\n> I've got some functionality that necessarily must scan a relatively large table\n\n> Is there any performance benefit to revamping the workload such that it issues\n> a single:\n> insert into (...) select ... UNION select ... UNION select\n> as opposed to 3 separate \"insert into () select ...\" statements.\n\nApparently not, as explained by Claudio Freire. This seems like missed\nopportunity for the planner, however. If it scanned all three UNION\nsubqueries in parallel, the synchronized seqscans feature would kick\nin and the physical table would only be read once, instead of 3 times.\n\n(I'm assuming that seqscan disk access is your bottleneck)\n\nYou can trick Postgres (8.3.x and newer) into doing it in parallel\nanyway: open 3 separate database connections and issue each of these\n'INSERT INTO ... SELECT' parts separately. This way all the queries\nshould execute in about 1/3 the time, compared to running them in one\nsession or with UNION ALL.\n\nRegards,\nMarti\n", "msg_date": "Wed, 30 Mar 2011 03:05:08 +0300", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple table scan performance" }, { "msg_contents": "On Tue, Mar 29, 2011 at 5:05 PM, Marti Raudsepp <[email protected]> wrote:\n\n> On Wed, Mar 30, 2011 at 01:16, Samuel Gendler <[email protected]>\n> wrote:\n>\n> You can trick Postgres (8.3.x and newer) into doing it in parallel\n> anyway: open 3 separate database connections and issue each of these\n> 'INSERT INTO ... SELECT' parts separately. This way all the queries\n> should execute in about 1/3 the time, compared to running them in one\n> session or with UNION ALL.\n>\n\nThat's a good idea, but forces a lot of infrastructural change on me. I'm\ninserting into a temp table, then deleting everything from another table\nbefore copying over. I could insert into an ordinary table, but then I've\ngot to deal with ensuring that everything is properly cleaned up, etc.\n Since nothing is actually blocked, waiting for the queries to return, I\nthink I'll just let them churn for now. It won't make much difference in\nproduction, where the whole table will fit easily into cache. I just wanted\nthings to be faster in my dev environment.\n\n\n\n>\n> Regards,\n> Marti\n>\n\nOn Tue, Mar 29, 2011 at 5:05 PM, Marti Raudsepp <[email protected]> wrote:\nOn Wed, Mar 30, 2011 at 01:16, Samuel Gendler <[email protected]> wrote:\n\nYou can trick Postgres (8.3.x and newer) into doing it in parallel\nanyway: open 3 separate database connections and issue each of these\n'INSERT INTO ... SELECT' parts separately.  This way all the queries\nshould execute in about 1/3 the time, compared to running them in one\nsession or with UNION ALL.That's a good idea, but forces a lot of infrastructural change on me.  I'm inserting into a temp table, then deleting everything from another table before copying over.  I could insert into an ordinary table, but then I've got to deal with ensuring that everything is properly cleaned up, etc.  
Since nothing is actually blocked, waiting for the queries to return, I think I'll just let them churn for now. It won't make much difference in production, where the whole table will fit easily into cache.  I just wanted things to be faster in my dev environment.\n \n\nRegards,\nMarti", "msg_date": "Tue, 29 Mar 2011 17:12:24 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple table scan performance" } ]
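A sketch of the single-scan variant Craig James suggests, next to the UNION ALL form Claudio Freire mentions. The schema here (big_table, temp_results, category, amount) is invented purely for illustration; whether the real queries can be merged this way depends on how similar their select lists actually are.

    -- one pass over big_table instead of three:
    INSERT INTO temp_results (id, amount, bucket)
    SELECT id, amount,
           CASE WHEN category = 'a' THEN 1
                WHEN category = 'b' THEN 2
                ELSE 3
           END
    FROM big_table
    WHERE category IN ('a', 'b', 'c');

    -- if the selects must stay separate, UNION ALL avoids the implicit
    -- DISTINCT that a plain UNION would add:
    INSERT INTO temp_results (id, amount, bucket)
    SELECT id, amount, 1 FROM big_table WHERE category = 'a'
    UNION ALL
    SELECT id, amount, 2 FROM big_table WHERE category = 'b'
    UNION ALL
    SELECT id, amount, 3 FROM big_table WHERE category = 'c';

The UNION ALL form still reads big_table three times within one session; Marti's three-connection trick instead leans on synchronized sequential scans (8.3+) so the concurrent scans can share a single physical pass over the table.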
[ { "msg_contents": "Just some information on our setup:\r\n\r\n- HP DL585 G6 \r\n- 4 x AMD Opteron 8435 (24 cores)\r\n- 256GB RAM\r\n- 2 FusionIO 640GB PCI-SSD (RAID0)\r\n- dual 10GB ethernet.\r\n\r\n- we have several tables that we store calculated values in.\r\n- these are inserted by a compute farm that calculates the results and stores them into a partitioned schema (schema listed below)\r\n- whenever we do a lot of inserts we seem to get exclusive locks.\r\n\r\nIs there something we can do to improve the performance around locking when doing a lot of parallel inserts with COPY into? We are not IO bound, what happens is that the copies start to slow down and continue to come in and cause the client to swap, we had hit over 800+ COPYS were in a waiting state, which forced us to start paging heavily creating an issue. If we can figure out the locking issue the copys should clear faster requiring less memory in use.\r\n\r\n[ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 LOG: process 14405 still waiting for ExclusiveLock on extension of relation 470273 of database 16384 after 5001.894 ms\r\n[ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 CONTEXT: COPY reportvalues_part_1931, line 1: \"660250 41977959 11917 584573.43642105709\"\r\n[ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 STATEMENT: COPY reportvalues_part_1931 FROM stdin USING DELIMITERS ' '\r\n[ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e LOG: process 7294 still waiting for ExclusiveLock on extension of relation 470606 of database 16384 after 5062.968 ms\r\n[ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e CONTEXT: COPY reportvalues_part_1932, line 158: \"660729 41998839 887 45000.0\"\r\n[ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e STATEMENT: COPY reportvalues_part_1932 FROM stdin USING DELIMITERS ' '\r\n[ 2011-03-30 15:54:56.077 EDT ] 25781 [local] asgprod:4d938556.64b5 LOG: process 25781 still waiting for ExclusiveLock on extension of relation 470606 of database 16384 after 5124.463 ms\r\n\r\nrelation | 16384 | 470606 | | | | | | | | 93/677526 | 14354 | RowExclusiveLock | t\r\n relation | 16384 | 470606 | | | | | | | | 1047/4 | 27451 | RowExclusiveLock | t\r\n relation | 16384 | 470606 | | | | | | | | 724/58891 | 20721 | RowExclusiveLock | t\r\n transactionid | | | | | | 94673393 | | | | 110/502566 | 1506 | ExclusiveLock | t\r\n virtualxid | | | | | 975/92 | | | | | 975/92 | 25751 | ExclusiveLock | t\r\n extend | 16384 | 470606 | | | | | | | | 672/102043 | 20669 | ExclusiveLock | f\r\n extend | 16384 | 470606 | | | | | | | | 1178/10 | 6074 | ExclusiveLock | f\r\n virtualxid | | | | | 37/889225 | | | | | 37/889225 | 4623 | ExclusiveLock | t\r\n relation | 16384 | 405725 | | | | | | | | 39/822056 | 32502 | AccessShareLock | t\r\n transactionid | | | | | | 94673831 | | | | 917/278 | 23134 | ExclusiveLock | t\r\n relation | 16384 | 470609 | | | | | | | | 537/157021 | 11863 | RowExclusiveLock | t\r\n relation | 16384 | 470609 | | | | | | | | 532/91114 | 7282 | RowExclusiveLock | t\r\n virtualxid | | | | | 920/8 | | | | | 920/8 | 23137 | ExclusiveLock | t\r\n relation | 16384 | 425555 | | | | | | | | 39/822056 | 32502 | AccessShareLock | t\r\n relation | 16384 | 470606 | | | | | | | | 915/10 | 22619 | RowExclusiveLock | t\r\n relation | 16384 | 470606 | | | | | | | | 344/387563 | 30343 | RowExclusiveLock | tNumber of child tables: 406 (Use \\d+ to list them.)\r\n\r\n\r\nriskresults=# \\d reportvalues_part_1932;\r\n 
Table \"public.reportvalues_part_1932\"\r\n Column | Type | Modifiers\r\n--------------+------------------+-----------\r\n reportid | integer | not null\r\n scenarioid | integer | not null\r\n instrumentid | integer | not null\r\n value | double precision |\r\nIndexes:\r\n \"reportvalues_part_1932_pkey\" PRIMARY KEY, btree (reportid, scenarioid, instrumentid)\r\nInherits: reportvalues_part\r\n\r\nriskresults=# \\d reportvalues_part;\r\n Table \"public.reportvalues_part\"\r\n Column | Type | Modifiers\r\n--------------+------------------+-----------\r\n reportid | integer | not null\r\n scenarioid | integer | not null\r\n instrumentid | integer | not null\r\n value | double precision |\r\nIndexes:\r\n \"reportvalues_part_pkey\" PRIMARY KEY, btree (reportid, scenarioid, instrumentid)\r\nNumber of child tables: 406 (Use \\d+ to list them.)\r\n\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Wed, 30 Mar 2011 16:56:22 -0400", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": true, "msg_subject": "COPY with high # of clients, partitioned table locking issues?" }, { "msg_contents": "John,\n\nSorry to hear you're struggling with such underpowered hardware. ;-) A little more information would be helpful, though:\n\n1. What version of PG are you running?\n2. What are the constraints on the child tables?\n3. How many rows does each copy insert?\n4. Are these wrapped in transactions?\n5. are the child tables created at the same time the copies are taking place? In the same transaction?\n6. Are the indexes in place on the child table(s) when the copies are running? Do they have to be to validate the data?\n7. What are the configuration settings for the database? (Just the ones changed from the default, please.)\n8. Which file system are you running for the database files? Mount options?\n9. 
Are the WAL files on the same file system?\n\n\nBob Lunney\n\n--- On Wed, 3/30/11, Strange, John W <[email protected]> wrote:\n\n> From: Strange, John W <[email protected]>\n> Subject: [PERFORM] COPY with high # of clients, partitioned table locking issues?\n> To: \"[email protected]\" <[email protected]>\n> Date: Wednesday, March 30, 2011, 4:56 PM\n> Just some information on our setup:\n> \n> - HP DL585 G6 \n> - 4 x AMD Opteron 8435 (24 cores)\n> - 256GB RAM\n> - 2 FusionIO 640GB PCI-SSD (RAID0)\n> - dual 10GB ethernet.\n> \n> - we have several tables that we store calculated values\n> in.\n> - these are inserted by a compute farm that calculates the\n> results and stores them into a partitioned schema (schema\n> listed below)\n> - whenever we do a lot of inserts we seem to get exclusive\n> locks.\n> \n> Is there something we can do to improve the performance\n> around locking when doing a lot of parallel inserts with\n> COPY into?  We are not IO bound, what happens is that\n> the copies start to slow down and continue to come in and\n> cause the client to swap, we had hit over 800+ COPYS were in\n> a waiting state, which forced us to start paging heavily\n> creating an issue.  If we can figure out the locking\n> issue the copys should clear faster requiring less memory in\n> use.\n> \n> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local]\n> asgprod:4d938288.3845 LOG:  process 14405 still waiting\n> for ExclusiveLock on extension of relation 470273 of\n> database 16384 after 5001.894 ms\n> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local]\n> asgprod:4d938288.3845 CONTEXT:  COPY\n> reportvalues_part_1931, line 1: \"660250     \n> 41977959       \n> 11917   584573.43642105709\"\n> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local]\n> asgprod:4d938288.3845 STATEMENT:  COPY\n> reportvalues_part_1931 FROM stdin USING DELIMITERS ' \n>      '\n> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local]\n> asgprod:4d938939.1c7e LOG:  process 7294 still waiting\n> for ExclusiveLock on extension of relation 470606 of\n> database 16384 after 5062.968 ms\n> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local]\n> asgprod:4d938939.1c7e CONTEXT:  COPY\n> reportvalues_part_1932, line 158: \"660729 \n>    41998839       \n> 887     45000.0\"\n> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local]\n> asgprod:4d938939.1c7e STATEMENT:  COPY\n> reportvalues_part_1932 FROM stdin USING DELIMITERS ' \n>       '\n> [ 2011-03-30 15:54:56.077 EDT ] 25781 [local]\n> asgprod:4d938556.64b5 LOG:  process 25781 still waiting\n> for ExclusiveLock on extension of relation 470606 of\n> database 16384 after 5124.463 ms\n> \n> relation      |    16384\n> |   470606 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 93/677526     \n>     | 14354 | RowExclusiveLock     \n>    | t\n> relation      |    16384\n> |   470606 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 1047/4     \n>        | 27451 |\n> RowExclusiveLock         | t\n> relation      |    16384\n> |   470606 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 724/58891     \n>     | 20721 | RowExclusiveLock     \n>    | t\n> transactionid |          | \n>         |      | \n>      |         \n>   |      94673393 |     \n>    |       | \n>         | 110/502566     \n>    |  1506 | ExclusiveLock   \n>         | t\n> virtualxid    |       \n>   |          |   \n>   |       | 975/92 \n>    |           \n>    |     \n>    |       | \n>         | 975/92     \n>        | 25751 |\n> ExclusiveLock  
          | t\n> extend        |    16384\n> |   470606 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 672/102043     \n>    | 20669 | ExclusiveLock   \n>         | f\n> extend        |    16384\n> |   470606 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 1178/10     \n>       |  6074 | ExclusiveLock \n>           | f\n> virtualxid    |       \n>   |          |   \n>   |       | 37/889225 \n> |           \n>    |     \n>    |       | \n>         | 37/889225     \n>     |  4623 | ExclusiveLock   \n>         | t\n> relation      |    16384\n> |   405725 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 39/822056     \n>     | 32502 | AccessShareLock     \n>     | t\n> transactionid |          | \n>         |      | \n>      |         \n>   |      94673831 |     \n>    |       | \n>         | 917/278     \n>       | 23134 | ExclusiveLock   \n>         | t\n> relation      |    16384\n> |   470609 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 537/157021     \n>    | 11863 | RowExclusiveLock   \n>      | t\n> relation      |    16384\n> |   470609 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 532/91114     \n>     |  7282 | RowExclusiveLock   \n>      | t\n> virtualxid    |       \n>   |          |   \n>   |       | 920/8   \n>   |           \n>    |     \n>    |       | \n>         | 920/8     \n>         | 23137 | ExclusiveLock \n>           | t\n> relation      |    16384\n> |   425555 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 39/822056     \n>     | 32502 | AccessShareLock     \n>     | t\n> relation      |    16384\n> |   470606 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 915/10     \n>        | 22619 |\n> RowExclusiveLock         | t\n> relation      |    16384\n> |   470606 |      | \n>      |         \n>   |           \n>    |     \n>    |       | \n>         | 344/387563     \n>    | 30343 | RowExclusiveLock   \n>      | tNumber of child tables: 406 (Use\n> \\d+ to list them.)\n> \n> \n> riskresults=# \\d reportvalues_part_1932;\n>     Table \"public.reportvalues_part_1932\"\n>     Column    |   \n>    Type       |\n> Modifiers\n> --------------+------------------+-----------\n> reportid     | integer   \n>       | not null\n> scenarioid   | integer     \n>     | not null\n> instrumentid | integer          |\n> not null\n> value        | double precision |\n> Indexes:\n>     \"reportvalues_part_1932_pkey\" PRIMARY KEY,\n> btree (reportid, scenarioid, instrumentid)\n> Inherits: reportvalues_part\n> \n> riskresults=# \\d reportvalues_part;\n>       Table \"public.reportvalues_part\"\n>     Column    |   \n>    Type       |\n> Modifiers\n> --------------+------------------+-----------\n> reportid     | integer   \n>       | not null\n> scenarioid   | integer     \n>     | not null\n> instrumentid | integer          |\n> not null\n> value        | double precision |\n> Indexes:\n>     \"reportvalues_part_pkey\" PRIMARY KEY, btree\n> (reportid, scenarioid, instrumentid)\n> Number of child tables: 406 (Use \\d+ to list them.)\n> \n> This communication is for informational purposes only. It\n> is not\n> intended as an offer or solicitation for the purchase or\n> sale of\n> any financial instrument or as an official confirmation of\n> any\n> transaction. 
All market prices, data and other information\n> are not\n> warranted as to completeness or accuracy and are subject to\n> change\n> without notice. Any comments or statements made herein do\n> not\n> necessarily reflect those of JPMorgan Chase & Co., its\n> subsidiaries\n> and affiliates.\n> \n> This transmission may contain information that is\n> privileged,\n> confidential, legally privileged, and/or exempt from\n> disclosure\n> under applicable law. If you are not the intended\n> recipient, you\n> are hereby notified that any disclosure, copying,\n> distribution, or\n> use of the information contained herein (including any\n> reliance\n> thereon) is STRICTLY PROHIBITED. Although this transmission\n> and any\n> attachments are believed to be free of any virus or other\n> defect\n> that might affect any computer system into which it is\n> received and\n> opened, it is the responsibility of the recipient to ensure\n> that it\n> is virus free and no responsibility is accepted by JPMorgan\n> Chase &\n> Co., its subsidiaries and affiliates, as applicable, for\n> any loss\n> or damage arising in any way from its use. If you received\n> this\n> transmission in error, please immediately contact the\n> sender and\n> destroy the material in its entirety, whether in electronic\n> or hard\n> copy format. Thank you.\n> \n> Please refer to http://www.jpmorgan.com/pages/disclosures\n> for\n> disclosures relating to European legal entities.\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Wed, 30 Mar 2011 17:48:30 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with high # of clients, partitioned table locking issues?" }, { "msg_contents": "On Wed, Mar 30, 2011 at 5:48 PM, Bob Lunney <[email protected]> wrote:\n\n> John,\n>\n> Sorry to hear you're struggling with such underpowered hardware. ;-) A\n> little more information would be helpful, though:\n>\n> 1. What version of PG are you running?\n> 2. What are the constraints on the child tables?\n> 3. How many rows does each copy insert?\n> 4. Are these wrapped in transactions?\n> 5. are the child tables created at the same time the copies are taking\n> place? In the same transaction?\n> 6. Are the indexes in place on the child table(s) when the copies are\n> running? Do they have to be to validate the data?\n> 7. What are the configuration settings for the database? (Just the ones\n> changed from the default, please.)\n> 8. Which file system are you running for the database files? Mount\n> options?\n> 9. Are the WAL files on the same file system?\n>\n>\n10. are you copying directly into the child tables or into the parent and\nthen redirecting to child tables via a trigger?\n\nOn Wed, Mar 30, 2011 at 5:48 PM, Bob Lunney <[email protected]> wrote:\nJohn,\n\nSorry to hear you're struggling with such underpowered hardware.  ;-)  A little more information would be helpful, though:\n\n1.  What version of PG are you running?\n2.  What are the constraints on the child tables?\n3.  How many rows does each copy insert?\n4.  Are these wrapped in transactions?\n5.  are the child tables created at the same time the copies are taking place?  In the same transaction?\n6.  Are the indexes in place on the child table(s) when the copies are running?  Do they have to be to validate the data?\n7.  What are the configuration settings for the database?  
(Just the ones changed from the default, please.)\n8.  Which file system are you running for the database files?  Mount options?\n9.  Are the WAL files on the same file system?\n 10. are you copying directly into the child tables or into the parent and then redirecting to child tables via a trigger?", "msg_date": "Wed, 30 Mar 2011 18:31:40 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with high # of clients, partitioned table locking issues?" }, { "msg_contents": "Your message was dropped into my Spam lable :S\n\n\n2011/3/30 Strange, John W <[email protected]>:\n> Just some information on our setup:\n>\n> - HP DL585 G6\n> - 4 x AMD Opteron 8435 (24 cores)\n> - 256GB RAM\n> - 2 FusionIO 640GB PCI-SSD (RAID0)\n> - dual 10GB ethernet.\n>\n> - we have several tables that we store calculated values in.\n> - these are inserted by a compute farm that calculates the results and stores them into a partitioned schema (schema listed below)\n> - whenever we do a lot of inserts we seem to get exclusive locks.\n>\n> Is there something we can do to improve the performance around locking when doing a lot of parallel inserts with COPY into?  We are not IO bound, what happens is that the copies start to slow down and continue to come in and cause the client to swap, we had hit over 800+ COPYS were in a waiting state, which forced us to start paging heavily creating an issue.  If we can figure out the locking issue the copys should clear faster requiring less memory in use.\n>\n> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 LOG:  process 14405 still waiting for ExclusiveLock on extension of relation 470273 of database 16384 after 5001.894 ms\n> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 CONTEXT:  COPY reportvalues_part_1931, line 1: \"660250      41977959        11917   584573.43642105709\"\n> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 STATEMENT:  COPY reportvalues_part_1931 FROM stdin USING DELIMITERS '       '\n> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e LOG:  process 7294 still waiting for ExclusiveLock on extension of relation 470606 of database 16384 after 5062.968 ms\n> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e CONTEXT:  COPY reportvalues_part_1932, line 158: \"660729     41998839        887     45000.0\"\n> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e STATEMENT:  COPY reportvalues_part_1932 FROM stdin USING DELIMITERS '        '\n> [ 2011-03-30 15:54:56.077 EDT ] 25781 [local] asgprod:4d938556.64b5 LOG:  process 25781 still waiting for ExclusiveLock on extension of relation 470606 of database 16384 after 5124.463 ms\n>\n\nBut you are using stdin for COPY! The best way is use files. Maybe you must\nreview postgresql.conf configuration, especially the WAL configuration.\nHow many times you do this procedure? which is the amount of data involved?\n\n\n\n\n-- \n--\n              Emanuel Calvo\n              Helpame.com\n", "msg_date": "Thu, 31 Mar 2011 12:43:30 +0200", "msg_from": "Emanuel Calvo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with high # of clients, partitioned table locking issues?" }, { "msg_contents": "> But you are using stdin for COPY! The best way is use files.\n\nI've never heard this before, and I don't see how reading from files\ncould possibly help. Can you clarify?\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. 
Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Thu, 31 Mar 2011 09:25:41 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with high # of clients, partitioned table locking issues?" }, { "msg_contents": "On 03/30/2011 04:56 PM, Strange, John W wrote:\n> [ 2011-03-30 15:54:55.886 EDT ] 14405 [local] asgprod:4d938288.3845 LOG: process 14405 still waiting for ExclusiveLock on extension of relation 470273 of database 16384 after 5001.894 ms\n> [ 2011-03-30 15:54:56.015 EDT ] 7294 [local] asgprod:4d938939.1c7e LOG: process 7294 still waiting for ExclusiveLock on extension of relation 470606 of database 16384 after 5062.968 ms\n> [ 2011-03-30 15:54:56.077 EDT ] 25781 [local] asgprod:4d938556.64b5 LOG: process 25781 still waiting for ExclusiveLock on extension of relation 470606 of database 16384 after 5124.463 ms\n> \n\nWhen you insert something new into the database, sometimes it has to \ngrow the size of the underlying file on disk to add it. That's called \n\"relation extension\"; basically the table gets some number of 8K blocks \nadded to the end of it. If your workload tries to push new blocks into \na table with no free space, every operation will become serialized \nwaiting on individual processes grabbing the lock for relation extension.\n\nThe main reasonable way around this, at a high level, is to make each \nextension grab significantly more space when a table gets into this \nsituation than it does right now. Don't just extend by one block; \nextend by a large number instead, if you believe you're in this sort \nof situation. That's probably going to take a low-level code change \nto actually fix the issue inside PostgreSQL though.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 01 Apr 2011 02:50:47 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY with high # of clients, partitioned table locking\n issues?" }
]
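A small sketch of how to watch the extension-lock contention Greg Smith describes. The "still waiting for ExclusiveLock on extension of relation ..." lines in the log come from log_lock_waits; the pg_locks query below shows the same contention live (the regclass cast resolves table names only when run inside the affected database).

    -- sessions holding or waiting on relation-extension locks right now
    SELECT pid, relation::regclass AS rel, mode, granted
    FROM pg_locks
    WHERE locktype = 'extend'
    ORDER BY granted, pid;

    -- postgresql.conf: report waits that exceed deadlock_timeout (default 1s)
    log_lock_waits = on

Because the waiters serialize on a single extension lock per table, throttling how many concurrent COPY streams target the same partition at once is probably the most practical mitigation short of the low-level change Greg mentions.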
[ { "msg_contents": "Hi All,\n\nI'm trying to delete one row from a table and it's taking an extremely long time. This parent table is referenced by other table's foreign keys, but the particular row I'm trying to delete is not referenced any other rows in the associative tables. This table has the following structure:\n\nCREATE TABLE revision\n(\n id serial NOT NULL,\n revision_time timestamp without time zone NOT NULL DEFAULT now(),\n start_time timestamp without time zone NOT NULL DEFAULT clock_timestamp(),\n schema_change boolean NOT NULL,\n \"comment\" text,\n CONSTRAINT revision_pkey PRIMARY KEY (id)\n)\nWITH (\n OIDS=FALSE\n);\n\nThis table is referenced from foreign key by 130 odd other tables. The total number of rows from these referencing tables goes into the hundreds of millions. Each of these tables has been automatically created by script and has the same _revision_created, _revision_expired fields, foreign keys and indexes. Here is an example of one:\n\nCREATE TABLE table_version.bde_crs_action_revision\n(\n _revision_created integer NOT NULL,\n _revision_expired integer,\n tin_id integer NOT NULL,\n id integer NOT NULL,\n \"sequence\" integer NOT NULL,\n att_type character varying(4) NOT NULL,\n system_action character(1) NOT NULL,\n audit_id integer NOT NULL,\n CONSTRAINT \"pkey_table_version.bde_crs_action_revision\" PRIMARY KEY (_revision_created, audit_id),\n CONSTRAINT bde_crs_action_revision__revision_created_fkey FOREIGN KEY (_revision_created)\n REFERENCES table_version.revision (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT bde_crs_action_revision__revision_expired_fkey FOREIGN KEY (_revision_expired)\n REFERENCES table_version.revision (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE table_version.bde_crs_action_revision OWNER TO bde_dba;\nALTER TABLE table_version.bde_crs_action_revision ALTER COLUMN audit_id SET STATISTICS 500;\n\n\nCREATE INDEX idx_crs_action_audit_id\n ON table_version.bde_crs_action_revision\n USING btree\n (audit_id);\n\nCREATE INDEX idx_crs_action_created\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_created);\n\nCREATE INDEX idx_crs_action_expired\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_expired);\n\nCREATE INDEX idx_crs_action_expired_created\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_expired, _revision_created);\n\nCREATE INDEX idx_crs_action_expired_key\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_expired, audit_id);\n\n\nAll of the table have been analysed before I tried to run the query.\n\nThe fact the all of the foreign keys have a covering index makes me wonder why this delete is taking so long.\n\nThe explain for \n\ndelete from table_version.revision where id = 1003\n\n\nDelete (cost=0.00..1.02 rows=1 width=6)\n -> Seq Scan on revision (cost=0.00..1.02 rows=1 width=6)\n Filter: (id = 100)\n\nI'm running POstgreSQL 9.0.2 on Ubuntu 10.4\n\nCheers\nJeremy\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. 
\nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Thu, 31 Mar 2011 15:16:29 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Slow deleting tables with foreign keys" }, { "msg_contents": "Jeremy,\n\nDoes table_revision have a unique index on id? Also, I doubt these two indexes ever get used:\n\nCREATE INDEX idx_crs_action_expired_created\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_expired, _revision_created);\n\nCREATE INDEX idx_crs_action_expired_key\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_expired, audit_id);\n\nBob Lunney\n\n--- On Wed, 3/30/11, Jeremy Palmer <[email protected]> wrote:\n\n> From: Jeremy Palmer <[email protected]>\n> Subject: [PERFORM] Slow deleting tables with foreign keys\n> To: \"[email protected]\" <[email protected]>\n> Date: Wednesday, March 30, 2011, 10:16 PM\n> Hi All,\n> \n> I'm trying to delete one row from a table and it's taking\n> an extremely long time. This parent table is referenced by\n> other table's foreign keys, but the particular row I'm\n> trying to delete is not referenced any other rows in the\n> associative tables. This table has the following structure:\n> \n> CREATE TABLE revision\n> (\n>   id serial NOT NULL,\n>   revision_time timestamp without time zone NOT NULL\n> DEFAULT now(),\n>   start_time timestamp without time zone NOT NULL\n> DEFAULT clock_timestamp(),\n>   schema_change boolean NOT NULL,\n>   \"comment\" text,\n>   CONSTRAINT revision_pkey PRIMARY KEY (id)\n> )\n> WITH (\n>   OIDS=FALSE\n> );\n> \n> This table is referenced from foreign key by 130 odd other\n> tables. The total number of rows from these referencing\n> tables goes into the hundreds of millions. Each of these\n> tables has been automatically created by script and has the\n> same _revision_created, _revision_expired fields, foreign\n> keys and indexes. 
Here is an example of one:\n> \n> CREATE TABLE table_version.bde_crs_action_revision\n> (\n>   _revision_created integer NOT NULL,\n>   _revision_expired integer,\n>   tin_id integer NOT NULL,\n>   id integer NOT NULL,\n>   \"sequence\" integer NOT NULL,\n>   att_type character varying(4) NOT NULL,\n>   system_action character(1) NOT NULL,\n>   audit_id integer NOT NULL,\n>   CONSTRAINT\n> \"pkey_table_version.bde_crs_action_revision\" PRIMARY KEY\n> (_revision_created, audit_id),\n>   CONSTRAINT\n> bde_crs_action_revision__revision_created_fkey FOREIGN KEY\n> (_revision_created)\n>       REFERENCES table_version.revision (id)\n> MATCH SIMPLE\n>       ON UPDATE NO ACTION ON DELETE NO\n> ACTION,\n>   CONSTRAINT\n> bde_crs_action_revision__revision_expired_fkey FOREIGN KEY\n> (_revision_expired)\n>       REFERENCES table_version.revision (id)\n> MATCH SIMPLE\n>       ON UPDATE NO ACTION ON DELETE NO\n> ACTION\n> )\n> WITH (\n>   OIDS=FALSE\n> );\n> ALTER TABLE table_version.bde_crs_action_revision OWNER TO\n> bde_dba;\n> ALTER TABLE table_version.bde_crs_action_revision ALTER\n> COLUMN audit_id SET STATISTICS 500;\n> \n> \n> CREATE INDEX idx_crs_action_audit_id\n>   ON table_version.bde_crs_action_revision\n>   USING btree\n>   (audit_id);\n> \n> CREATE INDEX idx_crs_action_created\n>   ON table_version.bde_crs_action_revision\n>   USING btree\n>   (_revision_created);\n> \n> CREATE INDEX idx_crs_action_expired\n>   ON table_version.bde_crs_action_revision\n>   USING btree\n>   (_revision_expired);\n> \n> CREATE INDEX idx_crs_action_expired_created\n>   ON table_version.bde_crs_action_revision\n>   USING btree\n>   (_revision_expired, _revision_created);\n> \n> CREATE INDEX idx_crs_action_expired_key\n>   ON table_version.bde_crs_action_revision\n>   USING btree\n>   (_revision_expired, audit_id);\n> \n> \n> All of the table have been analysed before I tried to run\n> the query.\n> \n> The fact the all of the foreign keys have a covering index\n> makes me wonder why this delete is taking so long.\n> \n> The explain for \n> \n> delete from table_version.revision where id = 1003\n> \n> \n> Delete  (cost=0.00..1.02 rows=1 width=6)\n>   ->  Seq Scan on revision \n> (cost=0.00..1.02 rows=1 width=6)\n>         Filter: (id = 100)\n> \n> I'm running POstgreSQL 9.0.2 on Ubuntu 10.4\n> \n> Cheers\n> Jeremy\n> ______________________________________________________________________________________________________\n> \n> This message contains information, which is confidential\n> and may be subject to legal privilege. 
\n> If you are not the intended recipient, you must not peruse,\n> use, disseminate, distribute or copy this message.\n> If you have received this message in error, please notify\n> us immediately (Phone 0800 665 463 or [email protected])\n> and destroy the original message.\n> LINZ accepts no responsibility for changes to this email,\n> or for any attachments, after its transmission from LINZ.\n> \n> Thank you.\n> ______________________________________________________________________________________________________\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Thu, 31 Mar 2011 07:54:25 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow deleting tables with foreign keys" }, { "msg_contents": "Hi Bob,\n\nThe \"table_version.revision\" (\"revision\" is the same) table has a primary key on id because of the PK \"revision_pkey\". Actually at the moment there are only two rows in the table table_version.revision!\n\nThanks for the tips about the indexes. I'm still in the development and tuning process, so I will do some analysis of the index stats to see if they are indeed redundant.\n\nCheers,\nJeremy\n________________________________________\nFrom: Bob Lunney [[email protected]]\nSent: Friday, 1 April 2011 3:54 a.m.\nTo: [email protected]; Jeremy Palmer\nSubject: Re: [PERFORM] Slow deleting tables with foreign keys\n\nJeremy,\n\nDoes table_revision have a unique index on id? Also, I doubt these two indexes ever get used:\n\nCREATE INDEX idx_crs_action_expired_created\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_expired, _revision_created);\n\nCREATE INDEX idx_crs_action_expired_key\n ON table_version.bde_crs_action_revision\n USING btree\n (_revision_expired, audit_id);\n\nBob Lunney\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Fri, 1 Apr 2011 08:43:30 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow deleting tables with foreign keys" }, { "msg_contents": "On Wed, Mar 30, 2011 at 10:16 PM, Jeremy Palmer <[email protected]> wrote:\n> Hi All,\n>\n> I'm trying to delete one row from a table and it's taking an extremely long time. This parent table is referenced by other table's foreign keys, but the particular row I'm trying to delete is not referenced any other rows in the associative tables. 
This table has the following structure:\n>\n> CREATE TABLE revision\n> (\n>  id serial NOT NULL,\n>  revision_time timestamp without time zone NOT NULL DEFAULT now(),\n>  start_time timestamp without time zone NOT NULL DEFAULT clock_timestamp(),\n>  schema_change boolean NOT NULL,\n>  \"comment\" text,\n>  CONSTRAINT revision_pkey PRIMARY KEY (id)\n> )\n> WITH (\n>  OIDS=FALSE\n> );\n>\n> This table is referenced from foreign key by 130 odd other tables. The total number of rows from these referencing tables goes into the hundreds of millions. Each of these tables has been automatically created by script and has the same _revision_created, _revision_expired fields, foreign keys and indexes. Here is an example of one:\n>\n> CREATE TABLE table_version.bde_crs_action_revision\n> (\n>  _revision_created integer NOT NULL,\n>  _revision_expired integer,\n>  tin_id integer NOT NULL,\n>  id integer NOT NULL,\n>  \"sequence\" integer NOT NULL,\n>  att_type character varying(4) NOT NULL,\n>  system_action character(1) NOT NULL,\n>  audit_id integer NOT NULL,\n>  CONSTRAINT \"pkey_table_version.bde_crs_action_revision\" PRIMARY KEY (_revision_created, audit_id),\n>  CONSTRAINT bde_crs_action_revision__revision_created_fkey FOREIGN KEY (_revision_created)\n>      REFERENCES table_version.revision (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT bde_crs_action_revision__revision_expired_fkey FOREIGN KEY (_revision_expired)\n>      REFERENCES table_version.revision (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n>  OIDS=FALSE\n> );\n> ALTER TABLE table_version.bde_crs_action_revision OWNER TO bde_dba;\n> ALTER TABLE table_version.bde_crs_action_revision ALTER COLUMN audit_id SET STATISTICS 500;\n>\n>\n> CREATE INDEX idx_crs_action_audit_id\n>  ON table_version.bde_crs_action_revision\n>  USING btree\n>  (audit_id);\n>\n> CREATE INDEX idx_crs_action_created\n>  ON table_version.bde_crs_action_revision\n>  USING btree\n>  (_revision_created);\n>\n> CREATE INDEX idx_crs_action_expired\n>  ON table_version.bde_crs_action_revision\n>  USING btree\n>  (_revision_expired);\n>\n> CREATE INDEX idx_crs_action_expired_created\n>  ON table_version.bde_crs_action_revision\n>  USING btree\n>  (_revision_expired, _revision_created);\n>\n> CREATE INDEX idx_crs_action_expired_key\n>  ON table_version.bde_crs_action_revision\n>  USING btree\n>  (_revision_expired, audit_id);\n>\n>\n> All of the table have been analysed before I tried to run the query.\n>\n> The fact the all of the foreign keys have a covering index makes me wonder why this delete is taking so long.\n>\n> The explain for\n>\n> delete from table_version.revision where id = 1003\n>\n>\n> Delete  (cost=0.00..1.02 rows=1 width=6)\n>  ->  Seq Scan on revision  (cost=0.00..1.02 rows=1 width=6)\n>        Filter: (id = 100)\n>\n> I'm running POstgreSQL 9.0.2 on Ubuntu 10.4\n\nEXPLAIN ANALYZE can be useful in these kinds of situations, as it will\ntell you where the time is going. 
e.g.:\n\nrhaas=# explain analyze delete from foo where a = 2;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Delete on foo (cost=0.00..8.27 rows=1 width=6) (actual\ntime=0.054..0.054 rows=0 loops=1)\n -> Index Scan using foo_a_key on foo (cost=0.00..8.27 rows=1\nwidth=6) (actual time=0.028..0.032 rows=1 loops=1)\n Index Cond: (a = 2)\n Trigger for constraint bar_a_fkey: time=5.530 calls=1\n Total runtime: 5.648 ms\n(5 rows)\n\nIn your case you probably will have lots of \"Trigger for constraint\nblahblah\" lines and you can see which one or ones are taking all the\ntime, which might give you a clue where to go with it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 25 Apr 2011 19:54:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow deleting tables with foreign keys" } ]
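Since EXPLAIN ANALYZE really executes the statement, the DELETE from this thread can be wrapped in a transaction and rolled back, so the row survives while the per-constraint trigger timings are still reported. A minimal sketch:

    BEGIN;
    EXPLAIN ANALYZE
    DELETE FROM table_version.revision WHERE id = 1003;
    ROLLBACK;   -- keep the row; only the timing breakdown is wanted

Each "Trigger for constraint ...: time=... calls=..." line in the output names the foreign-key check that is consuming the time, which narrows the problem down to a specific referencing table.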
[ { "msg_contents": "For this query:\n\nselect pp.id,pp.product_id,pp.selling_site_id,pp.asin\nfrom product_price pp\nwhere\n(pp.asin is not null and pp.asin<>'')\nand (pp.upload_status_id<>1)\nand pp.selling_site_id in (8,7,35,6,9)\nand (pp.last_od < 'now'::timestamp - '1 week'::interval )\nlimit 5000\n\nQuery plan is:\n\n\"Limit (cost=9182.41..77384.80 rows=3290 width=35)\"\n\" -> Bitmap Heap Scan on product_price pp (cost=9182.41..77384.80 \nrows=3290 width=35)\"\n\" Recheck Cond: ((last_od < '2011-03-24 \n13:05:09.540025'::timestamp without time zone) AND (selling_site_id = \nANY ('{8,7,35,6,9}'::bigint[])))\"\n\" Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND \n(upload_status_id <> 1))\"\n\" -> Bitmap Index Scan on idx_product_price_last_od_ss \n(cost=0.00..9181.59 rows=24666 width=0)\"\n\" Index Cond: ((last_od < '2011-03-24 \n13:05:09.540025'::timestamp without time zone) AND (selling_site_id = \nANY ('{8,7,35,6,9}'::bigint[])))\"\n\nFor this query:\n\nselect pp.id,pp.product_id,pp.selling_site_id,pp.asin\nfrom product_price pp\nwhere\n(pp.asin is not null and pp.asin<>'')\nand (pp.upload_status_id<>1)\nand pp.selling_site_id in (8,7,35,6,9)\nand (pp.last_od + '1 week'::interval < 'now'::timestamp )\nlimit 5000\n\nQuery plan is:\n\n\"Limit (cost=0.00..13890.67 rows=5000 width=35)\"\n\" -> Seq Scan on product_price pp (cost=0.00..485889.97 rows=174898 \nwidth=35)\"\n\" Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND \n(upload_status_id <> 1) AND ((last_od + '7 days'::interval) < \n'2011-03-31 13:06:17.460013'::timestamp without time zone) AND \n(selling_site_id = ANY ('{8,7,35,6,9}'::bigint[])))\"\n\n\nThe only difference is this: instead of (pp.last_od < 'now'::timestamp - \n'1 week'::interval ) I have used (pp.last_od + '1 week'::interval < \n'now'::timestamp )\n\nFirst query with index scan opens in 440msec. The second query with seq \nscan opens in about 22 seconds. So the first one is about 50x faster.\n\nMy concern is that we are working on a huge set of applications that use \nthousands of different queries on a database. There are programs that we \nwrote years ago. The database structure continuously changing. We are \nadding new indexes and columns, and of course we are upgrading \nPostgreSQL when a new stable version comes out. There are cases when a \nchange in a table affects 500+ queries in 50+ programs. I really did not \nthink that I have to be THAT CAREFUL with writing conditions in SQL. Do \nI really have to manually analyze all those queries and \"correct\" \nconditions like this?\n\nIf so, then at least I would like to know if there is a documentation or \nwiki page where I can learn about \"how not to write conditions\". I just \nfigured out that I need to put constant expressions on one side of any \ncomparison, if possible. But probably there are other rules I wouldn't \nthink of.\n\nMight it be possible to change the optimizer so that it tries to rally \nconstant expressions in the first place? That cannot be bad, right?\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Thu, 31 Mar 2011 19:26:10 +0200", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Why it is using/not using index scan?" 
}, { "msg_contents": "On Thu, Mar 31, 2011 at 12:26 PM, Laszlo Nagy <[email protected]> wrote:\n> For this query:\n>\n> select pp.id,pp.product_id,pp.selling_site_id,pp.asin\n> from product_price pp\n> where\n> (pp.asin is not null and pp.asin<>'')\n> and (pp.upload_status_id<>1)\n> and pp.selling_site_id in (8,7,35,6,9)\n> and (pp.last_od < 'now'::timestamp - '1 week'::interval )\n> limit 5000\n>\n> Query plan is:\n>\n> \"Limit  (cost=9182.41..77384.80 rows=3290 width=35)\"\n> \"  ->  Bitmap Heap Scan on product_price pp  (cost=9182.41..77384.80\n> rows=3290 width=35)\"\n> \"        Recheck Cond: ((last_od < '2011-03-24 13:05:09.540025'::timestamp\n> without time zone) AND (selling_site_id = ANY ('{8,7,35,6,9}'::bigint[])))\"\n> \"        Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND\n> (upload_status_id <> 1))\"\n> \"        ->  Bitmap Index Scan on idx_product_price_last_od_ss\n>  (cost=0.00..9181.59 rows=24666 width=0)\"\n> \"              Index Cond: ((last_od < '2011-03-24\n> 13:05:09.540025'::timestamp without time zone) AND (selling_site_id = ANY\n> ('{8,7,35,6,9}'::bigint[])))\"\n>\n> For this query:\n>\n> select pp.id,pp.product_id,pp.selling_site_id,pp.asin\n> from product_price pp\n> where\n> (pp.asin is not null and pp.asin<>'')\n> and (pp.upload_status_id<>1)\n> and pp.selling_site_id in (8,7,35,6,9)\n> and (pp.last_od + '1 week'::interval < 'now'::timestamp )\n> limit 5000\n>\n> Query plan is:\n>\n> \"Limit  (cost=0.00..13890.67 rows=5000 width=35)\"\n> \"  ->  Seq Scan on product_price pp  (cost=0.00..485889.97 rows=174898\n> width=35)\"\n> \"        Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND\n> (upload_status_id <> 1) AND ((last_od + '7 days'::interval) < '2011-03-31\n> 13:06:17.460013'::timestamp without time zone) AND (selling_site_id = ANY\n> ('{8,7,35,6,9}'::bigint[])))\"\n>\n>\n> The only difference is this: instead of (pp.last_od < 'now'::timestamp - '1\n> week'::interval ) I have used (pp.last_od + '1 week'::interval <\n> 'now'::timestamp )\n>\n> First query with index scan opens in 440msec. The second query with seq scan\n> opens in about 22 seconds. So the first one is about 50x faster.\n>\n> My concern is that we are working on a huge set of applications that use\n> thousands of different queries on a database. There are programs that we\n> wrote years ago. The database structure continuously changing. We are adding\n> new indexes and columns, and of course we are upgrading PostgreSQL when a\n> new stable version comes out. There are cases when a change in a table\n> affects 500+ queries in 50+ programs. I really did not think that I have to\n> be THAT CAREFUL with writing conditions in SQL. Do I really have to manually\n> analyze all those queries and \"correct\" conditions like this?\n>\n> If so, then at least I would like to know if there is a documentation or\n> wiki page where I can learn about \"how not to write conditions\". I just\n> figured out that I need to put constant expressions on one side of any\n> comparison, if possible. But probably there are other rules I wouldn't think\n> of.\n>\n> Might it be possible to change the optimizer so that it tries to rally\n> constant expressions in the first place? That cannot be bad, right?\n\nIt's pretty well understood by database developers that indexable\nexpressions are such that the expression being compared is in the same\nform being used in 'create index'. 
Even if you did not understand\nthat, simple trial and error gave the answer immediately using the\nstandard tools (explain,timing etc) provided by the database. If you\nare concerned, just start logging slow queries\n(log_min_duration_statement) and fix them if the sql is bad.\n\nmerlin\n", "msg_date": "Fri, 8 Apr 2011 10:45:31 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why it is using/not using index scan?" }, { "msg_contents": "Dne 31.3.2011 19:26, Laszlo Nagy napsal(a):\n> For this query:\n> \n> select pp.id,pp.product_id,pp.selling_site_id,pp.asin\n> from product_price pp\n> where\n> (pp.asin is not null and pp.asin<>'')\n> and (pp.upload_status_id<>1)\n> and pp.selling_site_id in (8,7,35,6,9)\n> and (pp.last_od < 'now'::timestamp - '1 week'::interval )\n> limit 5000\n> \n> Query plan is:\n> \n> \"Limit (cost=9182.41..77384.80 rows=3290 width=35)\"\n> \" -> Bitmap Heap Scan on product_price pp (cost=9182.41..77384.80\n> rows=3290 width=35)\"\n> \" Recheck Cond: ((last_od < '2011-03-24\n> 13:05:09.540025'::timestamp without time zone) AND (selling_site_id =\n> ANY ('{8,7,35,6,9}'::bigint[])))\"\n> \" Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND\n> (upload_status_id <> 1))\"\n> \" -> Bitmap Index Scan on idx_product_price_last_od_ss \n> (cost=0.00..9181.59 rows=24666 width=0)\"\n> \" Index Cond: ((last_od < '2011-03-24\n> 13:05:09.540025'::timestamp without time zone) AND (selling_site_id =\n> ANY ('{8,7,35,6,9}'::bigint[])))\"\n> \n> For this query:\n> \n> select pp.id,pp.product_id,pp.selling_site_id,pp.asin\n> from product_price pp\n> where\n> (pp.asin is not null and pp.asin<>'')\n> and (pp.upload_status_id<>1)\n> and pp.selling_site_id in (8,7,35,6,9)\n> and (pp.last_od + '1 week'::interval < 'now'::timestamp )\n> limit 5000\n> \n> Query plan is:\n> \n> \"Limit (cost=0.00..13890.67 rows=5000 width=35)\"\n> \" -> Seq Scan on product_price pp (cost=0.00..485889.97 rows=174898\n> width=35)\"\n> \" Filter: ((asin IS NOT NULL) AND (asin <> ''::text) AND\n> (upload_status_id <> 1) AND ((last_od + '7 days'::interval) <\n> '2011-03-31 13:06:17.460013'::timestamp without time zone) AND\n> (selling_site_id = ANY ('{8,7,35,6,9}'::bigint[])))\"\n> \n> \n> The only difference is this: instead of (pp.last_od < 'now'::timestamp -\n> '1 week'::interval ) I have used (pp.last_od + '1 week'::interval <\n> 'now'::timestamp )\n\nThat's the only difference as you see it - the planner actually found\nout the former query is expected to return 3290 rows while the latter\none is expected to return 174898 rows. That's the reason why the second\nquery is using seqscan instead of index scan - for a lot of rows, the\nindex scan tends to be very inefficient.\n\nNext time post EXPLAIN ANALYZE output, as it provides data from the\nactual run, so we can see if there are any issues with those estimates.\n\nAnyway, you may try to disable sequential scans (just run 'set\nenable_seqacan=off' before running the query) and you'll see if index\nscan really would be better.\n\n> First query with index scan opens in 440msec. The second query with seq\n> scan opens in about 22 seconds. So the first one is about 50x faster.\n\nEvery database/planner has some weaknesses - it may be the case that an\nindex scan would be faster but postgresql is not able to use it in this\ncase for some reason.\n\n> My concern is that we are working on a huge set of applications that use\n> thousands of different queries on a database. 
There are programs that we\n> wrote years ago. The database structure continuously changing. We are\n> adding new indexes and columns, and of course we are upgrading\n> PostgreSQL when a new stable version comes out. There are cases when a\n> change in a table affects 500+ queries in 50+ programs. I really did not\n> think that I have to be THAT CAREFUL with writing conditions in SQL. Do\n> I really have to manually analyze all those queries and \"correct\"\n> conditions like this?\n\nYou have to be that careful, and it's not specific to PostgreSQL. I'm\nworking with other databases and the same holds for them - SQL looks so\nsimple that a chimp might learn it, but only the best chimps may produce\ngood queries.\n\n> If so, then at least I would like to know if there is a documentation or\n> wiki page where I can learn about \"how not to write conditions\". I just\n> figured out that I need to put constant expressions on one side of any\n> comparison, if possible. But probably there are other rules I wouldn't\n> think of.\n\nI'm not aware of such official document. I've planned to write an\narticle \"10 ways to ruin your query\" but I somehow forgot about it.\nAnyway it's mostly common sense, i.e. once you know how indexes work\nyou'll immediately see if a condition may benefit from them or not.\n\nTomas\n", "msg_date": "Sat, 09 Apr 2011 16:59:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why it is using/not using index scan?" } ]
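A condensed sketch of the two suggestions above, reusing the column and index names from this thread (the select list is abbreviated for illustration): keep the indexed column bare on one side of the comparison, and use enable_seqscan to check whether the planner's choice of a sequential scan is actually justified for a given query.

    -- index-friendly: last_od is compared directly against a constant
    -- expression, so idx_product_price_last_od_ss can serve the condition
    EXPLAIN ANALYZE
    SELECT pp.id, pp.product_id
    FROM product_price pp
    WHERE pp.last_od < now() - interval '1 week'
      AND pp.selling_site_id IN (8, 7, 35, 6, 9)
    LIMIT 5000;

    -- not index-friendly as written: wrapping the column in an expression
    -- hides it from the index
    --   WHERE pp.last_od + interval '1 week' < now()

    -- session-local test of whether a seqscan plan really beats the index:
    SET enable_seqscan = off;
    -- ... re-run the EXPLAIN ANALYZE of the query in question here ...
    RESET enable_seqscan;

If many legacy queries may contain non-indexable conditions like this, Merlin's suggestion of logging slow statements (log_min_duration_statement) is the cheapest way to find the ones that actually matter.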
[ { "msg_contents": "When I do a query on a table with child tables on certain queries pg\nuses indexes and on others it doesn't. Why does this happen? For example:\n\n\n[local]:playpen=> explain analyze select * from vis where id > 10747 ;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=4.29..115.11 rows=325 width=634) (actual\ntime=0.063..0.116 rows=5 loops=1)\n -> Append (cost=4.29..115.11 rows=325 width=634) (actual\ntime=0.053..0.090 rows=5 loops=1)\n -> Bitmap Heap Scan on vis (cost=4.29..23.11 rows=5\nwidth=948) (actual time=0.051..0.058 rows=5 loops=1)\n Recheck Cond: (id > 10747)\n -> Bitmap Index Scan on vis_pkey (cost=0.00..4.29\nrows=5 width=0) (actual time=0.037..0.037 rows=5 loops=1)\n Index Cond: (id > 10747)\n -> Seq Scan on vis_for_seg_1_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (id > 10747)\n -> Seq Scan on vis_for_seg_4_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (id > 10747)\n -> Seq Scan on vis_for_seg_66_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (id > 10747)\n -> Seq Scan on vis_for_seg_69_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (id > 10747)\n -> Seq Scan on vis_for_seg_79_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (id > 10747)\n -> Seq Scan on vis_for_seg_80_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (id > 10747)\n -> Seq Scan on vis_for_seg_82_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (id > 10747)\n -> Seq Scan on vis_for_seg_87_2011_03 vis (cost=0.00..11.50\nrows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (id > 10747)\n Total runtime: 0.724 ms\n(23 rows)\n\nTime: 5.804 ms\n[local]:playpen=> explain analyze select * from vis where id = 10747 ;\n \nQUERY\nPLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..74.41 rows=9 width=664) (actual time=0.060..0.503\nrows=1 loops=1)\n -> Append (cost=0.00..74.41 rows=9 width=664) (actual\ntime=0.053..0.493 rows=1 loops=1)\n -> Index Scan using vis_pkey on vis (cost=0.00..8.27 rows=1\nwidth=948) (actual time=0.051..0.055 rows=1 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_1_2011_03_pkey on\nvis_for_seg_1_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.122..0.122 rows=0 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_4_2011_03_pkey on\nvis_for_seg_4_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.043..0.043 rows=0 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_66_2011_03_pkey on\nvis_for_seg_66_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.041..0.041 rows=0 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_69_2011_03_pkey on\nvis_for_seg_69_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.041..0.041 rows=0 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_79_2011_03_pkey on\nvis_for_seg_79_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.043..0.043 
rows=0 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_80_2011_03_pkey on\nvis_for_seg_80_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.041..0.041 rows=0 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_82_2011_03_pkey on\nvis_for_seg_82_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.049..0.049 rows=0 loops=1)\n Index Cond: (id = 10747)\n -> Index Scan using vis_for_seg_87_2011_03_pkey on\nvis_for_seg_87_2011_03 vis (cost=0.00..8.27 rows=1 width=629) (actual\ntime=0.043..0.043 rows=0 loops=1)\n Index Cond: (id = 10747)\n Total runtime: 1.110 ms\n(21 rows)\n\n[local]:playpen=> select version();\n \nversion \n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n20080704 (Red Hat 4.1.2-48), 32-bit\n(1 row)\n\n", "msg_date": "Thu, 31 Mar 2011 20:41:01 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "index usage on queries on inherited tables" }, { "msg_contents": "On Fri, Apr 1, 2011 at 2:41 AM, Joseph Shraibman <[email protected]> wrote:\n> When I do a query on a table with child tables on certain queries pg\n> uses indexes and on others it doesn't. Why does this happen? For example:\n>\n>\n> [local]:playpen=> explain analyze select * from vis where id > 10747 ;\n>                                                               QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n>  Result  (cost=4.29..115.11 rows=325 width=634) (actual\n> time=0.063..0.116 rows=5 loops=1)\n>   ->  Append  (cost=4.29..115.11 rows=325 width=634) (actual\n> time=0.053..0.090 rows=5 loops=1)\n>         ->  Bitmap Heap Scan on vis  (cost=4.29..23.11 rows=5\n> width=948) (actual time=0.051..0.058 rows=5 loops=1)\n>               Recheck Cond: (id > 10747)\n>               ->  Bitmap Index Scan on vis_pkey  (cost=0.00..4.29\n> rows=5 width=0) (actual time=0.037..0.037 rows=5 loops=1)\n>                     Index Cond: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_1_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n>               Filter: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_4_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n>               Filter: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_66_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n>               Filter: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_69_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n>               Filter: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_79_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n>               Filter: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_80_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n>               Filter: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_82_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.001..0.001 rows=0 loops=1)\n>               Filter: (id > 10747)\n>         ->  Seq Scan on vis_for_seg_87_2011_03 vis  (cost=0.00..11.50\n> rows=40 width=629) (actual time=0.002..0.002 rows=0 loops=1)\n>  
             Filter: (id > 10747)\n>  Total runtime: 0.724 ms\n> (23 rows)\n>\n> Time: 5.804 ms\n> [local]:playpen=> explain analyze select * from vis where id = 10747 ;\n>\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Result  (cost=0.00..74.41 rows=9 width=664) (actual time=0.060..0.503\n> rows=1 loops=1)\n>   ->  Append  (cost=0.00..74.41 rows=9 width=664) (actual\n> time=0.053..0.493 rows=1 loops=1)\n>         ->  Index Scan using vis_pkey on vis  (cost=0.00..8.27 rows=1\n> width=948) (actual time=0.051..0.055 rows=1 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_1_2011_03_pkey on\n> vis_for_seg_1_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.122..0.122 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_4_2011_03_pkey on\n> vis_for_seg_4_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.043..0.043 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_66_2011_03_pkey on\n> vis_for_seg_66_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.041..0.041 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_69_2011_03_pkey on\n> vis_for_seg_69_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.041..0.041 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_79_2011_03_pkey on\n> vis_for_seg_79_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.043..0.043 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_80_2011_03_pkey on\n> vis_for_seg_80_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.041..0.041 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_82_2011_03_pkey on\n> vis_for_seg_82_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.049..0.049 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>         ->  Index Scan using vis_for_seg_87_2011_03_pkey on\n> vis_for_seg_87_2011_03 vis  (cost=0.00..8.27 rows=1 width=629) (actual\n> time=0.043..0.043 rows=0 loops=1)\n>               Index Cond: (id = 10747)\n>  Total runtime: 1.110 ms\n> (21 rows)\n>\n> [local]:playpen=> select version();\n>\n> version\n> ------------------------------------------------------------------------------------------------------------\n>  PostgreSQL 9.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n> 20080704 (Red Hat 4.1.2-48), 32-bit\n> (1 row)\n\nIn the first case, PostgreSQL evidently thinks that using the indexes\nwill be slower than just ignoring them. You could find out whether\nit's right by trying it with enable_seqscan=off.\n\nIf it turns out that using the indexes really is better, then you\nprobably want to adjust random_page_cost and seq_page_cost. 
The\ndefaults assume a mostly-not-cached database, so if your database is\nheavily or completely cached you might need significantly lower\nvalues.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 27 Apr 2011 22:32:48 +0200", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage on queries on inherited tables" }, { "msg_contents": "On 04/27/2011 04:32 PM, Robert Haas wrote:\n> In the first case, PostgreSQL evidently thinks that using the indexes\n> will be slower than just ignoring them. You could find out whether\n> it's right by trying it with enable_seqscan=off.\n\nMy point is that this is just a problem with inherited tables. It\nshould be obvious to postgres that few rows are being returned, but in\nthe inherited tables case it doesn't use indexes. This was just an\nexample. In a 52 gig table I have a \"select id from table limit 1 order\nby id desc\" returns instantly, but as soon as you declare a child table\nit tries to seq scan all the tables.\n", "msg_date": "Wed, 27 Apr 2011 17:11:44 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index usage on queries on inherited tables" }, { "msg_contents": "On Wed, Apr 27, 2011 at 2:11 PM, Joseph Shraibman <[email protected]>wrote:\n\n> On 04/27/2011 04:32 PM, Robert Haas wrote:\n> > In the first case, PostgreSQL evidently thinks that using the indexes\n> > will be slower than just ignoring them. You could find out whether\n> > it's right by trying it with enable_seqscan=off.\n>\n> My point is that this is just a problem with inherited tables. It\n> should be obvious to postgres that few rows are being returned, but in\n> the inherited tables case it doesn't use indexes. This was just an\n> example. In a 52 gig table I have a \"select id from table limit 1 order\n> by id desc\" returns instantly, but as soon as you declare a child table\n> it tries to seq scan all the tables.\n>\n>\nIf I'm understanding correctly, this kind of obviates the utility of\npartitioning if you structure a warehouse in a traditional manner. Assuming\na fact table partitioned by time, but with foreign keys to a time dimension,\nit is now not possible to gain any advantage from the partitioning if\nselecting on columns in the time dimension.\n\n\"select * from fact_table f join time_dimension t on f.time_id = t.time_id\nwhere t.quarter=3 and t.year = 2010\" will scan all partitions of the fact\ntable despite the fact that all of the rows would come from 3 partitions,\nassuming a partitioning schema that uses one partition for each month.\n\nI use a time id that is calculable from the from the timestamp so it doesn't\nneed to be looked up, and partitioning on time_id directly is easy enough to\nhandle, but if I'm understanding the problem, it sounds like nothing short\nof computing the appropriate time ids before issuing the query and then\nincluding a 'where f.time_id between x and y' clause to the query will\nresult in the partitions being correctly excluded. Is that what people are\ndoing to solve this problem? The alternative is to leave a timestamp column\nin the fact table (something I tend to do since it makes typing ad-hoc\nqueries in psql much easier) and partition on that column and then always\ninclude a where clause for that column that is at least as large as the\nrequested row range. 
Both result in fairly ugly queries, though I can\ncertainly see how I might structure my code to always build queries which\nadhere to this.\n\nI'm just in the process of designing a star schema for a project and was\nintending to use exactly the structure I described at the top of the email.\nIs there a postgres best-practices for solving this problem? There's no way\nI can get away without partitioning. I'm looking at a worst case table of\n100,000 rows being written every 5 minutes, 24x7 - 29 million rows per day,\na billion rows per month - with most queries running over a single month or\ncomparing same months from differing years and quarters - so a month based\npartitioning. Normal case is closer to 10K rows per 5 minutes.\n\nSuggestions?\n\n--sam\n\nOn Wed, Apr 27, 2011 at 2:11 PM, Joseph Shraibman <[email protected]> wrote:\nOn 04/27/2011 04:32 PM, Robert Haas wrote:\n> In the first case, PostgreSQL evidently thinks that using the indexes\n> will be slower than just ignoring them.  You could find out whether\n> it's right by trying it with enable_seqscan=off.\n\nMy point is that this is just a problem with inherited tables.  It\nshould be obvious to postgres that few rows are being returned, but in\nthe inherited tables case it doesn't use indexes.  This was just an\nexample.  In a 52 gig table I have a \"select id from table limit 1 order\nby id desc\" returns instantly, but as soon as you declare a child table\nit tries to seq scan all the tables.\nIf I'm understanding correctly, this kind of obviates the utility of partitioning if you structure a warehouse in a traditional manner.  Assuming a fact table partitioned by time, but with foreign keys to a time dimension, it is now not possible to gain any advantage from the partitioning if selecting on columns in the time dimension.\n\"select * from fact_table f join time_dimension t on f.time_id = t.time_id where t.quarter=3 and t.year = 2010\" will scan all partitions of the fact table despite the fact that all of the rows would come from 3 partitions, assuming a partitioning schema that uses one partition for each month.  \nI use a time id that is calculable from the from the timestamp so it doesn't need to be looked up, and partitioning on time_id directly is easy enough to handle, but if I'm understanding the problem, it sounds like nothing short of computing the appropriate time ids before issuing the query and then including a 'where f.time_id between x and y' clause to the query will result in the partitions being correctly excluded.  Is that what people are doing to solve this problem?  The alternative is to leave a timestamp column in the fact table (something I tend to do since it makes typing ad-hoc queries in psql much easier) and partition on that column and then always include a where clause for that column that is at least as large as the requested row range.  Both result in fairly ugly queries, though I can certainly see how I might structure my code to always build queries which adhere to this.\nI'm just in the process of designing a star schema for a project and was intending to use exactly the structure I described at the top of the email. Is there a postgres best-practices for solving this problem? There's no way I can get away without partitioning.  
I'm looking at a worst case table of 100,000 rows being written every 5 minutes, 24x7 - 29 million rows per day, a billion rows per month - with most queries running over a single month or comparing same months from differing years and quarters - so a month based partitioning.  Normal case is closer to 10K rows per 5 minutes.\nSuggestions?--sam", "msg_date": "Wed, 27 Apr 2011 17:18:17 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage on queries on inherited tables" }, { "msg_contents": "Joseph Shraibman wrote:\n> In a 52 gig table I have a \"select id from table limit 1 order\n> by id desc\" returns instantly, but as soon as you declare a child table\n> it tries to seq scan all the tables.\n> \n\nThis is probably the limitation that's fixed in PostgreSQL 9.1 by this \ncommit (following a few others leading up to it): \nhttp://archives.postgresql.org/pgsql-committers/2010-11/msg00028.php\n\nThere was a good example showing what didn't work as expected before \n(along with an earlier patch that didn't everything the larger 9.1 \nimprovement does) at \nhttp://archives.postgresql.org/pgsql-hackers/2009-07/msg01115.php ; \n\"ORDER BY x DESC LIMIT 1\" returns the same things as MAX(x).\n\nIt's a pretty serious issue with the partitioning in earlier versions. \nI know of multiple people, myself included, who have been compelled to \napply this change to an earlier version of PostgreSQL to make larger \npartitioned databases work correctly. The other option is to manually \ndecompose the queries into ones that target each of the child tables \nindividually, then combine the results, which is no fun either.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 27 Apr 2011 22:18:37 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage on queries on inherited tables" }, { "msg_contents": "On Apr 27, 2011, at 11:11 PM, Joseph Shraibman <[email protected]> wrote:\n> On 04/27/2011 04:32 PM, Robert Haas wrote:\n>> In the first case, PostgreSQL evidently thinks that using the indexes\n>> will be slower than just ignoring them. You could find out whether\n>> it's right by trying it with enable_seqscan=off.\n> \n> My point is that this is just a problem with inherited tables. It\n> should be obvious to postgres that few rows are being returned, but in\n> the inherited tables case it doesn't use indexes. This was just an\n> example. In a 52 gig table I have a \"select id from table limit 1 order\n> by id desc\" returns instantly, but as soon as you declare a child table\n> it tries to seq scan all the tables.\n\nOh, sorry, I must have misunderstood. As Greg says, this is fixed in 9.1.\n\n...Robert\n", "msg_date": "Sat, 30 Apr 2011 00:53:48 +0200", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index usage on queries on inherited tables" } ]
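The workaround Samuel asks about can be sketched as follows. Constraint exclusion in 8.x/9.0 only prunes child tables from literal, plan-time predicates on the partition key, so the join against the time dimension alone scans every child; adding a redundant, application-computed range on the partitioning column lets the planner exclude the irrelevant months. The fact_table/time_dimension names come from his message, while fact_time (a timestamp partition key kept on the fact table) is hypothetical:

SELECT f.*
  FROM fact_table f
  JOIN time_dimension t ON f.time_id = t.time_id
 WHERE t.year = 2010
   AND t.quarter = 3
   -- redundant bounds on the partition key, computed by the application,
   -- so CHECK-constraint exclusion can prune all but the three child
   -- tables covering 2010 Q3 (requires constraint_exclusion enabled)
   AND f.fact_time >= DATE '2010-07-01'
   AND f.fact_time <  DATE '2010-10-01';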
[ { "msg_contents": "Is there a reason that when executing queries the table constraints are\nonly checked during planning and not execution? I end up making 2 round\ntrips to the database to get around this.\n\nAll of these queries should produce the same output:\n\n \n[local]:playpen=> explain analyze select count(*) from vis where seg = 69;\n \nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=857.51..857.52 rows=1 width=0) (actual\ntime=16.551..16.553 rows=1 loops=1)\n -> Append (cost=72.70..849.62 rows=3155 width=0) (actual\ntime=0.906..12.754 rows=3154 loops=1)\n -> Bitmap Heap Scan on vis (cost=72.70..838.12 rows=3154\nwidth=0) (actual time=0.903..6.346 rows=3154 loops=1)\n Recheck Cond: (seg = 69)\n -> Bitmap Index Scan on vis_seg_firstevent_idx \n(cost=0.00..71.91 rows=3154 width=0) (actual time=0.787..0.787 rows=3154\nloops=1)\n Index Cond: (seg = 69)\n -> Seq Scan on vis_for_seg_69_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.004..0.004 rows=0 loops=1)\n Filter: (seg = 69)\n Total runtime: 16.702 ms\n(9 rows)\n\nTime: 27.581 ms\n[local]:playpen=>\n[local]:playpen=> explain analyze select count(*) from vis where seg =\n(select seg from an where key = 471);\n \nQUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=713.50..713.51 rows=1 width=0) (actual\ntime=16.721..16.722 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Index Scan using an_pkey on an (cost=0.00..8.27 rows=1\nwidth=4) (actual time=0.037..0.041 rows=1 loops=1)\n Index Cond: (key = 471)\n -> Append (cost=10.92..704.35 rows=352 width=0) (actual\ntime=0.970..13.024 rows=3154 loops=1)\n -> Bitmap Heap Scan on vis (cost=10.92..612.35 rows=344\nwidth=0) (actual time=0.967..6.470 rows=3154 loops=1)\n Recheck Cond: (seg = $0)\n -> Bitmap Index Scan on vis_seg_firstevent_idx \n(cost=0.00..10.83 rows=344 width=0) (actual time=0.862..0.862 rows=3154\nloops=1)\n Index Cond: (seg = $0)\n -> Seq Scan on vis_for_seg_1_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.004..0.004 rows=0 loops=1)\n Filter: (seg = $0)\n -> Seq Scan on vis_for_seg_4_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (seg = $0)\n -> Seq Scan on vis_for_seg_66_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (seg = $0)\n -> Seq Scan on vis_for_seg_69_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (seg = $0)\n -> Seq Scan on vis_for_seg_79_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (seg = $0)\n -> Seq Scan on vis_for_seg_80_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (seg = $0)\n -> Seq Scan on vis_for_seg_82_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (seg = $0)\n -> Seq Scan on vis_for_seg_87_2011_03 vis (cost=0.00..11.50\nrows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (seg = $0)\n Total runtime: 17.012 ms\n(26 rows)\n\nTime: 24.147 ms\n[local]:playpen=>\n[local]:playpen=> explain analyze select count(vis.*) from vis, an where\nvis.seg = an.seg and an.key = 471;\n \nQUERY\nPLAN 
\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=726.72..726.73 rows=1 width=29) (actual\ntime=30.061..30.062 rows=1 loops=1)\n -> Nested Loop (cost=10.92..725.65 rows=424 width=29) (actual\ntime=0.999..26.118 rows=3154 loops=1)\n Join Filter: (public.vis.seg = an.seg)\n -> Index Scan using an_pkey on an (cost=0.00..8.27 rows=1\nwidth=4) (actual time=0.024..0.032 rows=1 loops=1)\n Index Cond: (key = 471)\n -> Append (cost=10.92..701.09 rows=1304 width=36) (actual\ntime=0.938..18.488 rows=3154 loops=1)\n -> Bitmap Heap Scan on vis (cost=10.92..611.49 rows=344\nwidth=36) (actual time=0.936..11.753 rows=3154 loops=1)\n Recheck Cond: (public.vis.seg = an.seg)\n -> Bitmap Index Scan on vis_seg_firstevent_idx \n(cost=0.00..10.83 rows=344 width=0) (actual time=0.826..0.826 rows=3154\nloops=1)\n Index Cond: (public.vis.seg = an.seg)\n -> Seq Scan on vis_for_seg_1_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.003..0.003 rows=0\nloops=1)\n -> Seq Scan on vis_for_seg_4_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.001..0.001 rows=0\nloops=1)\n -> Seq Scan on vis_for_seg_66_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.002..0.002 rows=0\nloops=1)\n -> Seq Scan on vis_for_seg_69_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.002..0.002 rows=0\nloops=1)\n -> Seq Scan on vis_for_seg_79_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.002..0.002 rows=0\nloops=1)\n -> Seq Scan on vis_for_seg_80_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.001..0.001 rows=0\nloops=1)\n -> Seq Scan on vis_for_seg_82_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.002..0.002 rows=0\nloops=1)\n -> Seq Scan on vis_for_seg_87_2011_03 vis \n(cost=0.00..11.20 rows=120 width=36) (actual time=0.002..0.002 rows=0\nloops=1)\n Total runtime: 30.398 ms\n(19 rows)\n\n\n[local]:playpen=> select version();\n \nversion \n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n20080704 (Red Hat 4.1.2-48), 32-bit\n(1 row)\n", "msg_date": "Thu, 31 Mar 2011 20:41:04 -0400", "msg_from": "Joseph Shraibman <[email protected]>", "msg_from_op": true, "msg_subject": "table contraints checks only happen in planner phase" }, { "msg_contents": "On 3/31/11 5:41 PM, Joseph Shraibman wrote:\n> Is there a reason that when executing queries the table constraints are\n> only checked during planning and not execution? I end up making 2 round\n> trips to the database to get around this.\n\nThis is a limitation with our current partitioning implementation. It\nonly understands literal values, not JOINs.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Fri, 01 Apr 2011 10:32:24 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table contraints checks only happen in planner phase" } ]
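A sketch of the two-step workaround behind the "2 round trips" the original poster mentions: resolve the partition key to a literal first, then re-issue the main query with that constant, so the planner can apply the children's CHECK constraints at plan time. Names follow the thread; 69 is simply the seg value that key 471 maps to in the example above:

-- step 1: look up the partition key
SELECT seg FROM an WHERE key = 471;        -- returns 69 in this example

-- step 2: re-issue with the literal; only vis and the matching
-- vis_for_seg_69_... child are scanned instead of every child table,
-- as the first EXPLAIN in this thread shows
SELECT count(*) FROM vis WHERE seg = 69;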
[ { "msg_contents": "I have a PL/PGSQL stored procedure on my server which is about 3,500 lines\nlong. To further improve performance after more than year of optimizing the\nPL/PGSQL, and also to hide the logic so it is not as easy to copy, I am\nstarting to re-write the stored procedure in C. The initial design is such\nthat a C++ client would query the PL/PGSQL stored procedure using one\ntransaction and the result of the transaction would be several open cursors\nit could use to pull down the data from the server. This data would be\nre-written to XML and passed back to the original requestor for use. I am\nnow also going to replace the XML conversion process and re-write the C++\nlogic as a module for the system that initiates the XML request. So, the\nlayout is going from:\n\nUser Client ----> XML ----> C++ Server / PG Client -----> SQL ----->\nPL/PGSQL Stored Proc on PGSQL Server\n\nTo:\n\nUser Client / C Module / PG Client -----> SQL ----> C Stored Proc on PGSQL\nServer\n\nThis will improve overall efficiency considerably. However, I am now wonder\nabout the follow....\n\nThe PL/PGSQL procedure has many large SELECT statements which pull various\ndata from various tables. Then, between those SELECT statements, it runs\nlogic to determine what other SELECT statements it should execute. Once it\nfigures this all out and finishes running all the select statements, it\njoins some of those results together and returns 3 cursors which represent\nall the data the User Client needs in order to proceed with processing its\nworkload. Every SELECT statement that the procedure ends up executing has\nits data stored in those three cursors, and all of it is required by the\nUser Client to process its workload. Given that this data must be\ntransferred across the link from the Postgres server to the client, would it\nbe more efficient to keep all of the contents of the PL/PGSQL stored\nprocedure, including the logic about which SELECT statements to run, on the\nPostgres server, or would it be just as fast to have that logic contained in\nthe C module for the User Client and have that C module make multiple\nrequests to the Postgres server?\n\nThe stored procedure also does write some records to three tables on the\nserver which represent a summary of everything that it computed and returned\nto the User Client for billing purposes. My initial intuition on this is\nthat keeping it all in the stored procedure on the server is going to be\nfaster for two main reasons:\n\n1) Each select statement from the User Client C Module would be a separate\ntransaction which would drastically increase transaction overhead for the\nwhole set of requests.\n2) Writing the billing data at the end would mean that I not only have to\npull all the data down to the User Client, I must also push the data back up\nto the server for writing the billing records.\n\nSo, am I looking at this the right way, or am I missing something?\n\nThanks in advance for any responses.\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nI have a PL/PGSQL stored procedure on my server which is about 3,500 lines long. 
To further improve performance after more than year of optimizing the PL/PGSQL, and also to hide the logic so it is not as easy to copy, I am starting to re-write the stored procedure in C. The initial design is such that a C++ client would query the PL/PGSQL stored procedure using one transaction and the result of the transaction would be several open cursors it could use to pull down the data from the server. This data would be re-written to XML and passed back to the original requestor for use. I am now also going to replace the XML conversion process and re-write the C++ logic as a module for the system that initiates the XML request. So, the layout is going from:\nUser Client ----> XML ----> C++ Server / PG Client -----> SQL -----> PL/PGSQL Stored Proc on PGSQL ServerTo:User Client / C Module / PG Client -----> SQL ----> C Stored Proc on PGSQL Server\nThis will improve overall efficiency considerably. However, I am now wonder about the follow....The PL/PGSQL procedure has many large SELECT statements which pull various data from various tables. Then, between those SELECT statements, it runs logic to determine what other SELECT statements it should execute. Once it figures this all out and finishes running all the select statements, it joins some of those results together and returns 3 cursors which represent all the data the User Client needs in order to proceed with processing its workload. Every SELECT statement that the procedure ends up executing has its data stored in those three cursors, and all of it is required by the User Client to process its workload. Given that this data must be transferred across the link from the Postgres server to the client, would it be more efficient to keep all of the contents of the PL/PGSQL stored procedure, including the logic about which SELECT statements to run, on the Postgres server, or would it be just as fast to have that logic contained in the C module for the User Client and have that C module make multiple requests to the Postgres server?\nThe stored procedure also does write some records to three tables on the server which represent a summary of everything that it computed and returned to the User Client for billing purposes. My initial intuition on this is that keeping it all in the stored procedure on the server is going to be faster for two main reasons:\n1) Each select statement from the User Client C Module would be a separate transaction which would drastically increase transaction overhead for the whole set of requests.2) Writing the billing data at the end would mean that I not only have to pull all the data down to the User Client, I must also push the data back up to the server for writing the billing records.\nSo, am I looking at this the right way, or am I missing something?Thanks in advance for any responses.-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \n\"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) 
~Marcus Tullius Cicero", "msg_date": "Sat, 2 Apr 2011 14:52:28 -0400", "msg_from": "Eliot Gable <[email protected]>", "msg_from_op": true, "msg_subject": "C on Client versus C on Server" }, { "msg_contents": "On 02.04.2011 21:52, Eliot Gable wrote:\n> 1) Each select statement from the User Client C Module would be a separate\n> transaction which would drastically increase transaction overhead for the\n> whole set of requests.\n\nYou could wrap the statements in BEGIN-COMMIT in the client code.\n\n> 2) Writing the billing data at the end would mean that I not only have to\n> pull all the data down to the User Client, I must also push the data back up\n> to the server for writing the billing records.\n\nYeah, that sounds right.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 03 Apr 2011 10:01:57 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: C on Client versus C on Server" } ]
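Heikki's first suggestion, wrapping the client-issued statements in BEGIN/COMMIT, can be sketched like this; all table and column names are placeholders, since the real schema is not shown in the thread:

BEGIN;
-- the data-gathering SELECTs the client C module would issue, now
-- sharing one transaction instead of running as separate
-- single-statement transactions
SELECT plan_id, rate  FROM rate_plans WHERE customer_id = 42;
SELECT route_id, cost FROM routes     WHERE customer_id = 42;
-- the billing summary still has to travel back up to the server
INSERT INTO billing_summary (customer_id, total) VALUES (42, 12.50);
COMMIT;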
[ { "msg_contents": "Dear all,\n\nI have a Postgres database server with 16GB RAM.\nOur application runs by making connections to Postgres Server from \ndifferent servers and selecting data from one table & insert into \nremaining tables in a database.\n\nBelow is the no. of connections output :-\n\npostgres=# select datname,numbackends from pg_stat_database;\n datname | numbackends\n-------------------+-------------\n template1 | 0\n template0 | 0\n postgres | 3\n template_postgis | 0\n pdc_uima_dummy | 107\n pdc_uima_version3 | 1\n pdc_uima_olap | 0\n pdc_uima_s9 | 3\n pdc_uima | 1\n(9 rows)\n\nI am totally confused for setting configuration parameters in Postgres \nParameters :-\n\nFirst of all, I research on some tuning parameters and set mu \npostgresql.conf as:-\n\nmax_connections = 1000\nshared_buffers = 4096MB\ntemp_buffers = 16MB \nwork_mem = 64MB\nmaintenance_work_mem = 128MB\nwal_buffers = 32MB\ncheckpoint_segments = 3 \nrandom_page_cost = 2.0\neffective_cache_size = 8192MB\n\nThen I got some problems from Application Users that the Postgres Slows \ndown and free commands output is :-\n\n[root@s8-mysd-2 ~]# free -g\n total used free shared buffers cached\nMem: 15 15 0 0 0 14\n-/+ buffers/cache: 0 14\nSwap: 16 0 15\n[root@s8-mysd-2 ~]# free \n total used free shared buffers cached\nMem: 16299476 16202264 97212 0 58924 15231852\n-/+ buffers/cache: 911488 15387988\nSwap: 16787884 153136 16634748\n\nI think there may be some problem in my Configuration parameters and \nchange it as :\n\nmax_connections = 700\nshared_buffers = 4096MB\ntemp_buffers = 16MB \nwork_mem = 64MB\nmaintenance_work_mem = 128MB\nwal_buffers = 32MB\ncheckpoint_segments = 32 \nrandom_page_cost = 2.0\neffective_cache_size = 4096MB\n\nbut Still Postgres Server uses Swap Memory While SELECT & INSERT into \ndatabase tables.\n\nPlease check the attached postgresql.conf .\n\nAnd also have some views on how to tune this server.\n\nDO I need to Increase my RAM s.t I hit H/W limitation.\n\n\n\nThanks & best Regards,\nAdarsh Sharma", "msg_date": "Mon, 04 Apr 2011 15:10:33 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Performance Tuning" }, { "msg_contents": "> max_connections = 700\n> shared_buffers = 4096MB\n> temp_buffers = 16MB\n> work_mem = 64MB\n> maintenance_work_mem = 128MB\n> wal_buffers = 32MB\n> checkpoint_segments = 32\n> random_page_cost = 2.0\n> effective_cache_size = 4096MB\n\nFirst of all, there's no reason to increase wal_buffers above 32MB. AFAIK\nthe largest sensible value is 16MB - I doubt increasing it further will\nimprove performance.\n\nSecond - effective_cache_size is just a hint how much memory is used by\nthe operating system for filesystem cache. So this does not influence\namount of allocated memory in any way.\n\n> but Still Postgres Server uses Swap Memory While SELECT & INSERT into\n> database tables.\n\nAre you sure it's PostgreSQL. What else is running on the box? 
Have you\nanalyzed why the SQL queries are slow (using EXPLAIN)?\n\nregards\nTomas\n\n", "msg_date": "Mon, 4 Apr 2011 12:28:06 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "[email protected] wrote:\n>> max_connections = 700\n>> shared_buffers = 4096MB\n>> temp_buffers = 16MB\n>> work_mem = 64MB\n>> maintenance_work_mem = 128MB\n>> wal_buffers = 32MB\n>> checkpoint_segments = 32\n>> random_page_cost = 2.0\n>> effective_cache_size = 4096MB\n>\n> First of all, there's no reason to increase wal_buffers above 32MB. AFAIK\n> the largest sensible value is 16MB - I doubt increasing it further will\n> improve performance.\n>\n> Second - effective_cache_size is just a hint how much memory is used by\n> the operating system for filesystem cache. So this does not influence\n> amount of allocated memory in any way.\n>\n>> but Still Postgres Server uses Swap Memory While SELECT & INSERT into\n>> database tables.\n>\n> Are you sure it's PostgreSQL. What else is running on the box? Have you\n> analyzed why the SQL queries are slow (using EXPLAIN)?\n\nThanks , Below is my action points :-\n\nmax_connections = 300 ( I don't think that application uses more than \n300 connections )\nshared_buffers = 4096MB\ntemp_buffers = 16MB\nwork_mem = 64MB\nmaintenance_work_mem = 128MB\nwal_buffers = 16MB ( As per U'r suggestions )\ncheckpoint_segments = 32\nrandom_page_cost = 2.0\neffective_cache_size = 8192MB ( Recommended 50% of RAM )\n\n\nMy Shared Memory Variables are as:-\n\n\n[root@s8-mysd-2 ~]# cat /proc/sys/kernel/shmmax\n\n6442450944\n\n[root@s8-mysd-2 ~]# cat /proc/sys/kernel/shmall\n\n6442450944\n\n[root@s8-mysd-2 ~]\n\n\nPlease let me know if any parameter need some change.\n\nAs now I am going change my parameters as per the below link :-\n\nhttp://airumman.blogspot.com/2011/03/postgresql-parameters-for-new-dedicated.html\n\nBut one thing I am not able to understand is :-\n\nStart the server and find out how much memory is still available for the \nOS filesystem cache\n\n\nU'r absolutely right I am also researching on the explain of all select \nstatements and i find one reason of poor indexing on TEXT columns.\n\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n>\n> regards\n> Tomas\n>\n>\n\n\n\n\n\n\n\[email protected] wrote:\n\nmax_connections = 700\nshared_buffers = 4096MB\ntemp_buffers = 16MB\nwork_mem = 64MB\nmaintenance_work_mem = 128MB\nwal_buffers = 32MB\ncheckpoint_segments = 32\nrandom_page_cost = 2.0\neffective_cache_size = 4096MB\n\n\nFirst of all, there's no reason to increase wal_buffers above 32MB.\nAFAIK\nthe largest sensible value is 16MB - I doubt increasing it further will\nimprove performance.\n\nSecond - effective_cache_size is just a hint how much memory is used by\nthe operating system for filesystem cache. So this does not influence\namount of allocated memory in any way.\n\nbut Still Postgres Server uses Swap Memory\nWhile SELECT & INSERT into\ndatabase tables.\n\n\nAre you sure it's PostgreSQL. What else is running on the box? 
Have you\nanalyzed why the SQL queries are slow (using EXPLAIN)?\n\n\nThanks , Below is my action points :-\n\nmax_connections = 300 ( I don't think that application uses more than\n300 connections )\nshared_buffers = 4096MB\ntemp_buffers = 16MB\nwork_mem = 64MB \nmaintenance_work_mem = 128MB\nwal_buffers = 16MB ( As per U'r suggestions )\ncheckpoint_segments = 32\nrandom_page_cost = 2.0\neffective_cache_size = 8192MB ( Recommended 50% of RAM )\n\n\nMy Shared Memory Variables are as:-\n\n\n[root@s8-mysd-2 ~]# cat /proc/sys/kernel/shmmax \n\n6442450944\n\n[root@s8-mysd-2 ~]# cat /proc/sys/kernel/shmall \n\n6442450944\n\n[root@s8-mysd-2 ~]\n\n\nPlease let me know if any parameter need some change.\n\nAs now I am going change my parameters as per the below link :-\n\nhttp://airumman.blogspot.com/2011/03/postgresql-parameters-for-new-dedicated.html\n\nBut one thing I am not able to understand is :-\n\n\nStart the server and find out\nhow much memory is still available for the OS filesystem cache\n\n\nU'r absolutely right I am also researching on the explain\nof all select statements and i find one reason of poor indexing on TEXT\ncolumns.\n\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n\nregards\nTomas", "msg_date": "Mon, 04 Apr 2011 16:09:20 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Mon, Apr 4, 2011 at 3:40 AM, Adarsh Sharma <[email protected]> wrote:\n> Dear all,\n>\n> I have a Postgres database server with 16GB RAM.\n> Our application runs by making connections to Postgres Server from different\n> servers and selecting data from one table & insert into remaining tables in\n> a database.\n>\n> Below is the no. of connections output :-\n>\n> postgres=# select datname,numbackends from pg_stat_database;\n>     datname      | numbackends\n> -------------------+-------------\n> template1         |           0\n> template0         |           0\n> postgres          |           3\n> template_postgis  |           0\n> pdc_uima_dummy    |         107\n> pdc_uima_version3 |           1\n> pdc_uima_olap     |           0\n> pdc_uima_s9       |           3\n> pdc_uima          |           1\n> (9 rows)\n>\n> I am totally confused for setting configuration parameters in Postgres\n> Parameters :-\n>\n> First of all, I research on some tuning parameters and set mu\n> postgresql.conf as:-\n>\n> max_connections = 1000\n\nThat's a little high.\n\n> shared_buffers = 4096MB\n> work_mem = 64MB\n\nThat's way high. Work mem is PER SORT as well as PER CONNECTION.\n1000 connections with 2 sorts each = 128,000MB.\n\n> [root@s8-mysd-2 ~]# free              total       used       free     shared\n>    buffers     cached\n> Mem:      16299476   16202264      97212          0      58924   15231852\n> -/+ buffers/cache:     911488   15387988\n> Swap:     16787884     153136   16634748\n\nThere is nothing wrong here. You're using 153M out of 16G swap. 15.x\nGig is shared buffers. If your system is slow, it's not because it's\nrunning out of memory or using too much swap.\n\n>\n> I think there may be some problem in my Configuration parameters and change\n> it as :\n\nDon't just guess and hope for the best. Examine your system to\ndetermine where it's having issues. Use\nvmstat 10\niostat -xd 10\ntop\nhtop\n\nand so on to see where your bottleneck is. CPU? Kernel wait? IO wait? etc.\n\nlog long running queries. 
Use pgfouine to examine your queries.\n", "msg_date": "Mon, 4 Apr 2011 04:43:59 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Also you can try to take the help of pgtune before hand.\n\npgfoundry.org/projects/*pgtune*/\n\n\nOn Mon, Apr 4, 2011 at 12:43 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Apr 4, 2011 at 3:40 AM, Adarsh Sharma <[email protected]>\n> wrote:\n> > Dear all,\n> >\n> > I have a Postgres database server with 16GB RAM.\n> > Our application runs by making connections to Postgres Server from\n> different\n> > servers and selecting data from one table & insert into remaining tables\n> in\n> > a database.\n> >\n> > Below is the no. of connections output :-\n> >\n> > postgres=# select datname,numbackends from pg_stat_database;\n> > datname | numbackends\n> > -------------------+-------------\n> > template1 | 0\n> > template0 | 0\n> > postgres | 3\n> > template_postgis | 0\n> > pdc_uima_dummy | 107\n> > pdc_uima_version3 | 1\n> > pdc_uima_olap | 0\n> > pdc_uima_s9 | 3\n> > pdc_uima | 1\n> > (9 rows)\n> >\n> > I am totally confused for setting configuration parameters in Postgres\n> > Parameters :-\n> >\n> > First of all, I research on some tuning parameters and set mu\n> > postgresql.conf as:-\n> >\n> > max_connections = 1000\n>\n> That's a little high.\n>\n> > shared_buffers = 4096MB\n> > work_mem = 64MB\n>\n> That's way high. Work mem is PER SORT as well as PER CONNECTION.\n> 1000 connections with 2 sorts each = 128,000MB.\n>\n> > [root@s8-mysd-2 ~]# free total used free\n> shared\n> > buffers cached\n> > Mem: 16299476 16202264 97212 0 58924 15231852\n> > -/+ buffers/cache: 911488 15387988\n> > Swap: 16787884 153136 16634748\n>\n> There is nothing wrong here. You're using 153M out of 16G swap. 15.x\n> Gig is shared buffers. If your system is slow, it's not because it's\n> running out of memory or using too much swap.\n>\n> >\n> > I think there may be some problem in my Configuration parameters and\n> change\n> > it as :\n>\n> Don't just guess and hope for the best. Examine your system to\n> determine where it's having issues. Use\n> vmstat 10\n> iostat -xd 10\n> top\n> htop\n>\n> and so on to see where your bottleneck is. CPU? Kernel wait? IO wait?\n> etc.\n>\n> log long running queries. Use pgfouine to examine your queries.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAlso you can try to take the help of pgtune before hand.pgfoundry.org/projects/pgtune/On Mon, Apr 4, 2011 at 12:43 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Apr 4, 2011 at 3:40 AM, Adarsh Sharma <[email protected]> wrote:\n\n> Dear all,\n>\n> I have a Postgres database server with 16GB RAM.\n> Our application runs by making connections to Postgres Server from different\n> servers and selecting data from one table & insert into remaining tables in\n> a database.\n>\n> Below is the no. 
of connections output :-\n>\n> postgres=# select datname,numbackends from pg_stat_database;\n>     datname      | numbackends\n> -------------------+-------------\n> template1         |           0\n> template0         |           0\n> postgres          |           3\n> template_postgis  |           0\n> pdc_uima_dummy    |         107\n> pdc_uima_version3 |           1\n> pdc_uima_olap     |           0\n> pdc_uima_s9       |           3\n> pdc_uima          |           1\n> (9 rows)\n>\n> I am totally confused for setting configuration parameters in Postgres\n> Parameters :-\n>\n> First of all, I research on some tuning parameters and set mu\n> postgresql.conf as:-\n>\n> max_connections = 1000\n\nThat's a little high.\n\n> shared_buffers = 4096MB\n> work_mem = 64MB\n\nThat's way high.  Work mem is PER SORT as well as PER CONNECTION.\n1000 connections with 2 sorts each = 128,000MB.\n\n> [root@s8-mysd-2 ~]# free              total       used       free     shared\n>    buffers     cached\n> Mem:      16299476   16202264      97212          0      58924   15231852\n> -/+ buffers/cache:     911488   15387988\n> Swap:     16787884     153136   16634748\n\nThere is nothing wrong here.  You're using 153M out of 16G swap.  15.x\nGig is shared buffers.  If your system is slow, it's not because it's\nrunning out of memory or using too much swap.\n\n>\n> I think there may be some problem in my Configuration parameters and change\n> it as :\n\nDon't just guess and hope for the best.  Examine your system to\ndetermine where it's having issues.  Use\nvmstat 10\niostat -xd 10\ntop\nhtop\n\nand so on to see where your bottleneck is.  CPU?  Kernel wait?  IO wait? etc.\n\nlog long running queries.  Use pgfouine to examine your queries.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 4 Apr 2011 12:52:43 +0200", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Mon, Apr 4, 2011 at 4:43 AM, Scott Marlowe <[email protected]> wrote:\n>\n>> [root@s8-mysd-2 ~]# free              total       used       free     shared\n>>    buffers     cached\n>> Mem:      16299476   16202264      97212          0      58924   15231852\n>> -/+ buffers/cache:     911488   15387988\n>> Swap:     16787884     153136   16634748\n>\n> There is nothing wrong here.  You're using 153M out of 16G swap.  15.x\n> Gig is shared buffers.  If your system is slow, it's not because it's\n> running out of memory or using too much swap.\n\nSorry that's 15.xG is system cache, not shared buffers. Anyway, still\nnot a problem.\n", "msg_date": "Mon, 4 Apr 2011 04:54:47 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Adarsh,\n\nWhat is the Size of Database?\n\nBest Regards,\nRaghavendra\nEnterpriseDB Corporation\n\nOn Mon, Apr 4, 2011 at 4:24 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Apr 4, 2011 at 4:43 AM, Scott Marlowe <[email protected]>\n> wrote:\n> >\n> >> [root@s8-mysd-2 ~]# free total used free\n> shared\n> >> buffers cached\n> >> Mem: 16299476 16202264 97212 0 58924\n> 15231852\n> >> -/+ buffers/cache: 911488 15387988\n> >> Swap: 16787884 153136 16634748\n> >\n> > There is nothing wrong here. You're using 153M out of 16G swap. 15.x\n> > Gig is shared buffers. 
If your system is slow, it's not because it's\n> > running out of memory or using too much swap.\n>\n> Sorry that's 15.xG is system cache, not shared buffers. Anyway, still\n> not a problem.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAdarsh,What is the Size of Database?\nBest Regards,RaghavendraEnterpriseDB CorporationOn Mon, Apr 4, 2011 at 4:24 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Apr 4, 2011 at 4:43 AM, Scott Marlowe <[email protected]> wrote:\n\n\n>\n>> [root@s8-mysd-2 ~]# free              total       used       free     shared\n>>    buffers     cached\n>> Mem:      16299476   16202264      97212          0      58924   15231852\n>> -/+ buffers/cache:     911488   15387988\n>> Swap:     16787884     153136   16634748\n>\n> There is nothing wrong here.  You're using 153M out of 16G swap.  15.x\n> Gig is shared buffers.  If your system is slow, it's not because it's\n> running out of memory or using too much swap.\n\nSorry that's 15.xG is system cache, not shared buffers.  Anyway, still\nnot a problem.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 4 Apr 2011 16:26:57 +0530", "msg_from": "Raghavendra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "My database size is :-\npostgres=# select pg_size_pretty(pg_database_size('pdc_uima_dummy'));\n pg_size_pretty\n----------------\n 49 GB\n(1 row)\n\nI have a doubt regarding postgres Memory Usage :-\n\nSay my Application makes Connection to Database Server ( *.*.*.106) from \n(*.*.*.111, *.*.*.113, *.*.*.114) Servers and I check the top command as \n:-- Say it makes 100 Connections\n\ntop - 17:01:02 up 5:39, 4 users, load average: 0.00, 0.00, 0.00\nTasks: 170 total, 1 running, 169 sleeping, 0 stopped, 0 zombie\nCpu(s): 0.0% us, 0.2% sy, 0.0% ni, 99.6% id, 0.1% wa, 0.0% hi, \n0.0% si, 0.0% st\nMem: 16299476k total, 16198784k used, 100692k free, 73776k buffers\nSwap: 16787884k total, 148176k used, 16639708k free, 15585396k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ \nCOMMAND \n\n 3401 postgres 20 0 4288m 3.3g 3.3g S 0 21.1 0:24.73 \npostgres \n\n 3397 postgres 20 0 4286m 119m 119m S 0 0.8 0:00.36 \npostgres \n\n 4083 postgres 20 0 4303m 104m 101m S 0 0.7 0:07.68 \npostgres \n\n 3402 postgres 20 0 4288m 33m 32m S 0 0.2 0:03.67 \npostgres \n\n 4082 postgres 20 0 4301m 27m 25m S 0 0.2 0:00.85 \npostgres \n\n 4748 postgres 20 0 4290m 5160 3700 S 0 0.0 0:00.00 \npostgres \n\n 4173 root 20 0 12340 3028 1280 S 0 0.0 0:00.12 \nbash \n\n 4084 postgres 20 0 4290m 2952 1736 S 0 0.0 0:00.00 \npostgres \n\n 4612 root 20 0 12340 2920 1276 S 0 0.0 0:00.06 \nbash \n\n 4681 root 20 0 12340 2920 1276 S 0 0.0 0:00.05 \nbash \n\n 4550 root 20 0 12208 2884 1260 S 0 0.0 0:00.08 \nbash \n\n 4547 root 20 0 63580 2780 2204 S \n\nand free command says :--\n[root@s8-mysd-2 8.4SS]# free -g\n total used free shared buffers cached\nMem: 15 15 0 0 0 14\n-/+ buffers/cache: 0 15\nSwap: 16 0 15\n[root@s8-mysd-2 8.4SS]#\n\n\nNow when my job finishes and I close the Connections from 2 Servers , \nthe top & free output remains the same :-\n\nI don't know What is the reason behind this as I have only 3 Connections \nfrom the below command :\n\npostgres=# select datname, client_addr,current_query from pg_stat_activity;\n 
datname | client_addr | \ncurrent_query \n----------------+---------------+------------------------------------------------------------------\n postgres | | select datname, \nclient_addr,current_query from pg_stat_activity;\n postgres | 192.168.0.208 | <IDLE>\n pdc_uima_s9 | 192.168.0.208 | <IDLE>\n pdc_uima_s9 | 192.168.0.208 | <IDLE>\n pdc_uima_dummy | 192.168.0.208 | <IDLE>\n pdc_uima_dummy | 192.168.1.102 | <IDLE>\n pdc_uima_dummy | 192.168.1.102 | <IDLE>\n pdc_uima_dummy | 192.168.1.102 | <IDLE>\n(8 rows)\n\n\nPLease help me to understand how much memory does 1 Connection Uses and \nhow to use Server parameters accordingly.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\nRaghavendra wrote:\n> Adarsh,\n>\n> What is the Size of Database?\n>\n> Best Regards,\n> Raghavendra\n> EnterpriseDB Corporation\n>\n> On Mon, Apr 4, 2011 at 4:24 PM, Scott Marlowe <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On Mon, Apr 4, 2011 at 4:43 AM, Scott Marlowe\n> <[email protected] <mailto:[email protected]>> wrote:\n> >\n> >> [root@s8-mysd-2 ~]# free total used \n> free shared\n> >> buffers cached\n> >> Mem: 16299476 16202264 97212 0 58924\n> 15231852\n> >> -/+ buffers/cache: 911488 15387988\n> >> Swap: 16787884 153136 16634748\n> >\n> > There is nothing wrong here. You're using 153M out of 16G swap.\n> 15.x\n> > Gig is shared buffers. If your system is slow, it's not because\n> it's\n> > running out of memory or using too much swap.\n>\n> Sorry that's 15.xG is system cache, not shared buffers. Anyway, still\n> not a problem.\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n\n\n\n\nMy database size is :- \npostgres=# select pg_size_pretty(pg_database_size('pdc_uima_dummy'));\n pg_size_pretty \n----------------\n 49 GB\n(1 row)\n\nI have a doubt regarding postgres Memory Usage :-\n\nSay my Application makes Connection to Database Server ( *.*.*.106)\nfrom (*.*.*.111, *.*.*.113, *.*.*.114) Servers and I check the top\ncommand as :-- Say it makes 100 Connections\n\ntop - 17:01:02 up  5:39,  4 users,  load average: 0.00, 0.00, 0.00\nTasks: 170 total,   1 running, 169 sleeping,   0 stopped,   0 zombie\nCpu(s):  0.0% us,  0.2% sy,  0.0% ni, 99.6% id,  0.1% wa,  0.0% hi, \n0.0% si,  0.0% st\nMem:  16299476k total, 16198784k used,   100692k free,    73776k buffers\nSwap: 16787884k total,   148176k used, 16639708k free, 15585396k cached\n\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ \nCOMMAND                                                                                        \n\n 3401 postgres  20   0 4288m 3.3g 3.3g S    0 21.1   0:24.73\npostgres                                                                                       \n\n 3397 postgres  20   0 4286m 119m 119m S    0  0.8   0:00.36\npostgres                                                                                       \n\n 4083 postgres  20   0 4303m 104m 101m S    0  0.7   0:07.68\npostgres                                                                                       \n\n 3402 postgres  20   0 4288m  33m  32m S    0  0.2   0:03.67\npostgres                                                                                       \n\n 4082 postgres  20   0 4301m  27m  25m S    0  0.2   0:00.85\npostgres                                                                                       \n\n 4748 postgres  20   0 4290m 5160 3700 S    0 
 0.0   0:00.00\npostgres                                                                                       \n\n 4173 root      20   0 12340 3028 1280 S    0  0.0   0:00.12\nbash                                                                                           \n\n 4084 postgres  20   0 4290m 2952 1736 S    0  0.0   0:00.00\npostgres                                                                                       \n\n 4612 root      20   0 12340 2920 1276 S    0  0.0   0:00.06\nbash                                                                                           \n\n 4681 root      20   0 12340 2920 1276 S    0  0.0   0:00.05\nbash                                                                                           \n\n 4550 root      20   0 12208 2884 1260 S    0  0.0   0:00.08\nbash                                                                                           \n\n 4547 root      20   0 63580 2780 2204 S   \n\nand free command says :--\n[root@s8-mysd-2 8.4SS]# free -g\n             total       used       free     shared    buffers    \ncached\nMem:            15         15          0          0          0        \n14\n-/+ buffers/cache:          0         15\nSwap:           16          0         15\n[root@s8-mysd-2 8.4SS]# \n\n\nNow when my job finishes and I close the Connections from 2 Servers ,\nthe top & free output remains the same :-\n\nI don't know What is the reason behind this as I have only 3\nConnections from the below command :\n\npostgres=# select datname, client_addr,current_query from\npg_stat_activity;\n    datname     |  client_addr  |                         \ncurrent_query                           \n----------------+---------------+------------------------------------------------------------------\n postgres       |               | select datname,\nclient_addr,current_query from pg_stat_activity;\n postgres       | 192.168.0.208 | <IDLE>\n pdc_uima_s9    | 192.168.0.208 | <IDLE>\n pdc_uima_s9    | 192.168.0.208 | <IDLE>\n pdc_uima_dummy | 192.168.0.208 | <IDLE>\n pdc_uima_dummy | 192.168.1.102 | <IDLE>\n pdc_uima_dummy | 192.168.1.102 | <IDLE>\n pdc_uima_dummy | 192.168.1.102 | <IDLE>\n(8 rows)\n\n\nPLease help me to understand how much memory does 1 Connection Uses and\nhow to use Server parameters accordingly.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\nRaghavendra wrote:\nAdarsh,\n \n\nWhat is the Size of Database?\n\n\n\nBest Regards,\nRaghavendra\nEnterpriseDB Corporation\n\n\nOn Mon, Apr 4, 2011 at 4:24 PM, Scott\nMarlowe <[email protected]>\nwrote:\n\nOn Mon, Apr 4, 2011 at 4:43 AM, Scott Marlowe <[email protected]>\nwrote:\n>\n>> [root@s8-mysd-2 ~]# free              total       used      \nfree     shared\n>>    buffers     cached\n>> Mem:      16299476   16202264      97212          0      58924\n  15231852\n>> -/+ buffers/cache:     911488   15387988\n>> Swap:     16787884     153136   16634748\n>\n> There is nothing wrong here.  You're using 153M out of 16G swap.\n 15.x\n> Gig is shared buffers.  If your system is slow, it's not because\nit's\n> running out of memory or using too much swap.\n\n\nSorry that's 15.xG is system cache, not shared buffers.  
Anyway, still\nnot a problem.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 04 Apr 2011 17:04:24 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> wrote:\n> Mem:  16299476k total, 16198784k used,   100692k free,    73776k buffers\n> Swap: 16787884k total,   148176k used, 16639708k free, 15585396k cached\n>\n>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+\n> COMMAND\n>  3401 postgres  20   0 4288m 3.3g 3.3g S    0 21.1   0:24.73\n> postgres\n>  3397 postgres  20   0 4286m 119m 119m S    0  0.8   0:00.36\n> postgres\n> PLease help me to understand how much memory does 1 Connection Uses and how\n> to use Server parameters accordingly.\n\nOK, first, see the 15585396k cached? That's how much memory your OS\nis using to cache file systems etc. Basically that's memory not being\nused by anything else right now, so the OS borrows it and uses it for\ncaching.\n\nNext, VIRT is how much memory your process would need to load every\nlib it might need but may not be using now, plus all the shared memory\nit might need, plus it's own space etc. It's not memory in use, it's\nmemory that might under the worst circumstances, be used by that one\nprocess. RES is the amount of memory the process IS actually\ntouching, including shared memory that other processes may be sharing.\n Finally, SHR is the amount of shared memory the process is touching.\nso, taking your biggest process, it is linked to enough libraries and\nshared memory and it's own private memory to add up to 4288Meg. It is\ncurrently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\nshared with other processes. So, the difference between RES and SHR\nis 0, so the delta, or extra memory it's using besides shared memory\nis ZERO (or very close to it, probably dozens or fewer of megabytes).\n\nSo, you're NOT running out of memory. Remember when I mentioned\niostat, vmstat, etc up above? 
Have you run any of those?\n", "msg_date": "Mon, 4 Apr 2011 05:43:01 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Thanks Scott :\n\nMy iostat package is not installed but have a look on below output:\n\n[root@s8-mysd-2 8.4SS]# vmstat 10\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy \nid wa st\n 1 0 147664 93920 72332 15580748 0 1 113 170 47 177 6 \n1 92 1 0\n 0 0 147664 94020 72348 15580748 0 0 0 4 993 565 0 \n0 100 0 0\n 0 0 147664 93896 72364 15580748 0 0 0 5 993 571 0 \n0 100 0 0\n 0 0 147664 93524 72416 15580860 0 0 0 160 1015 591 0 \n0 100 0 0\n 0 0 147664 93524 72448 15580860 0 0 0 8 1019 553 0 \n0 100 0 0\n 0 0 147664 93648 72448 15580860 0 0 0 0 1019 555 0 \n0 100 0 0\n 0 0 147664 93648 72448 15580860 0 0 0 3 1023 560 0 \n0 100 0 0\n\n[root@s8-mysd-2 8.4SS]# iostat\n-bash: iostat: command not found\n[root@s8-mysd-2 8.4SS]#\n\nBest regards,\nAdarsh\n\nScott Marlowe wrote:\n> On Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> wrote:\n> \n>> Mem: 16299476k total, 16198784k used, 100692k free, 73776k buffers\n>> Swap: 16787884k total, 148176k used, 16639708k free, 15585396k cached\n>>\n>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n>> COMMAND\n>> 3401 postgres 20 0 4288m 3.3g 3.3g S 0 21.1 0:24.73\n>> postgres\n>> 3397 postgres 20 0 4286m 119m 119m S 0 0.8 0:00.36\n>> postgres\n>> PLease help me to understand how much memory does 1 Connection Uses and how\n>> to use Server parameters accordingly.\n>> \n>\n> OK, first, see the 15585396k cached? That's how much memory your OS\n> is using to cache file systems etc. Basically that's memory not being\n> used by anything else right now, so the OS borrows it and uses it for\n> caching.\n>\n> Next, VIRT is how much memory your process would need to load every\n> lib it might need but may not be using now, plus all the shared memory\n> it might need, plus it's own space etc. It's not memory in use, it's\n> memory that might under the worst circumstances, be used by that one\n> process. RES is the amount of memory the process IS actually\n> touching, including shared memory that other processes may be sharing.\n> Finally, SHR is the amount of shared memory the process is touching.\n> so, taking your biggest process, it is linked to enough libraries and\n> shared memory and it's own private memory to add up to 4288Meg. It is\n> currently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\n> shared with other processes. So, the difference between RES and SHR\n> is 0, so the delta, or extra memory it's using besides shared memory\n> is ZERO (or very close to it, probably dozens or fewer of megabytes).\n>\n> So, you're NOT running out of memory. Remember when I mentioned\n> iostat, vmstat, etc up above? 
Have you run any of those?\n> \n\n\n\n\n\n\n\n\n\nThanks Scott :\n\nMy iostat package is not installed but have a look on below output:\n\n[root@s8-mysd-2 8.4SS]# vmstat 10\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us\nsy id wa st\n 1  0 147664  93920  72332 15580748    0    1   113   170   47   177 \n6  1 92  1  0\n 0  0 147664  94020  72348 15580748    0    0     0     4  993   565 \n0  0 100  0  0\n 0  0 147664  93896  72364 15580748    0    0     0     5  993   571 \n0  0 100  0  0\n 0  0 147664  93524  72416 15580860    0    0     0   160 1015   591 \n0  0 100  0  0\n 0  0 147664  93524  72448 15580860    0    0     0     8 1019   553 \n0  0 100  0  0\n 0  0 147664  93648  72448 15580860    0    0     0     0 1019   555 \n0  0 100  0  0\n 0  0 147664  93648  72448 15580860    0    0     0     3 1023   560 \n0  0 100  0  0\n\n[root@s8-mysd-2 8.4SS]# iostat\n-bash: iostat: command not found\n[root@s8-mysd-2 8.4SS]# \n\nBest regards,\nAdarsh\n\nScott Marlowe wrote:\n\nOn Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> wrote:\n \n\nMem:  16299476k total, 16198784k used,   100692k free,    73776k buffers\nSwap: 16787884k total,   148176k used, 16639708k free, 15585396k cached\n\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+\nCOMMAND\n 3401 postgres  20   0 4288m 3.3g 3.3g S    0 21.1   0:24.73\npostgres\n 3397 postgres  20   0 4286m 119m 119m S    0  0.8   0:00.36\npostgres\nPLease help me to understand how much memory does 1 Connection Uses and how\nto use Server parameters accordingly.\n \n\n\nOK, first, see the 15585396k cached? That's how much memory your OS\nis using to cache file systems etc. Basically that's memory not being\nused by anything else right now, so the OS borrows it and uses it for\ncaching.\n\nNext, VIRT is how much memory your process would need to load every\nlib it might need but may not be using now, plus all the shared memory\nit might need, plus it's own space etc. It's not memory in use, it's\nmemory that might under the worst circumstances, be used by that one\nprocess. RES is the amount of memory the process IS actually\ntouching, including shared memory that other processes may be sharing.\n Finally, SHR is the amount of shared memory the process is touching.\nso, taking your biggest process, it is linked to enough libraries and\nshared memory and it's own private memory to add up to 4288Meg. It is\ncurrently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\nshared with other processes. So, the difference between RES and SHR\nis 0, so the delta, or extra memory it's using besides shared memory\nis ZERO (or very close to it, probably dozens or fewer of megabytes).\n\nSo, you're NOT running out of memory. Remember when I mentioned\niostat, vmstat, etc up above? 
Have you run any of those?", "msg_date": "Mon, 04 Apr 2011 17:21:13 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "You got to have something to compare against.\nI would say, try to run some benchmarks (pgbench from contrib) and compare them\nagainst a known good instance of postgresql, if you have access in such a machine.\n\nThat said, and forgive me if i sound a little \"explicit\" but if you dont know how to install iostat\nthen there are few chances that you understand unix/linux/bsd concepts properly\nand therefore any efforts to just speed up postgresql in such an environment , at this point,\nwill not have the desired effect, because even if you manage to solve smth now,\ntommorow you will still be in confusion about smth else that might arise.\nSo, i suggest:\n1) try to get an understanding on how your favorite distribution works (read any relevant info, net, books, etc..)\n2) Go and get the book \"PostgreSQL 9.0 High Performance\" by Greg Smith. It is a very good book\nnot only about postgresql but about the current state of systems performance as well.\n\nΣτις Monday 04 April 2011 14:51:13 ο/η Adarsh Sharma έγραψε:\n> \n> Thanks Scott :\n> \n> My iostat package is not installed but have a look on below output:\n> \n> [root@s8-mysd-2 8.4SS]# vmstat 10\n> procs -----------memory---------- ---swap-- -----io---- --system-- \n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy \n> id wa st\n> 1 0 147664 93920 72332 15580748 0 1 113 170 47 177 6 \n> 1 92 1 0\n> 0 0 147664 94020 72348 15580748 0 0 0 4 993 565 0 \n> 0 100 0 0\n> 0 0 147664 93896 72364 15580748 0 0 0 5 993 571 0 \n> 0 100 0 0\n> 0 0 147664 93524 72416 15580860 0 0 0 160 1015 591 0 \n> 0 100 0 0\n> 0 0 147664 93524 72448 15580860 0 0 0 8 1019 553 0 \n> 0 100 0 0\n> 0 0 147664 93648 72448 15580860 0 0 0 0 1019 555 0 \n> 0 100 0 0\n> 0 0 147664 93648 72448 15580860 0 0 0 3 1023 560 0 \n> 0 100 0 0\n> \n> [root@s8-mysd-2 8.4SS]# iostat\n> -bash: iostat: command not found\n> [root@s8-mysd-2 8.4SS]#\n> \n> Best regards,\n> Adarsh\n> \n> Scott Marlowe wrote:\n> > On Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> wrote:\n> > \n> >> Mem: 16299476k total, 16198784k used, 100692k free, 73776k buffers\n> >> Swap: 16787884k total, 148176k used, 16639708k free, 15585396k cached\n> >>\n> >> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> >> COMMAND\n> >> 3401 postgres 20 0 4288m 3.3g 3.3g S 0 21.1 0:24.73\n> >> postgres\n> >> 3397 postgres 20 0 4286m 119m 119m S 0 0.8 0:00.36\n> >> postgres\n> >> PLease help me to understand how much memory does 1 Connection Uses and how\n> >> to use Server parameters accordingly.\n> >> \n> >\n> > OK, first, see the 15585396k cached? That's how much memory your OS\n> > is using to cache file systems etc. Basically that's memory not being\n> > used by anything else right now, so the OS borrows it and uses it for\n> > caching.\n> >\n> > Next, VIRT is how much memory your process would need to load every\n> > lib it might need but may not be using now, plus all the shared memory\n> > it might need, plus it's own space etc. It's not memory in use, it's\n> > memory that might under the worst circumstances, be used by that one\n> > process. 
RES is the amount of memory the process IS actually\n> > touching, including shared memory that other processes may be sharing.\n> > Finally, SHR is the amount of shared memory the process is touching.\n> > so, taking your biggest process, it is linked to enough libraries and\n> > shared memory and it's own private memory to add up to 4288Meg. It is\n> > currently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\n> > shared with other processes. So, the difference between RES and SHR\n> > is 0, so the delta, or extra memory it's using besides shared memory\n> > is ZERO (or very close to it, probably dozens or fewer of megabytes).\n> >\n> > So, you're NOT running out of memory. Remember when I mentioned\n> > iostat, vmstat, etc up above? Have you run any of those?\n> > \n> \n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Mon, 4 Apr 2011 14:14:14 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Mon, Apr 4, 2011 at 5:51 AM, Adarsh Sharma <[email protected]> wrote:\n>\n>\n> Thanks Scott :\n>\n> My iostat package is not installed but have a look on below output:\n>\n> [root@s8-mysd-2 8.4SS]# vmstat 10\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id\n> wa st\n>  1  0 147664  93920  72332 15580748    0    1   113   170   47   177  6  1\n> 92  1  0\n>  0  0 147664  94020  72348 15580748    0    0     0     4  993   565  0  0\n> 100  0  0\n>  0  0 147664  93896  72364 15580748    0    0     0     5  993   571  0  0\n> 100  0  0\n>  0  0 147664  93524  72416 15580860    0    0     0   160 1015   591  0  0\n> 100  0  0\n>  0  0 147664  93524  72448 15580860    0    0     0     8 1019   553  0  0\n> 100  0  0\n>  0  0 147664  93648  72448 15580860    0    0     0     0 1019   555  0  0\n> 100  0  0\n>  0  0 147664  93648  72448 15580860    0    0     0     3 1023   560  0  0\n> 100  0  0\n\nOK, right now your machine is at idle. Run vmstat / iostat when it's\nunder load. If the wa column stays low, then you're not IO bound but\nmore than likely CPU bound.\n", "msg_date": "Mon, 4 Apr 2011 06:14:19 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": ">\n>\n> Thanks Scott :\n>\n> My iostat package is not installed but have a look on below output:\n>\n> [root@s8-mysd-2 8.4SS]# vmstat 10\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa st\n> 1 0 147664 93920 72332 15580748 0 1 113 170 47 177 6\n> 1 92 1 0\n> 0 0 147664 94020 72348 15580748 0 0 0 4 993 565 0\n> 0 100 0 0\n> 0 0 147664 93896 72364 15580748 0 0 0 5 993 571 0\n> 0 100 0 0\n> 0 0 147664 93524 72416 15580860 0 0 0 160 1015 591 0\n> 0 100 0 0\n> 0 0 147664 93524 72448 15580860 0 0 0 8 1019 553 0\n> 0 100 0 0\n> 0 0 147664 93648 72448 15580860 0 0 0 0 1019 555 0\n> 0 100 0 0\n> 0 0 147664 93648 72448 15580860 0 0 0 3 1023 560 0\n> 0 100 0 0\n\nIs this from a busy or idle period? I guess it's from an idle one, because\nthe CPU is 100% idle and there's very little I/O activity. That's useless\n- we need to see vmstat output from period when there's something wrong.\n\n> [root@s8-mysd-2 8.4SS]# iostat\n> -bash: iostat: command not found\n> [root@s8-mysd-2 8.4SS]#\n\nThen install it. 
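
For example, on the RHEL/CentOS-style system the prompts above suggest,
a minimal sketch (the package name is an assumption; adjust for your
distro):

yum install -y sysstat     # provides iostat, sar and friends
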
Not sure what distro you use, but it's usually packed in\nsysstat package.\n\nTomas\n\n", "msg_date": "Mon, 4 Apr 2011 14:28:14 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Adarsh,\n\n\n> [root@s8-mysd-2 8.4SS]# iostat\n> -bash: iostat: command not found\n>\n> /usr/bin/iostat\n\nOur application runs by making connections to Postgres Server from different\n> servers and selecting data from one table & insert into remaining tables in\n> a database.\n\n\nWhen you are doing bulk inserts you need to tune AUTOVACUUM parameters or\nChange the autovacuum settings for those tables doing bulk INSERTs. Insert's\nneed analyze.\n\n\n\n> #autovacuum = on # Enable autovacuum subprocess?\n> 'on'\n> # requires track_counts to also be\n> on.\n> #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions\n> and\n> # their durations, > 0 logs only\n> # actions running at least this\n> number\n> # of milliseconds.\n> #autovacuum_max_workers = 3 # max number of autovacuum\n> subprocesses\n> #autovacuum_naptime = 1min # time between autovacuum runs\n> #autovacuum_vacuum_threshold = 50 # min number of row updates before\n> # vacuum\n> #autovacuum_analyze_threshold = 50 # min number of row updates before\n> # analyze\n> #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before\n> vacuum\n> #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before\n> analyze\n> #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced\n> vacuum\n> # (change requires restart)\n> #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for\n> # autovacuum, in milliseconds;\n> # -1 means use vacuum_cost_delay\n> #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n> # autovacuum, -1 means use\n> # vacuum_cost_limit\n\n\nThese are all default AUTOVACUUM settings. If you are using PG 8.4 or above,\ntry AUTOVACUUM settings on bulk insert tables for better performance. Also\nneed to tune the 'autovacuum_naptime'\n\nEg:-\n ALTER table <table name> SET (autovacuum_vacuum_threshold=xxxxx,\nautovacuum_analyze_threshold=xxxx);\n\nwal_buffers //max is 16MB\ncheckpoint_segment /// Its very less in your setting\ncheckpoint_timeout\ntemp_buffer // If application is using temp tables\n\n\nThese parameter will also boost the performance.\n\nBest Regards\nRaghavendra\nEnterpriseDB Corporation.\n\n\n\n\n\n\n\n> Scott Marlowe wrote:\n>\n> On Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> <[email protected]> wrote:\n>\n>\n> Mem: 16299476k total, 16198784k used, 100692k free, 73776k buffers\n> Swap: 16787884k total, 148176k used, 16639708k free, 15585396k cached\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND\n> 3401 postgres 20 0 4288m 3.3g 3.3g S 0 21.1 0:24.73\n> postgres\n> 3397 postgres 20 0 4286m 119m 119m S 0 0.8 0:00.36\n> postgres\n> PLease help me to understand how much memory does 1 Connection Uses and how\n> to use Server parameters accordingly.\n>\n>\n> OK, first, see the 15585396k cached? That's how much memory your OS\n> is using to cache file systems etc. Basically that's memory not being\n> used by anything else right now, so the OS borrows it and uses it for\n> caching.\n>\n> Next, VIRT is how much memory your process would need to load every\n> lib it might need but may not be using now, plus all the shared memory\n> it might need, plus it's own space etc. 
It's not memory in use, it's\n> memory that might under the worst circumstances, be used by that one\n> process. RES is the amount of memory the process IS actually\n> touching, including shared memory that other processes may be sharing.\n> Finally, SHR is the amount of shared memory the process is touching.\n> so, taking your biggest process, it is linked to enough libraries and\n> shared memory and it's own private memory to add up to 4288Meg. It is\n> currently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\n> shared with other processes. So, the difference between RES and SHR\n> is 0, so the delta, or extra memory it's using besides shared memory\n> is ZERO (or very close to it, probably dozens or fewer of megabytes).\n>\n> So, you're NOT running out of memory. Remember when I mentioned\n> iostat, vmstat, etc up above? Have you run any of those?\n>\n>\n>\n>\n\nAdarsh, [root@s8-mysd-2 8.4SS]# iostat\n\n\n-bash: iostat: command not found/usr/bin/iostat\nOur application runs by making connections to Postgres Server from different servers and selecting data from one table & insert into remaining tables in a database.\nWhen you are doing bulk inserts you need to tune AUTOVACUUM parameters or Change the autovacuum settings for those tables doing bulk INSERTs. Insert's need analyze. \n\n\n#autovacuum = on                        # Enable autovacuum subprocess?  'on'                                       # requires track_counts to also be on.#log_autovacuum_min_duration = -1       # -1 disables, 0 logs all actions and\n\n                                       # their durations, > 0 logs only                                       # actions running at least this number                                       # of milliseconds.\n#autovacuum_max_workers = 3             # max number of autovacuum subprocesses\n#autovacuum_naptime = 1min              # time between autovacuum runs#autovacuum_vacuum_threshold = 50       # min number of row updates before                                       # vacuum#autovacuum_analyze_threshold = 50      # min number of row updates before\n\n                                       # analyze#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum\n\n                                       # (change requires restart)#autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for                                       # autovacuum, in milliseconds;                                       # -1 means use vacuum_cost_delay\n\n#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for                                       # autovacuum, -1 means use                                       # vacuum_cost_limit\nThese are all default AUTOVACUUM settings. If you are using PG 8.4 or above, try AUTOVACUUM settings on bulk insert tables for better performance. Also need to tune the 'autovacuum_naptime' \nEg:- ALTER table <table name> SET (autovacuum_vacuum_threshold=xxxxx, autovacuum_analyze_threshold=xxxx);wal_buffers  //max is 16MBcheckpoint_segment    /// Its very less in your setting\ncheckpoint_timeout     temp_buffer  // If application is using temp tablesThese parameter will also boost the performance.Best Regards\nRaghavendraEnterpriseDB Corporation.  
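
To make the ALTER TABLE suggestion above concrete, a sketch (the table
name and thresholds are invented for the example; pick values that match
how many rows each bulk load inserts):

ALTER TABLE bulk_load_table
  SET (autovacuum_analyze_threshold = 10000,
       autovacuum_analyze_scale_factor = 0.02);

This form of per-table autovacuum setting needs PostgreSQL 8.4 or later.
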
\nScott Marlowe wrote:\n\nOn Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> wrote:\n \n\nMem:  16299476k total, 16198784k used,   100692k free,    73776k buffers\nSwap: 16787884k total,   148176k used, 16639708k free, 15585396k cached\n\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+\nCOMMAND\n 3401 postgres  20   0 4288m 3.3g 3.3g S    0 21.1   0:24.73\npostgres\n 3397 postgres  20   0 4286m 119m 119m S    0  0.8   0:00.36\npostgres\nPLease help me to understand how much memory does 1 Connection Uses and how\nto use Server parameters accordingly.\n \n\nOK, first, see the 15585396k cached? That's how much memory your OS\nis using to cache file systems etc. Basically that's memory not being\nused by anything else right now, so the OS borrows it and uses it for\ncaching.\n\nNext, VIRT is how much memory your process would need to load every\nlib it might need but may not be using now, plus all the shared memory\nit might need, plus it's own space etc. It's not memory in use, it's\nmemory that might under the worst circumstances, be used by that one\nprocess. RES is the amount of memory the process IS actually\ntouching, including shared memory that other processes may be sharing.\n Finally, SHR is the amount of shared memory the process is touching.\nso, taking your biggest process, it is linked to enough libraries and\nshared memory and it's own private memory to add up to 4288Meg. It is\ncurrently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\nshared with other processes. So, the difference between RES and SHR\nis 0, so the delta, or extra memory it's using besides shared memory\nis ZERO (or very close to it, probably dozens or fewer of megabytes).\n\nSo, you're NOT running out of memory. Remember when I mentioned\niostat, vmstat, etc up above? Have you run any of those?", "msg_date": "Mon, 4 Apr 2011 18:00:07 +0530", "msg_from": "Raghavendra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Thank U all,\n\nI know some things to work on & after some work & study on them , I will \ncontinue this discussion tomorrow .\n\n\nBest Regards,\nAdarsh\n\nRaghavendra wrote:\n> Adarsh,\n> \n>\n> [root@s8-mysd-2 8.4SS]# iostat\n> -bash: iostat: command not found\n>\n> /usr/bin/iostat\n>\n> Our application runs by making connections to Postgres Server from\n> different servers and selecting data from one table & insert into\n> remaining tables in a database.\n>\n>\n> When you are doing bulk inserts you need to tune AUTOVACUUM parameters \n> or Change the autovacuum settings for those tables doing bulk INSERTs. \n> Insert's need analyze.\n>\n> \n>\n> #autovacuum = on # Enable autovacuum\n> subprocess? 
'on'\n> # requires track_counts to\n> also be on.\n> #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all\n> actions and\n> # their durations, > 0 logs\n> only\n> # actions running at least\n> this number\n> # of milliseconds.\n> #autovacuum_max_workers = 3 # max number of autovacuum\n> subprocesses\n> #autovacuum_naptime = 1min # time between autovacuum runs\n> #autovacuum_vacuum_threshold = 50 # min number of row\n> updates before\n> # vacuum\n> #autovacuum_analyze_threshold = 50 # min number of row\n> updates before\n> # analyze\n> #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size\n> before vacuum\n> #autovacuum_analyze_scale_factor = 0.1 # fraction of table size\n> before analyze\n> #autovacuum_freeze_max_age = 200000000 # maximum XID age before\n> forced vacuum\n> # (change requires restart)\n> #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost\n> delay for\n> # autovacuum, in milliseconds;\n> # -1 means use\n> vacuum_cost_delay\n> #autovacuum_vacuum_cost_limit = -1 # default vacuum cost\n> limit for\n> # autovacuum, -1 means use\n> # vacuum_cost_limit\n>\n>\n> These are all default AUTOVACUUM settings. If you are using PG 8.4 or \n> above, try AUTOVACUUM settings on bulk insert tables for better \n> performance. Also need to tune the 'autovacuum_naptime' \n>\n> Eg:-\n> ALTER table <table name> SET (autovacuum_vacuum_threshold=xxxxx, \n> autovacuum_analyze_threshold=xxxx);\n>\n> wal_buffers //max is 16MB\n> checkpoint_segment /// Its very less in your setting\n> checkpoint_timeout \n> temp_buffer // If application is using temp tables\n>\n>\n> These parameter will also boost the performance.\n>\n> Best Regards\n> Raghavendra\n> EnterpriseDB Corporation.\n>\n> \n>\n>\n>\n> \n>\n> Scott Marlowe wrote:\n>> On Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> <mailto:[email protected]> wrote:\n>> \n>>> Mem: 16299476k total, 16198784k used, 100692k free, 73776k buffers\n>>> Swap: 16787884k total, 148176k used, 16639708k free, 15585396k cached\n>>>\n>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n>>> COMMAND\n>>> 3401 postgres 20 0 4288m 3.3g 3.3g S 0 21.1 0:24.73\n>>> postgres\n>>> 3397 postgres 20 0 4286m 119m 119m S 0 0.8 0:00.36\n>>> postgres\n>>> PLease help me to understand how much memory does 1 Connection Uses and how\n>>> to use Server parameters accordingly.\n>>> \n>> OK, first, see the 15585396k cached? That's how much memory your OS\n>> is using to cache file systems etc. Basically that's memory not being\n>> used by anything else right now, so the OS borrows it and uses it for\n>> caching.\n>>\n>> Next, VIRT is how much memory your process would need to load every\n>> lib it might need but may not be using now, plus all the shared memory\n>> it might need, plus it's own space etc. It's not memory in use, it's\n>> memory that might under the worst circumstances, be used by that one\n>> process. RES is the amount of memory the process IS actually\n>> touching, including shared memory that other processes may be sharing.\n>> Finally, SHR is the amount of shared memory the process is touching.\n>> so, taking your biggest process, it is linked to enough libraries and\n>> shared memory and it's own private memory to add up to 4288Meg. It is\n>> currently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\n>> shared with other processes. 
So, the difference between RES and SHR\n>> is 0, so the delta, or extra memory it's using besides shared memory\n>> is ZERO (or very close to it, probably dozens or fewer of megabytes).\n>>\n>> So, you're NOT running out of memory. Remember when I mentioned\n>> iostat, vmstat, etc up above? Have you run any of those?\n>> \n>\n>\n\n\n\n\n\n\n\n\n\nThank U all,\n\nI know some things to work on & after some work & study on them\n, I will continue this discussion tomorrow .\n\n\nBest  Regards,\nAdarsh\n\nRaghavendra wrote:\n\n\nAdarsh,\n \n\n[root@s8-mysd-2 8.4SS]# iostat\n-bash: iostat: command not found\n \n\n\n\n/usr/bin/iostat\n\n\nOur\napplication runs by making connections to Postgres Server from\ndifferent servers and selecting data from one table & insert into\nremaining tables in a database.\n\n\nWhen you are doing bulk inserts you need to tune AUTOVACUUM\nparameters or Change the autovacuum settings for those tables doing\nbulk INSERTs. Insert's need analyze.\n\n\n \n\n#autovacuum\n= on                        # Enable autovacuum subprocess?  'on'\n                                       # requires track_counts to also\nbe on.\n#log_autovacuum_min_duration = -1       # -1 disables, 0 logs all\nactions and\n                                       # their durations, > 0 logs\nonly\n                                       # actions running at least this\nnumber\n                                       # of milliseconds.\n#autovacuum_max_workers = 3             # max number of autovacuum\nsubprocesses\n#autovacuum_naptime = 1min              # time between autovacuum runs\n#autovacuum_vacuum_threshold = 50       # min number of row updates\nbefore\n                                       # vacuum\n#autovacuum_analyze_threshold = 50      # min number of row updates\nbefore\n                                       # analyze\n#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before\nvacuum\n#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before\nanalyze\n#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced\nvacuum\n                                       # (change requires restart)\n#autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for\n                                       # autovacuum, in milliseconds;\n                                       # -1 means use vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for\n                                       # autovacuum, -1 means use\n                                       # vacuum_cost_limit\n\n\n\nThese are all default AUTOVACUUM settings. If you are using PG\n8.4 or above, try AUTOVACUUM settings on bulk insert tables for better\nperformance. 
Also need to tune the 'autovacuum_naptime' \n\n\nEg:-\n ALTER table <table name> SET\n(autovacuum_vacuum_threshold=xxxxx, autovacuum_analyze_threshold=xxxx);\n\n\nwal_buffers  //max is 16MB\ncheckpoint_segment    /// Its very less in your setting\ncheckpoint_timeout     \ntemp_buffer  // If application is using temp tables\n\n\n\n\nThese parameter will also boost the performance.\n\n\nBest Regards\nRaghavendra\nEnterpriseDB Corporation.\n\n\n \n\n\n\n\n\n\n \n\n\n\nScott Marlowe wrote:\n \nOn Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> wrote:\n \n\nMem:  16299476k total, 16198784k used,   100692k free,    73776k buffers\nSwap: 16787884k total,   148176k used, 16639708k free, 15585396k cached\n\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+\nCOMMAND\n 3401 postgres  20   0 4288m 3.3g 3.3g S    0 21.1   0:24.73\npostgres\n 3397 postgres  20   0 4286m 119m 119m S    0  0.8   0:00.36\npostgres\nPLease help me to understand how much memory does 1 Connection Uses and how\nto use Server parameters accordingly.\n \n\nOK, first, see the 15585396k cached? That's how much memory your OS\nis using to cache file systems etc. Basically that's memory not being\nused by anything else right now, so the OS borrows it and uses it for\ncaching.\n\nNext, VIRT is how much memory your process would need to load every\nlib it might need but may not be using now, plus all the shared memory\nit might need, plus it's own space etc. It's not memory in use, it's\nmemory that might under the worst circumstances, be used by that one\nprocess. RES is the amount of memory the process IS actually\ntouching, including shared memory that other processes may be sharing.\n Finally, SHR is the amount of shared memory the process is touching.\nso, taking your biggest process, it is linked to enough libraries and\nshared memory and it's own private memory to add up to 4288Meg. It is\ncurrently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\nshared with other processes. So, the difference between RES and SHR\nis 0, so the delta, or extra memory it's using besides shared memory\nis ZERO (or very close to it, probably dozens or fewer of megabytes).\n\nSo, you're NOT running out of memory. Remember when I mentioned\niostat, vmstat, etc up above? Have you run any of those?", "msg_date": "Mon, 04 Apr 2011 18:03:54 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Best of luck, the two standard links for this kind of problem are:\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nNote that in creating the information needed to report a problem you\nmay well wind up troubleshooting it and fixing it. That's a good\nthing :)\n", "msg_date": "Mon, 4 Apr 2011 10:22:48 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Hi, Good Morning To All of You.\n\nYesterday I had some research on my problems. 
As Scott rightly suggest \nme to have pre information before posting in the list, I aggreed to him.\n\nHere is my first doubt , that I explain as:\n\nMy application makes several connections to Database Server & done their \nwork :\n\nDuring this process have a look on below output of free command :\n\n[root@s8-mysd-2 ~]# free -m\n total used free shared buffers cached\nMem: 15917 15826 90 0 101 15013\n-/+ buffers/cache: 711 15205\nSwap: 16394 143 16250\n\nIt means 15 GB memory is cached.\n\n[root@s8-mysd-2 ~]# cat /proc/meminfo\nMemTotal: 16299476 kB\nMemFree: 96268 kB\nBuffers: 104388 kB\nCached: 15370008 kB\nSwapCached: 3892 kB\nActive: 6574788 kB\nInactive: 8951884 kB\nActive(anon): 3909024 kB\nInactive(anon): 459720 kB\nActive(file): 2665764 kB\nInactive(file): 8492164 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nSwapTotal: 16787884 kB\nSwapFree: 16640472 kB\nDirty: 1068 kB\nWriteback: 0 kB\nAnonPages: 48864 kB\nMapped: 4277000 kB\nSlab: 481960 kB\nSReclaimable: 466544 kB\nSUnreclaim: 15416 kB\nPageTables: 57860 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 24904852 kB\nCommitted_AS: 5022172 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 310088 kB\nVmallocChunk: 34359422091 kB\nHugePages_Total: 32\nHugePages_Free: 32\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 3776 kB\nDirectMap2M: 16773120 kB\n[root@s8-mysd-2 ~]#\n\nNow Can I know why the cached memory is not freed after the connections \ndone their work & their is no more connections :\n\npdc_uima_dummy=# select datname,numbackends from pg_stat_database;\n datname | numbackends\n-------------------+-------------\n template1 | 0\n template0 | 0\n postgres | 2\n template_postgis | 0\n pdc_uima_dummy | 11\n pdc_uima_version3 | 0\n pdc_uima_olap | 0\n pdc_uima_s9 | 0\n pdc_uima | 0\n(9 rows)\n\nSame output is when it has 100 connections.\n\nNow I have to start more queries on Database Server and issue new \nconnections after some time. Why the cached memory is not freed.\n\nFlushing the cache memory is needed & how it could use so much if I set\n\neffective_cache_size = 4096 MB.\n\nI think if i issue some new select queries on large set of data, it will \nuse Swap Memory & degrades Performance.\n\nPlease correct if I'm wrong.\n\n\nThanks & best Regards,\nAdarsh Sharma\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRaghavendra wrote:\n> Adarsh,\n> \n>\n> [root@s8-mysd-2 8.4SS]# iostat\n> -bash: iostat: command not found\n>\n> /usr/bin/iostat\n>\n> Our application runs by making connections to Postgres Server from\n> different servers and selecting data from one table & insert into\n> remaining tables in a database.\n>\n>\n> When you are doing bulk inserts you need to tune AUTOVACUUM parameters \n> or Change the autovacuum settings for those tables doing bulk INSERTs. \n> Insert's need analyze.\n>\n> \n>\n> #autovacuum = on # Enable autovacuum\n> subprocess? 
'on'\n> # requires track_counts to\n> also be on.\n> #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all\n> actions and\n> # their durations, > 0 logs\n> only\n> # actions running at least\n> this number\n> # of milliseconds.\n> #autovacuum_max_workers = 3 # max number of autovacuum\n> subprocesses\n> #autovacuum_naptime = 1min # time between autovacuum runs\n> #autovacuum_vacuum_threshold = 50 # min number of row\n> updates before\n> # vacuum\n> #autovacuum_analyze_threshold = 50 # min number of row\n> updates before\n> # analyze\n> #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size\n> before vacuum\n> #autovacuum_analyze_scale_factor = 0.1 # fraction of table size\n> before analyze\n> #autovacuum_freeze_max_age = 200000000 # maximum XID age before\n> forced vacuum\n> # (change requires restart)\n> #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost\n> delay for\n> # autovacuum, in milliseconds;\n> # -1 means use\n> vacuum_cost_delay\n> #autovacuum_vacuum_cost_limit = -1 # default vacuum cost\n> limit for\n> # autovacuum, -1 means use\n> # vacuum_cost_limit\n>\n>\n> These are all default AUTOVACUUM settings. If you are using PG 8.4 or \n> above, try AUTOVACUUM settings on bulk insert tables for better \n> performance. Also need to tune the 'autovacuum_naptime' \n>\n> Eg:-\n> ALTER table <table name> SET (autovacuum_vacuum_threshold=xxxxx, \n> autovacuum_analyze_threshold=xxxx);\n>\n> wal_buffers //max is 16MB\n> checkpoint_segment /// Its very less in your setting\n> checkpoint_timeout \n> temp_buffer // If application is using temp tables\n>\n>\n> These parameter will also boost the performance.\n>\n> Best Regards\n> Raghavendra\n> EnterpriseDB Corporation.\n>\n> \n>\n>\n>\n> \n>\n> Scott Marlowe wrote:\n>> On Mon, Apr 4, 2011 at 5:34 AM, Adarsh Sharma <[email protected]> <mailto:[email protected]> wrote:\n>> \n>>> Mem: 16299476k total, 16198784k used, 100692k free, 73776k buffers\n>>> Swap: 16787884k total, 148176k used, 16639708k free, 15585396k cached\n>>>\n>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n>>> COMMAND\n>>> 3401 postgres 20 0 4288m 3.3g 3.3g S 0 21.1 0:24.73\n>>> postgres\n>>> 3397 postgres 20 0 4286m 119m 119m S 0 0.8 0:00.36\n>>> postgres\n>>> PLease help me to understand how much memory does 1 Connection Uses and how\n>>> to use Server parameters accordingly.\n>>> \n>> OK, first, see the 15585396k cached? That's how much memory your OS\n>> is using to cache file systems etc. Basically that's memory not being\n>> used by anything else right now, so the OS borrows it and uses it for\n>> caching.\n>>\n>> Next, VIRT is how much memory your process would need to load every\n>> lib it might need but may not be using now, plus all the shared memory\n>> it might need, plus it's own space etc. It's not memory in use, it's\n>> memory that might under the worst circumstances, be used by that one\n>> process. RES is the amount of memory the process IS actually\n>> touching, including shared memory that other processes may be sharing.\n>> Finally, SHR is the amount of shared memory the process is touching.\n>> so, taking your biggest process, it is linked to enough libraries and\n>> shared memory and it's own private memory to add up to 4288Meg. It is\n>> currently actually touching 3.3G. Of that 3.3G it is touching 3.3G is\n>> shared with other processes. 
So, the difference between RES and SHR\n>> is 0, so the delta, or extra memory it's using besides shared memory\n>> is ZERO (or very close to it, probably dozens or fewer of megabytes).\n>>\n>> So, you're NOT running out of memory. Remember when I mentioned\n>> iostat, vmstat, etc up above? Have you run any of those?\n>> \n>\n>\n\n", "msg_date": "Tue, 05 Apr 2011 13:03:05 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Apr 5, 2011, at 9:33 AM, Adarsh Sharma wrote:\n\n> Now I have to start more queries on Database Server and issue new connections after some time. Why the cached memory is not freed.\n\nIt's freed on-demand.\n\n> Flushing the cache memory is needed & how it could use so much if I set\n\nWhy would forced flushing be needed? And why would it be useful? It is not.\n\n> effective_cache_size = 4096 MB.\n\nWatch the \"cached\" field of free's output and set effective_cache_size to that amount (given that your server is running postgres only, has no major other tasks)\n\n> I think if i issue some new select queries on large set of data, it will use Swap Memory & degrades Performance.\n\nHave you ever tried that? Will not. \n\n> Please correct if I'm wrong.\n\nYou seem to know very little about Unix/Linux memory usage and how to interpret the tools' output.\nPlease read some (very basic) documentation for sysadmins regarding these subjects.\nIt will help you a lot to understand how things work.\n\n-- \nAkos Gabriel\nGeneral Manager\nLiferay Hungary Ltd.\nLiferay Hungary Symposium, May 26, 2011 | Register today: http://www.liferay.com/hungary2011\n\n", "msg_date": "Tue, 5 Apr 2011 14:31:04 +0200", "msg_from": "=?iso-8859-1?Q?=C1kos_G=E1briel?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Apr 5, 2011, at 9:33 AM, Adarsh Sharma wrote:\n\n> Now I have to start more queries on Database Server and issue new connections after some time. Why the cached memory is not freed.\n\nIt's freed on-demand.\n\n> Flushing the cache memory is needed & how it could use so much if I set\n\nWhy would forced flushing be needed? And why would it be useful? It is not.\n\n> effective_cache_size = 4096 MB.\n\nWatch the \"cached\" field of free's output and set effective_cache_size to that amount (given that your server is running postgres only, has no major other tasks)\n\n> I think if i issue some new select queries on large set of data, it will use Swap Memory & degrades Performance.\n\nHave you ever tried that? Will not. 
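
To put a number on the effective_cache_size advice above, a sketch only
(the value is read off the free output posted earlier, roughly 15 GB
cached on this 16 GB box, and assumes the machine is dedicated to
PostgreSQL):

# postgresql.conf
effective_cache_size = 14GB    # roughly the "cached" figure from free

Note that effective_cache_size is only a planner hint; it allocates no
memory, so raising it cannot by itself push the server into swap.
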
\n\n> Please correct if I'm wrong.\n\nYou seem to know very little about Unix/Linux memory usage and how to interpret the tools' output.\nPlease read some (very basic) documentation for sysadmins regarding these subjects.\nIt will help you a lot to understand how things work.\n\n-- \nAkos Gabriel\nGeneral Manager\nLiferay Hungary Ltd.\nLiferay Hungary Symposium, May 26, 2011 | Register today: http://www.liferay.com/hungary2011\n\n-- \nÜdvözlettel,\nGábriel Ákos\n\n", "msg_date": "Tue, 5 Apr 2011 14:38:51 +0200", "msg_from": "=?iso-8859-1?Q?=C1kos_G=E1briel?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Tue, Apr 5, 2011 at 1:33 AM, Adarsh Sharma <[email protected]> wrote:\n>\n> [root@s8-mysd-2 ~]# free -m\n>            total       used       free     shared    buffers     cached\n> Mem:         15917      15826         90          0        101      15013\n> -/+ buffers/cache:        711      15205\n> Swap:        16394        143      16250\n>\n> It means 15 GB memory is cached.\n\nNote that the kernel takes all otherwise unused memory and uses it for\ncache. If, at any time a process needs more memory, the kernel just\ndumps some cached data and frees up the memory and hands it over, it's\nall automatic. As long as cache is large, things are OK. You need to\nbe looking to see if you're IO bound or CPU bound first. so, vmstat\n(install the sysstat package) is the first thing to use.\n", "msg_date": "Tue, 5 Apr 2011 07:08:07 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, Apr 5, 2011 at 1:33 AM, Adarsh Sharma <[email protected]> wrote:\n> \n>> [root@s8-mysd-2 ~]# free -m\n>> total used free shared buffers cached\n>> Mem: 15917 15826 90 0 101 15013\n>> -/+ buffers/cache: 711 15205\n>> Swap: 16394 143 16250\n>>\n>> It means 15 GB memory is cached.\n>> \n>\n> Note that the kernel takes all otherwise unused memory and uses it for\n> cache. If, at any time a process needs more memory, the kernel just\n> dumps some cached data and frees up the memory and hands it over, it's\n> all automatic. As long as cache is large, things are OK. You need to\n> be looking to see if you're IO bound or CPU bound first. so, vmstat\n> (install the sysstat package) is the first thing to use.\n> \nThanks a lot , Scott. :-)\n\n\n\nBest Regards , Adarsh\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Tue, Apr 5, 2011 at 1:33 AM, Adarsh Sharma <[email protected]> wrote:\n \n\n[root@s8-mysd-2 ~]# free -m\n           total       used       free     shared    buffers     cached\nMem:         15917      15826         90          0        101      15013\n-/+ buffers/cache:        711      15205\nSwap:        16394        143      16250\n\nIt means 15 GB memory is cached.\n \n\n\nNote that the kernel takes all otherwise unused memory and uses it for\ncache. If, at any time a process needs more memory, the kernel just\ndumps some cached data and frees up the memory and hands it over, it's\nall automatic. As long as cache is large, things are OK. You need to\nbe looking to see if you're IO bound or CPU bound first. so, vmstat\n(install the sysstat package) is the first thing to use.\n \n\nThanks a lot , Scott. 
:-) \n\n\n\nBest Regards , Adarsh", "msg_date": "Tue, 05 Apr 2011 18:50:05 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Performance Tuning" }, { "msg_contents": "On Tue, Apr 5, 2011 at 7:20 AM, Adarsh Sharma <[email protected]> wrote:\n> Scott Marlowe wrote:\n>\n> On Tue, Apr 5, 2011 at 1:33 AM, Adarsh Sharma <[email protected]>\n> wrote:\n>\n>\n> [root@s8-mysd-2 ~]# free -m\n>            total       used       free     shared    buffers     cached\n> Mem:         15917      15826         90          0        101      15013\n> -/+ buffers/cache:        711      15205\n> Swap:        16394        143      16250\n>\n> It means 15 GB memory is cached.\n>\n>\n> Note that the kernel takes all otherwise unused memory and uses it for\n> cache. If, at any time a process needs more memory, the kernel just\n> dumps some cached data and frees up the memory and hands it over, it's\n> all automatic. As long as cache is large, things are OK. You need to\n> be looking to see if you're IO bound or CPU bound first. so, vmstat\n> (install the sysstat package) is the first thing to use.\n\nBTW, just remembered that vmstat is it's own package, it's iostat and\nsar that are in sysstat.\n\nIf you install sysstat, enable stats collecting by editing the\n/etc/default/sysstat file and changing the ENABLED=\"false\" to\nENABLED=\"true\" and restarting the service with sudo\n/etc/init.d/sysstat restart\n", "msg_date": "Tue, 5 Apr 2011 07:49:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Performance Tuning" } ]
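
A minimal sketch of the monitoring loop suggested in this thread, to be
run while the slow workload is actually active (the intervals are
arbitrary; iostat comes from the sysstat package discussed above):

vmstat 5       # watch the wa (I/O wait) and id (idle CPU) columns
iostat -x 5    # watch await and %util per device

If wa stays low while id drops, the load is CPU bound; sustained high wa
and %util point at the disks instead.
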
[ { "msg_contents": "Dear all,\n\n I want to clear my doubts regarding creating several single or a \nmulti-column indexes.\nMy table schema is :-\nCREATE TABLE svo2( svo_id bigint NOT NULL DEFAULT \nnextval('svo_svo_id_seq'::regclass), doc_id integer, sentence_id \ninteger, clause_id integer, negation integer, subject \ncharactervarying(3000), verb character varying(3000), \"object\" \ncharacter varying(3000), preposition character varying(3000), \nsubject_type character varying(3000), object_type \ncharactervarying(3000), subject_attribute character varying(3000), \nobject_attribute character varying(3000), verb_attribute character \nvarying(3000), subject_concept character varying(100), object_concept \ncharacter varying(100), subject_sense character varying(100), \nobject_sense character varying(100), subject_chain character \nvarying(5000), object_chain character varying(5000), sub_type_id \ninteger, obj_type_id integer, CONSTRAINT pk_svo_id PRIMARY KEY \n(svo_id))WITH ( OIDS=FALSE);\n\n\n_*Fore.g*_\n\nCREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id, clause_id, \nsentence_id);\n\nor\n\nCREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id);\nCREATE INDEX idx_svo2_id_dummy1 ON svo2 USING btree (clause_id);\nCREATE INDEX idx_svo2_id_dummy2 ON svo2 USING btree (sentence_id);\n\nWhich is better if a query uses all three columns in join where clause.\n\n\n\nThanks & best regards,\nAdarsh Sharma\n\n\n\n\n\n\nDear all,\n\n I want to clear my doubts regarding creating several single or a\nmulti-column indexes.\nMy table schema is :-\nCREATE TABLE svo2(  svo_id bigint NOT NULL DEFAULT\nnextval('svo_svo_id_seq'::regclass),  doc_id integer,  sentence_id\ninteger,  clause_id integer,  negation integer,  subject\ncharactervarying(3000),  verb character varying(3000),  \"object\"\ncharacter varying(3000),  preposition character varying(3000), \nsubject_type character varying(3000),  object_type\ncharactervarying(3000),  subject_attribute character varying(3000), \nobject_attribute character varying(3000),  verb_attribute character\nvarying(3000),  subject_concept character varying(100), object_concept\ncharacter varying(100),  subject_sense character varying(100), \nobject_sense character varying(100),  subject_chain character\nvarying(5000),  object_chain character varying(5000),  sub_type_id\ninteger,  obj_type_id integer,  CONSTRAINT pk_svo_id PRIMARY KEY\n(svo_id))WITH (  OIDS=FALSE);\n\n\nFore.g \n\nCREATE INDEX idx_svo2_id_dummy  ON svo2  USING btree (doc_id,\nclause_id, sentence_id);\n\nor\n\nCREATE INDEX idx_svo2_id_dummy  ON svo2  USING btree (doc_id);\nCREATE INDEX idx_svo2_id_dummy1  ON svo2  USING btree (clause_id);\nCREATE INDEX idx_svo2_id_dummy2  ON svo2  USING btree (sentence_id);\n\nWhich is better if a query uses all three columns in join where clause.\n\n\n\nThanks & best regards,\nAdarsh Sharma", "msg_date": "Tue, 05 Apr 2011 15:56:52 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Which is better Index" }, { "msg_contents": "On Tue, Apr 5, 2011 at 3:56 PM, Adarsh Sharma <[email protected]>wrote:\n\n> Dear all,\n>\n> I want to clear my doubts regarding creating several single or a\n> multi-column indexes.\n> My table schema is :-\n> CREATE TABLE svo2( svo_id bigint NOT NULL DEFAULT\n> nextval('svo_svo_id_seq'::regclass), doc_id integer, sentence_id integer,\n> clause_id integer, negation integer, subject charactervarying(3000), verb\n> character varying(3000), \"object\" character varying(3000), preposition\n> 
character varying(3000), subject_type character varying(3000), object_type\n> charactervarying(3000), subject_attribute character varying(3000),\n> object_attribute character varying(3000), verb_attribute character\n> varying(3000), subject_concept character varying(100), object_concept\n> character varying(100), subject_sense character varying(100), object_sense\n> character varying(100), subject_chain character varying(5000),\n> object_chain character varying(5000), sub_type_id integer, obj_type_id\n> integer, CONSTRAINT pk_svo_id PRIMARY KEY (svo_id))WITH ( OIDS=FALSE);\n>\n>\n> *Fore.g*\n>\n> CREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id, clause_id,\n> sentence_id);\n>\n> or\n>\n> CREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id);\n> CREATE INDEX idx_svo2_id_dummy1 ON svo2 USING btree (clause_id);\n> CREATE INDEX idx_svo2_id_dummy2 ON svo2 USING btree (sentence_id);\n>\n> Which is better if a query uses all three columns in join where clause.\n>\n>\n>\n> Thanks & best regards,\n> Adarsh Sharma\n>\n>\nThats very difficult to tell as you have not shared the details of system,\nlike what is the other table,\nhow the joined table are related and so on.\nBasically we need to understand how the data is organized within table and\nacross schema or system.\n\nTo begin with, maybe below links could provide some insights:\n\nhttp://www.postgresql.org/docs/current/static/indexes-multicolumn.html\nhttp://www.postgresql.org/docs/current/static/indexes-bitmap-scans.html\n\n\n-- \nRegards,\nChetan Suttraway\nEnterpriseDB <http://www.enterprisedb.com/>, The Enterprise\nPostgreSQL<http://www.enterprisedb.com/>\n company.\n\nOn Tue, Apr 5, 2011 at 3:56 PM, Adarsh Sharma <[email protected]> wrote:\n\nDear all,\n\n I want to clear my doubts regarding creating several single or a\nmulti-column indexes.\nMy table schema is :-\nCREATE TABLE svo2(  svo_id bigint NOT NULL DEFAULT\nnextval('svo_svo_id_seq'::regclass),  doc_id integer,  sentence_id\ninteger,  clause_id integer,  negation integer,  subject\ncharactervarying(3000),  verb character varying(3000),  \"object\"\ncharacter varying(3000),  preposition character varying(3000), \nsubject_type character varying(3000),  object_type\ncharactervarying(3000),  subject_attribute character varying(3000), \nobject_attribute character varying(3000),  verb_attribute character\nvarying(3000),  subject_concept character varying(100), object_concept\ncharacter varying(100),  subject_sense character varying(100), \nobject_sense character varying(100),  subject_chain character\nvarying(5000),  object_chain character varying(5000),  sub_type_id\ninteger,  obj_type_id integer,  CONSTRAINT pk_svo_id PRIMARY KEY\n(svo_id))WITH (  OIDS=FALSE);\n\n\nFore.g \n\nCREATE INDEX idx_svo2_id_dummy  ON svo2  USING btree (doc_id,\nclause_id, sentence_id);\n\nor\n\nCREATE INDEX idx_svo2_id_dummy  ON svo2  USING btree (doc_id);\nCREATE INDEX idx_svo2_id_dummy1  ON svo2  USING btree (clause_id);\nCREATE INDEX idx_svo2_id_dummy2  ON svo2  USING btree (sentence_id);\n\nWhich is better if a query uses all three columns in join where clause.\n\n\n\nThanks & best regards,\nAdarsh Sharma\n\n\nThats very difficult to tell as you have not shared the details of system, like what is the other table, how the joined table are related and so on.Basically we need to understand how the data is organized within table and across schema or system.\nTo begin with, maybe below links could provide some 
insights:http://www.postgresql.org/docs/current/static/indexes-multicolumn.html\nhttp://www.postgresql.org/docs/current/static/indexes-bitmap-scans.html-- Regards,Chetan SuttrawayEnterpriseDB, The Enterprise PostgreSQL company.", "msg_date": "Tue, 5 Apr 2011 19:37:08 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which is better Index" }, { "msg_contents": "On 04/05/2011 06:26 AM, Adarsh Sharma wrote:\n> CREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id, \n> clause_id, sentence_id);\n>\n> or\n>\n> CREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id);\n> CREATE INDEX idx_svo2_id_dummy1 ON svo2 USING btree (clause_id);\n> CREATE INDEX idx_svo2_id_dummy2 ON svo2 USING btree (sentence_id);\n>\n> Which is better if a query uses all three columns in join where clause.\n\nImpossible to say. It's possible neither approach is best. If \nclause_id and sentence_id are not very selective, the optimal setup here \ncould easily be an index on only doc_id. Just index that, let the query \nexecutor throw out non-matching rows. Indexes are expensive to \nmaintain, and are not free to use in queries either.\n\nWhat you could do here is create all four of these indexes, try to \nsimulate your workload, and see which actually get used. Throw out the \nones that the optimizer doesn't use anyway. The odds are against you \npredicting what's going to happen accurately here. You might as well \naccept that, set things up to measure what happens instead, and use that \nas feedback on the design.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 06 Apr 2011 02:55:29 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which is better Index" } ]
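A minimal sketch of the measure-then-decide approach suggested in the thread above, applied to the svo2 table from the original post. The index names and the literal values in the WHERE clause are illustrative assumptions, not taken from the thread:

-- create both candidate layouts, then let the workload decide
CREATE INDEX idx_svo2_multi  ON svo2 USING btree (doc_id, clause_id, sentence_id);
CREATE INDEX idx_svo2_doc    ON svo2 USING btree (doc_id);
CREATE INDEX idx_svo2_clause ON svo2 USING btree (clause_id);
CREATE INDEX idx_svo2_sent   ON svo2 USING btree (sentence_id);
ANALYZE svo2;

-- run a representative query (values here are made up)
EXPLAIN ANALYZE
SELECT * FROM svo2
WHERE doc_id = 42 AND clause_id = 7 AND sentence_id = 3;

-- after exercising the workload, keep only the indexes that were actually used
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'svo2';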
[ { "msg_contents": "Would really appreciate someone taking a look at the query below.... \nThanks in advance!\n\n\nthis is on a linux box...\nLinux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 \n03:27:37 EST 2009 x86_64 x86_64 x86_64 GNU/Linux\n\nexplain analyze\nselect MIN(IV.STRTDATE), MAX(IV.ENDDATE)\nfrom GRAN_VER GV\nleft outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR \nINVS\nwhere IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and \nINVS.sensor_id='13'\n\n\n\"Aggregate (cost=736364.52..736364.53 rows=1 width=8) (actual \ntime=17532.930..17532.930 rows=1 loops=1)\"\n\" -> Hash Join (cost=690287.33..734679.77 rows=336949 width=8) \n(actual time=13791.593..17323.080 rows=924675 loops=1)\"\n\" Hash Cond: (invs.granule_id = gv.granule_id)\"\n\" -> Seq Scan on invsensor invs (cost=0.00..36189.41 \nrows=1288943 width=4) (actual time=0.297..735.375 rows=1277121 loops=1)\"\n\" Filter: (sensor_id = 13)\"\n\" -> Hash (cost=674401.52..674401.52 rows=1270865 width=16) \n(actual time=13787.698..13787.698 rows=1270750 loops=1)\"\n\" -> Hash Join (cost=513545.62..674401.52 rows=1270865 \nwidth=16) (actual time=1998.702..13105.578 rows=1270750 loops=1)\"\n\" Hash Cond: (gv.granule_id = iv.granule_id)\"\n\" -> Seq Scan on gran_ver gv (cost=0.00..75224.90 \nrows=4861490 width=4) (actual time=0.008..1034.885 rows=4867542 loops=1)\"\n\" -> Hash (cost=497659.81..497659.81 rows=1270865 \nwidth=12) (actual time=1968.918..1968.918 rows=1270750 loops=1)\"\n\" -> Bitmap Heap Scan on inventory iv \n(cost=24050.00..497659.81 rows=1270865 width=12) (actual \ntime=253.542..1387.957 rows=1270750 loops=1)\"\n\" Recheck Cond: (inv_id = 65)\"\n\" -> Bitmap Index Scan on inven_idx1 \n(cost=0.00..23732.28 rows=1270865 width=0) (actual time=214.364..214.364 \nrows=1270977 loops=1)\"\n\" Index Cond: (inv_id = 65)\"\n\"Total runtime: 17533.100 ms\"\n\nsome additional info.....\nthe table inventory is about 4481 MB and also has postgis types.\nthe table gran_ver is about 523 MB\nthe table INVSENSOR is about 217 MB\n\nthe server itself has 32G RAM with the following set in the postgres conf\nshared_buffers = 3GB\nwork_mem = 64MB\nmaintenance_work_mem = 512MB\nwal_buffers = 6MB\n\nlet me know if I've forgotten anything! thanks a bunch!!\n\nMaria Wilson\nNASA/Langley Research Center\nHampton, Virginia\[email protected]\n\n\n\n*\n*\n\n\n\n\n\n\n Would really appreciate someone taking a look at the query\n below....  
Thanks in advance!\n\n\n this is on a linux box...\n Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9\n 03:27:37 EST 2009 x86_64 x86_64 x86_64 GNU/Linux\n\n explain analyze\n select MIN(IV.STRTDATE), MAX(IV.ENDDATE) \n from GRAN_VER GV \n left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID,\n INVSENSOR INVS \n where IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and\n INVS.sensor_id='13'\n\n\n \"Aggregate  (cost=736364.52..736364.53 rows=1 width=8) (actual\n time=17532.930..17532.930 rows=1 loops=1)\"\n \"  ->  Hash Join  (cost=690287.33..734679.77 rows=336949 width=8)\n (actual time=13791.593..17323.080 rows=924675 loops=1)\"\n \"        Hash Cond: (invs.granule_id = gv.granule_id)\"\n \"        ->  Seq Scan on invsensor invs  (cost=0.00..36189.41\n rows=1288943 width=4) (actual time=0.297..735.375 rows=1277121\n loops=1)\"\n \"              Filter: (sensor_id = 13)\"\n \"        ->  Hash  (cost=674401.52..674401.52 rows=1270865\n width=16) (actual time=13787.698..13787.698 rows=1270750 loops=1)\"\n \"              ->  Hash Join  (cost=513545.62..674401.52\n rows=1270865 width=16) (actual time=1998.702..13105.578 rows=1270750\n loops=1)\"\n \"                    Hash Cond: (gv.granule_id = iv.granule_id)\"\n \"                    ->  Seq Scan on gran_ver gv \n (cost=0.00..75224.90 rows=4861490 width=4) (actual\n time=0.008..1034.885 rows=4867542 loops=1)\"\n \"                    ->  Hash  (cost=497659.81..497659.81\n rows=1270865 width=12) (actual time=1968.918..1968.918 rows=1270750\n loops=1)\"\n \"                          ->  Bitmap Heap Scan on inventory iv \n (cost=24050.00..497659.81 rows=1270865 width=12) (actual\n time=253.542..1387.957 rows=1270750 loops=1)\"\n \"                                Recheck Cond: (inv_id = 65)\"\n \"                                ->  Bitmap Index Scan on\n inven_idx1  (cost=0.00..23732.28 rows=1270865 width=0) (actual\n time=214.364..214.364 rows=1270977 loops=1)\"\n \"                                      Index Cond: (inv_id = 65)\"\n \"Total runtime: 17533.100 ms\"\n\n some additional info.....\n the table inventory is about 4481 MB and also has postgis types.\n the table gran_ver is about 523 MB\n the table INVSENSOR is about 217 MB\n\n the server itself has 32G RAM with the following set in the postgres\n conf\n shared_buffers = 3GB\n work_mem = 64MB                       \n maintenance_work_mem = 512MB        \n wal_buffers = 6MB\n\n let me know if I've forgotten anything!  thanks a bunch!!\n\n Maria Wilson\n NASA/Langley Research Center\n Hampton, Virginia\[email protected]", "msg_date": "Tue, 5 Apr 2011 15:25:46 -0400", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "help speeding up a query in postgres 8.4.5" }, { "msg_contents": "On 5 April 2011 21:25, Maria L. 
Wilson <[email protected]> wrote:\n\n> Would really appreciate someone taking a look at the query below....\n> Thanks in advance!\n>\n>\n> this is on a linux box...\n> Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37\n> EST 2009 x86_64 x86_64 x86_64 GNU/Linux\n>\n> explain analyze\n> select MIN(IV.STRTDATE), MAX(IV.ENDDATE)\n> from GRAN_VER GV\n> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR\n> INVS\n> where IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and\n> INVS.sensor_id='13'\n>\n>\n> \"Aggregate (cost=736364.52..736364.53 rows=1 width=8) (actual\n> time=17532.930..17532.930 rows=1 loops=1)\"\n> \" -> Hash Join (cost=690287.33..734679.77 rows=336949 width=8) (actual\n> time=13791.593..17323.080 rows=924675 loops=1)\"\n> \" Hash Cond: (invs.granule_id = gv.granule_id)\"\n> \" -> Seq Scan on invsensor invs (cost=0.00..36189.41 rows=1288943\n> width=4) (actual time=0.297..735.375 rows=1277121 loops=1)\"\n> \" Filter: (sensor_id = 13)\"\n> \" -> Hash (cost=674401.52..674401.52 rows=1270865 width=16)\n> (actual time=13787.698..13787.698 rows=1270750 loops=1)\"\n> \" -> Hash Join (cost=513545.62..674401.52 rows=1270865\n> width=16) (actual time=1998.702..13105.578 rows=1270750 loops=1)\"\n> \" Hash Cond: (gv.granule_id = iv.granule_id)\"\n> \" -> Seq Scan on gran_ver gv (cost=0.00..75224.90\n> rows=4861490 width=4) (actual time=0.008..1034.885 rows=4867542 loops=1)\"\n> \" -> Hash (cost=497659.81..497659.81 rows=1270865\n> width=12) (actual time=1968.918..1968.918 rows=1270750 loops=1)\"\n> \" -> Bitmap Heap Scan on inventory iv\n> (cost=24050.00..497659.81 rows=1270865 width=12) (actual\n> time=253.542..1387.957 rows=1270750 loops=1)\"\n> \" Recheck Cond: (inv_id = 65)\"\n> \" -> Bitmap Index Scan on inven_idx1\n> (cost=0.00..23732.28 rows=1270865 width=0) (actual time=214.364..214.364\n> rows=1270977 loops=1)\"\n> \" Index Cond: (inv_id = 65)\"\n> \"Total runtime: 17533.100 ms\"\n>\n> some additional info.....\n> the table inventory is about 4481 MB and also has postgis types.\n> the table gran_ver is about 523 MB\n> the table INVSENSOR is about 217 MB\n>\n> the server itself has 32G RAM with the following set in the postgres conf\n> shared_buffers = 3GB\n> work_mem = 64MB\n> maintenance_work_mem = 512MB\n> wal_buffers = 6MB\n>\n> let me know if I've forgotten anything! thanks a bunch!!\n>\n> Maria Wilson\n> NASA/Langley Research Center\n> Hampton, Virginia\n> [email protected]\n>\n>\n>\nHi,\ncould you show us indexes that you have on all tables from this query? Have\nyou tried running vacuum analyze on those tables? Do you have autovacuum\nactive?\n\nregards\nSzymon\n\nOn 5 April 2011 21:25, Maria L. Wilson <[email protected]> wrote:\n\n Would really appreciate someone taking a look at the query\n below....  
Thanks in advance!\n\n\n this is on a linux box...\n Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9\n 03:27:37 EST 2009 x86_64 x86_64 x86_64 GNU/Linux\n\n explain analyze\n select MIN(IV.STRTDATE), MAX(IV.ENDDATE) \n from GRAN_VER GV \n left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID,\n INVSENSOR INVS \n where IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and\n INVS.sensor_id='13'\n\n\n \"Aggregate  (cost=736364.52..736364.53 rows=1 width=8) (actual\n time=17532.930..17532.930 rows=1 loops=1)\"\n \"  ->  Hash Join  (cost=690287.33..734679.77 rows=336949 width=8)\n (actual time=13791.593..17323.080 rows=924675 loops=1)\"\n \"        Hash Cond: (invs.granule_id = gv.granule_id)\"\n \"        ->  Seq Scan on invsensor invs  (cost=0.00..36189.41\n rows=1288943 width=4) (actual time=0.297..735.375 rows=1277121\n loops=1)\"\n \"              Filter: (sensor_id = 13)\"\n \"        ->  Hash  (cost=674401.52..674401.52 rows=1270865\n width=16) (actual time=13787.698..13787.698 rows=1270750 loops=1)\"\n \"              ->  Hash Join  (cost=513545.62..674401.52\n rows=1270865 width=16) (actual time=1998.702..13105.578 rows=1270750\n loops=1)\"\n \"                    Hash Cond: (gv.granule_id = iv.granule_id)\"\n \"                    ->  Seq Scan on gran_ver gv \n (cost=0.00..75224.90 rows=4861490 width=4) (actual\n time=0.008..1034.885 rows=4867542 loops=1)\"\n \"                    ->  Hash  (cost=497659.81..497659.81\n rows=1270865 width=12) (actual time=1968.918..1968.918 rows=1270750\n loops=1)\"\n \"                          ->  Bitmap Heap Scan on inventory iv \n (cost=24050.00..497659.81 rows=1270865 width=12) (actual\n time=253.542..1387.957 rows=1270750 loops=1)\"\n \"                                Recheck Cond: (inv_id = 65)\"\n \"                                ->  Bitmap Index Scan on\n inven_idx1  (cost=0.00..23732.28 rows=1270865 width=0) (actual\n time=214.364..214.364 rows=1270977 loops=1)\"\n \"                                      Index Cond: (inv_id = 65)\"\n \"Total runtime: 17533.100 ms\"\n\n some additional info.....\n the table inventory is about 4481 MB and also has postgis types.\n the table gran_ver is about 523 MB\n the table INVSENSOR is about 217 MB\n\n the server itself has 32G RAM with the following set in the postgres\n conf\n shared_buffers = 3GB\n work_mem = 64MB                       \n maintenance_work_mem = 512MB        \n wal_buffers = 6MB\n\n let me know if I've forgotten anything!  thanks a bunch!!\n\n Maria Wilson\n NASA/Langley Research Center\n Hampton, Virginia\[email protected]\nHi,could you show us indexes that you have on all tables from this query? Have you tried running vacuum analyze on those tables? Do you have autovacuum active?\nregardsSzymon", "msg_date": "Wed, 6 Apr 2011 13:41:23 +0200", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "> some additional info.....\n> the table inventory is about 4481 MB and also has postgis types.\n> the table gran_ver is about 523 MB\n> the table INVSENSOR is about 217 MB\n>\n> the server itself has 32G RAM with the following set in the postgres conf\n> shared_buffers = 3GB\n> work_mem = 64MB\n> maintenance_work_mem = 512MB\n> wal_buffers = 6MB\n\nNot sure how to improve the query itself - it's rather simple and the\nexecution plan seems reasonable. 
You're dealing with a lot of data, so it\ntakes time to process.\n\nAnyway, I'd try to bump up the shared buffers a bit (the tables you've\nlisted have about 5.5 GB, so 3GB of shared buffers won't cover it). OTOH\nmost of the data will be in pagecache maintained by the kernel anyway.\n\nTry to increase the work_mem a bit, that might speed up the hash joins\n(the two hash joins consumed about 15s, the whole query took 17s). This\ndoes not require a restart, just do\n\nset work_mem = '128MB'\n\n(or 256MB) and then run the query in the same session. Let's see if that\nworks.\n\nregards\nTomas\n\n", "msg_date": "Wed, 6 Apr 2011 15:16:05 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "Autovacuum is not running - but regular vacuums are being done twice daily.\n\nindexes on inventory:\n\nCREATE INDEX inven_idx1\n ON inventory\n USING btree\n (inv_id);\n\nCREATE UNIQUE INDEX inven_idx2\n ON inventory\n USING btree\n (granule_id);\n\nindexes on gran_ver:\nCREATE UNIQUE INDEX granver_idx1\n ON gran_ver\n USING btree\n (granule_id);\n\nindexes on sensor\nCREATE INDEX invsnsr_idx2\n ON invsensor\n USING btree\n (sensor_id);\n\n\n\n\nOn 4/6/11 7:41 AM, Szymon Guz wrote:\n>\n>\n> On 5 April 2011 21:25, Maria L. Wilson <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Would really appreciate someone taking a look at the query\n> below.... Thanks in advance!\n>\n>\n> this is on a linux box...\n> Linux dsrvr201.larc.nasa.gov <http://dsrvr201.larc.nasa.gov>\n> 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37 EST 2009 x86_64\n> x86_64 x86_64 GNU/Linux\n>\n> explain analyze\n> select MIN(IV.STRTDATE), MAX(IV.ENDDATE)\n> from GRAN_VER GV\n> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID,\n> INVSENSOR INVS\n> where IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and\n> INVS.sensor_id='13'\n>\n>\n> \"Aggregate (cost=736364.52..736364.53 rows=1 width=8) (actual\n> time=17532.930..17532.930 rows=1 loops=1)\"\n> \" -> Hash Join (cost=690287.33..734679.77 rows=336949 width=8)\n> (actual time=13791.593..17323.080 rows=924675 loops=1)\"\n> \" Hash Cond: (invs.granule_id = gv.granule_id)\"\n> \" -> Seq Scan on invsensor invs (cost=0.00..36189.41\n> rows=1288943 width=4) (actual time=0.297..735.375 rows=1277121\n> loops=1)\"\n> \" Filter: (sensor_id = 13)\"\n> \" -> Hash (cost=674401.52..674401.52 rows=1270865\n> width=16) (actual time=13787.698..13787.698 rows=1270750 loops=1)\"\n> \" -> Hash Join (cost=513545.62..674401.52\n> rows=1270865 width=16) (actual time=1998.702..13105.578\n> rows=1270750 loops=1)\"\n> \" Hash Cond: (gv.granule_id = iv.granule_id)\"\n> \" -> Seq Scan on gran_ver gv \n> (cost=0.00..75224.90 rows=4861490 width=4) (actual\n> time=0.008..1034.885 rows=4867542 loops=1)\"\n> \" -> Hash (cost=497659.81..497659.81\n> rows=1270865 width=12) (actual time=1968.918..1968.918\n> rows=1270750 loops=1)\"\n> \" -> Bitmap Heap Scan on inventory iv \n> (cost=24050.00..497659.81 rows=1270865 width=12) (actual\n> time=253.542..1387.957 rows=1270750 loops=1)\"\n> \" Recheck Cond: (inv_id = 65)\"\n> \" -> Bitmap Index Scan on\n> inven_idx1 (cost=0.00..23732.28 rows=1270865 width=0) (actual\n> time=214.364..214.364 rows=1270977 loops=1)\"\n> \" Index Cond: (inv_id = 65)\"\n> \"Total runtime: 17533.100 ms\"\n>\n> some additional info.....\n> the table inventory is about 4481 MB and also has postgis types.\n> the table gran_ver is about 523 MB\n> the table INVSENSOR is about 217 MB\n>\n> 
the server itself has 32G RAM with the following set in the\n> postgres conf\n> shared_buffers = 3GB\n> work_mem = 64MB\n> maintenance_work_mem = 512MB\n> wal_buffers = 6MB\n>\n> let me know if I've forgotten anything! thanks a bunch!!\n>\n> Maria Wilson\n> NASA/Langley Research Center\n> Hampton, Virginia\n> [email protected] <mailto:[email protected]>\n>\n>\n>\n> Hi,\n> could you show us indexes that you have on all tables from this query? \n> Have you tried running vacuum analyze on those tables? Do you have \n> autovacuum active?\n>\n> regards\n> Szymon\n\n\n\n\n\n\n Autovacuum is not running - but regular vacuums are being done twice\n daily. \n\n indexes on inventory:\n\n CREATE INDEX inven_idx1\n   ON inventory\n   USING btree\n   (inv_id);\n\n CREATE UNIQUE INDEX inven_idx2\n   ON inventory\n   USING btree\n   (granule_id);\n\n indexes on gran_ver:\n CREATE UNIQUE INDEX granver_idx1\n   ON gran_ver\n   USING btree\n   (granule_id);\n\n indexes on sensor\n CREATE INDEX invsnsr_idx2\n   ON invsensor\n   USING btree\n   (sensor_id);\n\n\n\n\n On 4/6/11 7:41 AM, Szymon Guz wrote:\n \n\n\n\nOn 5 April 2011 21:25, Maria L. Wilson <[email protected]>\n wrote:\n\n Would really appreciate someone taking a look at the\n query below....  Thanks in advance!\n\n\n this is on a linux box...\n Linux dsrvr201.larc.nasa.gov\n 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37 EST 2009 x86_64\n x86_64 x86_64 GNU/Linux\n\n explain analyze\n select MIN(IV.STRTDATE), MAX(IV.ENDDATE) \n from GRAN_VER GV \n left outer join INVENTORY IV on GV.GRANULE_ID =\n IV.GRANULE_ID, INVSENSOR INVS \n where IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and\n INVS.sensor_id='13'\n\n\n \"Aggregate  (cost=736364.52..736364.53 rows=1 width=8)\n (actual time=17532.930..17532.930 rows=1 loops=1)\"\n \"  ->  Hash Join  (cost=690287.33..734679.77 rows=336949\n width=8) (actual time=13791.593..17323.080 rows=924675\n loops=1)\"\n \"        Hash Cond: (invs.granule_id = gv.granule_id)\"\n \"        ->  Seq Scan on invsensor invs \n (cost=0.00..36189.41 rows=1288943 width=4) (actual\n time=0.297..735.375 rows=1277121 loops=1)\"\n \"              Filter: (sensor_id = 13)\"\n \"        ->  Hash  (cost=674401.52..674401.52\n rows=1270865 width=16) (actual time=13787.698..13787.698\n rows=1270750 loops=1)\"\n \"              ->  Hash Join  (cost=513545.62..674401.52\n rows=1270865 width=16) (actual time=1998.702..13105.578\n rows=1270750 loops=1)\"\n \"                    Hash Cond: (gv.granule_id =\n iv.granule_id)\"\n \"                    ->  Seq Scan on gran_ver gv \n (cost=0.00..75224.90 rows=4861490 width=4) (actual\n time=0.008..1034.885 rows=4867542 loops=1)\"\n \"                    ->  Hash  (cost=497659.81..497659.81\n rows=1270865 width=12) (actual time=1968.918..1968.918\n rows=1270750 loops=1)\"\n \"                          ->  Bitmap Heap Scan on\n inventory iv  (cost=24050.00..497659.81 rows=1270865\n width=12) (actual time=253.542..1387.957 rows=1270750\n loops=1)\"\n \"                                Recheck Cond: (inv_id =\n 65)\"\n \"                                ->  Bitmap Index Scan on\n inven_idx1  (cost=0.00..23732.28 rows=1270865 width=0)\n (actual time=214.364..214.364 rows=1270977 loops=1)\"\n \"                                      Index Cond: (inv_id =\n 65)\"\n \"Total runtime: 17533.100 ms\"\n\n some additional info.....\n the table inventory is about 4481 MB and also has postgis\n types.\n the table gran_ver is about 523 MB\n the table INVSENSOR is about 217 MB\n\n the server itself has 32G RAM 
with the following set in the\n postgres conf\n shared_buffers = 3GB\n work_mem = 64MB                       \n maintenance_work_mem = 512MB        \n wal_buffers = 6MB\n\n let me know if I've forgotten anything!  thanks a bunch!!\n\n Maria Wilson\n NASA/Langley Research Center\n Hampton, Virginia\[email protected]\n\n\n\n\n\n\nHi,\ncould you show us indexes that you have on all tables from\n this query? Have you tried running vacuum analyze on those\n tables? Do you have autovacuum active?\n\n\nregards\nSzymon", "msg_date": "Wed, 6 Apr 2011 09:33:26 -0400", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "thanks for the reply, Tomas. I'll test bumping up work_mem and see how \nthat helps.....\n\nthanks again, Maria Wilson\n\nOn 4/6/11 9:16 AM, [email protected] wrote:\n>> some additional info.....\n>> the table inventory is about 4481 MB and also has postgis types.\n>> the table gran_ver is about 523 MB\n>> the table INVSENSOR is about 217 MB\n>>\n>> the server itself has 32G RAM with the following set in the postgres conf\n>> shared_buffers = 3GB\n>> work_mem = 64MB\n>> maintenance_work_mem = 512MB\n>> wal_buffers = 6MB\n> Not sure how to improve the query itself - it's rather simple and the\n> execution plan seems reasonable. You're dealing with a lot of data, so it\n> takes time to process.\n>\n> Anyway, I'd try to bump up the shared buffers a bit (the tables you've\n> listed have about 5.5 GB, so 3GB of shared buffers won't cover it). OTOH\n> most of the data will be in pagecache maintained by the kernel anyway.\n>\n> Try to increase the work_mem a bit, that might speed up the hash joins\n> (the two hash joins consumed about 15s, the whole query took 17s). This\n> does not require a restart, just do\n>\n> set work_mem = '128MB'\n>\n> (or 256MB) and then run the query in the same session. Let's see if that\n> works.\n>\n> regards\n> Tomas\n>\n", "msg_date": "Wed, 6 Apr 2011 09:36:58 -0400", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "\"Maria L. Wilson\" <[email protected]> wrote:\n \n> Autovacuum is not running - but regular vacuums are being done\n> twice daily.\n \nIs the ANALYZE keyword used on those VACUUM runs? What version of\nPostgreSQL is this. If it's enough to need fsm settings, do you run\nwith the VERBOSE option and check the end of the output to make sure\nthey are set high enough?\n \n-Kevin\n", "msg_date": "Wed, 06 Apr 2011 10:33:38 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "yep - we use analyze and check the output. It's version 8.4.5 so no fsm \nissues.\n\nthanks, Maria\n\nOn 4/6/11 11:33 AM, Kevin Grittner wrote:\n> \"Maria L. Wilson\"<[email protected]> wrote:\n>\n>> Autovacuum is not running - but regular vacuums are being done\n>> twice daily.\n>\n> Is the ANALYZE keyword used on those VACUUM runs? What version of\n> PostgreSQL is this. If it's enough to need fsm settings, do you run\n> with the VERBOSE option and check the end of the output to make sure\n> they are set high enough?\n>\n> -Kevin\n", "msg_date": "Wed, 6 Apr 2011 11:37:28 -0400", "msg_from": "\"Maria L. 
Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "Dne 6.4.2011 17:33, Kevin Grittner napsal(a):\n> \"Maria L. Wilson\" <[email protected]> wrote:\n> \n>> Autovacuum is not running - but regular vacuums are being done\n>> twice daily.\n> \n> Is the ANALYZE keyword used on those VACUUM runs? What version of\n> PostgreSQL is this. If it's enough to need fsm settings, do you run\n> with the VERBOSE option and check the end of the output to make sure\n> they are set high enough?\n\nWhy do you think the problem is related to stale stats? It seems to me\nfairly accurate - see the explain analyze in the first post). All the\nnodes are less than 1% off (which is great), except for the last hash\njoin that returns 336949 rows instead of 924675 expected rows.\n\nMaybe I'm missing something, but the stats seem to be quite accurate and\nthere is just very little dead tuples I guess.\n\nregards\nTomas\n", "msg_date": "Wed, 06 Apr 2011 20:13:19 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "On Tue, Apr 5, 2011 at 3:25 PM, Maria L. Wilson\n<[email protected]> wrote:\n> Would really appreciate someone taking a look at the query below....  Thanks\n> in advance!\n>\n>\n> this is on a linux box...\n> Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37\n> EST 2009 x86_64 x86_64 x86_64 GNU/Linux\n>\n> explain analyze\n> select MIN(IV.STRTDATE), MAX(IV.ENDDATE)\n> from GRAN_VER GV\n> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR\n> INVS\n> where IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and\n> INVS.sensor_id='13'\n>\n>\n> \"Aggregate  (cost=736364.52..736364.53 rows=1 width=8) (actual\n> time=17532.930..17532.930 rows=1 loops=1)\"\n> \"  ->  Hash Join  (cost=690287.33..734679.77 rows=336949 width=8) (actual\n> time=13791.593..17323.080 rows=924675 loops=1)\"\n> \"        Hash Cond: (invs.granule_id = gv.granule_id)\"\n> \"        ->  Seq Scan on invsensor invs  (cost=0.00..36189.41 rows=1288943\n> width=4) (actual time=0.297..735.375 rows=1277121 loops=1)\"\n> \"              Filter: (sensor_id = 13)\"\n> \"        ->  Hash  (cost=674401.52..674401.52 rows=1270865 width=16) (actual\n> time=13787.698..13787.698 rows=1270750 loops=1)\"\n> \"              ->  Hash Join  (cost=513545.62..674401.52 rows=1270865\n> width=16) (actual time=1998.702..13105.578 rows=1270750 loops=1)\"\n> \"                    Hash Cond: (gv.granule_id = iv.granule_id)\"\n> \"                    ->  Seq Scan on gran_ver gv  (cost=0.00..75224.90\n> rows=4861490 width=4) (actual time=0.008..1034.885 rows=4867542 loops=1)\"\n> \"                    ->  Hash  (cost=497659.81..497659.81 rows=1270865\n> width=12) (actual time=1968.918..1968.918 rows=1270750 loops=1)\"\n> \"                          ->  Bitmap Heap Scan on inventory iv\n> (cost=24050.00..497659.81 rows=1270865 width=12) (actual\n> time=253.542..1387.957 rows=1270750 loops=1)\"\n> \"                                Recheck Cond: (inv_id = 65)\"\n> \"                                ->  Bitmap Index Scan on inven_idx1\n> (cost=0.00..23732.28 rows=1270865 width=0) (actual time=214.364..214.364\n> rows=1270977 loops=1)\"\n> \"                                      Index Cond: (inv_id = 65)\"\n> \"Total runtime: 17533.100 ms\"\n>\n> some additional info.....\n> the table inventory is about 4481 MB and also has postgis types.\n> 
the table gran_ver is about 523 MB\n> the table INVSENSOR is about 217 MB\n>\n> the server itself has 32G RAM with the following set in the postgres conf\n> shared_buffers = 3GB\n> work_mem = 64MB\n> maintenance_work_mem = 512MB\n> wal_buffers = 6MB\n>\n> let me know if I've forgotten anything!  thanks a bunch!!\n\nLate response here, but...\n\nIs there an index on invsensor (sensor_id, granule_id)? If not, that\nmight be something to try. If so, you might want to try to figure out\nwhy it's not being used.\n\nLikewise, is there an index on gran_ver (granule_id)?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 10 May 2011 13:38:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "thanks for taking a look at this.... and it's never too late!!\n\nI've tried bumping up work_mem and did not see any improvements -\nAll the indexes do exist that you asked.... see below....\nAny other ideas?\n\nCREATE INDEX invsnsr_idx1\n ON invsensor\n USING btree\n (granule_id);\n\nCREATE INDEX invsnsr_idx2\n ON invsensor\n USING btree\n (sensor_id);\n\nCREATE UNIQUE INDEX granver_idx1\n ON gran_ver\n USING btree\n (granule_id);\n\nthanks for the look -\nMaria Wilson\nNASA/Langley Research Center\nHampton, Virginia 23681\[email protected]\n\nOn 5/10/11 1:38 PM, Robert Haas wrote:\n> On Tue, Apr 5, 2011 at 3:25 PM, Maria L. Wilson\n> <[email protected]> wrote:\n>> Would really appreciate someone taking a look at the query below.... Thanks\n>> in advance!\n>>\n>>\n>> this is on a linux box...\n>> Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37\n>> EST 2009 x86_64 x86_64 x86_64 GNU/Linux\n>>\n>> explain analyze\n>> select MIN(IV.STRTDATE), MAX(IV.ENDDATE)\n>> from GRAN_VER GV\n>> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR\n>> INVS\n>> where IV.INV_ID='65' and GV.GRANULE_ID = INVS.granule_id and\n>> INVS.sensor_id='13'\n>>\n>>\n>> \"Aggregate (cost=736364.52..736364.53 rows=1 width=8) (actual\n>> time=17532.930..17532.930 rows=1 loops=1)\"\n>> \" -> Hash Join (cost=690287.33..734679.77 rows=336949 width=8) (actual\n>> time=13791.593..17323.080 rows=924675 loops=1)\"\n>> \" Hash Cond: (invs.granule_id = gv.granule_id)\"\n>> \" -> Seq Scan on invsensor invs (cost=0.00..36189.41 rows=1288943\n>> width=4) (actual time=0.297..735.375 rows=1277121 loops=1)\"\n>> \" Filter: (sensor_id = 13)\"\n>> \" -> Hash (cost=674401.52..674401.52 rows=1270865 width=16) (actual\n>> time=13787.698..13787.698 rows=1270750 loops=1)\"\n>> \" -> Hash Join (cost=513545.62..674401.52 rows=1270865\n>> width=16) (actual time=1998.702..13105.578 rows=1270750 loops=1)\"\n>> \" Hash Cond: (gv.granule_id = iv.granule_id)\"\n>> \" -> Seq Scan on gran_ver gv (cost=0.00..75224.90\n>> rows=4861490 width=4) (actual time=0.008..1034.885 rows=4867542 loops=1)\"\n>> \" -> Hash (cost=497659.81..497659.81 rows=1270865\n>> width=12) (actual time=1968.918..1968.918 rows=1270750 loops=1)\"\n>> \" -> Bitmap Heap Scan on inventory iv\n>> (cost=24050.00..497659.81 rows=1270865 width=12) (actual\n>> time=253.542..1387.957 rows=1270750 loops=1)\"\n>> \" Recheck Cond: (inv_id = 65)\"\n>> \" -> Bitmap Index Scan on inven_idx1\n>> (cost=0.00..23732.28 rows=1270865 width=0) (actual time=214.364..214.364\n>> rows=1270977 loops=1)\"\n>> \" Index Cond: (inv_id = 65)\"\n>> \"Total runtime: 17533.100 ms\"\n>>\n>> some additional 
info.....\n>> the table inventory is about 4481 MB and also has postgis types.\n>> the table gran_ver is about 523 MB\n>> the table INVSENSOR is about 217 MB\n>>\n>> the server itself has 32G RAM with the following set in the postgres conf\n>> shared_buffers = 3GB\n>> work_mem = 64MB\n>> maintenance_work_mem = 512MB\n>> wal_buffers = 6MB\n>>\n>> let me know if I've forgotten anything! thanks a bunch!!\n> Late response here, but...\n>\n> Is there an index on invsensor (sensor_id, granule_id)? If not, that\n> might be something to try. If so, you might want to try to figure out\n> why it's not being used.\n>\n> Likewise, is there an index on gran_ver (granule_id)?\n>\n", "msg_date": "Tue, 10 May 2011 13:47:44 -0400", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "[ woops, accidentally replied off-list, trying again ]\n\nOn Tue, May 10, 2011 at 1:47 PM, Maria L. Wilson\n<[email protected]> wrote:\n> thanks for taking a look at this.... and it's never too late!!\n>\n> I've tried bumping up work_mem and did not see any improvements -\n> All the indexes do exist that you asked.... see below....\n> Any other ideas?\n>\n> CREATE INDEX invsnsr_idx1\n> ON invsensor\n> USING btree\n> (granule_id);\n>\n> CREATE INDEX invsnsr_idx2\n> ON invsensor\n> USING btree\n> (sensor_id);\n\nWhat about a composite index on both columns?\n\n> CREATE UNIQUE INDEX granver_idx1\n> ON gran_ver\n> USING btree\n> (granule_id);\n\nIt's a bit surprising to me that this isn't getting used. How big are\nthese tables, and how much memory do you have, and what values are you\nusing for seq_page_cost/random_page_cost/effective_cache_size?\n\n...Robert\n", "msg_date": "Tue, 10 May 2011 13:59:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "haven't tested a composite index\n\ninvsensor is 2,003,980 rows and 219MB\ngranver is 5,138,730 rows and 556MB\nthe machine has 32G memory\nseq_page_cost, random_page_costs & effective_cache_size are set to the \ndefaults (1,4, and 128MB) - looks like they could be bumped up.\nGot any recommendations?\n\nMaria\n\nOn 5/10/11 1:59 PM, Robert Haas wrote:\n> [ woops, accidentally replied off-list, trying again ]\n>\n> On Tue, May 10, 2011 at 1:47 PM, Maria L. Wilson\n> <[email protected]> wrote:\n>> thanks for taking a look at this.... and it's never too late!!\n>>\n>> I've tried bumping up work_mem and did not see any improvements -\n>> All the indexes do exist that you asked.... see below....\n>> Any other ideas?\n>>\n>> CREATE INDEX invsnsr_idx1\n>> ON invsensor\n>> USING btree\n>> (granule_id);\n>>\n>> CREATE INDEX invsnsr_idx2\n>> ON invsensor\n>> USING btree\n>> (sensor_id);\n> What about a composite index on both columns?\n>\n>> CREATE UNIQUE INDEX granver_idx1\n>> ON gran_ver\n>> USING btree\n>> (granule_id);\n> It's a bit surprising to me that this isn't getting used. How big are\n> these tables, and how much memory do you have, and what values are you\n> using for seq_page_cost/random_page_cost/effective_cache_size?\n>\n> ...Robert\n", "msg_date": "Tue, 10 May 2011 14:20:09 -0400", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "On Tue, Apr 5, 2011 at 1:25 PM, Maria L. 
Wilson\n<[email protected]> wrote:\n\nThis bit:\n\n> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR\n> INVS\n\nhas both an explicit and an implicit join. This can constrain join\nre-ordering in the planner. Can you change it to explicit joins only\nand see if that helps?\n", "msg_date": "Wed, 11 May 2011 01:09:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "On Tue, May 10, 2011 at 2:20 PM, Maria L. Wilson\n<[email protected]> wrote:\n> haven't tested a composite index\n>\n> invsensor is 2,003,980 rows and 219MB\n> granver is 5,138,730 rows and 556MB\n> the machine has 32G memory\n> seq_page_cost, random_page_costs & effective_cache_size are set to the\n> defaults (1,4, and 128MB) - looks like they could be bumped up.\n> Got any recommendations?\n\nYeah, I'd try setting effective_cache_size=24GB, seq_page_cost=0.1,\nrandom_page_cost=0.1 and see if you get a better plan. If possible,\ncan you post the EXPLAIN ANALYZE output with those settings for us?\n\nIf that doesn't cause the planner to use the indexes, then I'd be\nsuspicious that there is something wrong with those indexes that makes\nthe planner think it *can't* use them. It would be helpful to see the\nEXPLAIN output after SET enable_seqscan=off.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 11 May 2011 08:01:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Tue, Apr 5, 2011 at 1:25 PM, Maria L. Wilson\n> <[email protected]> wrote:\n> This bit:\n\n>> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR\n>> INVS\n\n> has both an explicit and an implicit join. This can constrain join\n> re-ordering in the planner. Can you change it to explicit joins only\n> and see if that helps?\n\nSince there's a WHERE constraint on IV, the outer join is going to be\nstrength-reduced to an inner join (note the lack of any outer joins in\nthe plan). So that isn't going to matter.\n\nAFAICS this is just plain an expensive query. The two filter\nconstraints are not very selective, each passing more than a million\nrows up to the join. You can't expect to join millions of rows in no\ntime flat. About all you can do is try to bump up work_mem enough that\nthe join won't use temp files --- for something like this, that's likely\nto require a setting of hundreds of MB. I'm not sure whether Maria is\nusing a version in which EXPLAIN ANALYZE will show whether a hash join\nwas batched, but that's what I'd be looking at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 May 2011 10:31:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help speeding up a query in postgres 8.4.5 " } ]
[ { "msg_contents": "I'm using 9.1dev.\n\nCould someone explain the following behaviour?\n\n-- create a test table\nCREATE TABLE indextest (id serial, stuff text);\n\n-- insert loads of values with intermittent sets of less common values\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\nINSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\nINSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n\n-- create regular index\nCREATE INDEX indextest_stuff ON indextest(stuff);\n\n-- update table stats\nANALYZE indextest;\n\npostgres=# explain analyze select * from indextest where stuff = 'bark';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using indextest_stuff on indextest (cost=0.00..485.09\nrows=9076 width=9) (actual time=0.142..3.533 rows=8000 loops=1)\n Index Cond: (stuff = 'bark'::text)\n Total runtime: 4.248 ms\n(3 rows)\n\nThis is very fast. Now if I drop the index and add a partial index\nwith the conditions being tested.\n\nDROP INDEX indextest_stuff;\n\nCREATE INDEX indextest_stuff ON indextest(stuff) WHERE stuff = 'bark';\n\npostgres=# explain analyze select * from indextest where stuff = 'bark';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on indextest (cost=0.00..143386.48 rows=5606 width=9)\n(actual time=164.321..1299.794 rows=8000 loops=1)\n Filter: (stuff = 'bark'::text)\n Total runtime: 1300.267 ms\n(3 rows)\n\nThe index doesn't get used. 
There's probably a logical explanation,\nwhich is what I'm curious about.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company1\n", "msg_date": "Tue, 5 Apr 2011 23:35:29 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Partial index slower than regular index" }, { "msg_contents": "On Tue, Apr 05, 2011 at 11:35:29PM +0100, Thom Brown wrote:\n> I'm using 9.1dev.\n> \n> Could someone explain the following behaviour?\n> \n> -- create a test table\n> CREATE TABLE indextest (id serial, stuff text);\n> \n> -- insert loads of values with intermittent sets of less common values\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> INSERT INTO indextest (stuff) SELECT 'meow' FROM generate_series (1,1000000);\n> INSERT INTO indextest (stuff) SELECT 'bark' FROM generate_series (1,1000);\n> \n> -- create regular index\n> CREATE INDEX indextest_stuff ON indextest(stuff);\n> \n> -- update table stats\n> ANALYZE indextest;\n> \n> postgres=# explain analyze select * from indextest where stuff = 'bark';\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using indextest_stuff on indextest (cost=0.00..485.09\n> rows=9076 width=9) (actual time=0.142..3.533 rows=8000 loops=1)\n> Index Cond: (stuff = 'bark'::text)\n> Total runtime: 4.248 ms\n> (3 rows)\n> \n> This is very fast. Now if I drop the index and add a partial index\n> with the conditions being tested.\n> \n> DROP INDEX indextest_stuff;\n> \n> CREATE INDEX indextest_stuff ON indextest(stuff) WHERE stuff = 'bark';\n> \n> postgres=# explain analyze select * from indextest where stuff = 'bark';\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n> Seq Scan on indextest (cost=0.00..143386.48 rows=5606 width=9)\n> (actual time=164.321..1299.794 rows=8000 loops=1)\n> Filter: (stuff = 'bark'::text)\n> Total runtime: 1300.267 ms\n> (3 rows)\n> \n> The index doesn't get used. There's probably a logical explanation,\n> which is what I'm curious about.\n> \n\nThe stats seem off. 
Are you certain that an analyze has run?\n\nCheers,\nKen\n", "msg_date": "Tue, 5 Apr 2011 18:02:51 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index slower than regular index" }, { "msg_contents": "On Tue, Apr 5, 2011 at 4:35 PM, Thom Brown <[email protected]> wrote:\n> I'm using 9.1dev.\nSNIP\n\n> DROP INDEX indextest_stuff;\n>\n> CREATE INDEX indextest_stuff ON indextest(stuff) WHERE stuff = 'bark';\n>\n> postgres=# explain analyze select * from indextest where stuff = 'bark';\n>                                                    QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n>  Seq Scan on indextest  (cost=0.00..143386.48 rows=5606 width=9)\n> (actual time=164.321..1299.794 rows=8000 loops=1)\n>   Filter: (stuff = 'bark'::text)\n>  Total runtime: 1300.267 ms\n> (3 rows)\n>\n> The index doesn't get used.  There's probably a logical explanation,\n> which is what I'm curious about.\n\nWorks fine for me:\n\nexplain analyze select * from indextest where stuff = 'bark';\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using indextest_stuff on indextest (cost=0.00..837.01\nrows=13347 width=9) (actual time=0.226..6.073 rows=8000 loops=1)\n Index Cond: (stuff = 'bark'::text)\n Total runtime: 7.527 ms\n\nEven with a random_page_cost = 4 it works. Running 8.3.13 btw.\n", "msg_date": "Tue, 5 Apr 2011 17:31:56 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index slower than regular index" }, { "msg_contents": "On 06/04/11 11:31, Scott Marlowe wrote:\n> On Tue, Apr 5, 2011 at 4:35 PM, Thom Brown<[email protected]> wrote:\n>> I'm using 9.1dev.\n> SNIP\n>\n>> DROP INDEX indextest_stuff;\n>>\n>> CREATE INDEX indextest_stuff ON indextest(stuff) WHERE stuff = 'bark';\n>>\n>> postgres=# explain analyze select * from indextest where stuff = 'bark';\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------\n>> Seq Scan on indextest (cost=0.00..143386.48 rows=5606 width=9)\n>> (actual time=164.321..1299.794 rows=8000 loops=1)\n>> Filter: (stuff = 'bark'::text)\n>> Total runtime: 1300.267 ms\n>> (3 rows)\n>>\n>> The index doesn't get used. There's probably a logical explanation,\n>> which is what I'm curious about.\n> Works fine for me:\n>\n> explain analyze select * from indextest where stuff = 'bark';\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using indextest_stuff on indextest (cost=0.00..837.01\n> rows=13347 width=9) (actual time=0.226..6.073 rows=8000 loops=1)\n> Index Cond: (stuff = 'bark'::text)\n> Total runtime: 7.527 ms\n>\n> Even with a random_page_cost = 4 it works. Running 8.3.13 btw.\n>\n\nI reproduce what Thom sees - using 9.1dev with default config settings. 
\nEven cranking up effective_cache_size does not encourage the partial \nindex to be used.\n\nMark\n\n", "msg_date": "Wed, 06 Apr 2011 11:40:43 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index slower than regular index" }, { "msg_contents": "On 06/04/11 11:40, Mark Kirkwood wrote:\n> On 06/04/11 11:31, Scott Marlowe wrote:\n>> On Tue, Apr 5, 2011 at 4:35 PM, Thom Brown<[email protected]> wrote:\n>>> I'm using 9.1dev.\n>> SNIP\n>>\n>>> DROP INDEX indextest_stuff;\n>>>\n>>> CREATE INDEX indextest_stuff ON indextest(stuff) WHERE stuff = 'bark';\n>>>\n>>> postgres=# explain analyze select * from indextest where stuff = \n>>> 'bark';\n>>> QUERY PLAN\n>>> ------------------------------------------------------------------------------------------------------------------- \n>>>\n>>> Seq Scan on indextest (cost=0.00..143386.48 rows=5606 width=9)\n>>> (actual time=164.321..1299.794 rows=8000 loops=1)\n>>> Filter: (stuff = 'bark'::text)\n>>> Total runtime: 1300.267 ms\n>>> (3 rows)\n>>>\n>>> The index doesn't get used. There's probably a logical explanation,\n>>> which is what I'm curious about.\n>> Works fine for me:\n>>\n>> explain analyze select * from indextest where stuff = 'bark';\n>> QUERY\n>> PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------ \n>>\n>> Index Scan using indextest_stuff on indextest (cost=0.00..837.01\n>> rows=13347 width=9) (actual time=0.226..6.073 rows=8000 loops=1)\n>> Index Cond: (stuff = 'bark'::text)\n>> Total runtime: 7.527 ms\n>>\n>> Even with a random_page_cost = 4 it works. Running 8.3.13 btw.\n>>\n>\n> I reproduce what Thom sees - using 9.1dev with default config \n> settings. Even cranking up effective_cache_size does not encourage the \n> partial index to be used.\n>\n>\n\n\nHowever trying with 9.0 gives me the (expected) same 8.3 behaviour:\n\n\ntest=# CREATE INDEX indextest_stuff ON indextest(stuff) WHERE stuff = \n'bark';\nCREATE INDEX\n\ntest=# explain analyze select * from indextest where stuff = 'bark';\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Index Scan using indextest_stuff on indextest (cost=0.00..284.20 \nrows=5873 width=9)\n (actual \ntime=0.276..9.621 rows=8000 loops=1)\n Index Cond: (stuff = 'bark'::text)\n Total runtime: 16.621 ms\n(3 rows)\n\n\nregards\n\nMark\n\n\n", "msg_date": "Wed, 06 Apr 2011 14:03:53 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index slower than regular index" }, { "msg_contents": "Thom Brown <[email protected]> writes:\n> The index doesn't get used. There's probably a logical explanation,\n> which is what I'm curious about.\n\nEr ... it's broken?\n\nIt looks like the index predicate expression isn't getting the right\ncollation assigned, so predtest.c decides the query doesn't imply the\nindex's predicate. Too tired to look into exactly why right now, but\nit's clearly bound up in all the recent collation changes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Apr 2011 00:44:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index slower than regular index " }, { "msg_contents": "On 6 April 2011 05:44, Tom Lane <[email protected]> wrote:\n> Thom Brown <[email protected]> writes:\n>> The index doesn't get used.  
There's probably a logical explanation,\n>> which is what I'm curious about.\n>\n> Er ... it's broken?\n>\n> It looks like the index predicate expression isn't getting the right\n> collation assigned, so predtest.c decides the query doesn't imply the\n> index's predicate.  Too tired to look into exactly why right now, but\n> it's clearly bound up in all the recent collation changes.\n\nTesting it again with very explicit collations, it still has issues:\n\nCREATE INDEX indextest_stuff ON indextest(stuff COLLATE \"en_GB.UTF-8\")\nWHERE stuff COLLATE \"en_GB.UTF-8\" = 'bark' COLLATE \"en_GB.UTF-8\";\n\npostgres=# explain analyze select * from indextest where stuff collate\n\"en_GB.UTF-8\" = 'bark' collate \"en_GB.UTF-8\";\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on indextest (cost=0.00..143387.00 rows=8312 width=9)\n(actual time=163.759..1308.316 rows=8000 loops=1)\n Filter: ((stuff)::text = 'bark'::text COLLATE \"en_GB.UTF-8\")\n Total runtime: 1308.821 ms\n(3 rows)\n\nBut I'm possibly missing the point here.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 6 Apr 2011 09:15:25 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partial index slower than regular index" }, { "msg_contents": "On 6 April 2011 00:02, Kenneth Marshall <[email protected]> wrote:\n> The stats seem off. Are you certain that an analyze has run?\n>\n> Cheers,\n> Ken\n>\n\nYes, an ANALYZE was definitely run against the table.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 6 Apr 2011 09:33:12 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partial index slower than regular index" }, { "msg_contents": "Thom Brown <[email protected]> writes:\n> On 6 April 2011 05:44, Tom Lane <[email protected]> wrote:\n>> It looks like the index predicate expression isn't getting the right\n>> collation assigned, so predtest.c decides the query doesn't imply the\n>> index's predicate. �Too tired to look into exactly why right now, but\n>> it's clearly bound up in all the recent collation changes.\n\n> Testing it again with very explicit collations, it still has issues:\n\nYeah, any sort of collation-sensitive operator in an index WHERE clause\nwas just plain broken. Fixed now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Apr 2011 02:37:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index slower than regular index " }, { "msg_contents": "On 7 April 2011 07:37, Tom Lane <[email protected]> wrote:\n> Thom Brown <[email protected]> writes:\n>> On 6 April 2011 05:44, Tom Lane <[email protected]> wrote:\n>>> It looks like the index predicate expression isn't getting the right\n>>> collation assigned, so predtest.c decides the query doesn't imply the\n>>> index's predicate.  Too tired to look into exactly why right now, but\n>>> it's clearly bound up in all the recent collation changes.\n>\n>> Testing it again with very explicit collations, it still has issues:\n>\n> Yeah, any sort of collation-sensitive operator in an index WHERE clause\n> was just plain broken.  
Fixed now.\n\nThanks Tom.\n\nYou said in the commit message that an initdb isn't required, but is\nthere anything else since 20th March that would cause cluster files to\nbreak compatibility? I'm now getting the following message:\n\ntoucan:postgresql thom$ pg_ctl start\nserver starting\ntoucan:postgresql thom$ FATAL: database files are incompatible with server\nDETAIL: The database cluster was initialized with CATALOG_VERSION_NO\n201103201, but the server was compiled with CATALOG_VERSION_NO\n201104051.\nHINT: It looks like you need to initdb.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 7 Apr 2011 08:10:14 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partial index slower than regular index" }, { "msg_contents": "On 7 April 2011 08:10, Thom Brown <[email protected]> wrote:\n> On 7 April 2011 07:37, Tom Lane <[email protected]> wrote:\n>> Thom Brown <[email protected]> writes:\n>>> On 6 April 2011 05:44, Tom Lane <[email protected]> wrote:\n>>>> It looks like the index predicate expression isn't getting the right\n>>>> collation assigned, so predtest.c decides the query doesn't imply the\n>>>> index's predicate.  Too tired to look into exactly why right now, but\n>>>> it's clearly bound up in all the recent collation changes.\n>>\n>>> Testing it again with very explicit collations, it still has issues:\n>>\n>> Yeah, any sort of collation-sensitive operator in an index WHERE clause\n>> was just plain broken.  Fixed now.\n>\n> Thanks Tom.\n>\n> You said in the commit message that an initdb isn't required, but is\n> there anything else since 20th March that would cause cluster files to\n> break compatibility?  I'm now getting the following message:\n>\n> toucan:postgresql thom$ pg_ctl start\n> server starting\n> toucan:postgresql thom$ FATAL:  database files are incompatible with server\n> DETAIL:  The database cluster was initialized with CATALOG_VERSION_NO\n> 201103201, but the server was compiled with CATALOG_VERSION_NO\n> 201104051.\n> HINT:  It looks like you need to initdb.\n\nNevermind. This was caused by \"Add casts from int4 and int8 to numeric.\".\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 7 Apr 2011 09:01:58 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partial index slower than regular index" } ]
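A small diagnostic sketch related to the thread above. When a partial index is unexpectedly ignored, one way to distinguish a costing decision from a predicate-matching failure is to disable sequential scans for the session and re-run the query; this is a general technique assumed here, not something prescribed in the thread:

SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM indextest WHERE stuff = 'bark';
RESET enable_seqscan;

If the plan still shows a sequential scan with enable_seqscan off (as it would have on the affected 9.1dev builds), the planner has concluded it cannot use the index at all, which points at predicate proving rather than cost estimation.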
[ { "msg_contents": "Hello,\n\nI saw some recommendations from people on the net not to use background fsck when running PostgreSQL \non FreeBSD. As I recall, these opinions were just thoughts of people which they shared with the \ncommunity, following their bad experience caused by using background fsck. So, not coming any deeper \nwith underatanding why not, I use that as a clear recommendation for myself and keep background fsck \nturned off on all my machines, regardless how much faster a server could come up after a crash.\n\nBut waiting so much time (like now) during foreground fsck of a large data filesystem after unclean \nshutdown, makes me to come to this group to ask whether I really need to avoid background fsck on a \nPostgreSQL machine? Could I hear your opinions?\n\nThanks\n\nIrek.\n", "msg_date": "Thu, 07 Apr 2011 00:33:35 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "Background fsck" }, { "msg_contents": "On Wed, Apr 6, 2011 at 4:33 PM, Ireneusz Pluta <[email protected]> wrote:\n> Hello,\n>\n> I saw some recommendations from people on the net not to use background fsck\n> when running PostgreSQL on FreeBSD. As I recall, these opinions were just\n> thoughts of people which they shared with the community, following their bad\n> experience caused by using background fsck. So, not coming any deeper with\n> underatanding why not, I use that as a clear recommendation for myself and\n> keep background fsck turned off on all my machines, regardless how much\n> faster a server could come up after a crash.\n>\n> But waiting so much time (like now) during foreground fsck of a large data\n> filesystem after unclean shutdown, makes me to come to this group to ask\n> whether I really need to avoid background fsck on a PostgreSQL machine?\n> Could I hear your opinions?\n\nShouldn't a journaling file system just come back up almost immediately?\n", "msg_date": "Wed, 6 Apr 2011 16:48:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "But waiting so much time (like now) during foreground fsck of a large data\n>> filesystem after unclean shutdown, makes me to come to this group to ask\n>> whether I really need to avoid background fsck on a PostgreSQL machine?\n>> Could I hear your opinions?\n> Shouldn't a journaling file system just come back up almost immediately?\n>\nit's ufs2 with softupdates in my case. That's why I am asking abot background fsck.\n", "msg_date": "Thu, 07 Apr 2011 00:59:08 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background fsck" }, { "msg_contents": "On 07/04/2011 00:48, Scott Marlowe wrote:\n> On Wed, Apr 6, 2011 at 4:33 PM, Ireneusz Pluta<[email protected]> wrote:\n>> Hello,\n>>\n>> I saw some recommendations from people on the net not to use background fsck\n>> when running PostgreSQL on FreeBSD. As I recall, these opinions were just\n>> thoughts of people which they shared with the community, following their bad\n>> experience caused by using background fsck. 
So, not coming any deeper with\n>> underatanding why not, I use that as a clear recommendation for myself and\n>> keep background fsck turned off on all my machines, regardless how much\n>> faster a server could come up after a crash.\n>>\n>> But waiting so much time (like now) during foreground fsck of a large data\n>> filesystem after unclean shutdown, makes me to come to this group to ask\n>> whether I really need to avoid background fsck on a PostgreSQL machine?\n>> Could I hear your opinions?\n\nAFAIK, the reason why background fsck has been discouraged when used \nwith databases is because it uses disk bandwidth which may be needed by \nthe application. If you are not IO saturated, then there is no \nparticular reason why you should avoid it.\n\n> Shouldn't a journaling file system just come back up almost immediately?\n\nIt's a tradeoff; UFS does not use journalling (at least not likely in \nthe version the OP is talking about) but it uses \"soft updates\". It \nbrings most of the benefits of journalling, including \"instant up\" after \na crash without the double-write overheads (for comparison, see some \nPostgreSQL benchmarks showing that unjournaled ext2 can be faster than \njournaled ext3, since PostgreSQL has its own form of journaling - the \nWAL). The downside is that fsck needs to be run occasionally to cleanup \nnon-critical dangling references on the file system - thus the \n\"background fsck\" mode in FreeBSD.\n\n\n", "msg_date": "Thu, 07 Apr 2011 15:31:50 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Στις Thursday 07 April 2011 16:31:50 ο/η Ivan Voras έγραψε:\n> On 07/04/2011 00:48, Scott Marlowe wrote:\n> > On Wed, Apr 6, 2011 at 4:33 PM, Ireneusz Pluta<[email protected]> wrote:\n> >> Hello,\n> >>\n> >> I saw some recommendations from people on the net not to use background fsck\n> >> when running PostgreSQL on FreeBSD. As I recall, these opinions were just\n> >> thoughts of people which they shared with the community, following their bad\n> >> experience caused by using background fsck. So, not coming any deeper with\n> >> underatanding why not, I use that as a clear recommendation for myself and\n> >> keep background fsck turned off on all my machines, regardless how much\n> >> faster a server could come up after a crash.\n> >>\n> >> But waiting so much time (like now) during foreground fsck of a large data\n> >> filesystem after unclean shutdown, makes me to come to this group to ask\n> >> whether I really need to avoid background fsck on a PostgreSQL machine?\n> >> Could I hear your opinions?\n> \n> AFAIK, the reason why background fsck has been discouraged when used \n> with databases is because it uses disk bandwidth which may be needed by \n> the application. If you are not IO saturated, then there is no \n> particular reason why you should avoid it.\n> \n> > Shouldn't a journaling file system just come back up almost immediately?\n> \n> It's a tradeoff; UFS does not use journalling (at least not likely in \n> the version the OP is talking about) but it uses \"soft updates\". It \n> brings most of the benefits of journalling, including \"instant up\" after \n> a crash without the double-write overheads (for comparison, see some \n> PostgreSQL benchmarks showing that unjournaled ext2 can be faster than \n> journaled ext3, since PostgreSQL has its own form of journaling - the \n> WAL). 
The downside is that fsck needs to be run occasionally to cleanup \n> non-critical dangling references on the file system - thus the \n> \"background fsck\" mode in FreeBSD.\n> \n\nI agree with Ivan. In the case of background fsck ,there is absolutely no reason to \npostpone using postgresql, more than doing the same for any other service in the system.\n\nIn anyway, having FreeBSD to fsck, (background or not) should not happen. And the problem\nbecomes bigger when cheap SATA drives will cheat about their write cache being flushed to the disk.\nSo in the common case with cheap hardware, it is wise to have a UPS connected and being monitored\nby the system.\n\n-- \nAchilleas Mantzios\n", "msg_date": "Thu, 7 Apr 2011 16:01:02 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "On 04/06/2011 06:33 PM, Ireneusz Pluta wrote:\n> I saw some recommendations from people on the net not to use \n> background fsck when running PostgreSQL on FreeBSD. As I recall, these \n> opinions were just thoughts of people which they shared with the \n> community, following their bad experience caused by using background fsck.\n\nPresumably you're talking about reports like these two:\n\nhttp://blog.e-shell.org/266\nhttp://lists.freebsd.org/pipermail/freebsd-current/2007-July/074773.html\n\n> But waiting so much time (like now) during foreground fsck of a large \n> data filesystem after unclean shutdown, makes me to come to this group \n> to ask whether I really need to avoid background fsck on a PostgreSQL \n> machine?\n\nThe soft update code used in FreeBSD makes sure that there's no damage \nto the filesystem that PostgreSQL can't recover from. Once the WAL is \nreplayed after a crash, the database is consistent. The main purpose of \nthe background fsck is to find \"orphaned\" space, things that the \nfilesystem incorrectly remembers the state of in regards to whether it \nwas allocated and used. In theory, there's no reason that can't happen \nin the background, concurrent with normal database activity.\n\nIn practice, background fsck is such an infrequently used piece of code \nthat it's developed a bit of a reputation for being buggier than \naverage. It's really hard to test it, filesystem code is complicated, \nand the sort of inconsistent data you get after a hard crash is often \nreally surprising. I wouldn't be too concerned about the database \nintegrity, but there is a small risk that background fsck will run into \nsomething unexpected and panic. And that's a problem you're much less \nlikely to hit using the more stable regular fsck code; thus the \nrecommendations by some to avoid it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 07 Apr 2011 12:16:54 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Achilleas Mantzios wrote:\n>\n> In anyway, having FreeBSD to fsck, (background or not) should not happen. And the problem\n> becomes bigger when cheap SATA drives will cheat about their write cache being flushed to the disk.\n> So in the common case with cheap hardware, it is wise to have a UPS connected and being monitored\n> by the system.\n>\n\nIt's not lack of UPS. Power issues are taken care of here. 
It's a buggy 3ware controller which hangs \nthe machine ocassionally and the only way to have it come back is to power cycle, hard reset is not \nenough.\n\n", "msg_date": "Fri, 08 Apr 2011 07:55:51 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Στις Friday 08 April 2011 08:55:51 ο/η Ireneusz Pluta έγραψε:\n> Achilleas Mantzios wrote:\n> >\n> > In anyway, having FreeBSD to fsck, (background or not) should not happen. And the problem\n> > becomes bigger when cheap SATA drives will cheat about their write cache being flushed to the disk.\n> > So in the common case with cheap hardware, it is wise to have a UPS connected and being monitored\n> > by the system.\n> >\n> \n> It's not lack of UPS. Power issues are taken care of here. It's a buggy 3ware controller which hangs \n> the machine ocassionally and the only way to have it come back is to power cycle, hard reset is not \n> enough.\n\nWhat has happened to me (as Greg mentioned) is that repeatedly interrupted background fscks (having the system\ncrash while background fsck was executing) might result in a seriously damaged fs.\nAdd to this, the possible overhead by rebuilding software raid (gmirror) at the same time,\nand the situation becomes more complicated.\n\nSo its better to replace/fix/remove this buggy controller, before anything else.\n\n> \n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Fri, 8 Apr 2011 08:19:42 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "On Fri, Apr 8, 2011 at 12:19 AM, Achilleas Mantzios\n<[email protected]> wrote:\n> Στις Friday 08 April 2011 08:55:51 ο/η Ireneusz Pluta έγραψε:\n>> Achilleas Mantzios wrote:\n>> >\n>> > In anyway, having FreeBSD to fsck, (background or not) should not happen. And the problem\n>> > becomes bigger when cheap SATA drives will cheat about their write cache being flushed to the disk.\n>> > So in the common case with cheap hardware, it is wise to have a UPS connected and being monitored\n>> > by the system.\n>> >\n>>\n>> It's not lack of UPS. Power issues are taken care of here. It's a buggy 3ware controller which hangs\n>> the machine ocassionally and the only way to have it come back is to power cycle, hard reset is not\n>> enough.\n>\n> What has happened to me (as Greg mentioned) is that repeatedly interrupted background fscks (having the system\n> crash while background fsck was executing) might result in a seriously damaged fs.\n> Add to this, the possible overhead by rebuilding software raid (gmirror) at the same time,\n> and the situation becomes more complicated.\n>\n> So its better to replace/fix/remove this buggy controller, before anything else.\n\nIf I may ask, how often does it crash? And have you tried updating\nthe firmware of the controller and / or the driver in the OS?\n", "msg_date": "Fri, 8 Apr 2011 00:22:11 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Greg Smith wrote:\n> The soft update code used in FreeBSD makes sure that there's no damage to the filesystem that \n> PostgreSQL can't recover from. Once the WAL is replayed after a crash, the database is \n> consistent. The main purpose of the background fsck is to find \"orphaned\" space, things that the \n> filesystem incorrectly remembers the state of in regards to whether it was allocated and used. 
In \n> theory, there's no reason that can't happen in the background, concurrent with normal database \n> activity.\n>\n> In practice, background fsck is such an infrequently used piece of code that it's developed a bit \n> of a reputation for being buggier than average. It's really hard to test it, filesystem code is \n> complicated, and the sort of inconsistent data you get after a hard crash is often really \n> surprising. I wouldn't be too concerned about the database integrity, but there is a small risk \n> that background fsck will run into something unexpected and panic. And that's a problem you're \n> much less likely to hit using the more stable regular fsck code; thus the recommendations by some \n> to avoid it.\n>\n\nThank you all for your responses.\n\nGreg, given your opinion, and these few raised issues found on the net, I think I better stay with \nbackground fsck disabled.\n\nWhat I was primarily concerned about, was long time waiting in front of console, looking at lazy \nfsck messages and nervously confirming that disk LEDs are still blinking. It's even harder with \nremote KVM, where LED's view is not available. But my personal comfort is not a priority, anyway, so \nI let foreground fsck doing its job for as much time as it needs.\n\nAs I said in my another response, the problem initially comes from the machine hanging and having to \nbe manually power cycled. There is already a significant downtinme before the recycle has a chance \nto happen. So yet another fourty minutes of fsck does not matter too much from the point of view of \nservice availability.\n\nfsck runtime duration could be shortened if I used smaller inode density for the filesystem. I think \nthat makes much sense for a filesystem fully decicated to a postgres data cluster, specifically if I \nhave not so many but large tables, which I rather do.\n\nThe system in question has:\n\ndf -hi | grep -E 'base|ifree'\nFilesystem Size Used Avail Capacity iused ifree %iused Mounted on\n/dev/da1p3 3.0T 1.7T 1.0T 63% 485k 392M 0% /pg/base\n(will I ever have even tens of millions of tables?)\n\nI reserved less inodes in a newer, bigger system:\nFilesystem Size Used Avail Capacity iused ifree %iused Mounted on\n/dev/mfid0p8 12T 4.8T 6.0T 45% 217k 49M 0% /pg/base\n\nor even less in yet newer one:\nFilesystem Size Used Avail Capacity iused ifree %iused Mounted on\n/dev/mfid0p1 12T 3.6T 7.4T 33% 202k 3.4M 6% /pg/base\n(ups, maybe too aggressive here?)\n\nWhen I forced a power drop on these two other systems, to check how they survive, fsck duration on \nthem was substantially less.\n\nIn the inode density context, let me ask you yet another question. Does tuning it in this way have \nany other, good or bad, significant impact on system performance?\n\n\nIrek.\n\n\n", "msg_date": "Fri, 08 Apr 2011 11:34:53 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background fsck" }, { "msg_contents": "On 08/04/2011 07:55, Ireneusz Pluta wrote:\n> Achilleas Mantzios wrote:\n>>\n>> In anyway, having FreeBSD to fsck, (background or not) should not\n>> happen. And the problem\n>> becomes bigger when cheap SATA drives will cheat about their write\n>> cache being flushed to the disk.\n>> So in the common case with cheap hardware, it is wise to have a UPS\n>> connected and being monitored\n>> by the system.\n>>\n>\n> It's not lack of UPS. Power issues are taken care of here. 
It's a buggy\n> 3ware controller which hangs the machine ocassionally and the only way\n> to have it come back is to power cycle, hard reset is not enough.\n\nSo just to summarize your position: you think it is ok to run a database \non a buggy disk controller but are afraid of problems with normal OS \nutilities? :) You are of course free to do whatever you want but it \nseems odd :)\n\n\n", "msg_date": "Fri, 08 Apr 2011 12:21:50 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Scott Marlowe wrote:\n> If I may ask, how often does it crash? And have you tried updating\n> the firmware of the controller and / or the driver in the OS?\n>\nIt happens once per two or three months, or so, taking the average. The firmware is beta as of \nJanuary this year, advised to use by their technical support.\n\nKinda off topic here, but as asked, take a look at the details:\n\n/c0 Driver Version = 3.60.04.006\n/c0 Model = 9650SE-16ML\n/c0 Available Memory = 224MB\n/c0 Firmware Version = FE9X 4.10.00.016\n/c0 Bios Version = BE9X 4.08.00.002\n/c0 Boot Loader Version = BL9X 3.08.00.001\n\nThere is a newer beta firmware available from LSI support page, but changelog does not indicate \nanything which might be related to this problem.\nOS is 6.2-RELEASE FreeBSD\nThe driver also should be the newest for this platform.\n\nWhat's more, this is already a new controller. It replaced the previous one because of exactly the \nsame persisting problem. I think tech support people not knowing a solution just buy some time for \nthem and say \"flash this beta firmware maybe it helps\" or \"replace your hardware\".\n\nThe controller always hangs with the following:\n\nSend AEN (code, time): 0031h, 04/06/2011 21:56:45\nSynchronize host/controller time\n(EC:0x31, SK=0x00, ASC=0x00, ASCQ=0x00, SEV=04, Type=0x71)\n\nAssert:0 from Command Task\nFile:cacheSegMgr.cpp Line:290\n\nand this is usually at the time of IO peaks, when dumps get transferred to another system.\n\nMy general plan for now is to migrate all services from this machine to the new ones and refresh it \ncompletely for less critical services. But it is not a task for just a few days so the failures have \ntheir chances to happen. While bearing this, I wanted to check if I could ease my life a little with \nbackground checks.\n\n\n", "msg_date": "Fri, 08 Apr 2011 13:52:03 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Scott Marlowe wrote:\n> If I may ask, how often does it crash? And have you tried updating\n> the firmware of the controller and / or the driver in the OS?\n>\nIt happens once per two or three months, or so, taking the average. The firmware is beta as of \nJanuary this year, advised to use by their technical support.\n\nKinda off topic here, but as asked, take a look at the details:\n\n/c0 Driver Version = 3.60.04.006\n/c0 Model = 9650SE-16ML\n/c0 Available Memory = 224MB\n/c0 Firmware Version = FE9X 4.10.00.016\n/c0 Bios Version = BE9X 4.08.00.002\n/c0 Boot Loader Version = BL9X 3.08.00.001\n\nThere is a newer beta firmware available from LSI support page, but changelog does not indicate \nanything which might be related to this problem.\nOS is 6.2-RELEASE FreeBSD\nThe driver also should be the newest for this platform.\n\nWhat's more, this is already a new controller. It replaced the previous one because of exactly the \nsame persisting problem. 
I think tech support people not knowing a solution just buy some time for \nthem and say \"flash this beta firmware maybe it helps\" or \"replace your hardware\".\n\nThe controller always hangs with the following:\n\nSend AEN (code, time): 0031h, 04/06/2011 21:56:45\nSynchronize host/controller time\n(EC:0x31, SK=0x00, ASC=0x00, ASCQ=0x00, SEV=04, Type=0x71)\n\nAssert:0 from Command Task\nFile:cacheSegMgr.cpp Line:290\n\nand this is usually at the time of IO peaks, when dumps get transferred to another system.\n\nMy general plan for now is to migrate all services from this machine to the new ones and refresh it \ncompletely for use with less critical services. But it is not a task for just a few days so the \nfailures have their chances to happen. While bearing this, I wanted to check if I could ease my life \na little with background checks.\n\n\n", "msg_date": "Fri, 08 Apr 2011 13:53:58 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Στις Friday 08 April 2011 14:53:58 ο/η Ireneusz Pluta έγραψε:\n> \n> My general plan for now is to migrate all services from this machine to the new ones and refresh it \n> completely for use with less critical services. But it is not a task for just a few days so the \n\nThat's a pain. Migrating from 7.1 to 8.2 was a pain for me as well.\nBut OTOH, you should upgrade to FreeBSD 8.2 since it is a production system.\nImagine your 3ware card was ok, but the driver has the problem.\nHow are you gonna show up in the FreeBSD-* mailing list when you are still on 6.2?\n\nBTW, when you make the final transition to 8.2, DO NOT upgrade in place, make a new system\nand migrate the data. Or just upgrade system in place but pkg_deinstall all your ports before the upgrade.\nportupgrade will not make it through.\n\n> failures have their chances to happen. While bearing this, I wanted to check if I could ease my life \n> a little with background checks.\n> \n> \n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Fri, 8 Apr 2011 14:14:04 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Friday, April 8, 2011, 1:52:03 PM you wrote:\n\n> Scott Marlowe wrote:\n>> If I may ask, how often does it crash? And have you tried updating\n>> the firmware of the controller and / or the driver in the OS?\n>>\n> It happens once per two or three months, or so, taking the average. The firmware is beta as of\n> January this year, advised to use by their technical support.\n\nDo you run any software to periodically check the array status?\n\nOr are there any other regular tasks involving 'tw_cli'-calls?\n\nI had this effect while trying to examine the SMART-status of the attached \ndrives on a 9690-8E, leading to spurious controller resets due to timeouts, \nalso under high load.\n\nDisabling the task solved the problem, and no further resets occured.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Fri, 8 Apr 2011 16:32:44 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" }, { "msg_contents": "Achilleas Mantzios wrote:\n> How are you gonna show up in the FreeBSD-* mailing list when you are still on 6.2?\n\nPsst! - I came just here. 
Don't tell them.\n\n", "msg_date": "Fri, 08 Apr 2011 18:12:31 +0200", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background fsck" }, { "msg_contents": "\n> What's more, this is already a new controller. It replaced the previous \n> one because of exactly the same persisting problem. I think tech support \n> people not knowing a solution just buy some time for them and say \"flash \n> this beta firmware maybe it helps\" or \"replace your hardware\".\n\nWe had a problem like this on a server a few years ago on the job... The \nmachine randomly crashed once a month. XFS coped alright until, one day, \nit threw the towel, and the poor maintenance guys needed to run xfsrepair. \nNeedless to say, the machine crashed again while xfsrepair was running \nconcurrently on all filesystems. All filesystems were then completely \ntrashed... That convinced the boss maybe something was wrong and a new box \nwas rushed in... Then a few tens of terabytes of backup restoration ... \nzzzzzz ....\n\nIt turned out it was a faulty SCSI cable.\n", "msg_date": "Mon, 18 Apr 2011 23:21:56 +0200", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background fsck" } ]
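The settings touched on in this thread are ordinary FreeBSD administration knobs rather than anything PostgreSQL-specific. The sketch below shows where they live; the device name and mount point are reused from the examples above, and the inode density figure is only an illustration of trading fewer inodes for a shorter foreground fsck on a filesystem holding a small number of large PostgreSQL files.

# /etc/rc.conf: always run a full foreground fsck after an unclean shutdown
background_fsck="NO"

# create a UFS2 filesystem with soft updates (-U) and a lower inode density
# (-i is bytes of data space per inode); fewer inodes means a shorter fsck
newfs -U -i 1048576 /dev/da1p3

# afterwards, check how many inodes were actually reserved
df -hi /pg/base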
[ { "msg_contents": "http%3A%2F%2Fwww%2Eproductionsoundmixer%2Eorg%2Fimages%2Famw%2Ephp\n", "msg_date": "Fri, 8 Apr 2011 16:08:18 +0200", "msg_from": "Whatever Deep <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "Database Test Suite\nH o w are you all ??\nI am new in this group.\nI have windows 7 as OS and i installed postgresql 9.0 i want \na to do some tests so i need bench mark to test some workloads?\ni need the same as Database Test Suite \nhttp://sourceforge.net/projects/osdldbt/\nbut for windows can any one help me??\n \nRad..\nDatabase Test Suite\nH o w are you all ??\nI am new in this group.\nI have windows 7 as OS and i installed postgresql 9.0 i want \na to do some tests so i need bench mark to test some workloads?\ni need the same as Database Test Suite\nhttp://sourceforge.net/projects/osdldbt/\nbut for windows can any one help me??\n \nRad..", "msg_date": "Fri, 8 Apr 2011 07:44:01 -0700 (PDT)", "msg_from": "Radhya sahal <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql benchmark" } ]
[ { "msg_contents": "Hi,\n\nI am trying to tune a query that is taking too much time on a large\ndataset (postgres 8.3).\n\n \n\nSELECT DISTINCT\n\n role_user.project_id AS projectId,\n\n sfuser.username AS adminUsername,\n\n sfuser.full_name AS adminFullName\n\nFROM\n\n role_operation role_operation,\n\n role role,\n\n sfuser sfuser,\n\n role_user role_user\n\nWHERE\n\n role_operation.role_id=role.id\n\n AND role.id=role_user.role_id\n\n AND role_user.user_id=sfuser.id\n\n AND role_operation.object_type_id='SfMain.Project'\n\n AND role_operation.operation_category='admin'\n\n AND role_operation.operation_name='admin'\n\nORDER BY\n\nadminFullName ASC\n\n \n\n \n\nIt has the following query plan:\n\nQUERY PLAN \n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------------------------------------------\n\nUnique (cost=1218.57..1221.26 rows=269 width=35) (actual\ntime=16700.332..17212.849 rows=30136 loops=1)\n\n -> Sort (cost=1218.57..1219.24 rows=269 width=35) (actual\ntime=16700.306..16885.972 rows=41737 loops=1)\n\n Sort Key: sfuser.full_name, role_user.project_id,\nsfuser.username\n\n Sort Method: quicksort Memory: 4812kB\n\n -> Nested Loop (cost=0.00..1207.71 rows=269 width=35) (actual\ntime=71.173..15788.798 rows=41737 loops=1)\n\n -> Nested Loop (cost=0.00..1118.22 rows=269 width=18)\n(actual time=65.550..12440.383 rows=41737 loops=1)\n\n -> Nested Loop (cost=0.00..256.91 rows=41\nwidth=18) (actual time=19.312..7150.925 rows=6108 loops=1)\n\n -> Index Scan using role_oper_obj_oper on\nrole_operation (cost=0.00..85.15 rows=41 width=9) (actual\ntime=19.196..2561.765 rows=6108 loops=1)\n\n Index Cond: (((object_type_id)::text =\n'SfMain.Project'::text) AND ((operation_category)::text = 'admin'::text)\nAND ((operation_name)::text = 'admin'::text))\n\n -> Index Scan using role_pk on role\n(cost=0.00..4.18 rows=1 width=9) (actual time=0.727..0.732 rows=1\nloops=6108)\n\n Index Cond: ((role.id)::text =\n(role_operation.role_id)::text)\n\n -> Index Scan using role_user_proj_idx on\nrole_user (cost=0.00..20.84 rows=13 width=27) (actual time=0.301..0.795\nrows=7 loops=6108)\n\n Index Cond: ((role_user.role_id)::text =\n(role_operation.role_id)::text)\n\n -> Index Scan using sfuser_pk on sfuser\n(cost=0.00..0.32 rows=1 width=35) (actual time=0.056..0.062 rows=1\nloops=41737)\n\n Index Cond: ((sfuser.id)::text =\n(role_user.user_id)::text)\n\nTotal runtime: 17343.185 ms\n\n(16 rows)\n\n \n\n \n\nI have tried adding an index on role_operation.role_id but it didn't\nseem to help or changing the query to:\n\nSELECT\n role_user.project_id AS projectId,\n sfuser.username AS adminUsername,\n sfuser.full_name AS adminFullName\nFROM\n sfuser sfuser,\n role_user role_user\nWHERE\n role_user.role_id in (select role_operation.role_id from\nrole_operation where role_operation.object_type_id=\n'SfMain.Project'\n AND role_operation.operation_category='admin'\n AND role_operation.operation_name='admin') AND\nrole_user.user_id=sfuser.id\n \nORDER BY\n adminFullName ASC\n \nNone of this seemed to improve the performance.\n \nDoes anyone have a suggestion?\n\n \n\nThanks a lot,\n\nAnne\n\n\nHi,I am trying to tune a query that is taking too much time on a large dataset (postgres 8.3). 
Thanks a lot,Anne", "msg_date": "Fri, 8 Apr 2011 18:29:42 -0700", "msg_from": "\"Anne Rosset\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query postgres 8.3" }, { "msg_contents": "> Hi,\n>\n> I am trying to tune a query that is taking too much time on a large\n> dataset (postgres 8.3).\n>\n\nHi, run ANALYZE on the tables used in the query - the stats are very off,\nso the db chooses a really bad execution plan.\n\nTomas\n\n", "msg_date": "Sat, 9 Apr 2011 12:35:54 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Slow query postgres 8.3" }, { "msg_contents": "Hi Thomas,\r\nHere is the plan after explain. \r\nQUERY PLAN \r\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Unique (cost=1330.27..1333.24 rows=297 width=35) (actual time=4011.861..4526.583 rows=30136 loops=1)\r\n -> Sort (cost=1330.27..1331.01 rows=297 width=35) (actual time=4011.828..4198.006 rows=41737 loops=1)\r\n Sort Key: sfuser.full_name, role_user.project_id, sfuser.username\r\n Sort Method: quicksort Memory: 4812kB\r\n -> Nested Loop (cost=0.00..1318.07 rows=297 width=35) (actual time=0.622..3107.994 rows=41737 loops=1)\r\n -> Nested Loop (cost=0.00..1219.26 rows=297 width=18) (actual time=0.426..1212.175 rows=41737 loops=1)\r\n -> Nested Loop (cost=0.00..282.11 rows=45 width=18) (actual time=0.325..371.295 rows=6108 loops=1)\r\n -> Index Scan using role_oper_obj_oper on role_operation (cost=0.00..93.20 rows=45 width=9) (actual time=0.236..71.291 rows=6108 loops=1)\r\n Index Cond: (((object_type_id)::text = 'SfMain.Project'::text) AND ((operation_category)::text = 'admin'::text) AND ((operation_name)::text = 'admin'::text))\r\n -> Index Scan using role_pk on role (cost=0.00..4.19 rows=1 width=9) (actual time=0.025..0.030 rows=1 loops=6108)\r\n Index Cond: ((role.id)::text = (role_operation.role_id)::text)\r\n -> Index Scan using role_user_proj_idx on role_user (cost=0.00..20.66 rows=13 width=27) (actual time=0.025..0.066 rows=7 loops=6108)\r\n Index Cond: ((role_user.role_id)::text = (role_operation.role_id)::text)\r\n -> Index Scan using sfuser_pk on sfuser (cost=0.00..0.32 rows=1 width=35) (actual time=0.022..0.027 rows=1 loops=41737)\r\n Index Cond: ((sfuser.id)::text = (role_user.user_id)::text)\r\n Total runtime: 4657.488 ms\r\n(16 rows)\r\n\r\nIs there anything that can be done. 
For instance for the 1s in the index scan on sfuser?\r\nThanks,\r\nAnne\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] \r\nSent: Saturday, April 09, 2011 3:36 AM\r\nTo: Anne Rosset\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Slow query postgres 8.3\r\n\r\n> Hi,\r\n>\r\n> I am trying to tune a query that is taking too much time on a large \r\n> dataset (postgres 8.3).\r\n>\r\n\r\nHi, run ANALYZE on the tables used in the query - the stats are very off, so the db chooses a really bad execution plan.\r\n\r\nTomas\r\n\r\n", "msg_date": "Mon, 11 Apr 2011 09:07:45 -0700", "msg_from": "\"Anne Rosset\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query postgres 8.3" }, { "msg_contents": "\"Anne Rosset\" <[email protected]> wrote:\n \n> -> Index Scan using role_oper_obj_oper\n> on role_operation (cost=0.00..93.20 rows=45 width=9) (actual\n> time=0.236..71.291 rows=6108 loops=1)\n> Index Cond:\n> (((object_type_id)::text = 'SfMain.Project'::text) AND\n> ((operation_category)::text = 'admin'::text) AND\n> ((operation_name)::text = 'admin'::text))\n \nThis looks like another case where there is a correlation among\nmultiple values used for selection. The optimizer assumes, for\nexample, that category = 'admin' will be true no more often for rows\nwith operation_name = 'admin' than for other values of\noperation_name. There has been much talk lately about how to make\nit smarter about that, but right now there's no general solution,\nand workarounds can be tricky.\n \nIn more recent versions you could probably work around this with a\nCommon Table Expression (CTE) (using a WITH clause). In 8.3 the\nbest idea which comes immediately to mind is to select from the\nrole_operation table into a temporary table using whichever of those\nthree criteria is most selective, and then join that temporary table\ninto the rest of the query. Maybe someone else can think of\nsomething better.\n \n-Kevin\n", "msg_date": "Mon, 11 Apr 2011 11:59:43 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query postgres 8.3" }, { "msg_contents": "I actually implemented a statistical system for measuring these kinds\nof correlations.\n\nIt's complex, but it might be adaptable to pgsql. Furthermore, one of\nthe latest projects of mine was to replace the purely statistical\napproach with SVCs.\nToo bad I won't be able to devote any time to that project before september.\n\nOn Mon, Apr 11, 2011 at 6:59 PM, Kevin Grittner\n<[email protected]> wrote:\n> There has been much talk lately about how to make\n> it smarter about that, but right now there's no general solution,\n> and workarounds can be tricky.\n", "msg_date": "Tue, 12 Apr 2011 09:33:50 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query postgres 8.3" }, { "msg_contents": "Dne 12.4.2011 09:33, Claudio Freire napsal(a):\n> I actually implemented a statistical system for measuring these kinds\n> of correlations.\n> \n> It's complex, but it might be adaptable to pgsql. Furthermore, one of\n> the latest projects of mine was to replace the purely statistical\n> approach with SVCs.\n\nYou mean Support Vector Classifiers? Interesting idea, although I don't\nsee how to apply that to query planning, especially with non-numeric\ninputs. 
Could you share more details on that statistical system and how\ndo you think it could be applied in the pgsql world?\n\n> Too bad I won't be able to devote any time to that project before september.\n\nI've been working on cross column stats for some time, and although I\nhad to put it aside for some time I'm going to devote more time to this\nissue soon. So interesting ideas/comments are very welcome.\n\nregards\nTomas\n", "msg_date": "Wed, 13 Apr 2011 22:16:35 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query postgres 8.3" }, { "msg_contents": "On Wed, Apr 13, 2011 at 10:16 PM, Tomas Vondra <[email protected]> wrote:\n> You mean Support Vector Classifiers? Interesting idea, although I don't\n> see how to apply that to query planning, especially with non-numeric\n> inputs. Could you share more details on that statistical system and how\n> do you think it could be applied in the pgsql world?\n\nWell, in my case, the data was a simple list of attributes. You either\nhad them or not, and data was very sparse, so the task was to fill the\nmissing bits.\n\nFor that, what I did is take a training set of data, and each time I\nwanted to know the likelihood of having a certain attribute I would\ncompute the conditional probability given the training data -\nconditional on a set of other data.\n\nSo, for postgres, if I had an index over a few columns of booleans\n(yea, bare with me) (a,b,c,d), and I wanted to know the selectivity of\n\"where a\", IF i already accounted for \"where b\" then I'd pick my\ntraining data and count how many of those that have b have also a. So\nP(a if b).\n\nOf course, my application had to handle thousands of attributes, so I\ncouldn't apply conditional distributions on everything, I'd pick the\nconditional part (if b) to something that selected an \"appropriately\nsized\" sample from my training data.\n\nAll that's very expensive.\n\nSo I thought... what about replacing that with an SVC - train an SVC\nor SVR model for a, taking b, c, d as parameters. I never had the\noportunity to test the idea, but the SVC variant would probably be\nusable by postgres, since all you need to know is b, c, d, they don't\nneed to be booleans, or scalars in fact, SVCs are very flexible.\nUnknown values could easily be compensated for, with some cleverness.\nThe tough part is, of course, training the SVC, picking the kind of\nSVC to use, and storing it into stats tables during analyze. Oh, and\nhoping it doesn't make fatal mistakes.\n\nI know, the idea is very green, but it would be a fun project - cough GSoC ;-)\n", "msg_date": "Wed, 13 Apr 2011 23:37:41 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query postgres 8.3" } ]
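One concrete way to apply the temporary-table workaround suggested earlier in this thread, reusing the table and column names from the original query: materializing the role_operation filter and running ANALYZE on the result hands the planner a real row count in place of the badly underestimated one, after which the remaining joins are usually planned sensibly. This is only a sketch; the join against the role table is dropped here because role_user.role_id can be joined directly, and the database name is a placeholder.

psql -d mydb <<'SQL'
CREATE TEMPORARY TABLE admin_roles AS
    SELECT role_id
    FROM role_operation
    WHERE object_type_id = 'SfMain.Project'
      AND operation_category = 'admin'
      AND operation_name = 'admin';
ANALYZE admin_roles;

SELECT DISTINCT
       ru.project_id AS projectId,
       u.username    AS adminUsername,
       u.full_name   AS adminFullName
  FROM admin_roles ar
  JOIN role_user ru ON ru.role_id = ar.role_id
  JOIN sfuser u ON u.id = ru.user_id
 ORDER BY adminFullName ASC;
SQL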
[ { "msg_contents": "I have a table that I need to rebuild indexes on from time to time (records get loaded before indexes get build).\n\nTo build the indexes, I use 'create index ...', which reads the entire table and builds the index, one at a time.\nI'm wondering if there is a way to build these indexes in parallel while reading the table only once for all indexes and building them all at the same time. Is there an index build tool that I missed somehow, that can do this?\n\nThanks,\nChris. \n\n\n\nbest regards,\nchris\n-- \nchris ruprecht\ndatabase grunt and bit pusher extraordinaíre\n\n", "msg_date": "Sat, 9 Apr 2011 12:28:21 -0400", "msg_from": "Chris Ruprecht <[email protected]>", "msg_from_op": true, "msg_subject": "Multiple index builds on same table - in one sweep?" }, { "msg_contents": "Chris Ruprecht <[email protected]> writes:\n> I have a table that I need to rebuild indexes on from time to time (records get loaded before indexes get build).\n> To build the indexes, I use 'create index ...', which reads the entire table and builds the index, one at a time.\n> I'm wondering if there is a way to build these indexes in parallel while reading the table only once for all indexes and building them all at the same time. Is there an index build tool that I missed somehow, that can do this?\n\nI don't know of any automated tool, but if you launch several CREATE\nINDEX operations on the same table at approximately the same time (in\nseparate sessions), they should share the I/O required to read the\ntable. (The \"synchronized scans\" feature guarantees this in recent\nPG releases, even if you're not very careful about starting them at\nthe same time.)\n\nThe downside of that is that you need N times the working memory and\nyou will have N times the subsidiary I/O for sort temp files and writes\nto the finished indexes. Depending on the characteristics of your I/O\nsystem it's not hard to imagine this being a net loss ... but it'd be\ninteresting to experiment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Apr 2011 13:10:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep? " }, { "msg_contents": "I'm running 2 tests now, one, where I'm doing the traditional indexing, in sequence. The server isn't doing anything else, so I should get pretty accurate results.\nTest 2 will win all the create index sessions in separate sessions in parallel (echo \"create index ...\"|psql ... & ) once the 'serial build' test is done.\n\nMaybe, in a future release, somebody will develop something that can create indexes as inactive and have a build tool build and activate them at the same time. Food for thought?\n \nOn Apr 9, 2011, at 13:10 , Tom Lane wrote:\n\n> Chris Ruprecht <[email protected]> writes:\n>> I have a table that I need to rebuild indexes on from time to time (records get loaded before indexes get build).\n>> To build the indexes, I use 'create index ...', which reads the entire table and builds the index, one at a time.\n>> I'm wondering if there is a way to build these indexes in parallel while reading the table only once for all indexes and building them all at the same time. Is there an index build tool that I missed somehow, that can do this?\n> \n> I don't know of any automated tool, but if you launch several CREATE\n> INDEX operations on the same table at approximately the same time (in\n> separate sessions), they should share the I/O required to read the\n> table. 
(The \"synchronized scans\" feature guarantees this in recent\n> PG releases, even if you're not very careful about starting them at\n> the same time.)\n> \n> The downside of that is that you need N times the working memory and\n> you will have N times the subsidiary I/O for sort temp files and writes\n> to the finished indexes. Depending on the characteristics of your I/O\n> system it's not hard to imagine this being a net loss ... but it'd be\n> interesting to experiment.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Sat, 9 Apr 2011 13:23:51 -0400", "msg_from": "Chris Ruprecht <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" }, { "msg_contents": "On 04/09/2011 01:23 PM, Chris Ruprecht wrote:\n> Maybe, in a future release, somebody will develop something that can create indexes as inactive and have a build tool build and activate them at the same time. Food for thought?\n> \n\nWell, the most common case where this sort of thing happens is when \npeople are using pg_restore to load a dump of an entire database. In \nthat case, you can use \"-j\" to run more than one loader job in parallel, \nwhich can easily end up doing a bunch of index builds at once, \nparticularly at the end. That already works about as well as it can \nbecause of the synchronized scan feature Tom mentioned.\n\nI doubt you'll ever get much traction arguing for something other than \ncontinuing to accelerate that path; correspondingly, making your own \nindex builds look as much like it as possible is a good practice. Fire \nup as many builds as you can stand in parallel and see how many you can \ntake given the indexes+data involved. It's not clear to me how a create \nas inactive strategy could improve on that.\n\nThere are some types of index build operations that bottleneck on CPU \noperations, and executing several of those in parallel can be a win. At \nsome point you run out of physical I/O, or the additional memory you're \nusing starts taking away too much from caching. Once you're at that \npoint, it's better to build the indexes on another pass, even if it \nrequires re-scanning the table data to do it. The tipping point varies \nbased on both system and workload, it's very hard to predict or automate.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sun, 10 Apr 2011 22:29:49 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" }, { "msg_contents": "On Sun, Apr 10, 2011 at 8:29 PM, Greg Smith <[email protected]> wrote:\n> On 04/09/2011 01:23 PM, Chris Ruprecht wrote:\n>>\n>> Maybe, in a future release, somebody will develop something that can\n>> create indexes as inactive and have a build tool build and activate them at\n>> the same time. Food for thought?\n>>\n>\n> Well, the most common case where this sort of thing happens is when people\n> are using pg_restore to load a dump of an entire database.  In that case,\n> you can use \"-j\" to run more than one loader job in parallel, which can\n> easily end up doing a bunch of index builds at once, particularly at the\n> end.  
That already works about as well as it can because of the synchronized\n> scan feature Tom mentioned.\n\nFYI, in 8.3.13 I get this for all but one index:\n\nERROR: deadlock detected\nDETAIL: Process 24488 waits for ShareLock on virtual transaction\n64/825033; blocked by process 27505.\nProcess 27505 waits for ShareUpdateExclusiveLock on relation 297369165\nof database 278059474; blocked by process 24488.\n\nI'll try it on a big server running 8.4 and see what happens.\n", "msg_date": "Sun, 10 Apr 2011 23:35:49 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" }, { "msg_contents": "On Sun, Apr 10, 2011 at 11:35 PM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Apr 10, 2011 at 8:29 PM, Greg Smith <[email protected]> wrote:\n>> On 04/09/2011 01:23 PM, Chris Ruprecht wrote:\n>>>\n>>> Maybe, in a future release, somebody will develop something that can\n>>> create indexes as inactive and have a build tool build and activate them at\n>>> the same time. Food for thought?\n>>>\n>>\n>> Well, the most common case where this sort of thing happens is when people\n>> are using pg_restore to load a dump of an entire database.  In that case,\n>> you can use \"-j\" to run more than one loader job in parallel, which can\n>> easily end up doing a bunch of index builds at once, particularly at the\n>> end.  That already works about as well as it can because of the synchronized\n>> scan feature Tom mentioned.\n>\n> FYI, in 8.3.13 I get this for all but one index:\n>\n> ERROR:  deadlock detected\n> DETAIL:  Process 24488 waits for ShareLock on virtual transaction\n> 64/825033; blocked by process 27505.\n> Process 27505 waits for ShareUpdateExclusiveLock on relation 297369165\n> of database 278059474; blocked by process 24488.\n>\n> I'll try it on a big server running 8.4 and see what happens.\n\nSame error on pg 8.4.6\n", "msg_date": "Mon, 11 Apr 2011 01:31:34 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" }, { "msg_contents": "Scott Marlowe wrote:\n> FYI, in 8.3.13 I get this for all but one index:\n>\n> ERROR: deadlock detected\n> DETAIL: Process 24488 waits for ShareLock on virtual transaction\n> 64/825033; blocked by process 27505.\n> Process 27505 waits for ShareUpdateExclusiveLock on relation 297369165\n> of database 278059474; blocked by process 24488.\n> \n\nIs that trying to build them by hand? The upthread request here is \nactually already on the TODO list at \nhttp://wiki.postgresql.org/wiki/Todo and it talks a bit about what works \nand what doesn't right now:\n\nAllow multiple indexes to be created concurrently, ideally via a single \nheap scan\n-pg_restore allows parallel index builds, but it is done via \nsubprocesses, and there is no SQL interface for this.\n\nThis whole idea was all the rage on these lists circa early 2008, but \nparallel restore seems to have satisfied enough of the demand in this \ngeneral area that it doesn't come up quite as much now.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 11 Apr 2011 03:41:24 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" 
}, { "msg_contents": "On Mon, Apr 11, 2011 at 1:41 AM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> FYI, in 8.3.13 I get this for all but one index:\n>>\n>> ERROR:  deadlock detected\n>> DETAIL:  Process 24488 waits for ShareLock on virtual transaction\n>> 64/825033; blocked by process 27505.\n>> Process 27505 waits for ShareUpdateExclusiveLock on relation 297369165\n>> of database 278059474; blocked by process 24488.\n>>\n>\n> Is that trying to build them by hand?  The upthread request here is actually\n> already on the TODO list at http://wiki.postgresql.org/wiki/Todo and it\n> talks a bit about what works and what doesn't right now:\n\nYes, by hand. It creates an entry for the index but lists but marks\nit as INVALID\n", "msg_date": "Mon, 11 Apr 2011 01:57:35 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" }, { "msg_contents": "On 04/09/2011 11:28 AM, Chris Ruprecht wrote:\n\n> I'm wondering if there is a way to build these indexes in parallel\n> while reading the table only once for all indexes and building them\n> all at the same time. Is there an index build tool that I missed\n> somehow, that can do this?\n\nI threw together a very crude duo of shell scripts to do this. I've \nattached them for you. To use them, you make a file named tablist.txt \nwhich contains the names of all the tables you want to reindex, and then \nyou run them like this:\n\nbash generate_rebuild_scripts.sh my_database 8\nbash launch_rebuild_scripts.sh my_database\n\nThe first one in the above example would connect to my_database and \ncreate eight scripts that would run in parallel, with indexes ordered \nsmallest to largest to prevent one script from getting stuck with \nseveral large indexes while the rest got small ones. The second script \njust launches them and makes a log directory so you can watch the progress.\n\nI've run this with up to 16 concurrent threads without major issue. It \ncomes in handy.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email", "msg_date": "Mon, 11 Apr 2011 10:13:41 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Mon, Apr 11, 2011 at 1:41 AM, Greg Smith <[email protected]> wrote:\n>> Scott Marlowe wrote:\n>>> FYI, in 8.3.13 I get this for all but one index:\n>>> ERROR: �deadlock detected\n\n>> Is that trying to build them by hand? �The upthread request here is actually\n>> already on the TODO list at http://wiki.postgresql.org/wiki/Todo and it\n>> talks a bit about what works and what doesn't right now:\n\n> Yes, by hand. It creates an entry for the index but lists but marks\n> it as INVALID\n\nAre you trying to use CREATE INDEX CONCURRENTLY? AFAIR that doesn't\nsupport multiple index creations at the same time. 
Usually you wouldn't\nwant that combination anyway, since the point of CREATE INDEX\nCONCURRENTLY is to not prevent foreground use of the table while you're\nmaking the index --- and multiple index creations are probably going to\neat enough I/O that you shouldn't be doing them during normal operations\nanyhow.\n\nJust use plain CREATE INDEX.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Apr 2011 14:39:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep? " }, { "msg_contents": "On Mon, Apr 11, 2011 at 12:39 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> On Mon, Apr 11, 2011 at 1:41 AM, Greg Smith <[email protected]> wrote:\n>>> Scott Marlowe wrote:\n>>>> FYI, in 8.3.13 I get this for all but one index:\n>>>> ERROR:  deadlock detected\n>\n>>> Is that trying to build them by hand?  The upthread request here is actually\n>>> already on the TODO list at http://wiki.postgresql.org/wiki/Todo and it\n>>> talks a bit about what works and what doesn't right now:\n>\n>> Yes, by hand.  It creates an entry for the index but lists but marks\n>> it as INVALID\n>\n> Are you trying to use CREATE INDEX CONCURRENTLY?  AFAIR that doesn't\n> support multiple index creations at the same time.  Usually you wouldn't\n> want that combination anyway, since the point of CREATE INDEX\n> CONCURRENTLY is to not prevent foreground use of the table while you're\n> making the index --- and multiple index creations are probably going to\n> eat enough I/O that you shouldn't be doing them during normal operations\n> anyhow.\n>\n> Just use plain CREATE INDEX.\n\nI thought they'd stand in line waiting on each other. I'll give it a try.\n", "msg_date": "Mon, 11 Apr 2011 12:39:49 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple index builds on same table - in one sweep?" } ]
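The advice in this thread comes down to firing off several plain CREATE INDEX statements in separate sessions and letting synchronized scans share a single pass over the heap. A rough shell sketch of that is below; the database, table, column, and index names are placeholders, and maintenance_work_mem should be sized with the number of concurrent builds in mind, since each session gets its own allocation.

#!/bin/sh
# launch one plain CREATE INDEX per column in its own backend, then wait
for col in col_a col_b col_c col_d; do
    psql -d mydb -c "SET maintenance_work_mem = '1GB';
                     CREATE INDEX big_table_${col}_idx ON big_table ($col);" &
done
wait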
[ { "msg_contents": "Dear ,all \ni would to ask you about how postgresql optimizer parameters :-\n1- random page cost \n2- cpu tuple cost3- cpu operator cost4- cpu index tuple cost5- effective cache \nsize\nplay as parameters in cost estimator equation \ni imagine that cost function is the same as\nf(x,y,z,....)=ax+by......\ncost(cpu tuple cost,cpu operator cost,....)\ncan any one help me to know the equation that cost estimator used it..\nMy regard \nRadhya,,,,\nDear ,all \ni would to ask you about how postgresql optimizer parameters :-\n1- random page cost \n\n2- cpu tuple cost\n3- cpu operator cost\n4- cpu index tuple cost\n5- effective cache size\nplay as parameters in cost estimator equation \ni imagine that cost function is the same as\nf(x,y,z,....)=ax+by......\ncost(cpu tuple cost,cpu operator cost,....)\ncan any one help me to know the equation that cost estimator used it..\nMy regard \nRadhya,,,,", "msg_date": "Sun, 10 Apr 2011 11:22:57 -0700 (PDT)", "msg_from": "Radhya sahal <[email protected]>", "msg_from_op": true, "msg_subject": "optimizer parameters" }, { "msg_contents": "There's a quite nice description in the docs:\n\nhttp://www.postgresql.org/docs/9.0/interactive/row-estimation-examples.html\n\nand a some more details for indexes:\n\nhttp://www.postgresql.org/docs/9.0/interactive/index-cost-estimation.html\n\nA bit more info about how this is used is available in this presentation:\n\nhttp://momjian.us/main/writings/pgsql/internalpics.pdf\n\nBut if you need more details, then I quess the best approach to get it\nis to read the sources (search for the cost estimation etc.).\n\nregards\nTomas\n\nDne 10.4.2011 20:22, Radhya sahal napsal(a):\n> Dear ,all\n> i would to ask you about how postgresql optimizer parameters :-\n> 1- random page cost\n> \n> 2- cpu tuple cost\n> \n> 3- cpu operator cost\n> \n> 4- cpu index tuple cost\n> \n> 5- effective cache size\n> \n> play as parameters in cost estimator equation \n> \n> i imagine that cost function is the same as\n> \n> f(x,y,z,....)=ax+by......\n> \n> cost(cpu tuple cost,cpu operator cost,....)\n> \n> can any one help me to know the equation that cost estimator used it..\n> \n> My regard\n> \n> Radhya,,,,\n> \n> \n> \n\n", "msg_date": "Mon, 11 Apr 2011 01:27:58 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizer parameters" }, { "msg_contents": "On 04/10/2011 07:27 PM, Tomas Vondra wrote:\n> But if you need more details, then I quess the best approach to get it\n> is to read the sources (search for the cost estimation etc.).\n> \n\nThere's a small fully worked out example of this in my book too, where I \nduplicate the optimizer's EXPLAIN cost computations for a simple query. \nThe main subtle thing most people don't appreciate fully is how much the \noptimizer takes into account two things: the selectivity of operators, \nand the expected size of the data being moved around. For example, a \nlot of the confusion around \"why didn't it use my index?\" comes from not \nnoticing the size of the index involved, and therefore how expensive its \npage cost is considered to be.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sun, 10 Apr 2011 22:40:48 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizer parameters" } ]
[ { "msg_contents": "Hi Guys,\n\nI'm just doing some tests on a new server running one of our heavy select functions (the select part of a plpgsql function to allocate seats) concurrently.  We do use connection pooling and split out some selects to slony slaves, but the tests here are primeraly to test what an individual server is capable of.\n\nThe new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz, our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n\nWhat I'm seeing is when the number of clients is greater than the number of cores, the new servers perform better on fewer cores.\n\nHas anyone else seen this behaviour? I'm guessing this is either a hardware limitation or something to do with linux process management / scheduling? Any idea what to look into?\n\nMy benchmark utility is just using a little .net/npgsql app that runs increacing numbers of clients concurrently, each client runs a specified number of iterations of any sql I specify.\n\nI've posted some results and the test program here:\n\nhttp://www.8kb.co.uk/server_benchmarks/\n\n", "msg_date": "Mon, 11 Apr 2011 14:04:08 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Linux: more cores = less concurrency." }, { "msg_contents": "Glyn Astill <[email protected]> wrote:\n \n> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz\n \nWhich has hyperthreading.\n \n> our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n \nWhich doesn't have hyperthreading.\n \nPostgreSQL often performs worse with hyperthreading than without. \nHave you turned HT off on your new machine? If not, I would start\nthere.\n \n-Kevin\n", "msg_date": "Mon, 11 Apr 2011 13:09:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, 11 Apr 2011 13:09:15 -0500, \"Kevin Grittner\"\n<[email protected]> wrote:\n> Glyn Astill <[email protected]> wrote:\n> \n>> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz\n> \n> Which has hyperthreading.\n> \n>> our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n> \n> Which doesn't have hyperthreading.\n> \n> PostgreSQL often performs worse with hyperthreading than without. \n> Have you turned HT off on your new machine? If not, I would start\n> there.\n\nAnd then make sure you aren't running CFQ.\n\nJD\n\n> \n> -Kevin\n\n-- \nPostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n", "msg_date": "Mon, 11 Apr 2011 11:12:55 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores =?UTF-8?Q?=3D=20less=20concurrency=2E?=" }, { "msg_contents": "\n\n--- On Mon, 11/4/11, Joshua D. Drake <[email protected]> wrote:\n\n> From: Joshua D. 
Drake <[email protected]>\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> To: \"Kevin Grittner\" <[email protected]>\n> Cc: [email protected], \"Glyn Astill\" <[email protected]>\n> Date: Monday, 11 April, 2011, 19:12\n> On Mon, 11 Apr 2011 13:09:15 -0500,\n> \"Kevin Grittner\"\n> <[email protected]>\n> wrote:\n> > Glyn Astill <[email protected]>\n> wrote:\n> >  \n> >> The new server uses 4 x 8 core Xeon X7550 CPUs at\n> 2Ghz\n> >  \n> > Which has hyperthreading.\n> >  \n> >> our current servers are 2 x 4 core Xeon E5320 CPUs\n> at 2Ghz.\n> >  \n> > Which doesn't have hyperthreading.\n> >  \n\nYep, off. If you look at the benchmarks I took, HT absoloutely killed it.\n\n> > PostgreSQL often performs worse with hyperthreading\n> than without. \n> > Have you turned HT off on your new machine?  If\n> not, I would start\n> > there.\n> \n> And then make sure you aren't running CFQ.\n> \n> JD\n> \n\nNot running CFQ, running the no-op i/o scheduler.\n", "msg_date": "Mon, 11 Apr 2011 19:23:50 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 12:12 PM, Joshua D. Drake <[email protected]> wrote:\n> On Mon, 11 Apr 2011 13:09:15 -0500, \"Kevin Grittner\"\n> <[email protected]> wrote:\n>> Glyn Astill <[email protected]> wrote:\n>>\n>>> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz\n>>\n>> Which has hyperthreading.\n>>\n>>> our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n>>\n>> Which doesn't have hyperthreading.\n>>\n>> PostgreSQL often performs worse with hyperthreading than without.\n>> Have you turned HT off on your new machine?  If not, I would start\n>> there.\n>\n> And then make sure you aren't running CFQ.\n>\n> JD\n\nThis++\n\nAlso if you're running a good hardware RAID controller, jsut go to NOOP\n", "msg_date": "Mon, 11 Apr 2011 12:32:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 12:23 PM, Glyn Astill <[email protected]> wrote:\n>\n>\n> --- On Mon, 11/4/11, Joshua D. Drake <[email protected]> wrote:\n>\n>> From: Joshua D. Drake <[email protected]>\n>> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n>> To: \"Kevin Grittner\" <[email protected]>\n>> Cc: [email protected], \"Glyn Astill\" <[email protected]>\n>> Date: Monday, 11 April, 2011, 19:12\n>> On Mon, 11 Apr 2011 13:09:15 -0500,\n>> \"Kevin Grittner\"\n>> <[email protected]>\n>> wrote:\n>> > Glyn Astill <[email protected]>\n>> wrote:\n>> >\n>> >> The new server uses 4 x 8 core Xeon X7550 CPUs at\n>> 2Ghz\n>> >\n>> > Which has hyperthreading.\n>> >\n>> >> our current servers are 2 x 4 core Xeon E5320 CPUs\n>> at 2Ghz.\n>> >\n>> > Which doesn't have hyperthreading.\n>> >\n>\n> Yep, off. If you look at the benchmarks I took, HT absoloutely killed it.\n>\n>> > PostgreSQL often performs worse with hyperthreading\n>> than without.\n>> > Have you turned HT off on your new machine?  If\n>> not, I would start\n>> > there.\n>>\n>> And then make sure you aren't running CFQ.\n>>\n>> JD\n>>\n>\n> Not running CFQ, running the no-op i/o scheduler.\n\nJust FYI, in synthetic pgbench type benchmarks, a 48 core AMD Magny\nCours with LSI HW RAID and 34 15k6 Hard drives scales almost linearly\nup to 48 or so threads, getting into the 7000+ tps range. 
With SW\nRAID it gets into the 5500 tps range.\n", "msg_date": "Mon, 11 Apr 2011 13:29:29 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "--- On Mon, 11/4/11, Scott Marlowe <[email protected]> wrote:\n\n> Just FYI, in synthetic pgbench type benchmarks, a 48 core\n> AMD Magny\n> Cours with LSI HW RAID and 34 15k6 Hard drives scales\n> almost linearly\n> up to 48 or so threads, getting into the 7000+ tps\n> range.  With SW\n> RAID it gets into the 5500 tps range.\n> \n\nI'll have to try with the synthetic benchmarks next then, but somethings definately going off here. I'm seeing no disk activity at all as they're selects and all pages are in ram.\n \nI was wondering if anyone had any deeper knowledge of any kernel tunables, or anything else for that matter.\n\nA wild guess is something like multiple cores contending for cpu cache, cpu affinity, or some kind of contention in the kernel, alas a little out of my depth.\n\nIt's pretty sickening to think I can't get anything else out of more than 8 cores. \n", "msg_date": "Mon, 11 Apr 2011 20:42:46 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On 04/11/2011 02:32 PM, Scott Marlowe wrote:\n> On Mon, Apr 11, 2011 at 12:12 PM, Joshua D. Drake<[email protected]> wrote:\n>> On Mon, 11 Apr 2011 13:09:15 -0500, \"Kevin Grittner\"\n>> <[email protected]> wrote:\n>>> Glyn Astill<[email protected]> wrote:\n>>>\n>>>> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz\n>>> Which has hyperthreading.\n>>>\n>>>> our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n>>> Which doesn't have hyperthreading.\n>>>\n>>> PostgreSQL often performs worse with hyperthreading than without.\n>>> Have you turned HT off on your new machine? If not, I would start\n>>> there.\nAnyone know the reason for that?\n>> And then make sure you aren't running CFQ.\n>>\n>> JD\n> This++\n>\n> Also if you're running a good hardware RAID controller, jsut go to NOOP\n>\n\n\n-- \nStephen Clark\n*NetWolves*\nSr. Software Engineer III\nPhone: 813-579-3200\nFax: 813-882-0209\nEmail: [email protected]\nhttp://www.netwolves.com\n\n\n\n\n\n\n On 04/11/2011 02:32 PM, Scott Marlowe wrote:\n \nOn Mon, Apr 11, 2011 at 12:12 PM, Joshua D. Drake <[email protected]> wrote:\n\n\nOn Mon, 11 Apr 2011 13:09:15 -0500, \"Kevin Grittner\"\n<[email protected]> wrote:\n\n\nGlyn Astill <[email protected]> wrote:\n\n\n\nThe new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz\n\n\n\nWhich has hyperthreading.\n\n\n\nour current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n\n\n\nWhich doesn't have hyperthreading.\n\nPostgreSQL often performs worse with hyperthreading than without.\nHave you turned HT off on your new machine?  If not, I would start\nthere.\n\n\n\n\n\n\n Anyone know the reason for that?\n\n\nAnd then make sure you aren't running CFQ.\n\nJD\n\n\n\nThis++\n\nAlso if you're running a good hardware RAID controller, jsut go to NOOP\n\n\n\n\n\n-- \n Stephen Clark\nNetWolves\n Sr. Software Engineer III\n Phone: 813-579-3200\n Fax: 813-882-0209\n Email: [email protected]\nhttp://www.netwolves.com", "msg_date": "Mon, 11 Apr 2011 15:51:47 -0400", "msg_from": "Steve Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." 
}, { "msg_contents": "On 2011-04-11 21:42, Glyn Astill wrote:\n>\n> I'll have to try with the synthetic benchmarks next then, but somethings definately going off here. I'm seeing no disk activity at all as they're selects and all pages are in ram.\nWell, if you dont have enough computations to be bottlenecked on the\ncpu, then a 4 socket system is slower than a comparative 2 socket system\nand a 1 socket system is even better.\n\nIf you have a 1 socket system, all of your data can be fetched from\n\"local\" ram seen from you cpu, on a 2 socket, 50% of your accesses\nwill be \"way slower\", 4 socket even worse.\n\nSo the more sockets first begin to kick in when you can actually\nuse the CPU's or add in even more memory to keep your database\nfrom going to disk due to size.\n\n-- \nJesper\n", "msg_date": "Mon, 11 Apr 2011 21:56:40 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, 11 Apr 2011, Steve Clark wrote:\n\n> On 04/11/2011 02:32 PM, Scott Marlowe wrote:\n>> On Mon, Apr 11, 2011 at 12:12 PM, Joshua D. Drake<[email protected]> \n>> wrote:\n>>> On Mon, 11 Apr 2011 13:09:15 -0500, \"Kevin Grittner\"\n>>> <[email protected]> wrote:\n>>>> Glyn Astill<[email protected]> wrote:\n>>>> \n>>>>> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz\n>>>> Which has hyperthreading.\n>>>> \n>>>>> our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n>>>> Which doesn't have hyperthreading.\n>>>> \n>>>> PostgreSQL often performs worse with hyperthreading than without.\n>>>> Have you turned HT off on your new machine? If not, I would start\n>>>> there.\n> Anyone know the reason for that?\n\nhyperthreads are not real cores.\n\nthey make the assumption that you aren't fully using the core (because it \nis stalled waiting for memory or something like that) and context-switches \nyou to a different set of registers, but useing the same computational \nresources for your extra 'core'\n\nfor some applications, this works well, but for others it can be a very \nsignificant performance hit. (IIRC, this ranges from +60% to -30% or so in \nbenchmarks).\n\nIntel has wonderful marketing and has managed to convince people that HT \ncores are real cores, but 16 real cores will outperform 8 real cores + 8 \nHT 'fake' cores every time. the 16 real cores will eat more power, be more \nexpensive, etc so you are paying for the performance.\n\nin your case, try your new servers without hyperthreading. you will end up \nwith a 4x4 core system, which should handily outperform the 2x4 core \nsystem you are replacing.\n\nthe limit isn't 8 cores, it's that the hyperthreaded cores don't work well \nwith the postgres access patterns.\n\nDavid Lang\n", "msg_date": "Mon, 11 Apr 2011 13:04:58 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "--- On Mon, 11/4/11, [email protected] <[email protected]> wrote:\n\n> From: [email protected] <[email protected]>\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> To: \"Steve Clark\" <[email protected]>\n> Cc: \"Scott Marlowe\" <[email protected]>, \"Joshua D. 
Drake\" <[email protected]>, \"Kevin Grittner\" <[email protected]>, [email protected], \"Glyn Astill\" <[email protected]>\n> Date: Monday, 11 April, 2011, 21:04\n> On Mon, 11 Apr 2011, Steve Clark\n> wrote:\n> \n> the limit isn't 8 cores, it's that the hyperthreaded cores\n> don't work well with the postgres access patterns.\n> \n\nThis has nothing to do with hyperthreading. I have a hyperthreaded benchmark purely for completion, but can we please forget about it.\n\nThe issue I'm seeing is that 8 real cores outperform 16 real cores, which outperform 32 real cores under high concurrency.\n\n32 cores is much faster than 8 when I have relatively few clients, but as the number of clients is scaled up 8 cores wins outright.\n\nI was hoping someone had seen this sort of behaviour before, and could offer some sort of explanation or advice.\n", "msg_date": "Mon, 11 Apr 2011 21:17:04 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": ">>>>> \"GA\" == Glyn Astill <[email protected]> writes:\n\nGA> I was hoping someone had seen this sort of behaviour before,\nGA> and could offer some sort of explanation or advice.\n\nJesper's reply is probably most on point as to the reason.\n\nI know that recent Opterons use some of their cache to better manage\ncache-coherency. I presum recent Xeons do so, too, but perhaps yours\nare not recent enough for that?\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n", "msg_date": "Mon, 11 Apr 2011 16:39:46 -0400", "msg_from": "James Cloos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 1:42 PM, Glyn Astill <[email protected]> wrote:\n\n> A wild guess is something like multiple cores contending for cpu cache, cpu affinity, or some kind of contention in the kernel, alas a little out of my depth.\n>\n> It's pretty sickening to think I can't get anything else out of more than 8 cores.\n\nHave you tried running the memory stream benchmark Greg Smith had\nposted here a while back? It'll let you know if you're memory is\nbottlenecking. Right now my 48 core machines are the king of that\nbenchmark with something like 70+Gig a second.\n", "msg_date": "Mon, 11 Apr 2011 14:52:39 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "Glyn Astill <[email protected]> wrote:\n \n> The issue I'm seeing is that 8 real cores outperform 16 real\n> cores, which outperform 32 real cores under high concurrency.\n \nWith every benchmark I've done of PostgreSQL, the \"knee\" in the\nperformance graph comes right around ((2 * cores) +\neffective_spindle_count). With the database fully cached (as I\nbelieve you mentioned), effective_spindle_count is zero. If you\ndon't use a connection pool to limit active transactions to the\nnumber from that formula, performance drops off. The more CPUs you\nhave, the sharper the drop after the knee.\n \nI think it's nearly inevitable that PostgreSQL will eventually add\nsome sort of admission policy or scheduler so that the user doesn't\nsee this effect. With an admission policy, PostgreSQL would\neffectively throttle the startup of new transactions so that things\nremained almost flat after the knee. A well-designed scheduler\nmight even be able to sneak marginal improvements past the current\nknee. 
As things currently stand it is up to you to do this with a\ncarefully designed connection pool.\n \n> 32 cores is much faster than 8 when I have relatively few clients,\n> but as the number of clients is scaled up 8 cores wins outright.\n \nRight. If you were hitting disk heavily with random access, the\nsweet spot would increase by the number of spindles you were\nhitting.\n \n> I was hoping someone had seen this sort of behaviour before, and\n> could offer some sort of explanation or advice.\n \nWhen you have multiple resources, adding active processes increases\noverall throughput until roughly the point when you can keep them\nall busy. Once you hit that point, adding more processes to contend\nfor the resources just adds overhead and blocking. HT is so bad\nbecause it tends to cause context switch storms, but context\nswitching becomes an issue even without it. The other main issue is\nlock contention. Beyond a certain point, processes start to contend\nfor lightweight locks, so you might context switch to a process only\nto find that it's still blocked and you have to switch again to try\nthe next process, until you finally find one which can make\nprogress. To acquire the lightweight lock you first need to acquire\na spinlock, so as things get busier processes start eating lots of\nCPU in the spinlock loops trying to get to the point of being able\nto check the LW locks to see if they're available.\n \nYou clearly got the best performance with all 32 cores and 16 to 32\nprocesses active. I don't know why you were hitting the knee sooner\nthan I've seen in my benchmarks, but the principle is the same. Use\na connection pool which limits how many transactions are active,\nsuch that you don't exceed 32 processes busy at the same time, and\nmake sure that it queues transaction requests beyond that so that a\nnew transaction can be started promptly when you are at your limit\nand a transaction completes.\n \n-Kevin\n", "msg_date": "Mon, 11 Apr 2011 16:06:53 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "\n\n--- On Mon, 11/4/11, Scott Marlowe <[email protected]> wrote:\n\n> From: Scott Marlowe <[email protected]>\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> To: \"Glyn Astill\" <[email protected]>\n> Cc: \"Kevin Grittner\" <[email protected]>, \"Joshua D. Drake\" <[email protected]>, [email protected]\n> Date: Monday, 11 April, 2011, 21:52\n> On Mon, Apr 11, 2011 at 1:42 PM, Glyn\n> Astill <[email protected]>\n> wrote:\n> \n> > A wild guess is something like multiple cores\n> contending for cpu cache, cpu affinity, or some kind of\n> contention in the kernel, alas a little out of my depth.\n> >\n> > It's pretty sickening to think I can't get anything\n> else out of more than 8 cores.\n> \n> Have you tried running the memory stream benchmark Greg\n> Smith had\n> posted here a while back?  It'll let you know if\n> you're memory is\n> bottlenecking.  Right now my 48 core machines are the\n> king of that\n> benchmark with something like 70+Gig a second.\n> \n\nNo I haven't, but I will first thing tomorow morning. I did run a sysbench memory write test though, if I recall correctly that gave me somewhere just over 3000 Mb/s\n\n\n", "msg_date": "Mon, 11 Apr 2011 22:08:09 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." 
}, { "msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> I don't know why you were hitting the knee sooner than I've seen\n> in my benchmarks\n \nIf you're compiling your own executable, you might try boosting\nLOG2_NUM_LOCK_PARTITIONS (defined in lwlocks.h) to 5 or 6. The\ncurrent value of 4 means that there are 16 partitions to spread\ncontention for the lightweight locks which protect the heavyweight\nlocking, and this corresponds to your best throughput point. It\nmight be instructive to see what happens when you tweak the number\nof partitions.\n \nAlso, if you can profile PostgreSQL at the sweet spot and again at a\npessimal load, comparing the profiles should give good clues about\nthe points of contention.\n \n-Kevin\n", "msg_date": "Mon, 11 Apr 2011 16:35:47 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 6:04 AM, Glyn Astill <[email protected]> wrote:\n> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz, our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n>\n> What I'm seeing is when the number of clients is greater than the number of cores, the new servers perform better on fewer cores.\n\nThe X7550 have \"Turbo Boost\" which means they will overclock to 2.4\nGHz from 2.0 GHz when not all cores are in use per-die. I don't know\nif it's possible to monitor this, but I think you can disable \"Turbo\nBoost\" in bios for further testing.\n\nThe E5320 CPUs in your old servers doesn't appear \"Turbo Boost\".\n\n-Dave\n", "msg_date": "Mon, 11 Apr 2011 15:11:29 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Scott Marlowe\n> Sent: Monday, April 11, 2011 1:29 PM\n> To: Glyn Astill\n> Cc: Kevin Grittner; Joshua D. Drake; [email protected]\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> \n> On Mon, Apr 11, 2011 at 12:23 PM, Glyn Astill <[email protected]>\n> wrote:\n> >\n> >\n> > --- On Mon, 11/4/11, Joshua D. Drake <[email protected]> wrote:\n> >\n> >> From: Joshua D. Drake <[email protected]>\n> >> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> >> To: \"Kevin Grittner\" <[email protected]>\n> >> Cc: [email protected], \"Glyn Astill\"\n> <[email protected]>\n> >> Date: Monday, 11 April, 2011, 19:12\n> >> On Mon, 11 Apr 2011 13:09:15 -0500,\n> >> \"Kevin Grittner\"\n> >> <[email protected]>\n> >> wrote:\n> >> > Glyn Astill <[email protected]>\n> >> wrote:\n> >> >\n> >> >> The new server uses 4 x 8 core Xeon X7550 CPUs at\n> >> 2Ghz\n> >> >\n> >> > Which has hyperthreading.\n> >> >\n> >> >> our current servers are 2 x 4 core Xeon E5320 CPUs\n> >> at 2Ghz.\n> >> >\n> >> > Which doesn't have hyperthreading.\n> >> >\n> >\n> > Yep, off. If you look at the benchmarks I took, HT absoloutely killed\n> it.\n> >\n> >> > PostgreSQL often performs worse with hyperthreading\n> >> than without.\n> >> > Have you turned HT off on your new machine?  
If\n> >> not, I would start\n> >> > there.\n> >>\n> >> And then make sure you aren't running CFQ.\n> >>\n> >> JD\n> >>\n> >\n> > Not running CFQ, running the no-op i/o scheduler.\n> \n> Just FYI, in synthetic pgbench type benchmarks, a 48 core AMD Magny\n> Cours with LSI HW RAID and 34 15k6 Hard drives scales almost linearly\n> up to 48 or so threads, getting into the 7000+ tps range. With SW\n> RAID it gets into the 5500 tps range.\n\nJust wondering, which LSI card ?\nWas this 32 drives in Raid 1+0 with a two drive raid 1 for logs or some\nother config?\n\n\n-M\n\n\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 11 Apr 2011 18:05:22 -0600", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 6:05 PM, mark <[email protected]> wrote:\n> Just wondering, which LSI card ?\n> Was this 32 drives in Raid 1+0 with a two drive raid 1 for logs or some\n> other config?\n\nWe were using teh LSI8888 but I'll be switching back to Areca when we\ngo back to HW RAID. The LSI8888 only performed well if we setup 15\nRAID-1 pairs in HW and use linux SW RAID 0 on top. RAID1+0 in the\nLSI8888 was a pretty mediocre performer. Areca 1680 OTOH, beats it in\nevery test, with HW RAID10 only. Much simpler to admin.\n", "msg_date": "Mon, 11 Apr 2011 18:18:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 6:18 PM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Apr 11, 2011 at 6:05 PM, mark <[email protected]> wrote:\n>> Just wondering, which LSI card ?\n>> Was this 32 drives in Raid 1+0 with a two drive raid 1 for logs or some\n>> other config?\n>\n> We were using teh LSI8888 but I'll be switching back to Areca when we\n> go back to HW RAID.  The LSI8888 only performed well if we setup 15\n> RAID-1 pairs in HW and use linux SW RAID 0 on top.  RAID1+0 in the\n> LSI8888 was a pretty mediocre performer.  Areca 1680 OTOH, beats it in\n> every test, with HW RAID10 only.  Much simpler to admin.\n\nAnd it was RAID-10 w 4 drives for pg_xlog and RAID-10 with 24 drives\nfor the data store. Both controllers, and pure SW when the LSI8888s\ncooked inside the poorly cooled Supermicro 1U we had it in.\n", "msg_date": "Mon, 11 Apr 2011 18:18:59 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]]\n> Sent: Monday, April 11, 2011 6:18 PM\n> To: mark\n> Cc: Glyn Astill; Kevin Grittner; Joshua D. Drake; pgsql-\n> [email protected]\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> \n> On Mon, Apr 11, 2011 at 6:05 PM, mark <[email protected]> wrote:\n> > Just wondering, which LSI card ?\n> > Was this 32 drives in Raid 1+0 with a two drive raid 1 for logs or\n> some\n> > other config?\n> \n> We were using teh LSI8888 but I'll be switching back to Areca when we\n> go back to HW RAID. The LSI8888 only performed well if we setup 15\n> RAID-1 pairs in HW and use linux SW RAID 0 on top. RAID1+0 in the\n> LSI8888 was a pretty mediocre performer. Areca 1680 OTOH, beats it in\n> every test, with HW RAID10 only. 
Much simpler to admin.\n\nInteresting, thanks for sharing. \n\nI guess I have never gotten to the point where I felt I needed more than 2\ndrives for my xlogs. Maybe I have been dismissing that as a possibility\nsomething. (my biggest array is only 24 SFF drives tho)\n\nI am trying to get my hands on a dual core lsi card for testing at work.\n(either a 9265-8i or 9285-8e) don't see any dual core 6Gbps SAS Areca cards\nyet....still rocking a Arcea 1130 at home tho. \n\n\n-M\n\n", "msg_date": "Mon, 11 Apr 2011 18:50:32 -0600", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 5:06 PM, Kevin Grittner\n<[email protected]> wrote:\n> Glyn Astill <[email protected]> wrote:\n>\n>> The issue I'm seeing is that 8 real cores outperform 16 real\n>> cores, which outperform 32 real cores under high concurrency.\n>\n> With every benchmark I've done of PostgreSQL, the \"knee\" in the\n> performance graph comes right around ((2 * cores) +\n> effective_spindle_count).  With the database fully cached (as I\n> believe you mentioned), effective_spindle_count is zero.  If you\n> don't use a connection pool to limit active transactions to the\n> number from that formula, performance drops off.  The more CPUs you\n> have, the sharper the drop after the knee.\n\nI was about to say something similar with some canned advice to use a\nconnection pooler to control this. However, OP scaling is more or\nless topping out at cores / 4...yikes!. Here are my suspicions in\nrough order:\n\n1. There is scaling problem in client/network/etc. Trivially\ndisproved, convert the test to pgbench -f and post results\n2. The test is in fact i/o bound. Scaling is going to be\nhardware/kernel determined. Can we see iostat/vmstat/top snipped\nduring test run? Maybe no-op is burning you?\n3. Locking/concurrency issue in heavy_seat_function() (source for\nthat?) how much writing does it do?\n\nCan we see some iobound and cpubound pgbench runs on both servers?\n\nmerlin\n", "msg_date": "Mon, 11 Apr 2011 21:59:05 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 6:50 PM, mark <[email protected]> wrote:\n>\n> Interesting, thanks for sharing.\n>\n> I guess I have never gotten to the point where I felt I needed more than 2\n> drives for my xlogs. Maybe I have been dismissing that as a possibility\n> something. (my biggest array is only 24 SFF drives tho)\n>\n> I am trying to get my hands on a dual core lsi card for testing at work.\n> (either a 9265-8i or 9285-8e) don't see any dual core 6Gbps SAS Areca cards\n> yet....still rocking a Arcea 1130 at home tho.\n\nMake doubly sure whatever machine you're putting it in moves plenty of\nair across it's PCI cards. They make plenty of heat. the Areca 1880\nare the 6GB/s cards, don't know if they're single or dual core. The\nLSI interface and command line tools are so horribly designed and the\nperformance was so substandard I've pretty much given up on them.\nMaybe the newer cards are better, but the 9xxx series wouldn't get\nalong with my motherboard so it was the 8888 or Areca.\n\nAs for pg_xlog, with 4 drives in a RAID-10 we were hitting a limit\nwith only two drives in RAID-1 against 24 drives in the RAID-10 for\nthe data store in our mixed load. 
And we use an old 12xx series Areca\nat work for our primary file server and it's been super reliable for\nthe two years it's been running.\n", "msg_date": "Mon, 11 Apr 2011 20:12:39 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On 2011-04-11 22:39, James Cloos wrote:\n>>>>>> \"GA\" == Glyn Astill<[email protected]> writes:\n> GA> I was hoping someone had seen this sort of behaviour before,\n> GA> and could offer some sort of explanation or advice.\n>\n> Jesper's reply is probably most on point as to the reason.\n>\n> I know that recent Opterons use some of their cache to better manage\n> cache-coherency. I presum recent Xeons do so, too, but perhaps yours\n> are not recent enough for that?\n\nBetter cache-coherence also benefits, but it does nothing to\nthe fact that remote DRAM fetches is way more expensive\nthan local ones. (Hard numbers to get excact nowadays).\n\n-- \nJesper\n", "msg_date": "Tue, 12 Apr 2011 07:17:54 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Mon, Apr 11, 2011 at 7:04 AM, Glyn Astill <[email protected]> wrote:\n> Hi Guys,\n>\n> I'm just doing some tests on a new server running one of our heavy select functions (the select part of a plpgsql function to allocate seats) concurrently.  We do use connection pooling and split out some selects to slony slaves, but the tests here are primeraly to test what an individual server is capable of.\n>\n> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz, our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n>\n> What I'm seeing is when the number of clients is greater than the number of cores, the new servers perform better on fewer cores.\n\nO man, I completely forgot the issue I ran into in my machines, and\nthat was that zone_reclaim completely screwed postgresql and file\nsystem performance. On machines with more CPU nodes and higher\ninternode cost it gets turned on automagically and destroys\nperformance for machines that use a lot of kernel cache / shared\nmemory.\n\nBe sure and use sysctl.conf to turn it off:\n\nvm.zone_reclaim_mode = 0\n", "msg_date": "Mon, 11 Apr 2011 23:55:03 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "\nOn 11-4-2011 22:04 [email protected] wrote:\n> in your case, try your new servers without hyperthreading. you will end\n> up with a 4x4 core system, which should handily outperform the 2x4 core\n> system you are replacing.\n>\n> the limit isn't 8 cores, it's that the hyperthreaded cores don't work\n> well with the postgres access patterns.\n\nIt would be really weird if disabling HT would turn these 8-core cpu's \nin 4-core cpu's ;) They have 8 physical cores and 16 threads each. So he \nbasically has a 32-core machine with 64 threads in total (if HT were \nenabled). Still, HT may or may not improve things, back when we had time \nto benchmark new systems we had one of the first HT-Xeon's (a dual 5080, \nwith two cores + HT each) available:\nhttp://ic.tweakimg.net/ext/i/1155958729.png\n\nThe blue lines are all slightly above the orange/red lines. 
So back then \nHT slightly improved our read-mostly Postgresql benchmark score.\n\nWe also did benchmarks with Sun's UltraSparc T2 back then:\nhttp://ic.tweakimg.net/ext/i/1214930814.png\n\nAdding full cores (including threads) made things much better, but we \nalso tested full cores with more threads each:\nhttp://ic.tweakimg.net/ext/i/1214930816.png\n\nAs you can see, with that benchmark, it was better to have 4 cores with \n8 threads each, than 8 cores with 2 threads each.\n\nThe T2-threads where much heavier duty than the HT-threads back then, \nbut afaik Intel has improved its technology with this re-introduction of \nthem quite a bit.\n\nSo I wouldn't dismiss hyper threading for a read-mostly Postgresql \nworkload too easily.\n\nThen again, keeping 32 cores busy, without them contending for every \nresource will already be quite hard. So adding 32 additional \"threads\" \nmay indeed make matters much worse.\n\nBest regards,\n\nArjen\n", "msg_date": "Tue, 12 Apr 2011 09:31:20 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "--- On Tue, 12/4/11, Merlin Moncure <[email protected]> wrote:\n\n> >> The issue I'm seeing is that 8 real cores\n> outperform 16 real\n> >> cores, which outperform 32 real cores under high\n> concurrency.\n> >\n> > With every benchmark I've done of PostgreSQL, the\n> \"knee\" in the\n> > performance graph comes right around ((2 * cores) +\n> > effective_spindle_count).  With the database fully\n> cached (as I\n> > believe you mentioned), effective_spindle_count is\n> zero.  If you\n> > don't use a connection pool to limit active\n> transactions to the\n> > number from that formula, performance drops off.  The\n> more CPUs you\n> > have, the sharper the drop after the knee.\n> \n> I was about to say something similar with some canned\n> advice to use a\n> connection pooler to control this.  However, OP\n> scaling is more or\n> less topping out at cores / 4...yikes!.  Here are my\n> suspicions in\n> rough order:\n> \n> 1. There is scaling problem in client/network/etc. \n> Trivially\n> disproved, convert the test to pgbench -f and post results\n> 2. The test is in fact i/o bound. Scaling is going to be\n> hardware/kernel determined.  Can we see\n> iostat/vmstat/top snipped\n> during test run?  Maybe no-op is burning you?\n\nThis is during my 80 clients test, this is a point at which the performance is well below that of the same machine limited to 8 cores.\n\nhttp://www.privatepaste.com/dc131ff26e\n\n> 3. Locking/concurrency issue in heavy_seat_function()\n> (source for\n> that?)  how much writing does it do?\n> \n\nNo writing afaik - its a select with a few joins and subqueries - I'm pretty sure it's not writing out temp data either, but all clients are after the same data in the test - maybe theres some locks there?\n\n> Can we see some iobound and cpubound pgbench runs on both\n> servers?\n> \n\nOf course, I'll post when I've gotten to that.\n\n", "msg_date": "Tue, 12 Apr 2011 09:54:59 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." 
}, { "msg_contents": "--- On Tue, 12/4/11, Scott Marlowe <[email protected]> wrote:\n\n> From: Scott Marlowe <[email protected]>\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> To: \"Glyn Astill\" <[email protected]>\n> Cc: [email protected]\n> Date: Tuesday, 12 April, 2011, 6:55\n> On Mon, Apr 11, 2011 at 7:04 AM, Glyn\n> Astill <[email protected]>\n> wrote:\n> > Hi Guys,\n> >\n> > I'm just doing some tests on a new server running one\n> of our heavy select functions (the select part of a plpgsql\n> function to allocate seats) concurrently.  We do use\n> connection pooling and split out some selects to slony\n> slaves, but the tests here are primeraly to test what an\n> individual server is capable of.\n> >\n> > The new server uses 4 x 8 core Xeon X7550 CPUs at\n> 2Ghz, our current servers are 2 x 4 core Xeon E5320 CPUs at\n> 2Ghz.\n> >\n> > What I'm seeing is when the number of clients is\n> greater than the number of cores, the new servers perform\n> better on fewer cores.\n> \n> O man, I completely forgot the issue I ran into in my\n> machines, and\n> that was that zone_reclaim completely screwed postgresql\n> and file\n> system performance.  On machines with more CPU nodes\n> and higher\n> internode cost it gets turned on automagically and\n> destroys\n> performance for machines that use a lot of kernel cache /\n> shared\n> memory.\n> \n> Be sure and use sysctl.conf to turn it off:\n> \n> vm.zone_reclaim_mode = 0\n> \n\nI've made this change, not seen any immediate changes however it's good to know. Thanks Scott.\n", "msg_date": "Tue, 12 Apr 2011 09:57:07 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "--- On Mon, 11/4/11, Kevin Grittner <[email protected]> wrote:\n\n> From: Kevin Grittner <[email protected]>\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> To: [email protected], \"Steve Clark\" <[email protected]>, \"Kevin Grittner\" <[email protected]>, \"Glyn Astill\" <[email protected]>\n> Cc: \"Joshua D. Drake\" <[email protected]>, \"Scott Marlowe\" <[email protected]>, [email protected]\n> Date: Monday, 11 April, 2011, 22:35\n> \"Kevin Grittner\" <[email protected]>\n> wrote:\n> \n> > I don't know why you were hitting the knee sooner than\n> I've seen\n> > in my benchmarks\n> \n> If you're compiling your own executable, you might try\n> boosting\n> LOG2_NUM_LOCK_PARTITIONS (defined in lwlocks.h) to 5 or\n> 6.  The\n> current value of 4 means that there are 16 partitions to\n> spread\n> contention for the lightweight locks which protect the\n> heavyweight\n> locking, and this corresponds to your best throughput\n> point.  It\n> might be instructive to see what happens when you tweak the\n> number\n> of partitions.\n> \n\nTried tweeking LOG2_NUM_LOCK_PARTITIONS between 5 and 7. 
My results took a dive when I changed to 32 partitions, and improved as I increaced to 128, but appeared to be happiest at the default of 16.\n\n> Also, if you can profile PostgreSQL at the sweet spot and\n> again at a\n> pessimal load, comparing the profiles should give good\n> clues about\n> the points of contention.\n> \n\nResults for the same machine on 8 and 32 cores are here:\n\nhttp://www.8kb.co.uk/server_benchmarks/dblt_results.csv\n\nHere's the sweet spot for 32 cores, and the 8 core equivalent:\n\nhttp://www.8kb.co.uk/server_benchmarks/iostat-32cores_32Clients.txt\nhttp://www.8kb.co.uk/server_benchmarks/vmstat-32cores_32Clients.txt\n\nhttp://www.8kb.co.uk/server_benchmarks/iostat-8cores_32Clients.txt\nhttp://www.8kb.co.uk/server_benchmarks/vmstat-8cores_32Clients.txt\n\n... and at the pessimal load for 32 cores, and the 8 core equivalent:\n\nhttp://www.8kb.co.uk/server_benchmarks/iostat-32cores_100Clients.txt\nhttp://www.8kb.co.uk/server_benchmarks/vmstat-32cores_100Clients.txt\n\nhttp://www.8kb.co.uk/server_benchmarks/iostat-8cores_100Clients.txt\nhttp://www.8kb.co.uk/server_benchmarks/vmstat-8cores_100Clients.txt\n \nvmstat shows double the context switches on 32 cores, could this be a factor? Is there anything else I'm missing there?\n\nCheers\nGlyn\n", "msg_date": "Tue, 12 Apr 2011 13:35:19 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Tue, Apr 12, 2011 at 3:54 AM, Glyn Astill <[email protected]> wrote:\n> --- On Tue, 12/4/11, Merlin Moncure <[email protected]> wrote:\n>\n>> >> The issue I'm seeing is that 8 real cores\n>> outperform 16 real\n>> >> cores, which outperform 32 real cores under high\n>> concurrency.\n>> >\n>> > With every benchmark I've done of PostgreSQL, the\n>> \"knee\" in the\n>> > performance graph comes right around ((2 * cores) +\n>> > effective_spindle_count).  With the database fully\n>> cached (as I\n>> > believe you mentioned), effective_spindle_count is\n>> zero.  If you\n>> > don't use a connection pool to limit active\n>> transactions to the\n>> > number from that formula, performance drops off.  The\n>> more CPUs you\n>> > have, the sharper the drop after the knee.\n>>\n>> I was about to say something similar with some canned\n>> advice to use a\n>> connection pooler to control this.  However, OP\n>> scaling is more or\n>> less topping out at cores / 4...yikes!.  Here are my\n>> suspicions in\n>> rough order:\n>>\n>> 1. There is scaling problem in client/network/etc.\n>> Trivially\n>> disproved, convert the test to pgbench -f and post results\n>> 2. The test is in fact i/o bound. Scaling is going to be\n>> hardware/kernel determined.  Can we see\n>> iostat/vmstat/top snipped\n>> during test run?  Maybe no-op is burning you?\n>\n> This is during my 80 clients test, this is a point at which the performance is well below that of the same machine limited to 8 cores.\n>\n> http://www.privatepaste.com/dc131ff26e\n>\n>> 3. Locking/concurrency issue in heavy_seat_function()\n>> (source for\n>> that?)  
how much writing does it do?\n>>\n>\n> No writing afaik - its a select with a few joins and subqueries - I'm pretty sure it's not writing out temp data either, but all clients are after the same data in the test - maybe theres some locks there?\n>\n>> Can we see some iobound and cpubound pgbench runs on both\n>> servers?\n>>\n>\n> Of course, I'll post when I've gotten to that.\n\nOk, there's no writing going on -- so the i/o tets aren't necessary.\nContext switches are also not too high -- the problem is likely in\npostgres or on your end.\n\nHowever, I Would still like to see:\npgbench select only tests:\npgbench -i -s 1\npgbench -S -c 8 -t 500\npgbench -S -c 32 -t 500\npgbench -S -c 80 -t 500\n\npgbench -i -s 500\npgbench -S -c 8 -t 500\npgbench -S -c 32 -t 500\npgbench -S -c 80 -t 500\n\nwrite out bench.sql with:\nbegin;\nselect * from heavy_seat_function();\nselect * from heavy_seat_function();\ncommit;\n\npgbench -n bench.sql -c 8 -t 500\npgbench -n bench.sql -c 8 -t 500\npgbench -n bench.sql -c 8 -t 500\n\nI'm still suspecting an obvious problem here. One thing we may have\noverlooked is that you are connecting and disconnecting one per\nbenchmarking step (two query executions). If you have heavy RSA\nencryption enabled on connection establishment, this could eat you.\n\nIf pgbench results confirm your scaling problems and our issue is not\nin the general area of connection establishment, it's time to break\nout the profiler :/.\n\nmerlin\n", "msg_date": "Tue, 12 Apr 2011 08:23:21 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Tue, Apr 12, 2011 at 8:23 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Apr 12, 2011 at 3:54 AM, Glyn Astill <[email protected]> wrote:\n>> --- On Tue, 12/4/11, Merlin Moncure <[email protected]> wrote:\n>>\n>>> >> The issue I'm seeing is that 8 real cores\n>>> outperform 16 real\n>>> >> cores, which outperform 32 real cores under high\n>>> concurrency.\n>>> >\n>>> > With every benchmark I've done of PostgreSQL, the\n>>> \"knee\" in the\n>>> > performance graph comes right around ((2 * cores) +\n>>> > effective_spindle_count).  With the database fully\n>>> cached (as I\n>>> > believe you mentioned), effective_spindle_count is\n>>> zero.  If you\n>>> > don't use a connection pool to limit active\n>>> transactions to the\n>>> > number from that formula, performance drops off.  The\n>>> more CPUs you\n>>> > have, the sharper the drop after the knee.\n>>>\n>>> I was about to say something similar with some canned\n>>> advice to use a\n>>> connection pooler to control this.  However, OP\n>>> scaling is more or\n>>> less topping out at cores / 4...yikes!.  Here are my\n>>> suspicions in\n>>> rough order:\n>>>\n>>> 1. There is scaling problem in client/network/etc.\n>>> Trivially\n>>> disproved, convert the test to pgbench -f and post results\n>>> 2. The test is in fact i/o bound. Scaling is going to be\n>>> hardware/kernel determined.  Can we see\n>>> iostat/vmstat/top snipped\n>>> during test run?  Maybe no-op is burning you?\n>>\n>> This is during my 80 clients test, this is a point at which the performance is well below that of the same machine limited to 8 cores.\n>>\n>> http://www.privatepaste.com/dc131ff26e\n>>\n>>> 3. Locking/concurrency issue in heavy_seat_function()\n>>> (source for\n>>> that?)  
how much writing does it do?\n>>>\n>>\n>> No writing afaik - its a select with a few joins and subqueries - I'm pretty sure it's not writing out temp data either, but all clients are after the same data in the test - maybe theres some locks there?\n>>\n>>> Can we see some iobound and cpubound pgbench runs on both\n>>> servers?\n>>>\n>>\n>> Of course, I'll post when I've gotten to that.\n>\n> Ok, there's no writing going on -- so the i/o tets aren't necessary.\n> Context switches are also not too high -- the problem is likely in\n> postgres or on your end.\n>\n> However, I Would still like to see:\n> pgbench select only tests:\n> pgbench -i -s 1\n> pgbench -S -c 8 -t 500\n> pgbench -S -c 32 -t 500\n> pgbench -S -c 80 -t 500\n>\n> pgbench -i -s 500\n> pgbench -S -c 8 -t 500\n> pgbench -S -c 32 -t 500\n> pgbench -S -c 80 -t 500\n>\n> write out bench.sql with:\n> begin;\n> select * from heavy_seat_function();\n> select * from heavy_seat_function();\n> commit;\n>\n> pgbench -n bench.sql -c 8 -t 500\n> pgbench -n bench.sql -c 8 -t 500\n> pgbench -n bench.sql -c 8 -t 500\n\nwhoops:\npgbench -n bench.sql -c 8 -t 500\npgbench -n bench.sql -c 32 -t 500\npgbench -n bench.sql -c 80 -t 500\n\nmerlin\n", "msg_date": "Tue, 12 Apr 2011 08:24:34 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "Glyn Astill <[email protected]> wrote:\n \n> Tried tweeking LOG2_NUM_LOCK_PARTITIONS between 5 and 7. My\n> results took a dive when I changed to 32 partitions, and improved\n> as I increaced to 128, but appeared to be happiest at the default\n> of 16.\n \nGood to know.\n \n>> Also, if you can profile PostgreSQL at the sweet spot and again\n>> at a pessimal load, comparing the profiles should give good clues\n>> about the points of contention.\n \n> [iostat and vmstat output]\n \nWow, zero idle and zero wait, and single digit for system. Did you\never run those RAM speed tests? (I don't remember seeing results\nfor that -- or failed to recognize them.) At this point, my best\nguess at this point is that you don't have the bandwidth to RAM to\nsupport the CPU power. Databases tend to push data around in RAM a\nlot.\n \nWhen I mentioned profiling, I was thinking more of oprofile or\nsomething like it. If it were me, I'd be going there by now.\n \n-Kevin\n", "msg_date": "Tue, 12 Apr 2011 09:43:01 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." 
}, { "msg_contents": "--- On Tue, 12/4/11, Merlin Moncure <[email protected]> wrote:\n\n> >>> Can we see some iobound and cpubound pgbench\n> runs on both\n> >>> servers?\n> >>>\n> >>\n> >> Of course, I'll post when I've gotten to that.\n> >\n> > Ok, there's no writing going on -- so the i/o tets\n> aren't necessary.\n> > Context switches are also not too high -- the problem\n> is likely in\n> > postgres or on your end.\n> >\n> > However, I Would still like to see:\n> > pgbench select only tests:\n> > pgbench -i -s 1\n> > pgbench -S -c 8 -t 500\n> > pgbench -S -c 32 -t 500\n> > pgbench -S -c 80 -t 500\n> >\n> > pgbench -i -s 500\n> > pgbench -S -c 8 -t 500\n> > pgbench -S -c 32 -t 500\n> > pgbench -S -c 80 -t 500\n> >\n> > write out bench.sql with:\n> > begin;\n> > select * from heavy_seat_function();\n> > select * from heavy_seat_function();\n> > commit;\n> >\n> > pgbench -n bench.sql -c 8 -t 500\n> > pgbench -n bench.sql -c 8 -t 500\n> > pgbench -n bench.sql -c 8 -t 500\n> \n> whoops:\n> pgbench -n bench.sql -c 8 -t 500\n> pgbench -n bench.sql -c 32 -t 500\n> pgbench -n bench.sql -c 80 -t 500\n> \n> merlin\n> \n\nRight, here they are:\n\nhttp://www.privatepaste.com/3dd777f4db\n\n\n", "msg_date": "Tue, 12 Apr 2011 17:01:49 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "--- On Tue, 12/4/11, Kevin Grittner <[email protected]> wrote:\n\n> Wow, zero idle and zero wait, and single digit for\n> system.  Did you\n> ever run those RAM speed tests?  (I don't remember\n> seeing results\n> for that -- or failed to recognize them.)  At this\n> point, my best\n> guess at this point is that you don't have the bandwidth to\n> RAM to\n> support the CPU power.  Databases tend to push data\n> around in RAM a\n> lot.\n\nI mentioned sysbench was giving me something like 3000 MB/sec on memory write tests, but nothing more.\n\nResults from Greg Smiths stream_scaling test are here:\n\nhttp://www.privatepaste.com/4338aa1196\n\n> \n> When I mentioned profiling, I was thinking more of oprofile\n> or\n> something like it.  If it were me, I'd be going there\n> by now.\n> \n\nAdvice taken, it'll be my next step.\n\nGlyn\n", "msg_date": "Tue, 12 Apr 2011 17:07:00 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." 
}, { "msg_contents": "On Tue, Apr 12, 2011 at 11:01 AM, Glyn Astill <[email protected]> wrote:\n> --- On Tue, 12/4/11, Merlin Moncure <[email protected]> wrote:\n>\n>> >>> Can we see some iobound and cpubound pgbench\n>> runs on both\n>> >>> servers?\n>> >>>\n>> >>\n>> >> Of course, I'll post when I've gotten to that.\n>> >\n>> > Ok, there's no writing going on -- so the i/o tets\n>> aren't necessary.\n>> > Context switches are also not too high -- the problem\n>> is likely in\n>> > postgres or on your end.\n>> >\n>> > However, I Would still like to see:\n>> > pgbench select only tests:\n>> > pgbench -i -s 1\n>> > pgbench -S -c 8 -t 500\n>> > pgbench -S -c 32 -t 500\n>> > pgbench -S -c 80 -t 500\n>> >\n>> > pgbench -i -s 500\n>> > pgbench -S -c 8 -t 500\n>> > pgbench -S -c 32 -t 500\n>> > pgbench -S -c 80 -t 500\n>> >\n>> > write out bench.sql with:\n>> > begin;\n>> > select * from heavy_seat_function();\n>> > select * from heavy_seat_function();\n>> > commit;\n>> >\n>> > pgbench -n bench.sql -c 8 -t 500\n>> > pgbench -n bench.sql -c 8 -t 500\n>> > pgbench -n bench.sql -c 8 -t 500\n>>\n>> whoops:\n>> pgbench -n bench.sql -c 8 -t 500\n>> pgbench -n bench.sql -c 32 -t 500\n>> pgbench -n bench.sql -c 80 -t 500\n>>\n>> merlin\n>>\n>\n> Right, here they are:\n>\n> http://www.privatepaste.com/3dd777f4db\n\nyour results unfortunately confirmed the worst -- no easy answers on\nthis one :(. Before breaking out the profiler, can you take some\nrandom samples of:\n\nselect count(*) from pg_stat_activity where waiting;\n\nto see if you have any locking issues?\nAlso, are you sure your function executions are relatively free of\nside effects?\nI can take a look at the code off list if you'd prefer to keep it discrete.\n\nmerlin\n", "msg_date": "Tue, 12 Apr 2011 11:12:37 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "Glyn Astill <[email protected]> wrote:\n \n> Results from Greg Smiths stream_scaling test are here:\n> \n> http://www.privatepaste.com/4338aa1196\n \nWell, that pretty much clinches it. Your RAM access tops out at 16\nprocessors. It appears that your processors are spending most of\ntheir time waiting for and contending for the RAM bus.\n \nI have gotten machines in where moving a jumper, flipping a DIP\nswitch, or changing BIOS options from the default made a big\ndifference. I'd be looking at the manuals for my motherboard and\nBIOS right now to see what options there might be to improve that.\n \n-Kevin\n", "msg_date": "Tue, 12 Apr 2011 11:40:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Tue, Apr 12, 2011 at 6:40 PM, Kevin Grittner\n<[email protected]> wrote:\n>\n> Well, that pretty much clinches it.  Your RAM access tops out at 16\n> processors.  It appears that your processors are spending most of\n> their time waiting for and contending for the RAM bus.\n\nIt tops, but it doesn't drop.\n\nI'd propose that the perceived drop in TPS is due to cache contention\n- ie, more processes fighting for the scarce cache means less\nefficient use of the (constant upwards of 16 processes) bandwidth.\n\nSo... 
the solution would be to add more servers, rather than just sockets.\n(or a server with more sockets *and* more bandwidth)\n", "msg_date": "Tue, 12 Apr 2011 18:43:55 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "Hi,\n\nI think that a NUMA architecture machine can solve the problem....\n\nA +\nLe 11/04/2011 15:04, Glyn Astill a �crit :\n>\n> Hi Guys,\n>\n> I'm just doing some tests on a new server running one of our heavy select functions (the select part of a plpgsql function to allocate seats) concurrently. We do use connection pooling and split out some selects to slony slaves, but the tests here are primeraly to test what an individual server is capable of.\n>\n> The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz, our current servers are 2 x 4 core Xeon E5320 CPUs at 2Ghz.\n>\n> What I'm seeing is when the number of clients is greater than the number of cores, the new servers perform better on fewer cores.\n>\n> Has anyone else seen this behaviour? I'm guessing this is either a hardware limitation or something to do with linux process management / scheduling? Any idea what to look into?\n>\n> My benchmark utility is just using a little .net/npgsql app that runs increacing numbers of clients concurrently, each client runs a specified number of iterations of any sql I specify.\n>\n> I've posted some results and the test program here:\n>\n> http://www.8kb.co.uk/server_benchmarks/\n>\n>\n\n\n-- \nFr�d�ric BROUARD - expert SGBDR et SQL - MVP SQL Server - 06 11 86 40 66\nLe site sur le langage SQL et les SGBDR : http://sqlpro.developpez.com\nEnseignant Arts & M�tiers PACA, ISEN Toulon et CESI/EXIA Aix en Provence\nAudit, conseil, expertise, formation, mod�lisation, tuning, optimisation\n*********************** http://www.sqlspot.com *************************\n\n", "msg_date": "Tue, 12 Apr 2011 18:58:47 +0200", "msg_from": "\"F. BROUARD / SQLpro\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "Kevin Grittner wrote:\n> Glyn Astill <[email protected]> wrote:\n> \n> \n>> Results from Greg Smiths stream_scaling test are here:\n>>\n>> http://www.privatepaste.com/4338aa1196\n>> \n> \n> Well, that pretty much clinches it. Your RAM access tops out at 16\n> processors. It appears that your processors are spending most of\n> their time waiting for and contending for the RAM bus.\n> \n\nI've pulled Glyn's results into \nhttps://github.com/gregs1104/stream-scaling so they're easy to compare \nagainst similar processors, his system is the one labled 4 X X7550. I'm \nhearing this same story from multiple people lately: these 32+ core \nservers bottleneck on aggregate memory speed with running PostgreSQL \nlong before the CPUs are fully utilized. This server is close to \nmaximum memory utilization at 8 cores, and the small increase in gross \nthroughput above that doesn't seem to be making up for the loss in L1 \nand L2 thrashing from trying to run more. These systems with many cores \ncan only be used fully if you have a program that can work efficiency \nsome of the time with just local CPU resources. That's very rarely the \ncase for a database that's moving 8K pages, tuple caches, and other \nforms of working memory around all the time.\n\n\n> I have gotten machines in where moving a jumper, flipping a DIP\n> switch, or changing BIOS options from the default made a big\n> difference. 
I'd be looking at the manuals for my motherboard and\n> BIOS right now to see what options there might be to improve that\n\nI already forwarded Glyn a good article about tuning these Dell BIOSs in \nparticular from an interesting blog series others here might like too:\n\nhttp://bleything.net/articles/postgresql-benchmarking-memory.html\n\nBen Bleything is doing a very thorough walk-through of server hardware \nvalidation, and as is often the case he's already found one major \nproblem with the vendor config he had to fix to get expected results.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 12 Apr 2011 10:00:39 -0700", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "Scott Marlowe wrote:\n> Have you tried running the memory stream benchmark Greg Smith had\n> posted here a while back? It'll let you know if you're memory is\n> bottlenecking. Right now my 48 core machines are the king of that\n> benchmark with something like 70+Gig a second.\n> \n\nThe big Opterons are still the front-runners here, but not with 70GB/s \nanymore. Earlier versions of stream-scaling didn't use nearly enough \ndata to avoid L3 cache in the processors interfering with results. More \nrecent tests I've gotten in done after I expanded the default test size \nfor them show the Opterons normally hitting the same ~35GB/s maximum \nthroughput that the Intel processors get out of similar DDR3/1333 sets. \nThere are some outliers where >50GB/s still shows up. I'm not sure if I \nreally believe them though; attempts to increase the test size now hit a \n32-bit limit inside stream.c, and I think that's not really big enough \nto avoid L3 cache effects here.\n\nIn the table at https://github.com/gregs1104/stream-scaling the 4 X 6172 \nserver is similar to Scott's system. I believe the results for 8 \n(37613) and 48 cores (32301) there. I remain somewhat suspicious that \nthe higher reuslts of 40 - 51GB/s shown between 16 and 32 cores may be \ninflated by caching. At this point I'll probably need direct access to \none of them to resolve this for sure. I've made a lot of progress with \nother people's servers, but complete trust in those particular results \nstill isn't there yet.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 12 Apr 2011 10:10:00 -0700", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Tue, Apr 12, 2011 at 12:00 PM, Greg Smith <[email protected]> wrote:\n> Kevin Grittner wrote:\n>>\n>> Glyn Astill <[email protected]> wrote:\n>>\n>>>\n>>> Results from Greg Smiths stream_scaling test are here:\n>>>\n>>> http://www.privatepaste.com/4338aa1196\n>>>\n>>\n>>  Well, that pretty much clinches it.  Your RAM access tops out at 16\n>> processors.  It appears that your processors are spending most of\n>> their time waiting for and contending for the RAM bus.\n>>\n>\n> I've pulled Glyn's results into https://github.com/gregs1104/stream-scaling\n> so they're easy to compare against similar processors, his system is the one\n> labled 4 X X7550.  
I'm hearing this same story from multiple people lately:\n>  these 32+ core servers bottleneck on aggregate memory speed with running\n> PostgreSQL long before the CPUs are fully utilized.  This server is close to\n> maximum memory utilization at 8 cores, and the small increase in gross\n> throughput above that doesn't seem to be making up for the loss in L1 and L2\n> thrashing from trying to run more.  These systems with many cores can only\n> be used fully if you have a program that can work efficiency some of the\n> time with just local CPU resources.  That's very rarely the case for a\n> database that's moving 8K pages, tuple caches, and other forms of working\n> memory around all the time.\n>\n>\n>> I have gotten machines in where moving a jumper, flipping a DIP\n>> switch, or changing BIOS options from the default made a big\n>> difference.  I'd be looking at the manuals for my motherboard and\n>> BIOS right now to see what options there might be to improve that\n>\n> I already forwarded Glyn a good article about tuning these Dell BIOSs in\n> particular from an interesting blog series others here might like too:\n>\n> http://bleything.net/articles/postgresql-benchmarking-memory.html\n>\n> Ben Bleything is doing a very thorough walk-through of server hardware\n> validation, and as is often the case he's already found one major problem\n> with the vendor config he had to fix to get expected results.\n\nFor posterity, since it looks like you guys have nailed this one, I\ntook a look at some of the code off list and I can confirm there is no\nobvious bottleneck coming from locking type issues. The functions are\n'stable' as implemented with no fancy tricks.\n\nmerlin\n", "msg_date": "Tue, 12 Apr 2011 12:14:11 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "When purchasing the intel 7500 series, please make sure to check the hemisphere mode of your memory configuration. There is a HUGE difference in the memory configuration around 50% speed if you don't populate all the memory slots on the controllers properly.\r\n\r\nhttps://globalsp.ts.fujitsu.com/dmsp/docs/wp-nehalem-ex-memory-performance-ww-en.pdf\r\n\r\n- John\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Merlin Moncure\r\nSent: Tuesday, April 12, 2011 12:14 PM\r\nTo: Greg Smith\r\nCc: Kevin Grittner; [email protected]; Steve Clark; Glyn Astill; Joshua D. Drake; Scott Marlowe; [email protected]\r\nSubject: Re: [PERFORM] Linux: more cores = less concurrency.\r\n\r\nOn Tue, Apr 12, 2011 at 12:00 PM, Greg Smith <[email protected]> wrote:\r\n> Kevin Grittner wrote:\r\n>>\r\n>> Glyn Astill <[email protected]> wrote:\r\n>>\r\n>>>\r\n>>> Results from Greg Smiths stream_scaling test are here:\r\n>>>\r\n>>> http://www.privatepaste.com/4338aa1196\r\n>>>\r\n>>\r\n>>  Well, that pretty much clinches it.  Your RAM access tops out at 16 \r\n>> processors.  It appears that your processors are spending most of \r\n>> their time waiting for and contending for the RAM bus.\r\n>>\r\n>\r\n> I've pulled Glyn's results into \r\n> https://github.com/gregs1104/stream-scaling\r\n> so they're easy to compare against similar processors, his system is \r\n> the one labled 4 X X7550.  I'm hearing this same story from multiple people lately:\r\n>  these 32+ core servers bottleneck on aggregate memory speed with \r\n> running PostgreSQL long before the CPUs are fully utilized.  
This \r\n> server is close to maximum memory utilization at 8 cores, and the \r\n> small increase in gross throughput above that doesn't seem to be \r\n> making up for the loss in L1 and L2 thrashing from trying to run more.  \r\n> These systems with many cores can only be used fully if you have a \r\n> program that can work efficiency some of the time with just local CPU \r\n> resources.  That's very rarely the case for a database that's moving \r\n> 8K pages, tuple caches, and other forms of working memory around all the time.\r\n>\r\n>\r\n>> I have gotten machines in where moving a jumper, flipping a DIP \r\n>> switch, or changing BIOS options from the default made a big \r\n>> difference.  I'd be looking at the manuals for my motherboard and \r\n>> BIOS right now to see what options there might be to improve that\r\n>\r\n> I already forwarded Glyn a good article about tuning these Dell BIOSs \r\n> in particular from an interesting blog series others here might like too:\r\n>\r\n> http://bleything.net/articles/postgresql-benchmarking-memory.html\r\n>\r\n> Ben Bleything is doing a very thorough walk-through of server hardware \r\n> validation, and as is often the case he's already found one major \r\n> problem with the vendor config he had to fix to get expected results.\r\n\r\nFor posterity, since it looks like you guys have nailed this one, I took a look at some of the code off list and I can confirm there is no obvious bottleneck coming from locking type issues. The functions are 'stable' as implemented with no fancy tricks.\r\n\r\n\r\nmerlin\r\n", "msg_date": "Tue, 12 Apr 2011 14:50:26 -0400", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." 
}, { "msg_contents": "--- On Tue, 12/4/11, Greg Smith <[email protected]> wrote:\n\n> From: Greg Smith <[email protected]>\n> Subject: Re: [PERFORM] Linux: more cores = less concurrency.\n> To: \"Kevin Grittner\" <[email protected]>\n> Cc: [email protected], \"Steve Clark\" <[email protected]>, \"Glyn Astill\" <[email protected]>, \"Joshua D. Drake\" <[email protected]>, \"Scott Marlowe\" <[email protected]>, [email protected]\n> Date: Tuesday, 12 April, 2011, 18:00\n> Kevin Grittner wrote:\n> > Glyn Astill <[email protected]>\n> wrote:\n> >    \n> >> Results from Greg Smiths stream_scaling test are\n> here:\n> >> \n> >> http://www.privatepaste.com/4338aa1196\n> >>     \n> >  Well, that pretty much clinches it.  Your\n> RAM access tops out at 16\n> > processors.  It appears that your processors are\n> spending most of\n> > their time waiting for and contending for the RAM\n> bus.\n> >   \n> \n> I've pulled Glyn's results into https://github.com/gregs1104/stream-scaling so they're\n> easy to compare against similar processors, his system is\n> the one labled 4 X X7550.  I'm hearing this same story\n> from multiple people lately:  these 32+ core servers\n> bottleneck on aggregate memory speed with running PostgreSQL\n> long before the CPUs are fully utilized.  This server\n> is close to maximum memory utilization at 8 cores, and the\n> small increase in gross throughput above that doesn't seem\n> to be making up for the loss in L1 and L2 thrashing from\n> trying to run more.  These systems with many cores can\n> only be used fully if you have a program that can work\n> efficiency some of the time with just local CPU\n> resources.  That's very rarely the case for a database\n> that's moving 8K pages, tuple caches, and other forms of\n> working memory around all the time.\n> \n> \n> > I have gotten machines in where moving a jumper,\n> flipping a DIP\n> > switch, or changing BIOS options from the default made\n> a big\n> > difference.  I'd be looking at the manuals for my\n> motherboard and\n> > BIOS right now to see what options there might be to\n> improve that\n> \n> I already forwarded Glyn a good article about tuning these\n> Dell BIOSs in particular from an interesting blog series\n> others here might like too:\n> \n> http://bleything.net/articles/postgresql-benchmarking-memory.html\n> \n> Ben Bleything is doing a very thorough walk-through of\n> server hardware validation, and as is often the case he's\n> already found one major problem with the vendor config he\n> had to fix to get expected results.\n> \n\nThanks Greg. I've been through that post, but unfortunately there's no settings that make a difference.\n\nHowever upon further investigation and looking at the manual for the R910 here\n\nhttp://support.dell.com/support/edocs/systems/per910/en/HOM/HTML/install.htm#wp1266264\n\nI've discovered we only have 4 of the 8 memory risers, and the manual states that in this configuration we are running in \"Power Optimized\" mode, rather than \"Performance Optimized\".\n\nWe've got two of these machines, so I've just pulled all the risers from one system, removed half the memory as indicated by that document from Dell above, and now I'm seeing almost double the throughput.\n\n\n", "msg_date": "Wed, 13 Apr 2011 10:48:01 +0100 (BST)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux: more cores = less concurrency." 
}, { "msg_contents": "If postgres is memory bandwidth constrained, what can be done to reduce\nits bandwidth use?\n\nHuge Pages could help some, by reducing page table lookups and making\noverall access more efficient.\nCompressed pages (speedy / lzo) in memory can help trade CPU cycles for\nmemory usage for certain memory segments/pages -- this could potentially\nsave a lot of I/O too if more pages fit in RAM as a result, and also make\ncaches more effective.\n\nAs I've noted before, the optimizer inappropriately choses the larger side\nof a join to hash instead of the smaller one in many cases on hash joins,\nwhich is less cache efficient.\nDual-pivot quicksort is more cache firendly than Postgres' single pivit\none and uses less memory bandwidth on average (fewer swaps, but the same\nnumber of compares).\n\n\n\nOn 4/13/11 2:48 AM, \"Glyn Astill\" <[email protected]> wrote:\n\n>--- On Tue, 12/4/11, Greg Smith <[email protected]> wrote:\n>\n>>\n>> \n>\n>Thanks Greg. I've been through that post, but unfortunately there's no\n>settings that make a difference.\n>\n>However upon further investigation and looking at the manual for the R910\n>here\n>\n>http://support.dell.com/support/edocs/systems/per910/en/HOM/HTML/install.h\n>tm#wp1266264\n>\n>I've discovered we only have 4 of the 8 memory risers, and the manual\n>states that in this configuration we are running in \"Power Optimized\"\n>mode, rather than \"Performance Optimized\".\n>\n>We've got two of these machines, so I've just pulled all the risers from\n>one system, removed half the memory as indicated by that document from\n>Dell above, and now I'm seeing almost double the throughput.\n>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 13 Apr 2011 09:33:26 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "Scott Carey wrote:\n> If postgres is memory bandwidth constrained, what can be done to reduce\n> its bandwidth use?\n>\n> Huge Pages could help some, by reducing page table lookups and making\n> overall access more efficient.\n> Compressed pages (speedy / lzo) in memory can help trade CPU cycles for\n> memory usage for certain memory segments/pages -- this could potentially\n> save a lot of I/O too if more pages fit in RAM as a result, and also make\n> caches more effective.\n> \n\nThe problem with a lot of these ideas is that they trade the memory \nproblem for increased disruption to the CPU L1 and L2 caches. I don't \nknow how much that moves the bottleneck forward. And not every workload \nis memory constrained, either, so those that aren't might suffer from \nthe same optimizations that help in this situation.\n\nI just posted my slides from my MySQL conference talk today at \nhttp://projects.2ndquadrant.com/talks , and those include some graphs of \nrecent data collected with stream-scaling. The current situation is \nreally strange in both Intel and AMD's memory architectures. I'm even \nseeing situations where lightly loaded big servers are actually \noutperformed by small ones running the same workload. The 32 and 48 \ncore systems using server-class DDR3/1333 just don't have the bandwidth \nto a single core that, say, an i7 desktop using triple-channel DDR3-1600 \ndoes. 
The trade-offs here are extremely hardware and workload \ndependent, and it's very easy to tune for one combination while slowing \nanother.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 13 Apr 2011 21:23:23 -0700", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "* Jesper Krogh:\n\n> If you have a 1 socket system, all of your data can be fetched from\n> \"local\" ram seen from you cpu, on a 2 socket, 50% of your accesses\n> will be \"way slower\", 4 socket even worse.\n\nThere are non-NUMA multi-socket systems, so this doesn't apply in all\ncases. (The E5320-based system is likely non-NUMA.)\n\nSpeaking about NUMA, do you know if there are some non-invasive tools\nwhich can be used to monitor page migration and off-node memory\naccesses?\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Thu, 14 Apr 2011 10:09:19 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "2011/4/14 Florian Weimer <[email protected]>:\n> * Jesper Krogh:\n>\n>> If you have a 1 socket system, all of your data can be fetched from\n>> \"local\" ram seen from you cpu, on a 2 socket, 50% of your accesses\n>> will be \"way slower\", 4 socket even worse.\n>\n> There are non-NUMA multi-socket systems, so this doesn't apply in all\n> cases.  (The E5320-based system is likely non-NUMA.)\n>\n> Speaking about NUMA, do you know if there are some non-invasive tools\n> which can be used to monitor page migration and off-node memory\n> accesses?\n\nI am unsure it is exactly what you are looking for, but linux do\nprovide access to counters in:\n/sys/devices/system/node/node*/numastat\n\nI also find usefull to check meminfo per node instead of via /proc\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 14 Apr 2011 13:07:24 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "\n\nOn 4/13/11 9:23 PM, \"Greg Smith\" <[email protected]> wrote:\n\n>Scott Carey wrote:\n>> If postgres is memory bandwidth constrained, what can be done to reduce\n>> its bandwidth use?\n>>\n>> Huge Pages could help some, by reducing page table lookups and making\n>> overall access more efficient.\n>> Compressed pages (speedy / lzo) in memory can help trade CPU cycles for\n>> memory usage for certain memory segments/pages -- this could potentially\n>> save a lot of I/O too if more pages fit in RAM as a result, and also\n>>make\n>> caches more effective.\n>> \n>\n>The problem with a lot of these ideas is that they trade the memory\n>problem for increased disruption to the CPU L1 and L2 caches. I don't\n>know how much that moves the bottleneck forward. 
And not every workload\n>is memory constrained, either, so those that aren't might suffer from\n>the same optimizations that help in this situation.\n\nCompression has this problem, but I'm not sure where the plural \"a lot of\nthese ideas\" comes from.\n\nHuge Pages helps caches.\nDual-Pivot quicksort is more cache friendly and is _always_ equal to or\nfaster than traditional quicksort (its a provably improved algorithm).\nSmaller hash tables help caches.\n\n>\n>I just posted my slides from my MySQL conference talk today at\n>http://projects.2ndquadrant.com/talks , and those include some graphs of\n>recent data collected with stream-scaling. The current situation is\n>really strange in both Intel and AMD's memory architectures. I'm even\n>seeing situations where lightly loaded big servers are actually\n>outperformed by small ones running the same workload. The 32 and 48\n>core systems using server-class DDR3/1333 just don't have the bandwidth\n>to a single core that, say, an i7 desktop using triple-channel DDR3-1600\n>does. The trade-offs here are extremely hardware and workload\n>dependent, and it's very easy to tune for one combination while slowing\n>another.\n>\n>-- \n>Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n>PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n>\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n\n", "msg_date": "Thu, 14 Apr 2011 13:05:36 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Thu, Apr 14, 2011 at 10:05 PM, Scott Carey <[email protected]> wrote:\n> Huge Pages helps caches.\n> Dual-Pivot quicksort is more cache friendly and is _always_ equal to or\n> faster than traditional quicksort (its a provably improved algorithm).\n\nIf you want a cache-friendly sorting algorithm, you need mergesort.\n\nI don't know any algorithm as friendly to caches as mergesort.\n\nQuicksort could be better only when the sorting buffer is guaranteed\nto fit on the CPU's cache, and that's usually just a few 4kb pages.\n", "msg_date": "Thu, 14 Apr 2011 22:19:17 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "\nOn 4/14/11 1:19 PM, \"Claudio Freire\" <[email protected]> wrote:\n\n>On Thu, Apr 14, 2011 at 10:05 PM, Scott Carey <[email protected]>\n>wrote:\n>> Huge Pages helps caches.\n>> Dual-Pivot quicksort is more cache friendly and is _always_ equal to or\n>> faster than traditional quicksort (its a provably improved algorithm).\n>\n>If you want a cache-friendly sorting algorithm, you need mergesort.\n>\n>I don't know any algorithm as friendly to caches as mergesort.\n>\n>Quicksort could be better only when the sorting buffer is guaranteed\n>to fit on the CPU's cache, and that's usually just a few 4kb pages.\n\nOf mergesort variants, Timsort is a recent general purpose variant favored\nby many since it is sub- O(n log(n)) on partially sorted data.\n\nWhich work best under which circumstances depends a lot on the size of the\ndata, size of the elements, cost of the compare function, whether you're\nsorting the data directly or sorting pointers, and other factors.\n\nMergesort may be more cache friendly (?) but might use more memory\nbandwidth. I'm not sure.\n\nI do know that dual-pivot quicksort provably causes fewer swaps (but the\nsame # of compares) as the usual single-pivot quicksort. 
And swaps are a\nlot slower than you would expect due to the effects on processor caches.\nTherefore it might help with multiprocessor scalability by reducing\nmemory/cache pressure.\n\n", "msg_date": "Thu, 14 Apr 2011 15:42:46 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." }, { "msg_contents": "On Fri, Apr 15, 2011 at 12:42 AM, Scott Carey <[email protected]> wrote:\n> I do know that dual-pivot quicksort provably causes fewer swaps (but the\n> same # of compares) as the usual single-pivot quicksort.  And swaps are a\n> lot slower than you would expect due to the effects on processor caches.\n> Therefore it might help with multiprocessor scalability by reducing\n> memory/cache pressure.\n\nI agree, and it's quite non-disruptive - ie, a drop-in replacement for\nquicksort, whereas mergesort or timsort both require bigger changes\nand heavier profiling.\n", "msg_date": "Fri, 15 Apr 2011 08:57:45 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux: more cores = less concurrency." } ]
[ { "msg_contents": "Hi All,\n\nI have setup postgres 9 master slave streaming replication but\nexperiencing slave lagging sometimes by 50 min to 60 min. I am not\ngetting exact reason for slave lag delay. Below are the details:\n\n1. Master table contains partition tables with frequent updates.\n2. Slave is used for report generation so long running queries hit slave.\n3. ANALYZE runs every hour on partition table on master.\n4. postgresql.conf:\narchive_timeout = 900\ncheckpoint_segments = 500\ncheckpoint_timeout = 30min\ncheckpoint_warning = 2000s\nmax_standby_archive_delay = -1\n\nI noticed that whenever long running query executes on slave, slave\nstarts lagging by master and as soon as query completes log files get\napplied immediately. Please help how can I avoid slave lag.\n\nThanks\n\nRegards\nSaurabh Agrawal\n", "msg_date": "Mon, 11 Apr 2011 19:40:55 +0530", "msg_from": "Saurabh Agrawal <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 9 slave lagging" }, { "msg_contents": "On Mon, Apr 11, 2011 at 9:10 AM, Saurabh Agrawal <[email protected]> wrote:\n> Hi All,\n>\n> I have setup postgres 9 master slave streaming replication but\n> experiencing slave lagging sometimes by 50 min to 60 min. I am not\n> getting exact reason for slave lag delay. Below are the details:\n>\n> 1. Master table contains partition tables with frequent updates.\n> 2. Slave is used for report generation so long running queries hit slave.\n> 3. ANALYZE runs every hour on partition table on master.\n> 4. postgresql.conf:\n> archive_timeout = 900\n> checkpoint_segments = 500\n> checkpoint_timeout = 30min\n> checkpoint_warning = 2000s\n> max_standby_archive_delay = -1\n>\n> I noticed that whenever long running query executes on slave, slave\n> starts lagging by master and as soon as query completes log files get\n> applied immediately. Please help how can I avoid slave lag.\n\nYou answered your own question. Long running queries on the slave can\nhold up replay so that the slave can see consistent data....this is\nexplained in detail in the documentation You deliberately configured\nthe slave to do this: max_standby_archive_delay -1, which mains defer\nreplay forever so you can get queries to complete. You can set this\nlower to get more responsive replication, but that means you need to\nbe prepared to have long running queries on the slave fail.\n\nThe larger answer is that a particular slave can be configured to\nsupport long running queries or be very responsive but not both. You\ncan always configure 2 slaves of course with settings optimized for\nspecific purposes.\n\nmerlin\n", "msg_date": "Mon, 11 Apr 2011 17:39:38 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9 slave lagging" } ]
[ { "msg_contents": "I have a database that contains many tables, each with some common \ncharacteristics. For legacy reasons, they have to be implemented in a \nway so that they are *all* searchable by an older identifier to find the \nnewer identifier. To do this, we've used table inheritance.\n\nEach entry has an id, as well as a legacyid1 and legacyid2. There's a \nmaster table that the application uses, containing a base representation \nand common characteristics:\n\nobjects ( id, ... )\nitem ( id, legacyid1, legacyid2 )\n | - itemXX\n | - itemYY\n\nThere is nothing at all in the item table, it's just used for \ninheritance. However, weird things happen when this table is joined:\n\nEXPLAIN ANALYZE SELECT * FROM objects INNER JOIN item f USING ( id );\n\n QUERY PLAN\n------------\n Hash Join (cost=457943.85..1185186.17 rows=8643757 width=506)\n Hash Cond: (f.id = objects.id)\n -> Append (cost=0.00..224458.57 rows=8643757 width=20)\n -> Seq Scan on item f (cost=0.00..26.30 rows=1630 width=20)\n -> Seq Scan on itemXX f (cost=0.00..1.90 rows=90 width=20)\n -> Seq Scan on itemYY f (cost=0.00..7.66 rows=266 width=20)\n -> Seq Scan on itemZZ f (cost=0.00..1.02 rows=2 width=20)\n ...\n -> Hash (cost=158447.49..158447.49 rows=3941949 width=490)\n -> Seq Scan on objects (cost=0.00..158447.49 rows=3941949 \nwidth=490)\n\nThis scans everything over everything, and obviously takes forever \n(there are millions of rows in the objects table, and tens of thousands \nin each itemXX table).\n\nHowever, if I disable seqscan (set enable_seqscan=false), I get the \nfollowing plan:\n\n QUERY PLAN\n------------\n Hash Join (cost=10001298843.53..290002337961.71 rows=8643757 width=506)\n Hash Cond: (f.id = objects.id)\n -> Append (cost=10000000000.00..290000536334.43 rows=8643757 width=20)\n -> Seq Scan on item f (cost=10000000000.00..10000000026.30 \nrows=1630 width=20)\n -> Index Scan using xxx_pkey on itemXX f (cost=0.00..10.60 \nrows=90 width=20)\n -> Index Scan using yyy_pkey on itemYY f (cost=0.00..25.24 \nrows=266 width=20)\n -> Index Scan using zzz_pkey on itemZZ f (cost=0.00..9.28 \nrows=2 width=20)\n ...\n -> Hash (cost=999347.17..999347.17 rows=3941949 width=490)\n -> Index Scan using objects_pkey on objects \n(cost=0.00..999347.17 rows=3941949 width=490)\n\nThis seems like a much more sensible query plan. But it seems to think \ndoing a sequential scan on the *empty* item table is excessively \nexpensive in this case.\n\nAside from enable_seqscan=false, is there any way I can make the query \nplanner not balk over doing a seqscan on an empty table?\n\nThanks,\nLucas Madar\n\n", "msg_date": "Mon, 11 Apr 2011 13:11:58 -0700", "msg_from": "Lucas Madar <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance when joining against inherited tables" }, { "msg_contents": "On 04/11/2011 03:11 PM, Lucas Madar wrote:\n\n> EXPLAIN ANALYZE SELECT * FROM objects INNER JOIN item f USING ( id );\n>\n> This scans everything over everything, and obviously takes forever\n> (there are millions of rows in the objects table, and tens of thousands\n> in each itemXX table).\n\nWhat is your constraint_exclusion setting? This needs to be 'ON' for the \ncheck constraints you use to enforce your inheritance rules to work right.\n\nYou *do* have check constraints on all your child tables, right? Just in \ncase, please refer to the doc on table partitioning:\n\nhttp://www.postgresql.org/docs/current/static/ddl-partitioning.html\n\nAlso, your example has no where clause. 
Without a where clause, \nconstraint exclusion won't even function. How is the database supposed \nto know that matching a 4M row table against several partitioned tables \nwill result in few matches? All it really has are stats on your joined \nid for this particular query, and you're basically telling to join all \nof them. That usually calls for a sequence scan, because millions of \nindex seeks will almost always be slower than a few sequence scans.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Tue, 12 Apr 2011 08:22:05 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance when joining against inherited tables" }, { "msg_contents": "On Mon, Apr 11, 2011 at 4:11 PM, Lucas Madar <[email protected]> wrote:\n> I have a database that contains many tables, each with some common\n> characteristics. For legacy reasons, they have to be implemented in a way so\n> that they are *all* searchable by an older identifier to find the newer\n> identifier. To do this, we've used table inheritance.\n>\n> Each entry has an id, as well as a legacyid1 and legacyid2. There's a master\n> table that the application uses, containing a base representation and common\n> characteristics:\n>\n> objects ( id, ... )\n> item ( id, legacyid1, legacyid2 )\n>  | - itemXX\n>  | - itemYY\n>\n> There is nothing at all in the item table, it's just used for inheritance.\n> However, weird things happen when this table is joined:\n>\n> EXPLAIN ANALYZE SELECT * FROM objects INNER JOIN item f USING ( id );\n>\n>  QUERY PLAN\n> ------------\n>  Hash Join  (cost=457943.85..1185186.17 rows=8643757 width=506)\n>   Hash Cond: (f.id = objects.id)\n>   ->  Append  (cost=0.00..224458.57 rows=8643757 width=20)\n>         ->  Seq Scan on item f  (cost=0.00..26.30 rows=1630 width=20)\n>         ->  Seq Scan on itemXX f  (cost=0.00..1.90 rows=90 width=20)\n>         ->  Seq Scan on itemYY f  (cost=0.00..7.66 rows=266 width=20)\n>         ->  Seq Scan on itemZZ f  (cost=0.00..1.02 rows=2 width=20)\n>         ...\n>   ->  Hash  (cost=158447.49..158447.49 rows=3941949 width=490)\n>         ->  Seq Scan on objects  (cost=0.00..158447.49 rows=3941949\n> width=490)\n>\n> This scans everything over everything, and obviously takes forever (there\n> are millions of rows in the objects table, and tens of thousands in each\n> itemXX table).\n>\n> However, if I disable seqscan (set enable_seqscan=false), I get the\n> following plan:\n>\n>  QUERY PLAN\n> ------------\n>  Hash Join  (cost=10001298843.53..290002337961.71 rows=8643757 width=506)\n>   Hash Cond: (f.id = objects.id)\n>   ->  Append  (cost=10000000000.00..290000536334.43 rows=8643757 width=20)\n>         ->  Seq Scan on item f  (cost=10000000000.00..10000000026.30\n> rows=1630 width=20)\n>         ->  Index Scan using xxx_pkey on itemXX f  (cost=0.00..10.60 rows=90\n> width=20)\n>         ->  Index Scan using yyy_pkey on itemYY f  (cost=0.00..25.24\n> rows=266 width=20)\n>         ->  Index Scan using zzz_pkey on itemZZ f  (cost=0.00..9.28 rows=2\n> width=20)\n>         ...\n>   ->  Hash  (cost=999347.17..999347.17 rows=3941949 width=490)\n>         ->  Index Scan using objects_pkey on objects (cost=0.00..999347.17\n> rows=3941949 width=490)\n>\n> This seems like a much more sensible query 
plan.\n\nI don't think so. Scanning the index to extract all the rows in a\ntable is typically going to be a lot slower than a sequential scan.\n\nA more interesting question is why you're not getting a plan like this:\n\nNested Loop\n-> Seq Scan on objects\n-> Append\n -> Index Scan using xxx_pkey on itemXX\n -> Index Scan using yyy_pkey on itemYY\n -> Index Scan using zzz_pkey on itemZZ\n\n> But it seems to think doing\n> a sequential scan on the *empty* item table is excessively expensive in this\n> case.\n>\n> Aside from enable_seqscan=false, is there any way I can make the query\n> planner not balk over doing a seqscan on an empty table?\n\nWhy would you care? A sequential scan of an empty table is very fast.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 11 May 2011 12:38:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance when joining against inherited tables" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> A more interesting question is why you're not getting a plan like this:\n\n> Nested Loop\n> -> Seq Scan on objects\n> -> Append\n> -> Index Scan using xxx_pkey on itemXX\n> -> Index Scan using yyy_pkey on itemYY\n> -> Index Scan using zzz_pkey on itemZZ\n\nProbably because there are 4 million rows in the objects table.\n\nOr maybe it's a pre-8.2 database and can't even generate such a plan.\nBut if it did generate it, it would almost certainly have decided that\nthis was more expensive than a hash or merge join.\n\nPeople have this weird idea that the existence of an index ought to make\nenormous joins free ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 May 2011 13:00:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance when joining against inherited tables " }, { "msg_contents": "On 05/11/2011 09:38 AM, Robert Haas wrote:\n>> However, if I disable seqscan (set enable_seqscan=false), I get the\n>> following plan:\n>>\n>> QUERY PLAN\n>> ------------\n>> Hash Join (cost=10001298843.53..290002337961.71 rows=8643757 width=506)\n>> Hash Cond: (f.id = objects.id)\n>> -> Append (cost=10000000000.00..290000536334.43 rows=8643757 width=20)\n>> -> Seq Scan on item f (cost=10000000000.00..10000000026.30\n>> rows=1630 width=20)\n>> -> Index Scan using xxx_pkey on itemXX f (cost=0.00..10.60 rows=90\n>> width=20)\n>> -> Index Scan using yyy_pkey on itemYY f (cost=0.00..25.24\n>> rows=266 width=20)\n>> -> Index Scan using zzz_pkey on itemZZ f (cost=0.00..9.28 rows=2\n>> width=20)\n>> ...\n>> -> Hash (cost=999347.17..999347.17 rows=3941949 width=490)\n>> -> Index Scan using objects_pkey on objects (cost=0.00..999347.17\n>> rows=3941949 width=490)\n>>\n>> This seems like a much more sensible query plan.\n> I don't think so. Scanning the index to extract all the rows in a\n> table is typically going to be a lot slower than a sequential scan.\n>\n> A more interesting question is why you're not getting a plan like this:\n>\n> Nested Loop\n> -> Seq Scan on objects\n> -> Append\n> -> Index Scan using xxx_pkey on itemXX\n> -> Index Scan using yyy_pkey on itemYY\n> -> Index Scan using zzz_pkey on itemZZ\n\nCompared to the previous query plan (omitted in this e-mail, in which \nthe planner was scanning all the item tables sequentially), the second \nquery is much more desirable. It takes about 12 seconds to complete, \nversus the other query which I canceled after six hours. 
However, what \nyou propose seems to make even more sense.\n\n>> But it seems to think doing\n>> a sequential scan on the *empty* item table is excessively expensive in this\n>> case.\n>>\n>> Aside from enable_seqscan=false, is there any way I can make the query\n>> planner not balk over doing a seqscan on an empty table?\n> Why would you care? A sequential scan of an empty table is very fast.\n>\nMy issue is that it looks like it's avoiding the sequential scan:\n\nSeq Scan on item f (cost=10000000000.00..10000000026.30 rows=1630 width=20)\n\nIt says the sequential scan has a cost that's way too high, and I'm \npresuming that's why it's choosing the extremely slow plan over the much \nfaster plan. I don't know very much about plans, but I'm assuming the \nplanner chooses the plan with the lowest cost.\n\nI'd much prefer it *does* the sequential scan of the empty table and \ngoes with the other parts of the plan.\n\nThanks,\nLucas Madar\n", "msg_date": "Wed, 11 May 2011 13:47:53 -0700", "msg_from": "Lucas Madar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor performance when joining against inherited tables" }, { "msg_contents": "> It says the sequential scan has a cost that's way too high, and I'm\n> presuming that's why it's choosing the extremely slow plan over the much\n> faster plan.\n\nWell, not exactly. It's giving you that cost because you disabled\nseqscan, which actually just bumps the cost really high:\n\npostgres=# create temporary table foo as select generate_series(1,3);\nSELECT\npostgres=# explain analyze select * from foo;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..34.00 rows=2400 width=4) (actual\ntime=0.010..0.012 rows=3 loops=1)\n Total runtime: 2.591 ms\n(2 rows)\n\npostgres=# set enable_seqscan to false;\nSET\npostgres=# explain analyze select * from foo;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=10000000000.00..10000000034.00 rows=2400\nwidth=4) (actual time=0.004..0.007 rows=3 loops=1)\n Total runtime: 0.037 ms\n(2 rows)\n\n\nAs far as I know, there is no hard way to disable any given plan\noption, since sometimes that may be the only choice.\n\nThe (estimated) cost of the seq scan chosen here is *not* the same as\nthe cost of the scan when the planner actually considers this plan (in\nfact, that will the same as the one in the first plan).\n\nHowever, note the cost of the Index Scan nodes in the second plan:\nthey are *higher* than their corresponding Seq Scan nodes (in the\nfirst plan), which is why you get the first plan when seq can *is*\nenabled.\n\nAlso, your plan output looks like plain EXPLAIN and not EXPLAIN\nANALYZE (i.e., the \"actual time\" nodes are missing).\n\nOther than that, I think Shaun's comments apply.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. 
Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Thu, 12 May 2011 14:07:03 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance when joining against inherited tables" }, { "msg_contents": "On Wed, May 11, 2011 at 4:47 PM, Lucas Madar <[email protected]> wrote:\n> On 05/11/2011 09:38 AM, Robert Haas wrote:\n>>>\n>>> However, if I disable seqscan (set enable_seqscan=false), I get the\n>>> following plan:\n>>>\n>>>  QUERY PLAN\n>>> ------------\n>>>  Hash Join  (cost=10001298843.53..290002337961.71 rows=8643757 width=506)\n>>>   Hash Cond: (f.id = objects.id)\n>>>   ->    Append  (cost=10000000000.00..290000536334.43 rows=8643757\n>>> width=20)\n>>>         ->    Seq Scan on item f  (cost=10000000000.00..10000000026.30\n>>> rows=1630 width=20)\n>>>         ->    Index Scan using xxx_pkey on itemXX f  (cost=0.00..10.60\n>>> rows=90\n>>> width=20)\n>>>         ->    Index Scan using yyy_pkey on itemYY f  (cost=0.00..25.24\n>>> rows=266 width=20)\n>>>         ->    Index Scan using zzz_pkey on itemZZ f  (cost=0.00..9.28\n>>> rows=2\n>>> width=20)\n>>>         ...\n>>>   ->    Hash  (cost=999347.17..999347.17 rows=3941949 width=490)\n>>>         ->    Index Scan using objects_pkey on objects\n>>> (cost=0.00..999347.17\n>>> rows=3941949 width=490)\n>>>\n>>> This seems like a much more sensible query plan.\n>>\n>> I don't think so.  Scanning the index to extract all the rows in a\n>> table is typically going to be a lot slower than a sequential scan.\n>>\n>\n> Compared to the previous query plan (omitted in this e-mail, in which the\n> planner was scanning all the item tables sequentially), the second query is\n> much more desirable. It takes about 12 seconds to complete, versus the other\n> query which I canceled after six hours. However, what you propose seems to\n> make even more sense.\n\nI was just looking at this email again, and had another thought:\nperhaps the tables in question are badly bloated. In your situation,\nit seems that the plan didn't change much when you set\nenable_seqscan=off: it just replaced full-table seq-scans with\nfull-table index-scans, which should be slower. But if you have a\ngiant table that's mostly empty space, then following the index\npointers to the limited number of blocks that contain any useful data\nmight be faster than scanning all the empty space. If you still have\nthese tables around somewhere, you could test this hypothesis by\nrunning CLUSTER on all the tables and see whether the seq-scan gets\nfaster.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 30 Jun 2011 15:07:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance when joining against inherited tables" } ]
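(To act on the CLUSTER suggestion above, a minimal per-child-table sketch, borrowing the itemXX / xxx_pkey placeholder names from the example plans and assuming the 8.3+ "CLUSTER table USING index" syntax; on older releases the equivalent form is "CLUSTER xxx_pkey ON itemXX".)

-- Rewrite one child table in index order, reclaiming dead space,
-- then refresh its statistics.
CLUSTER itemXX USING xxx_pkey;
ANALYZE itemXX;

-- Compare the on-disk size before and after to see how much bloat
-- was removed.
SELECT pg_size_pretty(pg_relation_size('itemXX'));

If the sequential scans get dramatically faster afterwards, table bloat was the problem, as Robert suggests.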
[ { "msg_contents": "Dear ,all\nplz could any one help me !!!\nhow explian works as math equations to estimate cost with  constatn query \nparameters\nsuch as cpu_tuple cost ,random page cost ...etc\n i want maths  expression  in order to know how these parameters will effect in \ncost ???\nplease any one can help me ??\n \n \nRegards\nRadhya...\nDear ,all\nplz could any one help me !!!\nhow explian works as math equations to estimate cost with  constatn query parameters\nsuch as cpu_tuple cost ,random page cost ...etc\n i want maths  expression  in order to know how these parameters will effect in cost ???\nplease any one can help me ??\n \n \nRegards\nRadhya...", "msg_date": "Mon, 11 Apr 2011 16:02:31 -0700 (PDT)", "msg_from": "Radhya sahal <[email protected]>", "msg_from_op": true, "msg_subject": "how explain works" }, { "msg_contents": "> how explian works as math equations to estimate cost with  constatn query\n> parameters\n> such as cpu_tuple cost ,random page cost ...etc\n>  i want maths  expression  in order to know how these parameters will effect\n> in cost ???\n\nThe expressions are complicated, and they are certainly not linear as\nyou seem to think from your previous post.\n\n> please any one can help me ??\n\nWhat do you need this for? If your goal is to optimize a real\napplication, then you should just vary the cost parameters and measure\nthe resulting change in query times. If your interests are academic,\nthere were some excellent suggestions for places to start in response\nto your previous post.\n\nBest,\nNathan\n", "msg_date": "Mon, 11 Apr 2011 16:09:07 -0700", "msg_from": "Nathan Boley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how explain works" }, { "msg_contents": "Thanks Mr Nathan Boley ,\ni want these equations to solve thsese equtions of parameters and total time  in \norder to get each paramter formula\ni need these formula  in my experiments is very important to know the rate for \neach parameter in total cost for plan. \nBest \nRadhya..\n\n________________________________\nFrom: Nathan Boley <[email protected]>\nTo: Radhya sahal <[email protected]>\nCc: pgsql-performance group <[email protected]>\nSent: Mon, April 11, 2011 4:09:07 PM\nSubject: Re: [PERFORM] how explain works\n\n> how explian works as math equations to estimate cost with  constatn query\n> parameters\n> such as cpu_tuple cost ,random page cost ...etc\n>  i want maths  expression  in order to know how these parameters will effect\n> in cost ???\n\nThe expressions are complicated, and they are certainly not linear as\nyou seem to think from your previous post.\n\n> please any one can help me ??\n\nWhat do you need this for? If your goal is to optimize a real\napplication, then you should just vary the cost parameters and measure\nthe resulting change in query times. If your interests are academic,\nthere were some excellent suggestions for places to start in response\nto your previous post.\n\nBest,\nNathan\n\nThanks Mr Nathan Boley ,\ni want these equations to solve thsese equtions of parameters and total time  in order to get each paramter formula\ni need these formula  in my experiments is very important to know the rate for each parameter in total cost for plan. 
Best \nRadhya..\n\n\n\nFrom: Nathan Boley <[email protected]>To: Radhya sahal <[email protected]>Cc: pgsql-performance group <[email protected]>Sent: Mon, April 11, 2011 4:09:07 PMSubject: Re: [PERFORM] how explain works> how explian works as math equations to estimate cost with  constatn query> parameters> such as cpu_tuple cost ,random page cost ...etc>  i want maths  expression  in order to know how these parameters will effect> in cost ???The expressions are complicated, and they are certainly not linear asyou seem to think from your previous post.> please any one can help me ??What do you\n need this for? If your goal is to optimize a realapplication, then you should just vary the cost parameters and measurethe resulting change in query times. If your interests are academic,there were some excellent suggestions for places to start in responseto your previous post.Best,Nathan", "msg_date": "Mon, 11 Apr 2011 16:22:27 -0700 (PDT)", "msg_from": "Radhya sahal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how explain works to Mr Nathan Boley" }, { "msg_contents": "Dne 12.4.2011 01:22, Radhya sahal napsal(a):\n> Thanks Mr Nathan Boley ,\n> i want these equations to solve thsese equtions of parameters and total\n> time in order to get each paramter formula\n> i need these formula in my experiments is very important to know the\n> rate for each parameter in total cost for plan. \n> Best\n> Radhya..\n\nI don't think those equations are fully documented outside the source\ncode. If you really need the exact formulas, you'll have to dig into the\nsource codes and search there.\n\nregards\nTomas\n", "msg_date": "Wed, 13 Apr 2011 22:19:10 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how explain works to Mr Nathan Boley" } ]
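(A minimal sketch of the measure-it-yourself approach Nathan recommends: the planner cost parameters can be changed per session, so their effect on a plan and its runtime can be observed directly without editing postgresql.conf. The SELECT is only a placeholder for the query being studied.)

SHOW random_page_cost;        -- current value
SET random_page_cost = 2.0;   -- try a different value for this session
SET cpu_tuple_cost = 0.02;
EXPLAIN ANALYZE SELECT ...;   -- the query under study
RESET random_page_cost;       -- back to the server defaults
RESET cpu_tuple_cost;

For the exact formulas Tomas mentions, the bulk of the cost model lives in src/backend/optimizer/path/costsize.c in the PostgreSQL source tree.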
[ { "msg_contents": "I have two servers one has replication the other does not. The same\nquery on both servers. One takes 225seconds on the replicated server\nthe first time it runs and only 125ms on the other server the first time\nit runs. The second time you execute the query it drops to the 125ms.\nThey are using the same query plan. What kind of things should I be\nlooking at?\n\n \n\nQUERY:\n\nselect distinct cast(max(VehicleUsed.\"VehicleUsedPrice.max\") as int) as\n\"VehicleUsedPrice.max\",cast(min(VehicleUsed.\"VehicleUsedPrice.min\") as\nint) as\n\"VehicleUsedPrice.min\",cast(avg(VehicleUsed.\"VehicleUsedPrice.average\")\nas int) as \"VehicleUsedPrice.average\" \n\nfrom VehicleUsed_v1 as VehicleUsed \n\ninner join PostalCodeRegionCountyCity_v1 as PostalCodeRegionCountyCity\non\n(lower(VehicleUsed.PostalCode)=lower(PostalCodeRegionCountyCity.PostalCo\nde)) \n\nwhere (VehicleUsed.VehicleMakeId in (5,7,10,26,43,45,46,49,51,67,86))\nand (PostalCodeRegionCountyCity.RegionId=44) \n\nlimit 500000\n\n \n\n \n\n \n\n \n\nQUERY PLAN:\n\n\"Limit (cost=54953.88..54953.93 rows=1 width=12)\"\n\n\" -> Unique (cost=54953.88..54953.93 rows=1 width=12)\"\n\n\" -> Sort (cost=54953.88..54953.90 rows=1 width=12)\"\n\n\" Sort Key: (max(vehicleused.\"VehicleUsedPrice.max\")),\n(min(vehicleused.\"VehicleUsedPrice.min\")),\n((avg(vehicleused.\"VehicleUsedPrice.average\"))::integer)\"\n\n\" -> Aggregate (cost=54953.73..54953.84 rows=1 width=12)\"\n\n\" -> Hash Join (cost=4354.43..54255.18 rows=23284\nwidth=12)\"\n\n\" Hash Cond:\n(lower((vehicleused.postalcode)::text) =\nlower((postalcoderegioncountycity.postalcode)::text))\"\n\n\" -> Bitmap Heap Scan on vehicleused_v1\nvehicleused (cost=3356.65..48157.38 rows=50393 width=18)\"\n\n\" Recheck Cond: (vehiclemakeid = ANY\n('{5,7,10,26,43,45,46,49,51,67,86}'::integer[]))\"\n\n\" -> Bitmap Index Scan on\nvehicleused_v1_i08 (cost=0.00..3306.26 rows=50393 width=0)\"\n\n\" Index Cond: (vehiclemakeid = ANY\n('{5,7,10,26,43,45,46,49,51,67,86}'::integer[]))\"\n\n\" -> Hash (cost=711.12..711.12 rows=2606\nwidth=6)\"\n\n\" -> Index Scan using\npostalcoderegioncountycity_v1_i05 on postalcoderegioncountycity_v1\npostalcoderegioncountycity (cost=0.00..711.12 rows=2606 width=6)\"\n\n\" Index Cond: (regionid = 44)\"\n\n \n\n \n\n \n\nSERVER SETTINGS:\n\nThe settings are the same on each server with the exception of the\nreplication:\n\n \n\nPGSQL9.0.3\n\n \n\nlisten_addresses = '*' # what IP address(es) to listen on;\n\n # comma-separated list of\naddresses;\n\n # defaults to 'localhost', '*' =\nall\n\n # (change requires restart)\n\nport = 5432 # (change requires restart)\n\nmax_connections = 100 # (change requires restart)\n\n # (change requires restart)\n\nbonjour_name = 'halcpcnt1s' # defaults to the\ncomputer name\n\n # (change requires restart)\n\n \n\nshared_buffers = 3GB # min 128kB\n\neffective_cache_size = 6GB\n\n \n\nlog_destination = 'stderr' # Valid values are combinations\nof\n\nlogging_collector = on # Enable capturing of stderr and csvlog\n\n \n\n \n\ndatestyle = 'iso, mdy'\n\nlc_messages = 'en_US.UTF-8' # locale for system\nerror message\n\n # strings\n\nlc_monetary = 'en_US.UTF-8' # locale for monetary\nformatting\n\nlc_numeric = 'en_US.UTF-8' # locale for number\nformatting\n\nlc_time = 'en_US.UTF-8' # locale for time\nformatting\n\n \n\n# default configuration for text search\n\ndefault_text_search_config = 'pg_catalog.english'\n\n \n\nmax_connections = 100\n\ntemp_buffers = 100MB\n\nwork_mem = 100MB\n\nmaintenance_work_mem = 
500MB\n\nmax_files_per_process = 10000\n\nseq_page_cost = 1.0\n\nrandom_page_cost = 1.1\n\ncpu_tuple_cost = 0.1\n\ncpu_index_tuple_cost = 0.05\n\ncpu_operator_cost = 0.01\n\ndefault_statistics_target = 1000\n\nautovacuum_max_workers = 1\n\n \n\nconstraint_exclusion = on\n\ncheckpoint_completion_target = 0.9\n\nwal_buffers = 8MB\n\ncheckpoint_segments = 100\n\n \n\n#log_min_messages = DEBUG1\n\n#log_min_duration_statement = 1000\n\n#log_statement = all\n\n#log_temp_files = 128\n\n#log_lock_waits = on\n\n#log_line_prefix = '%m %u %d %h %p %i %c %l %s'\n\n#log_duration = on\n\n#debug_print_plan = on\n\n \n\n# Replication Settings\n\nhot_standby = on\n\nwal_level = hot_standby\n\nmax_wal_senders = 5\n\nwal_keep_segments = 32\n\narchive_mode = on\n\narchive_command = 'cp %p /usr/local/pgsql/data/pg_xlog/archive/'\n\n \n\n \n\nPam Ozer\n\nData Architect\n\[email protected]\n\ntel. 949.705.3468\n\n \n\n \n\nSource Interlink Media\n\n1733 Alton Pkwy Suite 100, Irvine, CA 92606\n\nwww.simautomotive.com\n", "msg_date": "Mon, 11 Apr 2011 16:28:45 -0700", "msg_from": "\"Ozer, Pam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Two servers - One Replicated - Same query" }, { "msg_contents": "\"Ozer, Pam\" <[email protected]> wrote:\n \n> I have two servers one has replication the other does not. The\n> same query on both servers. One takes 225seconds on the\n> replicated server the first time it runs and only 125ms on the\n> other server the first time it runs. The second time you execute\n> the query it drops to the 125ms. They are using the same query\n> plan. What kind of things should I be looking at?\n \nCaching.\n \nApparently the usage pattern on one server tends to keep the\nnecessary data in cache, while the usage pattern on the other is\nflushing it out occasionally to make room for other data. Adding\nRAM to the server might help.\n \n-Kevin\n", "msg_date": "Tue, 12 Apr 2011 09:32:46 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two servers - One Replicated - Same query" } ]
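(One way to confirm the caching explanation is to compare buffer statistics for the same query on both machines; both servers run 9.0.3, so EXPLAIN's BUFFERS option is available. A minimal sketch, with the query text elided:)

-- Run this on each server while the query is "cold".  Large shared
-- "read" counts on the slow server, versus mostly "shared hit" on the
-- fast one, show that the difference is cache state rather than the
-- plan itself.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;  -- the VehicleUsed aggregate query from the first message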
[ { "msg_contents": "Hi everybody,\n\nI have a performance-problem with a query using a LIMIT. There are other threads rergading performance issues with LIMIT, but I didn't find useful hints for our problem and it might\nbe interesting for other postgres-users.\n\n\nThere are only 2 simple tables:\n\nCREATE TABLE newsfeed\n(\nid varchar(32) PRIMARY KEY,\nversion int4 NOT NULL,\nnewsfeed_type varchar(20) NOT NULL,\nnew_item_count int4 NOT NULL\n);\nCREATE INDEX IDX_NEWSFEED_TYPE ON newsfeed (newsfeed_type);\n\n\nCREATE TABLE newsfeed_item\n(\nid varchar(32) PRIMARY NOT NULL,\nitem_type varchar(35) NOT NULL,\nversion int4 NOT NULL,\ncategory varchar(25) NULL,\ndata1 bytea NULL,\ndata2 bytea NULL,\ndate_time timestamp NOT NULL,\nguid1 varchar(32) NULL,\nguid2 varchar(32) NULL,\nguid3 varchar(32) NULL,\nid1 int8 NULL,\nid2 int8 NULL,\nlong_value1 int8 NULL,\nlong_value2 int8 NULL,\nlong_value3 int8 NULL,\nstring_value1 varchar(4000) NULL,\nstring_value2 varchar(500) NULL,\nstring_value3 varchar(500) NULL,\nstring_value4 varchar(500) NULL,\nstring_value5 varchar(500) NULL,\nstring_value6 varchar(500) NULL,\nnewsfeed varchar(32) NOT NULL\n);\nCREATE UNIQUE INDEX newsfeed_item_pkey ON newsfeed_item (id);\nCREATE INDEX idx_nfi_guid1 ON newsfeed_item (guid1);\nCREATE INDEX idx_nfi_guid2 ON newsfeed_item (guid2);\nCREATE INDEX idx_nfi_guid3 ON newsfeed_item (guid3);\nCREATE INDEX idx_nfi_id1 ON newsfeed_item (id1);\nCREATE INDEX idx_nfi_id2 ON newsfeed_item (id2);\nCREATE INDEX idx_nfi_newsfeed ON newsfeed_item (newsfeed);\nCREATE INDEX idx_nfi_type ON newsfeed_item (item_type);\nCREATE INDEX idx_nfi_datetime ON newsfeed_item (date_time);\n\nnewsfeed contains 457036 rows\nnewsweed_item contains 5169727 rows\n\npostgres version: 9.0.2\nOS: CentOS release 5.5 (Final)\n\n\nThe following query took 4.2 seconds:\n\n-------------------------\nselect *\nfrom newsfeed_item \nwhere newsfeed in \n (\n '173ee4dcec0d11de9f4f12313c0018c1','10dabde0f70211df816612313b02054e',\n '17841c9af70211df874b12313b02054e','1783fce2f70211df814412313b02054e','1783fdd2f70211df8c1d12313b02054e','178405a2f70211df829212313b02054e',\n '178440c6f70211df97c812313b02054e','178416e6f70211dfac3412313b02054e','1783e4aaf70211df9acd12313b02054e','178437e8f70211df8b8512313b02054e',\n '1783f54ef70211df81e012313b02054e','178415c4f70211df8f8112313b02054e' \n ) \norder by date_time desc \n\nlimit 25\n-------------------------\n\nIf the LIMIT was removed, the query took 60 milliseconds! If the sorting order was changed to ASC, the query took 44ms, even with the LIMIT.\n\nThen I tried to create the index on date_time in DESC order (because the result is sorted in descending order), but that did not change anything. \n\n\nThen I removed the index on date_time with the following results:\n\nquery with the limit: 40 ms\nquery without the limit: 60 ms\n\n=> the optimizer seems to use a wrong index (I did perform an ANALYZE on newsfeed_item and a REINDEX before I did the test). 
Since I currently don't need \nthe index on date_time (but will need it in the near future), I removed the index on date_time, which is ok for now.\n\n------------------------\n\nhere are the explain analyze results:\n\n1) the query in descending order with the limit and index on date_time (the slow one):\n\nLimit (cost=0.00..980.09 rows=25 width=963) (actual time=48.592..4060.779 rows=25 loops=1)\n -> Index Scan Backward using \"IDX_NFI_DATETIME\" on newsfeed_item (cost=0.00..409365.16 rows=10442 width=963) (actual time=48.581..4060.542 rows=25 loops=1)\n Filter: ((newsfeed)::text = ANY ('{173ee4dcec0d11de9f4f12313c0018c1,10dabde0f70211df816612313b02054e,17841c9af70211df874b12313b02054e,1783fce2f70211df814412313b02054e,1783fdd2f70211df8c1d12313b02054e,178405a2f70211df829212313b02054e,178440c6f70211df97c812313b02054e,178416e6f70211dfac3412313b02054e,1783e4aaf70211df9acd12313b02054e,178437e8f70211df8b8512313b02054e,1783f54ef70211df81e012313b02054e,178415c4f70211df8f8112313b02054e}'::text[]))\nTotal runtime: 4060.959 ms\n\n\n2) the query in descending order without the limit (which is much faster):\n\nSort (cost=39575.23..39601.33 rows=10442 width=963) (actual time=15.014..17.038 rows=477 loops=1)\n Sort Key: date_time\n Sort Method: quicksort Memory: 287kB\n -> Bitmap Heap Scan on newsfeed_item (cost=421.41..34450.72 rows=10442 width=963) (actual time=0.644..12.601 rows=477 loops=1)\n Recheck Cond: ((newsfeed)::text = ANY ('{173ee4dcec0d11de9f4f12313c0018c1,10dabde0f70211df816612313b02054e,17841c9af70211df874b12313b02054e,1783fce2f70211df814412313b02054e,1783fdd2f70211df8c1d12313b02054e,178405a2f70211df829212313b02054e,178440c6f70211df97c812313b02054e,178416e6f70211dfac3412313b02054e,1783e4aaf70211df9acd12313b02054e,178437e8f70211df8b8512313b02054e,1783f54ef70211df81e012313b02054e,178415c4f70211df8f8112313b02054e}'::text[]))\n -> Bitmap Index Scan on idx_nfi_newsfeed (cost=0.00..418.80 rows=10442 width=0) (actual time=0.555..0.555 rows=477 loops=1)\n Index Cond: ((newsfeed)::text = ANY ('{173ee4dcec0d11de9f4f12313c0018c1,10dabde0f70211df816612313b02054e,17841c9af70211df874b12313b02054e,1783fce2f70211df814412313b02054e,1783fdd2f70211df8c1d12313b02054e,178405a2f70211df829212313b02054e,178440c6f70211df97c812313b02054e,178416e6f70211dfac3412313b02054e,1783e4aaf70211df9acd12313b02054e,178437e8f70211df8b8512313b02054e,1783f54ef70211df81e012313b02054e,178415c4f70211df8f8112313b02054e}'::text[]))\nTotal runtime: 19.065 ms\n\n3) the query in ascending order with the limit (which is fast):\n\nLimit (cost=0.00..980.09 rows=25 width=963) (actual time=0.261..3.704 rows=25 loops=1)\n -> Index Scan using \"IDX_NFI_DATETIME\" on newsfeed_item (cost=0.00..409365.16 rows=10442 width=963) (actual time=0.250..3.495 rows=25 loops=1)\n Filter: ((newsfeed)::text = ANY ('{173ee4dcec0d11de9f4f12313c0018c1,10dabde0f70211df816612313b02054e,17841c9af70211df874b12313b02054e,1783fce2f70211df814412313b02054e,1783fdd2f70211df8c1d12313b02054e,178405a2f70211df829212313b02054e,178440c6f70211df97c812313b02054e,178416e6f70211dfac3412313b02054e,1783e4aaf70211df9acd12313b02054e,178437e8f70211df8b8512313b02054e,1783f54ef70211df81e012313b02054e,178415c4f70211df8f8112313b02054e}'::text[]))\nTotal runtime: 3.854 ms\n\n\n4) The query after removing the index on date_time, in descending order with the LIMIT (which is fast as well).\n\nLimit (cost=34745.39..34745.45 rows=25 width=963) (actual time=12.855..13.143 rows=25 loops=1)\n -> Sort (cost=34745.39..34771.49 rows=10442 width=963) (actual time=12.846..12.946 rows=25 loops=1)\n Sort 
Key: date_time\n Sort Method: top-N heapsort Memory: 40kB\n -> Bitmap Heap Scan on newsfeed_item (cost=421.41..34450.72 rows=10442 width=963) (actual time=0.622..9.936 rows=477 loops=1)\n Recheck Cond: ((newsfeed)::text = ANY ('{173ee4dcec0d11de9f4f12313c0018c1,10dabde0f70211df816612313b02054e,17841c9af70211df874b12313b02054e,1783fce2f70211df814412313b02054e,1783fdd2f70211df8c1d12313b02054e,178405a2f70211df829212313b02054e,178440c6f70211df97c812313b02054e,178416e6f70211dfac3412313b02054e,1783e4aaf70211df9acd12313b02054e,178437e8f70211df8b8512313b02054e,1783f54ef70211df81e012313b02054e,178415c4f70211df8f8112313b02054e}'::text[]))\n -> Bitmap Index Scan on idx_nfi_newsfeed (cost=0.00..418.80 rows=10442 width=0) (actual time=0.543..0.543 rows=477 loops=1)\n Index Cond: ((newsfeed)::text = ANY ('{173ee4dcec0d11de9f4f12313c0018c1,10dabde0f70211df816612313b02054e,17841c9af70211df874b12313b02054e,1783fce2f70211df814412313b02054e,1783fdd2f70211df8c1d12313b02054e,178405a2f70211df829212313b02054e,178440c6f70211df97c812313b02054e,178416e6f70211dfac3412313b02054e,1783e4aaf70211df9acd12313b02054e,178437e8f70211df8b8512313b02054e,1783f54ef70211df81e012313b02054e,178415c4f70211df8f8112313b02054e}'::text[]))\n\nTotal runtime: 13.318 ms\n\nIs there anything I can do to add the index on date_time without the performance problem?\n\nregards\nDieter\n\n", "msg_date": "Tue, 12 Apr 2011 07:20:44 +0200", "msg_from": "Dieter Rehbein <[email protected]>", "msg_from_op": true, "msg_subject": "performance problem with LIMIT (order BY in DESC order). Wrong index\n\tused?" }, { "msg_contents": "On Tue, Apr 12, 2011 at 7:20 AM, Dieter Rehbein\n<[email protected]> wrote:\n> Hi everybody,\n>\n> I have a performance-problem with a query using a LIMIT. There are other threads rergading performance issues with LIMIT, but I didn't find useful hints for our problem and it might\n> be interesting for other postgres-users.\n\nDid you perform an ANALYZE or VACUUM ANALYZE?\nDid you try increasing the statistic targets?\n\nAFAIK, it looks a lot like the planner is missing stats, since it\nestimates the index query on idx_nfi_newsfeed will fetch 10k rows -\ninstead of 25.\n", "msg_date": "Tue, 12 Apr 2011 09:42:27 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance problem with LIMIT (order BY in DESC\n\torder). Wrong index used?" }, { "msg_contents": "what I did, was an ANALYZE, which did not change anything. \n\nI just executed a VACUUM ANALYZE and now everything performs well. hm, strange.\n\nthanks\nDieter\n\n\n\nAm 12.04.2011 um 09:42 schrieb Claudio Freire:\n\nOn Tue, Apr 12, 2011 at 7:20 AM, Dieter Rehbein\n<[email protected]> wrote:\n> Hi everybody,\n> \n> I have a performance-problem with a query using a LIMIT. There are other threads rergading performance issues with LIMIT, but I didn't find useful hints for our problem and it might\n> be interesting for other postgres-users.\n\nDid you perform an ANALYZE or VACUUM ANALYZE?\nDid you try increasing the statistic targets?\n\nAFAIK, it looks a lot like the planner is missing stats, since it\nestimates the index query on idx_nfi_newsfeed will fetch 10k rows -\ninstead of 25.\n\n", "msg_date": "Tue, 12 Apr 2011 10:59:22 +0200", "msg_from": "Dieter Rehbein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance problem with LIMIT (order BY in DESC order). Wrong\n\tindex used?" 
}, { "msg_contents": "On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein\n<[email protected]> wrote:\n> I just executed a VACUUM ANALYZE and now everything performs well. hm, strange.\n\nThat probably means you need more statistics - try increasing the\nnewsfeed's statistics target count.\n\nALTER TABLE newsfeed_item ALTER COLUMN newsfeed SET STATISTICS <n>;\n\nTry different <n> numbers, you can crank it up to 4000 or perhaps more\nin 9.0, but you should start lower I guess.\n", "msg_date": "Tue, 12 Apr 2011 11:07:27 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance problem with LIMIT (order BY in DESC\n\torder). Wrong index used?" }, { "msg_contents": "> On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein\n> <[email protected]> wrote:\n>> I just executed a VACUUM ANALYZE and now everything performs well. hm,\n>> strange.\n>\n> That probably means you need more statistics - try increasing the\n> newsfeed's statistics target count.\n>\n> ALTER TABLE newsfeed_item ALTER COLUMN newsfeed SET STATISTICS <n>;\n>\n> Try different <n> numbers, you can crank it up to 4000 or perhaps more\n> in 9.0, but you should start lower I guess.\n\nAFAIK the max value is 10000 and the default is 100. Higher numbers mean\nhigher overhead, so do not jump to 10000 directly. Set it to 1000 and see\nif that helps, etc.\n\nregards\nTomas\n\n", "msg_date": "Tue, 12 Apr 2011 11:33:31 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: performance problem with LIMIT (order BY in DESC\n\torder). Wrong index used?" }, { "msg_contents": "thank's a lot guys, I will try that out.\n\nregards \nDieter\n\n\n\nAm 12.04.2011 um 11:07 schrieb Claudio Freire:\n\nOn Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein\n<[email protected]> wrote:\n> I just executed a VACUUM ANALYZE and now everything performs well. hm, strange.\n\nThat probably means you need more statistics - try increasing the\nnewsfeed's statistics target count.\n\nALTER TABLE newsfeed_item ALTER COLUMN newsfeed SET STATISTICS <n>;\n\nTry different <n> numbers, you can crank it up to 4000 or perhaps more\nin 9.0, but you should start lower I guess.\n\n", "msg_date": "Tue, 12 Apr 2011 11:36:23 +0200", "msg_from": "Dieter Rehbein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance problem with LIMIT (order BY in DESC order). Wrong\n\tindex used?" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> Did you try increasing the statistic targets?\n\n> AFAIK, it looks a lot like the planner is missing stats, since it\n> estimates the index query on idx_nfi_newsfeed will fetch 10k rows -\n> instead of 25.\n\nBTW, this is the right suggestion, but for the wrong reason. You seem\nto be looking at\n\nLimit (cost=0.00..980.09 rows=25 width=963) (actual time=48.592..4060.779 rows=25 loops=1)\n -> Index Scan Backward using \"IDX_NFI_DATETIME\" on newsfeed_item (cost=0.00..409365.16 rows=10442 width=963) (actual time=48.581..4060.542 rows=25 loops=1)\n\nHere, the actual row count is constrained to 25 because the LIMIT node\nstops calling the indexscan node once it's got 25. So this case proves\nlittle about whether the planner's estimates are any good. 
You need to\ncheck the estimates in the unconstrained plan:\n\n -> Bitmap Heap Scan on newsfeed_item (cost=421.41..34450.72 rows=10442 width=963) (actual time=0.644..12.601 rows=477 loops=1)\n\nHere we can see that there really are only 477 rows in the table that\nsatisfy the WHERE clause, versus an estimate of 10K. So sure enough,\nthe statistics are bad, and an increase in stats target might help.\nBut you can't conclude that from an explain that involves LIMIT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Apr 2011 10:38:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance problem with LIMIT (order BY in DESC order). Wrong\n\tindex used?" } ]
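To act on the advice above, the knob to turn is the per-column statistics target, followed by a fresh ANALYZE. The statements below are only a sketch built from the table and column names used in this thread; 1000 is an arbitrary starting value (the hard upper limit is 10000, and larger targets make ANALYZE and planning somewhat more expensive).

-- Let ANALYZE keep a larger sample / most-common-values list for the skewed column
ALTER TABLE newsfeed_item ALTER COLUMN newsfeed SET STATISTICS 1000;
ANALYZE newsfeed_item;

-- Check what the planner now believes about the column
SELECT attname, n_distinct, null_frac
  FROM pg_stats
 WHERE tablename = 'newsfeed_item'
   AND attname = 'newsfeed';

After that, re-running EXPLAIN ANALYZE on the unconstrained (no LIMIT) query should show the bitmap heap scan estimate moving from roughly 10442 rows toward the real 477, which is what lets the planner stop preferring the backward index scan on date_time.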
[ { "msg_contents": "Hi,\n\nAnyone lucky to have dbt5 run for PostgreSQL 9.0.3?!\n\nI am trying on Novell SuSE Linux Enterprise Server 11 SP1 x86_64 with a\nvirtual machine and bit hard with no success run yet. If you can help me\nwith any docs will be more of a support.\n\nRegards,\n\nSethu Prasad\n\nHi,Anyone lucky to have dbt5 run for PostgreSQL 9.0.3?!I am trying on Novell SuSE Linux Enterprise Server 11 SP1 x86_64 with a virtual machine and bit hard with no success run yet. If you can help me with any docs will be more of a support.\nRegards,Sethu Prasad", "msg_date": "Tue, 12 Apr 2011 09:51:30 +0200", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": true, "msg_subject": "DBT-5 & Postgres 9.0.3" }, { "msg_contents": "On Tue, Apr 12, 2011 at 3:51 AM, Sethu Prasad <[email protected]> wrote:\n> Anyone lucky to have dbt5 run for PostgreSQL 9.0.3?!\n>\n> I am trying on Novell SuSE Linux Enterprise Server 11 SP1 x86_64 with a\n> virtual machine and bit hard with no success run yet. If you can help me\n> with any docs will be more of a support.\n\nWhat's going wrong for you?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 11 May 2011 23:22:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "http://sourceforge.net/mailarchive/forum.php?forum_name=osdldbt-general&max_rows=25&style=nested&viewmonth=201104\n\n- Sethu\n\n\nOn Thu, May 12, 2011 at 5:22 AM, Robert Haas <[email protected]> wrote:\n\n> On Tue, Apr 12, 2011 at 3:51 AM, Sethu Prasad <[email protected]>\n> wrote:\n> > Anyone lucky to have dbt5 run for PostgreSQL 9.0.3?!\n> >\n> > I am trying on Novell SuSE Linux Enterprise Server 11 SP1 x86_64 with a\n> > virtual machine and bit hard with no success run yet. If you can help me\n> > with any docs will be more of a support.\n>\n> What's going wrong for you?\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nhttp://sourceforge.net/mailarchive/forum.php?forum_name=osdldbt-general&max_rows=25&style=nested&viewmonth=201104\n- SethuOn Thu, May 12, 2011 at 5:22 AM, Robert Haas <[email protected]> wrote:\nOn Tue, Apr 12, 2011 at 3:51 AM, Sethu Prasad <[email protected]> wrote:\n> Anyone lucky to have dbt5 run for PostgreSQL 9.0.3?!\n>\n> I am trying on Novell SuSE Linux Enterprise Server 11 SP1 x86_64 with a\n> virtual machine and bit hard with no success run yet. 
If you can help me\n> with any docs will be more of a support.\n\nWhat's going wrong for you?\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 12 May 2011 09:18:34 +0200", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "On Thu, May 12, 2011 at 3:18 AM, Sethu Prasad <[email protected]> wrote:\n> http://sourceforge.net/mailarchive/forum.php?forum_name=osdldbt-general&max_rows=25&style=nested&viewmonth=201104\n\nIt's not very obvious from reading through that link what you still\nneed help with.\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 15 May 2011 17:12:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "Hi, I know this is an old thread, but I wanted to chime in since I am having\nproblems with this as well.\n\nI too am trying to run dbt5 against Postgres. Specifically I am trying to\nrun it against Postgres 9.1beta3.\n\nAfter jumping through many hoops I ultimately was able to build dbt5 on my\ndebian environment, but when I attempt to run the benchmark with:\n\ndbt5-run-workload -a pgsql -c 5000 -t 5000 -d 60 -u 1 -i ~/dbt5-0.1.0/egen \n-f 500 -w 300 -n dbt5 -p 5432 -o /tmp/results\n\nit runs to completion but all of the dbt5 log files contain errors like:\n\nterminate called after throwing an instance of 'pqxx::broken_connection'\n what(): could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket\n\"/var/run/postgresql/.s.PGSQL.5432\"?\n\nI'm lead to believe that this is an error I would receive if the Postgres db\nwere not running, but it is. In fact, the way dbt5-run-workload works it\nstarts the database automatically. I have also confirmed it is running by\nmanually connecting while this benchmark is in progress (and after it has\nalready started the database and logged the above error).\n\nAny thoughts on why I might be getting this error?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/DBT-5-Postgres-9-0-3-tp4297670p4708692.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Wed, 17 Aug 2011 08:29:54 -0700 (PDT)", "msg_from": "bobbyw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "On 8/17/2011 10:29 AM, bobbyw wrote:\n> Hi, I know this is an old thread, but I wanted to chime in since I am having\n> problems with this as well.\n>\n> I too am trying to run dbt5 against Postgres. 
Specifically I am trying to\n> run it against Postgres 9.1beta3.\n>\n> After jumping through many hoops I ultimately was able to build dbt5 on my\n> debian environment, but when I attempt to run the benchmark with:\n>\n> dbt5-run-workload -a pgsql -c 5000 -t 5000 -d 60 -u 1 -i ~/dbt5-0.1.0/egen\n> -f 500 -w 300 -n dbt5 -p 5432 -o /tmp/results\n>\n> it runs to completion but all of the dbt5 log files contain errors like:\n>\n> terminate called after throwing an instance of 'pqxx::broken_connection'\n> what(): could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.5432\"?\n>\n> I'm lead to believe that this is an error I would receive if the Postgres db\n> were not running, but it is. In fact, the way dbt5-run-workload works it\n> starts the database automatically. I have also confirmed it is running by\n> manually connecting while this benchmark is in progress (and after it has\n> already started the database and logged the above error).\n>\n> Any thoughts on why I might be getting this error?\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/DBT-5-Postgres-9-0-3-tp4297670p4708692.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n\nIts trying to connect to unix socket \"/var/run/postgresql/.s.PGSQL.5432\",\n\nbut your postgresql.conf file probably has:\nunix_socket_directory = '/tmp'\n\n\nChange it to:\nunix_socket_directory = '/var/run/postgresql'\n\nand restart PG.\n\n\n-Andy\n", "msg_date": "Wed, 17 Aug 2011 12:28:11 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "Awesome.. that did it! It was actually not set at all in postgresql.conf,\nalthough it was commented out as:\n\n# unix_socket_directory = '' \n\nPresumably it was using the default of '/tmp'?\n\nAnyway, after making that change dbt5 runs fine, but now when I try to\nconnect via \"psql\" I get:\n\npsql.bin: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n\nWhy is psql looking in /tmp?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/DBT-5-Postgres-9-0-3-tp4297670p4709231.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Wed, 17 Aug 2011 10:59:12 -0700 (PDT)", "msg_from": "bobbyw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "On Wed, Aug 17, 2011 at 10:59:12AM -0700, bobbyw wrote:\n> Awesome.. that did it! It was actually not set at all in postgresql.conf,\n> although it was commented out as:\n> \n> # unix_socket_directory = '' \n> \n> Presumably it was using the default of '/tmp'?\n> \n> Anyway, after making that change dbt5 runs fine, but now when I try to\n> connect via \"psql\" I get:\n> \n> psql.bin: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n> \n> Why is psql looking in /tmp?\n> \n\nBecause that is the default location. 
If you want to change it, you need\nto use the -h commandline option.\n\nRegards,\nKen\n", "msg_date": "Wed, 17 Aug 2011 13:14:06 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> On Wed, Aug 17, 2011 at 10:59:12AM -0700, bobbyw wrote:\n>> Why is psql looking in /tmp?\n\n> Because that is the default location. If you want to change it, you need\n> to use the -h commandline option.\n\nIt sounds to me like bobbyw might have two separate installations of\npostgres (or at least two copies of psql), one compiled with /tmp as the\ndefault socket location and one compiled with /var/run/postgresql as the\ndefault. /tmp is the out-of-the-box default but I think Debian likes to\nbuild it with /var/run/postgresql as the default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Aug 2011 16:12:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3 " }, { "msg_contents": "On Wed, Aug 17, 2011 at 4:12 PM, Tom Lane <[email protected]> wrote:\n\n> It sounds to me like bobbyw might have two separate installations of\n> postgres (or at least two copies of psql), one compiled with /tmp as the\n> default socket location and one compiled with /var/run/postgresql as the\n> default.  /tmp is the out-of-the-box default but I think Debian likes to\n> build it with /var/run/postgresql as the default.\n\nIt looked like the actual DBT-5 harness is built with \"system\nlibraries\" (libpqxx, linked to system libpq, with debian's\n/var/run/postgresql), but the scaffolding around it uses a \"local\"\npostgres (server and psql) using the source default of /tmp?\n\na.\n\n\n-- \nAidan Van Dyk                                             Create like a god,\[email protected]                                       command like a king,\nhttp://www.highrise.ca/                                   work like a slave.\n", "msg_date": "Wed, 17 Aug 2011 16:41:48 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" }, { "msg_contents": "Aidan Van Dyk <[email protected]> writes:\n> On Wed, Aug 17, 2011 at 4:12 PM, Tom Lane <[email protected]> wrote:\n>> It sounds to me like bobbyw might have two separate installations of\n>> postgres (or at least two copies of psql), one compiled with /tmp as the\n>> default socket location and one compiled with /var/run/postgresql as the\n>> default. /tmp is the out-of-the-box default but I think Debian likes to\n>> build it with /var/run/postgresql as the default.\n\n> It looked like the actual DBT-5 harness is built with \"system\n> libraries\" (libpqxx, linked to system libpq, with debian's\n> /var/run/postgresql), but the scaffolding around it uses a \"local\"\n> postgres (server and psql) using the source default of /tmp?\n\nHmm ... doesn't sound like an amazingly good idea. But if DBT wants to\ndo it that way, it'd be well advised to not assume that the system\nlibraries have either the port number or socket directory defaulting\nto what it is using. 
Or maybe the problem is that it does override all\nthat stuff and works fine by itself, but then you can't easily connect\nto the server manually?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Aug 2011 17:17:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3 " }, { "msg_contents": "On Wed, Aug 17, 2011 at 8:29 AM, bobbyw <[email protected]> wrote:\n> Hi, I know this is an old thread, but I wanted to chime in since I am having\n> problems with this as well.\n>\n> I too am trying to run dbt5 against Postgres.  Specifically I am trying to\n> run it against Postgres 9.1beta3.\n>\n> After jumping through many hoops I ultimately was able to build dbt5 on my\n> debian environment, but when I attempt to run the benchmark with:\n>\n> dbt5-run-workload -a pgsql -c 5000 -t 5000 -d 60 -u 1 -i ~/dbt5-0.1.0/egen\n> -f 500 -w 300 -n dbt5 -p 5432 -o /tmp/results\n>\n> it runs to completion but all of the dbt5 log files contain errors like:\n>\n> terminate called after throwing an instance of 'pqxx::broken_connection'\n>  what():  could not connect to server: No such file or directory\n>        Is the server running locally and accepting\n>        connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.5432\"?\n>\n> I'm lead to believe that this is an error I would receive if the Postgres db\n> were not running, but it is.  In fact, the way dbt5-run-workload works it\n> starts the database automatically.  I have also confirmed it is running by\n> manually connecting while this benchmark is in progress (and after it has\n> already started the database and logged the above error).\n>\n> Any thoughts on why I might be getting this error?\n\nHi there,\n\nSorry I didn't catch this sooner. Can you try using the code from the\ngit repository? I removed libpqxx and just used libpq a while ago to\nhopefully simplify the kit:\n\ngit://osdldbt.git.sourceforge.net/gitroot/osdldbt/dbt5\n\nRegards,\nMark\n", "msg_date": "Sat, 24 Sep 2011 08:45:51 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT-5 & Postgres 9.0.3" } ]
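The root cause in this thread was two different compiled-in defaults for the socket directory: Debian's system libpq looks in /var/run/postgresql while a source-built server and its psql default to /tmp. The snippet below is a hedged sketch of the two ways to reconcile them; the paths are just the two defaults mentioned above, and "dbt5" stands for whatever database name you used.

# Option 1: make the source-built server create its socket where the
# distro libpq expects it (pre-9.3 spelling of the parameter), in postgresql.conf:
unix_socket_directory = '/var/run/postgresql'

# Option 2: leave the server on /tmp and point libpq clients at it explicitly,
# either per command or via the environment:
psql -h /tmp -p 5432 dbt5
export PGHOST=/tmp

Either approach works; the important thing is that the server, the DBT-5 driver built against the system libpq, and any psql used for ad-hoc checks all agree on one socket directory.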
[ { "msg_contents": "I have been wrestling with the configuration of the dedicated Postges 9.0.3 server at work and granted, there's more activity on the production server, but the same queries take twice as long on the beefier server than my mac at home. I have pasted what I have changed in postgresql.conf - I am wondering if there's any way one can help me change things around to be more efficient.\n\nDedicated PostgreSQL 9.0.3 Server with 16GB Ram\n\nHeavy write and read (for reporting and calculations) server. \n\nmax_connections = 350 \nshared_buffers = 4096MB \nwork_mem = 32MB\nmaintenance_work_mem = 512MB\n\n\nseq_page_cost = 0.02 # measured on an arbitrary scale\nrandom_page_cost = 0.03 \ncpu_tuple_cost = 0.02 \neffective_cache_size = 8192MB\n\n\n\nThe planner costs seem a bit low but this was from suggestions from this very list a while ago. \n\n\nThank you\n\nOgden\nI have been wrestling with the configuration of the dedicated Postges 9.0.3 server at work and granted, there's more activity on the production server, but the same queries take twice as long on the beefier server than my mac at home. I have pasted what I have changed in postgresql.conf - I am wondering if there's any way one can help me change things around to be more efficient.Dedicated PostgreSQL 9.0.3 Server with 16GB RamHeavy write and read (for reporting and calculations) server. max_connections = 350 shared_buffers = 4096MB  work_mem = 32MBmaintenance_work_mem = 512MBseq_page_cost = 0.02                    # measured on an arbitrary scalerandom_page_cost = 0.03 cpu_tuple_cost = 0.02  effective_cache_size = 8192MBThe planner costs seem a bit low but this was from suggestions from this very list a while ago. Thank youOgden", "msg_date": "Tue, 12 Apr 2011 11:36:19 -0500", "msg_from": "Ogden <[email protected]>", "msg_from_op": true, "msg_subject": "Performance " }, { "msg_contents": "Ogden <[email protected]> wrote:\n\n> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n> server at work and granted, there's more activity on the production server, but\n> the same queries take twice as long on the beefier server than my mac at home.\n> I have pasted what I have changed in postgresql.conf - I am wondering if\n> there's any way one can help me change things around to be more efficient.\n> \n> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n> \n> Heavy write and read (for reporting and calculations) server. \n> \n> max_connections = 350 \n> shared_buffers = 4096MB \n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n\nThat's okay.\n\n\n> \n> \n> seq_page_cost = 0.02 # measured on an arbitrary scale\n> random_page_cost = 0.03 \n\nDo you have super, Super, SUPER fast disks? I think, this (seq_page_cost\nand random_page_cost) are completly wrong.\n\n\n\n> cpu_tuple_cost = 0.02 \n> effective_cache_size = 8192MB\n> \n> \n> \n> The planner costs seem a bit low but this was from suggestions from this very\n> list a while ago. \n\nSure? Can you tell us a link into the archive?\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. 
N 51.05082�, E 13.56889�\n", "msg_date": "Tue, 12 Apr 2011 19:18:55 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "\nOn Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:\n\n> Ogden <[email protected]> wrote:\n> \n>> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n>> server at work and granted, there's more activity on the production server, but\n>> the same queries take twice as long on the beefier server than my mac at home.\n>> I have pasted what I have changed in postgresql.conf - I am wondering if\n>> there's any way one can help me change things around to be more efficient.\n>> \n>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n>> \n>> Heavy write and read (for reporting and calculations) server. \n>> \n>> max_connections = 350 \n>> shared_buffers = 4096MB \n>> work_mem = 32MB\n>> maintenance_work_mem = 512MB\n> \n> That's okay.\n> \n> \n>> \n>> \n>> seq_page_cost = 0.02 # measured on an arbitrary scale\n>> random_page_cost = 0.03 \n> \n> Do you have super, Super, SUPER fast disks? I think, this (seq_page_cost\n> and random_page_cost) are completly wrong.\n> \n\nNo, I don't have super fast disks. Just the 15K SCSI over RAID. I find by raising them to:\n\nseq_page_cost = 1.0\nrandom_page_cost = 3.0\ncpu_tuple_cost = 0.3\n#cpu_index_tuple_cost = 0.005 # same scale as above - 0.005\n#cpu_operator_cost = 0.0025 # same scale as above\neffective_cache_size = 8192MB \n\nThat this is better, some queries run much faster. Is this better?\n\nI will find the archive and post. \n\nThank you\n\nOgden\n\n\n", "msg_date": "Tue, 12 Apr 2011 12:23:15 -0500", "msg_from": "Ogden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 12.4.2011 19:23, Ogden napsal(a):\n> \n> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:\n> \n>> Ogden <[email protected]> wrote:\n>>\n>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n>>> server at work and granted, there's more activity on the production server, but\n>>> the same queries take twice as long on the beefier server than my mac at home.\n>>> I have pasted what I have changed in postgresql.conf - I am wondering if\n>>> there's any way one can help me change things around to be more efficient.\n>>>\n>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n>>>\n>>> Heavy write and read (for reporting and calculations) server. \n>>>\n>>> max_connections = 350 \n>>> shared_buffers = 4096MB \n>>> work_mem = 32MB\n>>> maintenance_work_mem = 512MB\n>>\n>> That's okay.\n>>\n>>\n>>>\n>>>\n>>> seq_page_cost = 0.02 # measured on an arbitrary scale\n>>> random_page_cost = 0.03 \n>>\n>> Do you have super, Super, SUPER fast disks? I think, this (seq_page_cost\n>> and random_page_cost) are completly wrong.\n>>\n> \n> No, I don't have super fast disks. Just the 15K SCSI over RAID. I\n> find by raising them to:\n> \n> seq_page_cost = 1.0\n> random_page_cost = 3.0\n> cpu_tuple_cost = 0.3\n> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005\n> #cpu_operator_cost = 0.0025 # same scale as above\n> effective_cache_size = 8192MB \n> \n> That this is better, some queries run much faster. Is this better?\n\nI guess it is. 
What really matters with those cost variables is the\nrelative scale - the original values\n\nseq_page_cost = 0.02\nrandom_page_cost = 0.03\ncpu_tuple_cost = 0.02\n\nsuggest that the random reads are almost as expensive as sequential\nreads (which usually is not true - the random reads are significantly\nmore expensive), and that processing each row is about as expensive as\nreading the page from disk (again, reading data from disk is much more\nexpensive than processing them).\n\nSo yes, the current values are much more likely to give good results.\n\nYou've mentioned those values were recommended on this list - can you\npoint out the actual discussion?\n\nregards\nTomas\n", "msg_date": "Tue, 12 Apr 2011 20:16:26 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "\nOn Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:\n\n> Dne 12.4.2011 19:23, Ogden napsal(a):\n>> \n>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:\n>> \n>>> Ogden <[email protected]> wrote:\n>>> \n>>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n>>>> server at work and granted, there's more activity on the production server, but\n>>>> the same queries take twice as long on the beefier server than my mac at home.\n>>>> I have pasted what I have changed in postgresql.conf - I am wondering if\n>>>> there's any way one can help me change things around to be more efficient.\n>>>> \n>>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n>>>> \n>>>> Heavy write and read (for reporting and calculations) server. \n>>>> \n>>>> max_connections = 350 \n>>>> shared_buffers = 4096MB \n>>>> work_mem = 32MB\n>>>> maintenance_work_mem = 512MB\n>>> \n>>> That's okay.\n>>> \n>>> \n>>>> \n>>>> \n>>>> seq_page_cost = 0.02 # measured on an arbitrary scale\n>>>> random_page_cost = 0.03 \n>>> \n>>> Do you have super, Super, SUPER fast disks? I think, this (seq_page_cost\n>>> and random_page_cost) are completly wrong.\n>>> \n>> \n>> No, I don't have super fast disks. Just the 15K SCSI over RAID. I\n>> find by raising them to:\n>> \n>> seq_page_cost = 1.0\n>> random_page_cost = 3.0\n>> cpu_tuple_cost = 0.3\n>> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005\n>> #cpu_operator_cost = 0.0025 # same scale as above\n>> effective_cache_size = 8192MB \n>> \n>> That this is better, some queries run much faster. Is this better?\n> \n> I guess it is. What really matters with those cost variables is the\n> relative scale - the original values\n> \n> seq_page_cost = 0.02\n> random_page_cost = 0.03\n> cpu_tuple_cost = 0.02\n> \n> suggest that the random reads are almost as expensive as sequential\n> reads (which usually is not true - the random reads are significantly\n> more expensive), and that processing each row is about as expensive as\n> reading the page from disk (again, reading data from disk is much more\n> expensive than processing them).\n> \n> So yes, the current values are much more likely to give good results.\n> \n> You've mentioned those values were recommended on this list - can you\n> point out the actual discussion?\n> \n> \n\nThank you for your reply. 
\n\nhttp://archives.postgresql.org/pgsql-performance/2010-09/msg00169.php is how I first played with those values...\n\nOgden", "msg_date": "Tue, 12 Apr 2011 13:28:14 -0500", "msg_from": "Ogden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 12.4.2011 20:28, Ogden napsal(a):\n> \n> On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:\n> \n>> Dne 12.4.2011 19:23, Ogden napsal(a):\n>>>\n>>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:\n>>>\n>>>> Ogden <[email protected]> wrote:\n>>>>\n>>>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n>>>>> server at work and granted, there's more activity on the production server, but\n>>>>> the same queries take twice as long on the beefier server than my mac at home.\n>>>>> I have pasted what I have changed in postgresql.conf - I am wondering if\n>>>>> there's any way one can help me change things around to be more efficient.\n>>>>>\n>>>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n>>>>>\n>>>>> Heavy write and read (for reporting and calculations) server. \n>>>>>\n>>>>> max_connections = 350 \n>>>>> shared_buffers = 4096MB \n>>>>> work_mem = 32MB\n>>>>> maintenance_work_mem = 512MB\n>>>>\n>>>> That's okay.\n>>>>\n>>>>\n>>>>>\n>>>>>\n>>>>> seq_page_cost = 0.02 # measured on an arbitrary scale\n>>>>> random_page_cost = 0.03 \n>>>>\n>>>> Do you have super, Super, SUPER fast disks? I think, this (seq_page_cost\n>>>> and random_page_cost) are completly wrong.\n>>>>\n>>>\n>>> No, I don't have super fast disks. Just the 15K SCSI over RAID. I\n>>> find by raising them to:\n>>>\n>>> seq_page_cost = 1.0\n>>> random_page_cost = 3.0\n>>> cpu_tuple_cost = 0.3\n>>> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005\n>>> #cpu_operator_cost = 0.0025 # same scale as above\n>>> effective_cache_size = 8192MB \n>>>\n>>> That this is better, some queries run much faster. Is this better?\n>>\n>> I guess it is. What really matters with those cost variables is the\n>> relative scale - the original values\n>>\n>> seq_page_cost = 0.02\n>> random_page_cost = 0.03\n>> cpu_tuple_cost = 0.02\n>>\n>> suggest that the random reads are almost as expensive as sequential\n>> reads (which usually is not true - the random reads are significantly\n>> more expensive), and that processing each row is about as expensive as\n>> reading the page from disk (again, reading data from disk is much more\n>> expensive than processing them).\n>>\n>> So yes, the current values are much more likely to give good results.\n>>\n>> You've mentioned those values were recommended on this list - can you\n>> point out the actual discussion?\n>>\n>>\n> \n> Thank you for your reply. \n> \n> http://archives.postgresql.org/pgsql-performance/2010-09/msg00169.php is how I first played with those values...\n> \n\nOK, what JD said there generally makes sense, although those values are\na bit extreme - in most cases it's recommended to leave seq_page_cost=1\nand decrease the random_page_cost (to 2, the dafault value is 4). That\nusually pushes the planner towards index scans.\n\nI'm not saying those small values (0.02 etc.) are bad, but I guess the\neffect is about the same and it changes the impact of the other cost\nvariables (cpu_tuple_cost, etc.)\n\nI see there is 16GB of RAM but shared_buffers are just 4GB. So there's\nnothing else running and the rest of the RAM is used for pagecache? I've\nnoticed the previous discussion mentions there are 8GB of RAM and the DB\nsize is 7GB (so it might fit into memory). 
Is this still the case?\n\nregards\nTomas\n", "msg_date": "Tue, 12 Apr 2011 23:09:54 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "\nOn Apr 12, 2011, at 4:09 PM, Tomas Vondra wrote:\n\n> Dne 12.4.2011 20:28, Ogden napsal(a):\n>> \n>> On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:\n>> \n>>> Dne 12.4.2011 19:23, Ogden napsal(a):\n>>>> \n>>>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:\n>>>> \n>>>>> Ogden <[email protected]> wrote:\n>>>>> \n>>>>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n>>>>>> server at work and granted, there's more activity on the production server, but\n>>>>>> the same queries take twice as long on the beefier server than my mac at home.\n>>>>>> I have pasted what I have changed in postgresql.conf - I am wondering if\n>>>>>> there's any way one can help me change things around to be more efficient.\n>>>>>> \n>>>>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n>>>>>> \n>>>>>> Heavy write and read (for reporting and calculations) server. \n>>>>>> \n>>>>>> max_connections = 350 \n>>>>>> shared_buffers = 4096MB \n>>>>>> work_mem = 32MB\n>>>>>> maintenance_work_mem = 512MB\n>>>>> \n>>>>> That's okay.\n>>>>> \n>>>>> \n>>>>>> \n>>>>>> \n>>>>>> seq_page_cost = 0.02 # measured on an arbitrary scale\n>>>>>> random_page_cost = 0.03 \n>>>>> \n>>>>> Do you have super, Super, SUPER fast disks? I think, this (seq_page_cost\n>>>>> and random_page_cost) are completly wrong.\n>>>>> \n>>>> \n>>>> No, I don't have super fast disks. Just the 15K SCSI over RAID. I\n>>>> find by raising them to:\n>>>> \n>>>> seq_page_cost = 1.0\n>>>> random_page_cost = 3.0\n>>>> cpu_tuple_cost = 0.3\n>>>> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005\n>>>> #cpu_operator_cost = 0.0025 # same scale as above\n>>>> effective_cache_size = 8192MB \n>>>> \n>>>> That this is better, some queries run much faster. Is this better?\n>>> \n>>> I guess it is. What really matters with those cost variables is the\n>>> relative scale - the original values\n>>> \n>>> seq_page_cost = 0.02\n>>> random_page_cost = 0.03\n>>> cpu_tuple_cost = 0.02\n>>> \n>>> suggest that the random reads are almost as expensive as sequential\n>>> reads (which usually is not true - the random reads are significantly\n>>> more expensive), and that processing each row is about as expensive as\n>>> reading the page from disk (again, reading data from disk is much more\n>>> expensive than processing them).\n>>> \n>>> So yes, the current values are much more likely to give good results.\n>>> \n>>> You've mentioned those values were recommended on this list - can you\n>>> point out the actual discussion?\n>>> \n>>> \n>> \n>> Thank you for your reply. \n>> \n>> http://archives.postgresql.org/pgsql-performance/2010-09/msg00169.php is how I first played with those values...\n>> \n> \n> OK, what JD said there generally makes sense, although those values are\n> a bit extreme - in most cases it's recommended to leave seq_page_cost=1\n> and decrease the random_page_cost (to 2, the dafault value is 4). That\n> usually pushes the planner towards index scans.\n> \n> I'm not saying those small values (0.02 etc.) are bad, but I guess the\n> effect is about the same and it changes the impact of the other cost\n> variables (cpu_tuple_cost, etc.)\n> \n> I see there is 16GB of RAM but shared_buffers are just 4GB. So there's\n> nothing else running and the rest of the RAM is used for pagecache? 
I've\n> noticed the previous discussion mentions there are 8GB of RAM and the DB\n> size is 7GB (so it might fit into memory). Is this still the case?\n> \n> regards\n> Tomas\n\n\nThomas,\n\nBy decreasing random_page_cost to 2 (instead of 4), there is a slight performance decrease as opposed to leaving it just at 4. For example, if I set it 3 (or 4), a query may take 0.057 seconds. The same query takes 0.144s when I set random_page_cost to 2. Should I keep it at 3 (or 4) as I have done now?\n\nYes there is 16GB of RAM but the database is much bigger than that. Should I increase shared_buffers?\n\nThank you so very much\n\nOgden", "msg_date": "Tue, 12 Apr 2011 16:19:32 -0500", "msg_from": "Ogden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 12.4.2011 23:19, Ogden napsal(a):\n> \n> On Apr 12, 2011, at 4:09 PM, Tomas Vondra wrote:\n> \n>> Dne 12.4.2011 20:28, Ogden napsal(a):\n>>>\n>>> On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:\n>>>\n>>>> Dne 12.4.2011 19:23, Ogden napsal(a):\n>>>>>\n>>>>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:\n>>>>>\n>>>>>> Ogden <[email protected]> wrote:\n>>>>>>\n>>>>>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n>>>>>>> server at work and granted, there's more activity on the production server, but\n>>>>>>> the same queries take twice as long on the beefier server than my mac at home.\n>>>>>>> I have pasted what I have changed in postgresql.conf - I am wondering if\n>>>>>>> there's any way one can help me change things around to be more efficient.\n>>>>>>>\n>>>>>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n>>>>>>>\n>>>>>>> Heavy write and read (for reporting and calculations) server. \n>>>>>>>\n>>>>>>> max_connections = 350 \n>>>>>>> shared_buffers = 4096MB \n>>>>>>> work_mem = 32MB\n>>>>>>> maintenance_work_mem = 512MB\n>>>>>>\n>>>>>> That's okay.\n>>>>>>\n>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> seq_page_cost = 0.02 # measured on an arbitrary scale\n>>>>>>> random_page_cost = 0.03 \n>>>>>>\n>>>>>> Do you have super, Super, SUPER fast disks? I think, this (seq_page_cost\n>>>>>> and random_page_cost) are completly wrong.\n>>>>>>\n>>>>>\n>>>>> No, I don't have super fast disks. Just the 15K SCSI over RAID. I\n>>>>> find by raising them to:\n>>>>>\n>>>>> seq_page_cost = 1.0\n>>>>> random_page_cost = 3.0\n>>>>> cpu_tuple_cost = 0.3\n>>>>> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005\n>>>>> #cpu_operator_cost = 0.0025 # same scale as above\n>>>>> effective_cache_size = 8192MB \n>>>>>\n>>>>> That this is better, some queries run much faster. Is this better?\n>>>>\n>>>> I guess it is. What really matters with those cost variables is the\n>>>> relative scale - the original values\n>>>>\n>>>> seq_page_cost = 0.02\n>>>> random_page_cost = 0.03\n>>>> cpu_tuple_cost = 0.02\n>>>>\n>>>> suggest that the random reads are almost as expensive as sequential\n>>>> reads (which usually is not true - the random reads are significantly\n>>>> more expensive), and that processing each row is about as expensive as\n>>>> reading the page from disk (again, reading data from disk is much more\n>>>> expensive than processing them).\n>>>>\n>>>> So yes, the current values are much more likely to give good results.\n>>>>\n>>>> You've mentioned those values were recommended on this list - can you\n>>>> point out the actual discussion?\n>>>>\n>>>>\n>>>\n>>> Thank you for your reply. 
\n>>>\n>>> http://archives.postgresql.org/pgsql-performance/2010-09/msg00169.php is how I first played with those values...\n>>>\n>>\n>> OK, what JD said there generally makes sense, although those values are\n>> a bit extreme - in most cases it's recommended to leave seq_page_cost=1\n>> and decrease the random_page_cost (to 2, the dafault value is 4). That\n>> usually pushes the planner towards index scans.\n>>\n>> I'm not saying those small values (0.02 etc.) are bad, but I guess the\n>> effect is about the same and it changes the impact of the other cost\n>> variables (cpu_tuple_cost, etc.)\n>>\n>> I see there is 16GB of RAM but shared_buffers are just 4GB. So there's\n>> nothing else running and the rest of the RAM is used for pagecache? I've\n>> noticed the previous discussion mentions there are 8GB of RAM and the DB\n>> size is 7GB (so it might fit into memory). Is this still the case?\n>>\n>> regards\n>> Tomas\n> \n> \n> Thomas,\n> \n> By decreasing random_page_cost to 2 (instead of 4), there is a slight performance decrease as opposed to leaving it just at 4. For example, if I set it 3 (or 4), a query may take 0.057 seconds. The same query takes 0.144s when I set random_page_cost to 2. Should I keep it at 3 (or 4) as I have done now?\n> \n> Yes there is 16GB of RAM but the database is much bigger than that. Should I increase shared_buffers?\n\nOK, that's a very important information and it kinda explains all the\nproblems you had. When the planner decides what execution plan to use,\nit computes a 'virtual cost' for different plans and then chooses the\ncheapest one.\n\nDecreasing 'random_page_cost' decreases the expected cost of plans\ninvolving index scans, so that at a certain point it seems cheaper than\na plan using sequential scans etc.\n\nYou can see this when using EXPLAIN - do it with the original cost\nvalues, then change the values (for that session only) and do the\nEXPLAIN only. You'll see how the execution plan suddenly changes and\nstarts to use index scans.\n\nThe problem with random I/O is that it's usually much more expensive\nthan sequential I/O as the drives need to seek etc. The only case when\nrandom I/O is just as cheap as sequential I/O is when all the data is\ncached in memory, because within RAM there's no difference between\nrandom and sequential access (right, that's why it's called Random\nAccess Memory).\n\nSo in the previous post setting both random_page_cost and seq_page_cost\nto the same value makes sense, because when the whole database fits into\nthe memory, there's no difference and index scans are favorable.\n\nIn this case (the database is much bigger than the available RAM) this\nno longer holds - index scans hit the drives, resulting in a lot of\nseeks etc. So it's a serious performance killer ...\n\nNot sure about increasing the shared_buffers - if the block is not found\nin shared buffers, it still might be found in pagecache (without need to\ndo a physical read). 
There are ways to check if the current size of\nshared buffers is enough or not - I usually use pg_stat views (bgwriter\nand database).\n\nregards\nTomas\n", "msg_date": "Wed, 13 Apr 2011 00:36:58 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "\nOn Apr 12, 2011, at 5:36 PM, Tomas Vondra wrote:\n\n> Dne 12.4.2011 23:19, Ogden napsal(a):\n>> \n>> On Apr 12, 2011, at 4:09 PM, Tomas Vondra wrote:\n>> \n>>> Dne 12.4.2011 20:28, Ogden napsal(a):\n>>>> \n>>>> On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:\n>>>> \n>>>>> Dne 12.4.2011 19:23, Ogden napsal(a):\n>>>>>> \n>>>>>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:\n>>>>>> \n>>>>>>> Ogden <[email protected]> wrote:\n>>>>>>> \n>>>>>>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3\n>>>>>>>> server at work and granted, there's more activity on the production server, but\n>>>>>>>> the same queries take twice as long on the beefier server than my mac at home.\n>>>>>>>> I have pasted what I have changed in postgresql.conf - I am wondering if\n>>>>>>>> there's any way one can help me change things around to be more efficient.\n>>>>>>>> \n>>>>>>>> Dedicated PostgreSQL 9.0.3 Server with 16GB Ram\n>>>>>>>> \n>>>>>>>> Heavy write and read (for reporting and calculations) server. \n>>>>>>>> \n>>>>>>>> max_connections = 350 \n>>>>>>>> shared_buffers = 4096MB \n>>>>>>>> work_mem = 32MB\n>>>>>>>> maintenance_work_mem = 512MB\n>>>>>>> \n>>>>>>> That's okay.\n>>>>>>> \n>>>>>>> \n>>>>>>>> \n>>>>>>>> \n>>>>>>>> seq_page_cost = 0.02 # measured on an arbitrary scale\n>>>>>>>> random_page_cost = 0.03 \n>>>>>>> \n>>>>>>> Do you have super, Super, SUPER fast disks? I think, this (seq_page_cost\n>>>>>>> and random_page_cost) are completly wrong.\n>>>>>>> \n>>>>>> \n>>>>>> No, I don't have super fast disks. Just the 15K SCSI over RAID. I\n>>>>>> find by raising them to:\n>>>>>> \n>>>>>> seq_page_cost = 1.0\n>>>>>> random_page_cost = 3.0\n>>>>>> cpu_tuple_cost = 0.3\n>>>>>> #cpu_index_tuple_cost = 0.005 # same scale as above - 0.005\n>>>>>> #cpu_operator_cost = 0.0025 # same scale as above\n>>>>>> effective_cache_size = 8192MB \n>>>>>> \n>>>>>> That this is better, some queries run much faster. Is this better?\n>>>>> \n>>>>> I guess it is. What really matters with those cost variables is the\n>>>>> relative scale - the original values\n>>>>> \n>>>>> seq_page_cost = 0.02\n>>>>> random_page_cost = 0.03\n>>>>> cpu_tuple_cost = 0.02\n>>>>> \n>>>>> suggest that the random reads are almost as expensive as sequential\n>>>>> reads (which usually is not true - the random reads are significantly\n>>>>> more expensive), and that processing each row is about as expensive as\n>>>>> reading the page from disk (again, reading data from disk is much more\n>>>>> expensive than processing them).\n>>>>> \n>>>>> So yes, the current values are much more likely to give good results.\n>>>>> \n>>>>> You've mentioned those values were recommended on this list - can you\n>>>>> point out the actual discussion?\n>>>>> \n>>>>> \n>>>> \n>>>> Thank you for your reply. \n>>>> \n>>>> http://archives.postgresql.org/pgsql-performance/2010-09/msg00169.php is how I first played with those values...\n>>>> \n>>> \n>>> OK, what JD said there generally makes sense, although those values are\n>>> a bit extreme - in most cases it's recommended to leave seq_page_cost=1\n>>> and decrease the random_page_cost (to 2, the dafault value is 4). 
That\n>>> usually pushes the planner towards index scans.\n>>> \n>>> I'm not saying those small values (0.02 etc.) are bad, but I guess the\n>>> effect is about the same and it changes the impact of the other cost\n>>> variables (cpu_tuple_cost, etc.)\n>>> \n>>> I see there is 16GB of RAM but shared_buffers are just 4GB. So there's\n>>> nothing else running and the rest of the RAM is used for pagecache? I've\n>>> noticed the previous discussion mentions there are 8GB of RAM and the DB\n>>> size is 7GB (so it might fit into memory). Is this still the case?\n>>> \n>>> regards\n>>> Tomas\n>> \n>> \n>> Thomas,\n>> \n>> By decreasing random_page_cost to 2 (instead of 4), there is a slight performance decrease as opposed to leaving it just at 4. For example, if I set it 3 (or 4), a query may take 0.057 seconds. The same query takes 0.144s when I set random_page_cost to 2. Should I keep it at 3 (or 4) as I have done now?\n>> \n>> Yes there is 16GB of RAM but the database is much bigger than that. Should I increase shared_buffers?\n> \n> OK, that's a very important information and it kinda explains all the\n> problems you had. When the planner decides what execution plan to use,\n> it computes a 'virtual cost' for different plans and then chooses the\n> cheapest one.\n> \n> Decreasing 'random_page_cost' decreases the expected cost of plans\n> involving index scans, so that at a certain point it seems cheaper than\n> a plan using sequential scans etc.\n> \n> You can see this when using EXPLAIN - do it with the original cost\n> values, then change the values (for that session only) and do the\n> EXPLAIN only. You'll see how the execution plan suddenly changes and\n> starts to use index scans.\n> \n> The problem with random I/O is that it's usually much more expensive\n> than sequential I/O as the drives need to seek etc. The only case when\n> random I/O is just as cheap as sequential I/O is when all the data is\n> cached in memory, because within RAM there's no difference between\n> random and sequential access (right, that's why it's called Random\n> Access Memory).\n> \n> So in the previous post setting both random_page_cost and seq_page_cost\n> to the same value makes sense, because when the whole database fits into\n> the memory, there's no difference and index scans are favorable.\n> \n> In this case (the database is much bigger than the available RAM) this\n> no longer holds - index scans hit the drives, resulting in a lot of\n> seeks etc. So it's a serious performance killer ...\n> \n> Not sure about increasing the shared_buffers - if the block is not found\n> in shared buffers, it still might be found in pagecache (without need to\n> do a physical read). There are ways to check if the current size of\n> shared buffers is enough or not - I usually use pg_stat views (bgwriter\n> and database).\n\n\nThomas,\n\nThank you for your very detailed and well written description. In conclusion, I should keep my random_page_cost (3.0) to a value more than seq_page_cost (1.0)? Is this bad practice or will this suffice for my setup (where the database is much bigger than the RAM in the system)? Or is this not what you are suggesting at all?\n\nThank you\n\nOgden\n\n\n", "msg_date": "Wed, 13 Apr 2011 09:05:13 -0500", "msg_from": "Ogden <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance" }, { "msg_contents": "> Thomas,\n>\n> Thank you for your very detailed and well written description. 
In\n> conclusion, I should keep my random_page_cost (3.0) to a value more than\n> seq_page_cost (1.0)? Is this bad practice or will this suffice for my\n> setup (where the database is much bigger than the RAM in the system)? Or\n> is this not what you are suggesting at all?\n\nYes, keep it that way. The fact that 'random_page_cost >= seq_page_cost'\ngenerally means that random reads are more expensive than sequential\nreads. The actual values are dependent but 4:1 is usually OK, unless your\ndb fits into memory etc.\n\nThe decrease of performance after descreasing random_page_cost to 3 due to\nchanges of some execution plans (the index scan becomes slightly less\nexpensive than seq scan), but in your case it's a false assumption. So\nkeep it at 4 (you may even try to increase it, just to see if that\nimproves the performance).\n\nregards\nTomas\n\n", "msg_date": "Wed, 13 Apr 2011 16:14:42 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Ogden <[email protected]> wrote:\n \n> In conclusion, I should keep my random_page_cost (3.0) to a value\n> more than seq_page_cost (1.0)? Is this bad practice or will this\n> suffice for my setup (where the database is much bigger than the\n> RAM in the system)?\n \nThe idea is to adjust the costing factors to model the actual\nrelative costs of the various actions in your environment with your\nworkload. The best way to determine whether your settings are good\nis to gauge how happy the those using the database are with\nperformance. :-)\n \nThe degree of caching has a large effect on the page costs. We've\nmanaged to keep the active portion of our databases cached to a\ndegree that we have always benefited by reducing the\nrandom_page_cost to 2 or less. Where the entire database is cached,\nwe get the best plans with seq_page_cost and random_page_cost set to\nequal values in the 0.1 to 0.05 range. We've occasionally needed to\nbump the cpu_tuple_cost up a bit relative to other cpu costs, too. \n \nOn the other hand, I've seen reports of people who have found it\nnecessary to increase random_page_cost to get good plans. 
These\nhave been people with large databases where the entire database is\n\"active\" (versus our databases where recent, active data is accessed\nmuch more heavily than, say, 20 year old data).\n \nIf you model the costing to reflect the reality on your server, good\nplans will be chosen.\n \n-Kevin\n", "msg_date": "Wed, 13 Apr 2011 09:32:24 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Wed, Apr 13, 2011 at 4:32 PM, Kevin Grittner\n<[email protected]> wrote:\n> If you model the costing to reflect the reality on your server, good\n> plans will be chosen.\n\nWouldn't it be \"better\" to derive those costs from actual performance\ndata measured at runtime?\n\nSay, pg could measure random/seq page cost, *per tablespace* even.\n\nHas that been tried?\n", "msg_date": "Wed, 13 Apr 2011 23:17:18 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> On Wed, Apr 13, 2011 at 4:32 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> If you model the costing to reflect the reality on your server, good\n>> plans will be chosen.\n\n> Wouldn't it be \"better\" to derive those costs from actual performance\n> data measured at runtime?\n\n> Say, pg could measure random/seq page cost, *per tablespace* even.\n\n> Has that been tried?\n\nGetting numbers that mean much of anything is a slow, expensive\nprocess. You really don't want the database trying to do that for you.\nOnce you've got them, you *really* don't want the database\neditorializing on them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Apr 2011 17:52:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance " }, { "msg_contents": "On Wed, Apr 13, 2011 at 11:52 PM, Tom Lane <[email protected]> wrote:\n> Getting numbers that mean much of anything is a slow, expensive\n> process.  You really don't want the database trying to do that for you.\n> Once you've got them, you *really* don't want the database\n> editorializing on them.\n>\n\nSo it hasn't even been tried.\n", "msg_date": "Wed, 13 Apr 2011 23:54:53 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Claudio Freire <[email protected]> wrote:\n \n> So it hasn't even been tried.\n \nIf you want to do that, I would be interested in your benchmark\nnumbers. Or if you're not up to that, there are a number of\ncompanies which I'd bet would be willing to spend the time if they\nhad a sponsor to pay for their hours. So far nobody has felt it\nlikely enough to be beneficial to want to put their time or money on\nthe line for it. 
Here's your chance to be first.\n \n-Kevin\n", "msg_date": "Wed, 13 Apr 2011 16:59:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": ">> If you model the costing to reflect the reality on your server, good\n>> plans will be chosen.\n>\n> Wouldn't it be \"better\" to derive those costs from actual performance\n> data measured at runtime?\n>\n> Say, pg could measure random/seq page cost, *per tablespace* even.\n>\n> Has that been tried?\n\nFWIW, awhile ago I wrote a simple script to measure this and found\nthat the *actual* random_page / seq_page cost ratio was much higher\nthan 4/1.\n\nThe problem is that caching effects have a large effect on the time it\ntakes to access a random page, and caching effects are very workload\ndependent. So anything automated would probably need to optimize the\nparameter values over a set of 'typical' queries, which is exactly\nwhat a good DBA does when they set random_page_cost...\n\nBest,\nNathan\n", "msg_date": "Wed, 13 Apr 2011 15:05:14 -0700", "msg_from": "Nathan Boley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Nathan Boley <[email protected]> wrote:\n \n> The problem is that caching effects have a large effect on the\n> time it takes to access a random page, and caching effects are\n> very workload dependent. So anything automated would probably need\n> to optimize the parameter values over a set of 'typical' queries,\n> which is exactly what a good DBA does when they set\n> random_page_cost...\n \nAnother database product I've used has a stored procedure you can\nrun to turn on monitoring of workload, another to turn it off and\nreport on what happened during the interval. It drags performance\nenough that you don't want to leave it running except as a tuning\nexercise, but it does produce very detailed statistics and actually\noffers suggestions on what you might try tuning to improve\nperformance. If someone wanted to write something to deal with this\nissue, that seems like a sound overall strategy.\n \n-Kevin\n", "msg_date": "Wed, 13 Apr 2011 17:15:31 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 14.4.2011 00:05, Nathan Boley napsal(a):\n>>> If you model the costing to reflect the reality on your server, good\n>>> plans will be chosen.\n>>\n>> Wouldn't it be \"better\" to derive those costs from actual performance\n>> data measured at runtime?\n>>\n>> Say, pg could measure random/seq page cost, *per tablespace* even.\n>>\n>> Has that been tried?\n> \n> FWIW, awhile ago I wrote a simple script to measure this and found\n> that the *actual* random_page / seq_page cost ratio was much higher\n> than 4/1.\n> \n> The problem is that caching effects have a large effect on the time it\n> takes to access a random page, and caching effects are very workload\n> dependent. So anything automated would probably need to optimize the\n> parameter values over a set of 'typical' queries, which is exactly\n> what a good DBA does when they set random_page_cost...\n\nPlus there's a separate pagecache outside shared_buffers, which adds\nanother layer of complexity.\n\nWhat I was thinking about was a kind of 'autotuning' using real\nworkload. I mean - measure the time it takes to process a request\n(depends on the application - could be time to load a page, process an\ninvoice, whatever ...) 
and compute some reasonable metric on it\n(average, median, variance, ...). Move the cost variables a bit (e.g.\nthe random_page_cost) and see how that influences performance. If it\nimproved, do another step in the same direction, otherwise do step in\nthe other direction (or do no change the values at all).\n\nYes, I've had some lectures on non-linear programming so I'm aware that\nthis won't work if the cost function has multiple extremes (walleys /\nhills etc.) but I somehow suppose that's not the case of cost estimates.\n\nAnother issue is that when measuring multiple values (processing of\ndifferent requests), the decisions may be contradictory so it really\ncan't be fully automatic.\n\nregards\nTomas\n", "msg_date": "Thu, 14 Apr 2011 00:19:26 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra <[email protected]> wrote:\n>\n> Another issue is that when measuring multiple values (processing of\n> different requests), the decisions may be contradictory so it really\n> can't be fully automatic.\n>\n\nI don't think it's soooo dependant on workload. It's dependant on\naccess patterns (and working set sizes), and that all can be\nquantified, as opposed to \"workload\".\n\nI've been meaning to try this for a while yet, and it needs not be as\nexpensive as one would imagine. It just needs a clever implementation\nthat isn't too intrusive and that is customizable enough not to\nalienate DBAs.\n\nI'm not doing database stuff ATM (though I've been doing it for\nseveral years), and I don't expect to return to database tasks for a\nfew months. But whenever I get back to it, sure, I'd be willing to\ninvest time on it.\n\nWhat an automated system can do and a DBA cannot, and it's why this\nidea occurred to me in the first place, is tailor the metrics for\nvariable contexts and situations. Like, I had a DB that was working\nperfectly fine most of the time, but some days it got \"overworked\" and\nsticking with fixed cost variables made no sense - in those\nsituations, random page cost was insanely high because of the\nworkload, but sequential scans would have ran much faster because of\nOS read-ahead and because of synchroscans. I'm talking of a decision\nsupport system that did lots of heavy duty queries, where sequential\nscans are an alternative. I reckon most OLTP systems are different.\n\nSo, to make things short, adaptability to varying conditions is what\nI'd imagine this technique would provide, and a DBA cannot no matter\nhow skilled. That and the advent of SSDs and really really different\ncharacteristics of different tablespaces only strengthen my intuition\nthat automation might be better than parameterization.\n", "msg_date": "Thu, 14 Apr 2011 01:10:05 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 14.4.2011 01:10, Claudio Freire napsal(a):\n> On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra <[email protected]> wrote:\n>>\n>> Another issue is that when measuring multiple values (processing of\n>> different requests), the decisions may be contradictory so it really\n>> can't be fully automatic.\n>>\n> \n> I don't think it's soooo dependant on workload. 
It's dependant on\n> access patterns (and working set sizes), and that all can be\n> quantified, as opposed to \"workload\".\n\nWell, think about a database that's much bigger than the available RAM.\n\nWorkload A: Touches just a very small portion of the database, to the\n'active' part actually fits into the memory. In this case the cache hit\nratio can easily be close to 99%.\n\nWorkload B: Touches large portion of the database, so it hits the drive\nvery often. In this case the cache hit ratio is usually around RAM/(size\nof the database).\n\nSo yes, it may be very workload dependent. In the first case you may\nactually significantly lower the random_page_cost (even to\nseq_page_cost) and it's going to be quite fast (thanks to the cache).\n\nIf you do the same thing with workload B, the database is going to burn.\n\nI'm not saying it's not possible to do some autotuning, but it's a bit\ntricky and it's not just about hardware. The workload *is* a very\nimportant part of the equation.\n\nBut I have to admit this post probably sounds like an overengineering.\nIf you can develop something simple (even if that does not consider\nworkload at all), it might be a useful starting point. If I could help\nyou in any way with this, let me know.\n\nregards\nTomas\n", "msg_date": "Thu, 14 Apr 2011 01:26:29 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Nathan Boley <[email protected]> writes:\n> FWIW, awhile ago I wrote a simple script to measure this and found\n> that the *actual* random_page / seq_page cost ratio was much higher\n> than 4/1.\n\nThat 4:1 ratio is based on some rather extensive experimentation that\nI did back in 2000. In the interim, disk transfer rates have improved\nquite a lot more than disk seek times have, and the CPU cost to process\na page's worth of data has also improved compared to the seek time.\nSo yeah, you'd likely get a higher number if you redid those experiments\non modern hardware (at least assuming it was rotating media and not SSD).\nOn the other hand, the effects of caching push the numbers in the other\ndirection, and modern machines also have a lot more RAM to cache in than\nwas typical ten years ago. I'm not sure how much point there is in\ntrying to improve the default number in the abstract --- we'd really\nneed to have a more robust model of cache effects before I'd trust any\nautomatic tuning procedure to set the value for me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Apr 2011 20:03:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance " }, { "msg_contents": "On 04/13/2011 05:03 PM, Tom Lane wrote:\n> That 4:1 ratio is based on some rather extensive experimentation that\n> I did back in 2000. In the interim, disk transfer rates have improved\n> quite a lot more than disk seek times have, and the CPU cost to process\n> a page's worth of data has also improved compared to the seek time.\nMy experience is that at least a 1/1 is more appropriate.\n\nJD\n", "msg_date": "Wed, 13 Apr 2011 17:37:05 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Wed, Apr 13, 2011 at 5:26 PM, Tomas Vondra <[email protected]> wrote:\n\n> Workload A: Touches just a very small portion of the database, to the\n> 'active' part actually fits into the memory. 
In this case the cache hit\n> ratio can easily be close to 99%.\n>\n> Workload B: Touches large portion of the database, so it hits the drive\n> very often. In this case the cache hit ratio is usually around RAM/(size\n> of the database).\n\nI've had this kind of split-brain operation in the past, where 99% of\nall accesses would be cached, and the 1% that weren't needed their own\ntuning. Luckily you can tune by user (alter user set random_page_cost\netc) so I was able to do that. One of the best features of pgsql\nimnsho.\n", "msg_date": "Wed, 13 Apr 2011 21:39:44 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Thu, Apr 14, 2011 at 1:26 AM, Tomas Vondra <[email protected]> wrote:\n> Workload A: Touches just a very small portion of the database, to the\n> 'active' part actually fits into the memory. In this case the cache hit\n> ratio can easily be close to 99%.\n>\n> Workload B: Touches large portion of the database, so it hits the drive\n> very often. In this case the cache hit ratio is usually around RAM/(size\n> of the database).\n\nYou've answered it yourself without even realized it.\n\nThis particular factor is not about an abstract and opaque \"Workload\"\nthe server can't know about. It's about cache hit rate, and the server\ncan indeed measure that.\n", "msg_date": "Thu, 14 Apr 2011 08:49:56 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "> On Thu, Apr 14, 2011 at 1:26 AM, Tomas Vondra <[email protected]> wrote:\n>> Workload A: Touches just a very small portion of the database, to the\n>> 'active' part actually fits into the memory. In this case the cache hit\n>> ratio can easily be close to 99%.\n>>\n>> Workload B: Touches large portion of the database, so it hits the drive\n>> very often. In this case the cache hit ratio is usually around RAM/(size\n>> of the database).\n>\n> You've answered it yourself without even realized it.\n>\n> This particular factor is not about an abstract and opaque \"Workload\"\n> the server can't know about. It's about cache hit rate, and the server\n> can indeed measure that.\n\nOK, so it's not a matter of tuning random_page_cost/seq_page_cost? Because\ntuning based on cache hit ratio is something completely different (IMHO).\n\nAnyway I'm not an expert in this field, but AFAIK something like this\nalready happens - btw that's the purpose of effective_cache_size. But I'm\nafraid there might be serious fail cases where the current model works\nbetter, e.g. what if you ask for data that's completely uncached (was\ninactive for a long time). But if you have an idea on how to improve this,\ngreat - start a discussion in the hackers list and let's see.\n\nregards\nTomas\n\n", "msg_date": "Thu, 14 Apr 2011 10:23:26 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "2011/4/14 Tom Lane <[email protected]>:\n> Nathan Boley <[email protected]> writes:\n>> FWIW, awhile ago I wrote a simple script to measure this and found\n>> that the *actual* random_page / seq_page cost ratio was much higher\n>> than 4/1.\n>\n> That 4:1 ratio is based on some rather extensive experimentation that\n> I did back in 2000.  
In the interim, disk transfer rates have improved\n> quite a lot more than disk seek times have, and the CPU cost to process\n> a page's worth of data has also improved compared to the seek time.\n> So yeah, you'd likely get a higher number if you redid those experiments\n> on modern hardware (at least assuming it was rotating media and not SSD).\n> On the other hand, the effects of caching push the numbers in the other\n> direction, and modern machines also have a lot more RAM to cache in than\n> was typical ten years ago.  I'm not sure how much point there is in\n> trying to improve the default number in the abstract --- we'd really\n> need to have a more robust model of cache effects before I'd trust any\n> automatic tuning procedure to set the value for me.\n\nWell, at spare time, I am doing some POC with \"ANALYZE OSCACHE\nrelation;\", pg stats are updated accordingly with new data ( it is not\nfinish yet) : at least the percentage in OS cache, maybe the number of\ngroups in cache and/or the distribution.\n\nAnyway the idea is to allow the planner to use random and seq page\ncost to be applyed on the part not-in-cache, without replacing the\nalgo using effective_cache_size. The planner may have one other GUC\nlike 'mem_page_cost' to set a cost on access from cache and use it\nwhile estinating the cost...\n\nSide effect is that random page cost and seq page cost should be more\nstable and easiest to set based on a script because they won't have\nthe mixed sources of disk/memory, only the disk acces cost. (if\nANALYZE OSCACHE is good enough)\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 14 Apr 2011 13:29:35 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Apr 14, 2011, at 2:49 AM, Claudio Freire <[email protected]> wrote:\n> This particular factor is not about an abstract and opaque \"Workload\"\n> the server can't know about. It's about cache hit rate, and the server\n> can indeed measure that.\n\nThe server can and does measure hit rates for the PG buffer pool, but to my knowledge there is no clear-cut way for PG to know whether read() is satisfied from the OS cache or a drive cache or the platter.\n\n...Robert\n", "msg_date": "Tue, 26 Apr 2011 01:30:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Apr 13, 2011, at 6:19 PM, Tomas Vondra <[email protected]> wrote:\n> Yes, I've had some lectures on non-linear programming so I'm aware that\n> this won't work if the cost function has multiple extremes (walleys /\n> hills etc.) but I somehow suppose that's not the case of cost estimates.\n\nI think that supposition might turn out to be incorrect, though. Probably what will happen on simple queries is that a small change will make no difference, and a large enough change will cause a plan change. 
On complex queries it will approach continuous variation but why shouldn't there be local minima?\n\n...Robert", "msg_date": "Tue, 26 Apr 2011 01:35:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Tue, Apr 26, 2011 at 7:30 AM, Robert Haas <[email protected]> wrote:\n> On Apr 14, 2011, at 2:49 AM, Claudio Freire <[email protected]> wrote:\n>> This particular factor is not about an abstract and opaque \"Workload\"\n>> the server can't know about. It's about cache hit rate, and the server\n>> can indeed measure that.\n>\n> The server can and does measure hit rates for the PG buffer pool, but to my knowledge there is no clear-cut way for PG to know whether read() is satisfied from the OS cache or a drive cache or the platter.\n\nIsn't latency an indicator?\n\nIf you plot latencies, you should see three markedly obvious clusters:\nOS cache (microseconds), Drive cache (slightly slower), platter\n(tail).\n\nI think I had seen a study of sorts somewhere[0]...\n\nOk, that link is about sequential/random access, but I distinctively\nremember one about caches and CAV...\n\n[0] http://blogs.sun.com/brendan/entry/heat_map_analytics\n", "msg_date": "Tue, 26 Apr 2011 09:49:39 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 26.4.2011 07:35, Robert Haas napsal(a):\n> On Apr 13, 2011, at 6:19 PM, Tomas Vondra <[email protected]> wrote:\n>> Yes, I've had some lectures on non-linear programming so I'm aware that\n>> this won't work if the cost function has multiple extremes (walleys /\n>> hills etc.) but I somehow suppose that's not the case of cost estimates.\n> \n> I think that supposition might turn out to be incorrect, though. Probably\n> what will happen on simple queries is that a small change will make no\n> difference, and a large enough change will cause a plan change. On\n> complex queries it will approach continuous variation but why\n> shouldn't there be local minima?\n\nAaaah, damn! I was not talking about cost estimates - those obviously do\nnot have this feature, as you've pointed out (thanks!).\n\nI was talking about the 'response time' I mentioned when describing the\nautotuning using real workload. The idea is to change the costs a bit\nand then measure the average response time - if the overall performance\nimproved, do another step in the same direction. Etc.\n\nI wonder if there are cases where an increase of random_page_cost would\nhurt performance, and another increase would improve it ... And I'm not\ntalking about individual queries, I'm talking about overall performance.\n\nregards\nTomas\n", "msg_date": "Tue, 26 Apr 2011 20:54:34 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "[email protected] wrote:\n> Anyway I'm not an expert in this field, but AFAIK something like this\n> already happens - btw that's the purpose of effective_cache_size.\n\neffective_cache_size probably doesn't do as much as you suspect. It is \nused for one of the computations for whether an index is small enough \nthat it can likely be read into memory efficiently. It has no impact on \ncaching decisions outside of that.\n\nAs for the ideas bouncing around here for tinkering with \nrandom_page_size more automatically, I have a notebook with about a \ndozen different ways to do that I've come up with over the last few \nyears. 
The reason no work can be done in this area is because there are \nno standardized benchmarks of query execution in PostgreSQL being run \nregularly right now. Bringing up ideas for changing the computation is \neasy; proving that such a change is positive on enough workloads to be \nworth considering is the hard part. There is no useful discussion to be \nmade on the hackers list that doesn't start with \"here's the mix the \nbenchmarks I intend to test this new model against\".\n\nPerformance regression testing for the the query optimizer is a giant \npile of boring work we get minimal volunteers interested in. Nobody \ngets to do the fun model change work without doing that first though. \nFor this type of change, you're guaranteed to just be smacking around \nparameters to optimize for only a single case without some broader \nbenchmarking context.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 27 Apr 2011 14:49:17 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n \n> The reason no work can be done in this area is because there are \n> no standardized benchmarks of query execution in PostgreSQL being\n> run regularly right now. Bringing up ideas for changing the\n> computation is easy; proving that such a change is positive on\n> enough workloads to be worth considering is the hard part. There\n> is no useful discussion to be made on the hackers list that\n> doesn't start with \"here's the mix the benchmarks I intend to test\n> this new model against\".\n \nThis is looming as an ever-more-acute need for the project, in\nseveral areas.\n \n-Kevin\n", "msg_date": "Wed, 27 Apr 2011 13:56:48 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Tue, Apr 26, 2011 at 8:54 PM, Tomas Vondra <[email protected]> wrote:\n> I wonder if there are cases where an increase of random_page_cost would\n> hurt performance, and another increase would improve it ... And I'm not\n> talking about individual queries, I'm talking about overall performance.\n\nI don't think there are many. But I don't think you can assume that\nthere are none.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 27 Apr 2011 22:22:51 +0200", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Tue, Apr 26, 2011 at 9:49 AM, Claudio Freire <[email protected]> wrote:\n> On Tue, Apr 26, 2011 at 7:30 AM, Robert Haas <[email protected]> wrote:\n>> On Apr 14, 2011, at 2:49 AM, Claudio Freire <[email protected]> wrote:\n>>> This particular factor is not about an abstract and opaque \"Workload\"\n>>> the server can't know about. 
It's about cache hit rate, and the server\n>>> can indeed measure that.\n>>\n>> The server can and does measure hit rates for the PG buffer pool, but to my knowledge there is no clear-cut way for PG to know whether read() is satisfied from the OS cache or a drive cache or the platter.\n>\n> Isn't latency an indicator?\n>\n> If you plot latencies, you should see three markedly obvious clusters:\n> OS cache (microseconds), Drive cache (slightly slower), platter\n> (tail).\n\nWhat if the user is using an SSD or ramdisk?\n\nAdmittedly, in many cases, we could probably get somewhat useful\nnumbers this way. But I think it would be pretty expensive.\ngettimeofday() is one of the reasons why running EXPLAIN ANALYZE on a\nquery is significantly slower than just running it normally. I bet if\nwe put such calls around every read() and write(), it would cause a\nBIG slowdown for workloads that don't fit in shared_buffers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 27 Apr 2011 22:27:48 +0200", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 27.4.2011 20:56, Kevin Grittner napsal(a):\n> Greg Smith <[email protected]> wrote:\n> \n>> The reason no work can be done in this area is because there are \n>> no standardized benchmarks of query execution in PostgreSQL being\n>> run regularly right now. Bringing up ideas for changing the\n>> computation is easy; proving that such a change is positive on\n>> enough workloads to be worth considering is the hard part. There\n>> is no useful discussion to be made on the hackers list that\n>> doesn't start with \"here's the mix the benchmarks I intend to test\n>> this new model against\".\n> \n> This is looming as an ever-more-acute need for the project, in\n> several areas.\n\nHmmm, just wondering - what would be needed to build such 'workload\nlibrary'? Building it from scratch is not feasible IMHO, but I guess\npeople could provide their own scripts (as simple as 'set up a a bunch\nof tables, fill it with data, run some queries') and there's a pile of\nsuch examples in the pgsql-performance list.\n\nregards\nTomas\n", "msg_date": "Wed, 27 Apr 2011 22:41:35 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Wed, Apr 27, 2011 at 10:27 PM, Robert Haas <[email protected]> wrote:\n>\n> What if the user is using an SSD or ramdisk?\n>\n> Admittedly, in many cases, we could probably get somewhat useful\n> numbers this way.  But I think it would be pretty expensive.\n> gettimeofday() is one of the reasons why running EXPLAIN ANALYZE on a\n> query is significantly slower than just running it normally.  I bet if\n> we put such calls around every read() and write(), it would cause a\n> BIG slowdown for workloads that don't fit in shared_buffers.\n\nI've just been reading an article about something intimately related\nwith that in ACM.\n\nThe article was about cache-conscious scheduling. Mostly memory cache,\nbut disk cache isn't that different. 
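For reference, the buffer-pool hit rates mentioned above are already
visible at the SQL level; a couple of ways to look at them (the query in
the second example is just a placeholder, and both only see the PostgreSQL
buffer cache - a "read" here may still be served from the OS page cache,
which is exactly the limitation Robert describes):

-- rough cache hit ratio per database, from the statistics collector
SELECT datname, blks_hit, blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database;

-- per-query breakdown on 9.0 and later
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
-- look for "Buffers: shared hit=... read=..." in the output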
There are lots of work, real,\nserious work in characterizing cache contention, and the article\nshowed how a simplified version of the cache reuse profile model\nbehaves under various workloads.\n\nThe simplified model simply used cache miss rates, and it performed\neven better than the more complex model - they went on and analyzed\nwhy.\n\nLong story short, there is indeed a lot of literature about the\nsubject, there is a lot of formal and experimental results. One of\nthose models have to be embodied into a patch, and tested - that's\nabout it.\n\nThe patch may be simple, the testing not so much. I know that.\n\nWhat tools do we have to do that testing? There are lots, and all\nimply a lot of work. Is that work worth the trouble? Because if it\nis... why not work?\n\nI would propose a step in the right direction: a patch to compute and\nlog periodical estimations of the main I/O tunables: random_page_cost,\nsequential_page_cost and effective_cache_size. Maybe per-tablespace.\nEvaluate the performance impact, and work from there.\n\nBecause, probably just using those values as input to the optimizer\nwon't work, because dbas will want a way to tune the optimizer,\nbecause the system may not be stable enough, even because even with\naccurate estimates for those values, the optimizer may not perform as\nexpected. I mean, right now those values are tunables, not real\nmetrics, so perhaps the optimizer won't respond well to real values.\n\nBut having the ability to measure them without a serious performance\nimpact is a step in the right direction, right?\n", "msg_date": "Wed, 27 Apr 2011 23:01:46 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Tomas Vondra wrote:\n> Hmmm, just wondering - what would be needed to build such 'workload\n> library'? Building it from scratch is not feasible IMHO, but I guess\n> people could provide their own scripts (as simple as 'set up a a bunch\n> of tables, fill it with data, run some queries') and there's a pile of\n> such examples in the pgsql-performance list.\n> \n\nThe easiest place to start is by re-using the work already done by the \nTPC for benchmarking commercial databases. There are ports of the TPC \nworkloads to PostgreSQL available in the DBT-2, DBT-3, and DBT-5 tests; \nsee http://wiki.postgresql.org/wiki/Category:Benchmarking for initial \ninformation on those (the page on TPC-H is quite relevant too). I'd \nlike to see all three of those DBT tests running regularly, as well as \ntwo tests it's possible to simulate with pgbench or sysbench: an \nin-cache read-only test, and a write as fast as possible test.\n\nThe main problem with re-using posts from this list for workload testing \nis getting an appropriately sized data set for them that stays \nrelevant. The nature of this sort of benchmark always includes some \nnotion of the size of the database, and you get different results based \non how large things are relative to RAM and the database parameters. \nThat said, some sort of systematic collection of \"hard queries\" would \nalso be a very useful project for someone to take on.\n\nPeople show up regularly who want to play with the optimizer in some \nway. It's still possible to do that by targeting specific queries you \nwant to accelerate, where it's obvious (or, more likely, hard but still \nstraightforward) how to do better. 
But I don't think any of these \nproposed exercises adjusting the caching model or default optimizer \nparameters in the database is going anywhere without some sort of \nbenchmarking framework for evaluating the results. And the TPC tests \nare a reasonable place to start. They're a good mixed set of queries, \nand improving results on those does turn into a real commercial benefit \nto PostgreSQL in the future too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 27 Apr 2011 17:55:36 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Just want to share the DBT(2&5) thing\n\nhttp://archives.postgresql.org/pgsql-performance/2011-04/msg00145.php\nhttp://sourceforge.net/mailarchive/forum.php?forum_name=osdldbt-general&max_rows=25&style=nested&viewmonth=201104\n\n\n\nOn Wed, Apr 27, 2011 at 11:55 PM, Greg Smith <[email protected]> wrote:\n\n> Tomas Vondra wrote:\n>\n>> Hmmm, just wondering - what would be needed to build such 'workload\n>> library'? Building it from scratch is not feasible IMHO, but I guess\n>> people could provide their own scripts (as simple as 'set up a a bunch\n>> of tables, fill it with data, run some queries') and there's a pile of\n>> such examples in the pgsql-performance list.\n>>\n>>\n>\n> The easiest place to start is by re-using the work already done by the TPC\n> for benchmarking commercial databases. There are ports of the TPC workloads\n> to PostgreSQL available in the DBT-2, DBT-3, and DBT-5 tests; see\n> http://wiki.postgresql.org/wiki/Category:Benchmarking for initial\n> information on those (the page on TPC-H is quite relevant too). I'd like to\n> see all three of those DBT tests running regularly, as well as two tests\n> it's possible to simulate with pgbench or sysbench: an in-cache read-only\n> test, and a write as fast as possible test.\n>\n> The main problem with re-using posts from this list for workload testing is\n> getting an appropriately sized data set for them that stays relevant. The\n> nature of this sort of benchmark always includes some notion of the size of\n> the database, and you get different results based on how large things are\n> relative to RAM and the database parameters. That said, some sort of\n> systematic collection of \"hard queries\" would also be a very useful project\n> for someone to take on.\n>\n> People show up regularly who want to play with the optimizer in some way.\n> It's still possible to do that by targeting specific queries you want to\n> accelerate, where it's obvious (or, more likely, hard but still\n> straightforward) how to do better. But I don't think any of these proposed\n> exercises adjusting the caching model or default optimizer parameters in the\n> database is going anywhere without some sort of benchmarking framework for\n> evaluating the results. 
And the TPC tests are a reasonable place to start.\n> They're a good mixed set of queries, and improving results on those does\n> turn into a real commercial benefit to PostgreSQL in the future too.\n>\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nJust want to share the DBT(2&5) thinghttp://archives.postgresql.org/pgsql-performance/2011-04/msg00145.phphttp://sourceforge.net/mailarchive/forum.php?forum_name=osdldbt-general&max_rows=25&style=nested&viewmonth=201104\nOn Wed, Apr 27, 2011 at 11:55 PM, Greg Smith <[email protected]> wrote:\nTomas Vondra wrote:\n\nHmmm, just wondering - what would be needed to build such 'workload\nlibrary'? Building it from scratch is not feasible IMHO, but I guess\npeople could provide their own scripts (as simple as 'set up a a bunch\nof tables, fill it with data, run some queries') and there's a pile of\nsuch examples in the pgsql-performance list.\n  \n\n\nThe easiest place to start is by re-using the work already done by the TPC for benchmarking commercial databases.  There are ports of the TPC workloads to PostgreSQL available in the DBT-2, DBT-3, and DBT-5 tests; see http://wiki.postgresql.org/wiki/Category:Benchmarking for initial information on those (the page on TPC-H is quite relevant too).  I'd like to see all three of those DBT tests running regularly, as well as two tests it's possible to simulate with pgbench or sysbench:  an in-cache read-only test, and a write as fast as possible test.\n\nThe main problem with re-using posts from this list for workload testing is getting an appropriately sized data set for them that stays relevant.  The nature of this sort of benchmark always includes some notion of the size of the database, and you get different results based on how large things are relative to RAM and the database parameters.  That said, some sort of systematic collection of \"hard queries\" would also be a very useful project for someone to take on.\n\nPeople show up regularly who want to play with the optimizer in some way.  It's still possible to do that by targeting specific queries you want to accelerate, where it's obvious (or, more likely, hard but still straightforward) how to do better.  But I don't think any of these proposed exercises adjusting the caching model or default optimizer parameters in the database is going anywhere without some sort of benchmarking framework for evaluating the results.  And the TPC tests are a reasonable place to start.  
They're a good mixed set of queries, and improving results on those does turn into a real commercial benefit to PostgreSQL in the future too.\n\n\n-- \nGreg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 28 Apr 2011 10:03:39 +0200", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Dne 27.4.2011 23:55, Greg Smith napsal(a):\n\n> The easiest place to start is by re-using the work already done by the\n> TPC for benchmarking commercial databases. There are ports of the TPC\n> workloads to PostgreSQL available in the DBT-2, DBT-3, and DBT-5 tests;\n> see http://wiki.postgresql.org/wiki/Category:Benchmarking for initial\n> information on those (the page on TPC-H is quite relevant too). I'd\n> like to see all three of those DBT tests running regularly, as well as\n> two tests it's possible to simulate with pgbench or sysbench: an\n> in-cache read-only test, and a write as fast as possible test.\n\nThat's a natural first step, I guess.\n\n> The main problem with re-using posts from this list for workload testing\n> is getting an appropriately sized data set for them that stays\n> relevant. The nature of this sort of benchmark always includes some\n> notion of the size of the database, and you get different results based\n> on how large things are relative to RAM and the database parameters. \n> That said, some sort of systematic collection of \"hard queries\" would\n> also be a very useful project for someone to take on.\n\nYes, I'm aware of that. The examples posted to the lists usually lack\nthe data, but I guess we could get it at least from some of the posters\n(anonymized etc.). And some of the examples are rather simple so it's\npossible to generate as much data as you want using a PL/pgSQL or so.\n\nAnyway I hesitate to call those examples 'workloads' - it's usually just\none query, sometimes two. But it's still a useful test IMHO.\n\nI was thinking about several VMs, each with a different configuration\n(amount of RAM, CPU, ...). The benchmarks might be a bunch of very\nsimple scripts I guess, each one taking care of preparing the data,\nrunning the test, uploading the results somewhere.\n\nAnd I guess it'd be useful to make this awailable for download, so that\neveryone can run the tests locally ...\n\nA bit naive question - where to run this? I know there's a build farm\nbut I guess this it's mostly for building and not for such benchmarks.\n\n> People show up regularly who want to play with the optimizer in some\n> way. It's still possible to do that by targeting specific queries you\n> want to accelerate, where it's obvious (or, more likely, hard but still\n> straightforward) how to do better. But I don't think any of these\n> proposed exercises adjusting the caching model or default optimizer\n> parameters in the database is going anywhere without some sort of\n> benchmarking framework for evaluating the results. And the TPC tests\n> are a reasonable place to start. 
They're a good mixed set of queries,\n> and improving results on those does turn into a real commercial benefit\n> to PostgreSQL in the future too.\n\n100% true.\n\nregards\nTomas\n", "msg_date": "Fri, 29 Apr 2011 02:22:09 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "All,\n\n> The easiest place to start is by re-using the work already done by the\n> TPC for benchmarking commercial databases. There are ports of the TPC\n> workloads to PostgreSQL available in the DBT-2, DBT-3, and DBT-5\n> tests;\n\nAlso EAStress, which I think the project still has a license for.\n\nThe drawback to these is that they're quite difficult and time-consuming to run, making them unsuitable for doing, say, incremental tuning tests which need to run 100 iterations. At least, now that we don't have access to the OSDL or Sun labs anymore. \n\nOn the other hand, Greg has made the first steps in a benchmark constructor kit by making it possible for pgBench to run arbitrary workloads. Someone could build on Greg's foundation by:\n\na) building a more complex database model with random data generators, and\nb) designing a wide series of queries designed to test specific performance problems, i.e, \"large object reads\", \"complex nested subqueries\", \"mass bulk correllated updates\"\nc) finally creating scripts which generate benchmarks by choosing a database size and a \"mix\" of the query menu\n\nThis would give us kit which would be capable of testing performance regressions and improvements for PostgreSQL.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\nSan Francisco\n", "msg_date": "Thu, 28 Apr 2011 23:58:33 -0500 (CDT)", "msg_from": "Joshua Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Robert Haas wrote:\n> The server can and does measure hit rates for the PG buffer pool, but to my knowledge there is no clear-cut way for PG to know whether read() is satisfied from the OS cache or a drive cache or the platter.\n>\n> \nDoes the server know which IO it thinks is sequential, and which it \nthinks is random? Could it not time the IOs (perhaps optionally) and at \nleast keep some sort of statistics of the actual observed times?\n\nIt might not be appropriate for the server to attempt auto-tuning, but \nit might be able to provide some information that can be used by a DBA \nto make informed decisions.\n\nJames\n\n", "msg_date": "Fri, 29 Apr 2011 09:25:00 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "James Mansion wrote:\n> Does the server know which IO it thinks is sequential, and which it \n> thinks is random? Could it not time the IOs (perhaps optionally) and \n> at least keep some sort of statistics of the actual observed times?\n\nIt makes some assumptions based on what the individual query nodes are \ndoing. Sequential scans are obviously sequential; index lookupss \nrandom; bitmap index scans random.\n\nThe \"measure the I/O and determine cache state from latency profile\" has \nbeen tried, I believe it was Greg Stark who ran a good experiment of \nthat a few years ago. Based on the difficulties of figuring out what \nyou're actually going to with that data, I don't think the idea will \never go anywhere. 
There are some really nasty feedback loops possible \nin all these approaches for better modeling what's in cache, and this \none suffers the worst from that possibility. If for example you \ndiscover that accessing index blocks is slow, you might avoid using them \nin favor of a measured fast sequential scan. Once you've fallen into \nthat local minimum, you're stuck there. Since you never access the \nindex blocks, they'll never get into RAM so that accessing them becomes \nfast--even though doing that once might be much more efficient, \nlong-term, than avoiding the index.\n\nThere are also some severe query plan stability issues with this idea \nbeyond this. The idea that your plan might vary based on execution \nlatency, that the system load going up can make query plans alter with \nit, is terrifying for a production server.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 29 Apr 2011 14:55:49 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On 4/29/2011 1:55 PM, Greg Smith wrote:\n> James Mansion wrote:\n>> Does the server know which IO it thinks is sequential, and which it\n>> thinks is random? Could it not time the IOs (perhaps optionally) and\n>> at least keep some sort of statistics of the actual observed times?\n>\n> It makes some assumptions based on what the individual query nodes are\n> doing. Sequential scans are obviously sequential; index lookupss random;\n> bitmap index scans random.\n>\n> The \"measure the I/O and determine cache state from latency profile\" has\n> been tried, I believe it was Greg Stark who ran a good experiment of\n> that a few years ago. Based on the difficulties of figuring out what\n> you're actually going to with that data, I don't think the idea will\n> ever go anywhere. There are some really nasty feedback loops possible in\n> all these approaches for better modeling what's in cache, and this one\n> suffers the worst from that possibility. If for example you discover\n> that accessing index blocks is slow, you might avoid using them in favor\n> of a measured fast sequential scan. Once you've fallen into that local\n> minimum, you're stuck there. Since you never access the index blocks,\n> they'll never get into RAM so that accessing them becomes fast--even\n> though doing that once might be much more efficient, long-term, than\n> avoiding the index.\n>\n> There are also some severe query plan stability issues with this idea\n> beyond this. The idea that your plan might vary based on execution\n> latency, that the system load going up can make query plans alter with\n> it, is terrifying for a production server.\n>\n\nHow about if the stats were kept, but had no affect on plans, or \noptimizer or anything else.\n\nIt would be a diag tool. When someone wrote the list saying \"AH! It \nused the wrong index!\". You could say, \"please post your config \nsettings, and the stats from 'select * from pg_stats_something'\"\n\nWe (or, you really) could compare the seq_page_cost and random_page_cost \nfrom the config to the stats collected by PG and determine they are way \noff... 
and you should edit your config a little and restart PG.\n\n-Andy\n", "msg_date": "Fri, 29 Apr 2011 15:23:25 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "Greg Smith wrote:\n> There are also some severe query plan stability issues with this idea \n> beyond this. The idea that your plan might vary based on execution \n> latency, that the system load going up can make query plans alter with \n> it, is terrifying for a production server.\n>\nI thought I was clear that it should present some stats to the DBA, not \nthat it would try to auto-tune? This thread started with a discussion \nof appropriate tunings for random page cost vs sequential page cost I \nbelieve,, based on some finger in the air based on total size vs \navailable disk cache. And it was observed that on systems that have \nvery large databases but modest hot data, you can perform like a fully \ncached system, for much of the time.\n\nI'm just suggesting providing statistical information to the DBA which \nwill indicate whether the system has 'recently' been behaving like a \nsystem that runs from buffer cache and/or subsystem caches, or one that \nruns from disk platters, and what the actual observed latency difference \nis. It may well be that this varies with time of day or day of week. \nWhether the actual latencies translate directly into the relative costs \nis another matter.\n\n\n\n", "msg_date": "Fri, 29 Apr 2011 21:27:21 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "James Mansion wrote:\n> I thought I was clear that it should present some stats to the DBA, \n> not that it would try to auto-tune?\n\nYou were. But people are bound to make decisions about how to retune \ntheir database based on that information. The situation when doing \nmanual tuning isn't that much different, it just occurs more slowly, and \nwith the potential to not react at all if the data is incoherent. That \nmight be better, but you have to assume that a naive person will just \nfollow suggestions on how to re-tune based on that the same way an \nauto-tune process would.\n\nI don't like this whole approach because it takes something the database \nand DBA have no control over (read timing) and makes it a primary input \nto the tuning model. Plus, the overhead of collecting this data is big \nrelative to its potential value.\n\nAnyway, how to collect this data is a separate problem from what should \nbe done with it in the optimizer. I don't actually care about the \ncollection part very much; there are a bunch of approaches with various \ntrade-offs. Deciding how to tell the optimizer about what's cached \nalready is the more important problem that needs to be solved before any \nof this takes you somewhere useful, and focusing on the collection part \ndoesn't move that forward. Trying to map the real world into the \ncurrently exposed parameter set isn't a solvable problem. 
We really \nneed cached_page_cost and random_page_cost, plus a way to model the \ncached state per relation that doesn't fall easily into feedback loops.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 29 Apr 2011 17:37:39 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Fri, Apr 29, 2011 at 11:37 PM, Greg Smith <[email protected]> wrote:\n> Anyway, how to collect this data is a separate problem from what should be\n> done with it in the optimizer.  I don't actually care about the collection\n> part very much; there are a bunch of approaches with various trade-offs.\n>  Deciding how to tell the optimizer about what's cached already is the more\n> important problem that needs to be solved before any of this takes you\n> somewhere useful, and focusing on the collection part doesn't move that\n> forward.  Trying to map the real world into the currently exposed parameter\n> set isn't a solvable problem.  We really need cached_page_cost and\n> random_page_cost, plus a way to model the cached state per relation that\n> doesn't fall easily into feedback loops.\n\nThis is valuable input...\n\nI was already worried about feedback loops, and hearing that it has\nbeen tried and resulted in them is invaluable.\n\n From my experience, what really blows up in your face when your\nservers are saturated, is the effective cache size. Postgres thinks an\nindex will fit into the cache, but it doesn't at times of high load,\nmeaning that, actually, a sequential scan would be orders of magnitude\nbetter - if it's a \"small enough table\".\n\nPerhaps just adjusting effective cache size would provide a good\nenough benefit without the disastrous feedback loops?\n\nI'll have to test that idea...\n", "msg_date": "Sat, 30 Apr 2011 00:03:29 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Apr 29, 2011, at 10:25 AM, James Mansion <[email protected]> wrote:\n> Robert Haas wrote:\n>> The server can and does measure hit rates for the PG buffer pool, but to my knowledge there is no clear-cut way for PG to know whether read() is satisfied from the OS cache or a drive cache or the platter.\n>> \n>> \n> Does the server know which IO it thinks is sequential, and which it thinks is random? \n\nNo. It models this in the optimizer, but the executor has no clue. And sometimes we model I/O as partly random, partly sequential, as in the case of heap fetches on a clustered index. So the answer isn't even a Boolean.\n\n...Robert", "msg_date": "Sat, 30 Apr 2011 01:00:23 +0200", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "On Apr 27, 2011, at 11:01 PM, Claudio Freire <[email protected]> wrote:\n> The patch may be simple, the testing not so much. I know that.\n> \n> What tools do we have to do that testing? There are lots, and all\n> imply a lot of work. Is that work worth the trouble? Because if it\n> is... why not work?\n> \n> I would propose a step in the right direction: a patch to compute and\n> log periodical estimations of the main I/O tunables: random_page_cost,\n> sequential_page_cost and effective_cache_size. 
Maybe per-tablespace.\n> Evaluate the performance impact, and work from there.\n> \n> Because, probably just using those values as input to the optimizer\n> won't work, because dbas will want a way to tune the optimizer,\n> because the system may not be stable enough, even because even with\n> accurate estimates for those values, the optimizer may not perform as\n> expected. I mean, right now those values are tunables, not real\n> metrics, so perhaps the optimizer won't respond well to real values.\n> \n> But having the ability to measure them without a serious performance\n> impact is a step in the right direction, right?\n\nSure. It's not a real easy problem, but don't let that discourage you from working on it. Getting more eyeballs on these issues can only be a good thing.\n\n...Robert", "msg_date": "Sat, 30 Apr 2011 01:03:23 +0200", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" } ]
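A side note to the thread above: a manual form of the per-tablespace and
per-user ideas discussed there already exists, so the cost settings do not
have to be uniform across the whole cluster. A sketch, assuming a
tablespace named "ssd_space" and a role named "reporting" (both
hypothetical):

-- per-tablespace overrides, available since 9.0
ALTER TABLESPACE ssd_space SET (random_page_cost = 1.5, seq_page_cost = 1.0);

-- per-user override, the "alter user set random_page_cost" trick Scott
-- mentioned for his split-brain workload
ALTER USER reporting SET random_page_cost = 2.0;

The per-user setting is picked up at login, so existing sessions keep the
old value until they reconnect or SET the parameter themselves.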
[ { "msg_contents": "Hi,\nI have done migration of the Request Tracker 3.8.9\n(http://requesttracker.wikia.com/wiki/HomePage) from Mysql to\nPostgreSQL in testing environment.\nThe RT schema used can be viewed at\nhttps://github.com/bestpractical/rt/blob/3.8-trunk/etc/schema.Pg.\nI have added full text search on table Attachments based on trigrams\n(and still experimenting with it), but is is not interesting for the\nproblem (the problem is not caused by it directly).\nThe full text search alone works quite good. A user testing a new RT instance\nreported a poor performance problem with a bit more complex query (more\nconditions resulting in table joins).\nQueries are constructed by module DBIx::SearchBuilder.\nThe problematic query logged:\n\nrt=# EXPLAIN ANALYZE SELECT DISTINCT main.* FROM Tickets main JOIN Transactions Transactions_1 ON ( Transactions_1.ObjectId = main.id ) JOIN Attachments Attachments_2 ON ( Attachments_2.TransactionId = Transactions_1.id ) WHERE (Transactions_1.ObjectType = 'RT::Ticket') AND (main.Status != 'deleted') AND (main.Status = 'resolved' AND main.LastUpdated > '2008-12-31 23:00:00' AND main.Created > '2005-12-31 23:00:00' AND main.Queue = '15' AND ( Attachments_2.trigrams @@ text_to_trgm_tsquery('uir') AND Attachments_2.Content ILIKE '%uir%' ) ) AND (main.Type = 'ticket') AND (main.EffectiveId = main.id) ORDER BY main.id ASC;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=23928.60..23928.67 rows=1 width=162) (actual time=5201.139..5207.965 rows=649 loops=1)\n -> Sort (cost=23928.60..23928.61 rows=1 width=162) (actual time=5201.137..5201.983 rows=5280 loops=1)\n Sort Key: main.effectiveid, main.issuestatement, main.resolution, main.owner, main.subject, main.initialpriority, main.finalpriority, main.priority, main.timeestimated, main.timeworked, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n Sort Method: quicksort Memory: 1598kB\n -> Nested Loop (cost=0.00..23928.59 rows=1 width=162) (actual time=10.060..5120.834 rows=5280 loops=1)\n -> Nested Loop (cost=0.00..10222.38 rows=1734 width=166) (actual time=8.702..1328.970 rows=417711 loops=1)\n -> Seq Scan on tickets main (cost=0.00..5687.88 rows=85 width=162) (actual time=8.258..94.012 rows=25410 loops=1)\n Filter: (((status)::text <> 'deleted'::text) AND (lastupdated > '2008-12-31 23:00:00'::timestamp without time zone) AND (created > '2005-12-31 23:00:00'::timestamp without time zone) AND (effectiveid = id) AND (queue = 15) AND ((type)::text = 'ticket'::text) AND ((status)::text = 'resolved'::text))\n -> Index Scan using transactions1 on transactions transactions_1 (cost=0.00..53.01 rows=27 width=8) (actual time=0.030..0.039 rows=16 loops=25410)\n Index Cond: (((transactions_1.objecttype)::text = 'RT::Ticket'::text) AND (transactions_1.objectid = main.effectiveid))\n -> Index Scan using attachments2 on attachments attachments_2 (cost=0.00..7.89 rows=1 width=4) (actual time=0.008..0.009 rows=0 loops=417711)\n Index Cond: (attachments_2.transactionid = transactions_1.id)\n Filter: ((attachments_2.trigrams @@ '''uir'''::tsquery) AND (attachments_2.content ~~* 
'%uir%'::text))\n Total runtime: 5208.149 ms\n(14 rows)\n\nThe above times are for already cached data (repeated query).\nI think the execution plan is poor. Better would be to filter table attachments\nat first and then join the rest. The reason is a bad estimate on number of rows\nreturned from table tickets (85 estimated -> 25410 in the reality).\nEliminating sub-condition...\n\n\nrt=# explain analyze select * from tickets where effectiveid = id;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Seq Scan on tickets (cost=0.00..4097.40 rows=530 width=162) (actual time=0.019..38.130 rows=101869 loops=1)\n Filter: (effectiveid = id)\n Total runtime: 54.318 ms\n(3 rows)\n\nEstimated 530 rows, but reality is 101869 rows.\n\nThe problem is the strong dependance between id and effectiveid. The RT\ndocumentation says:\n\n EffectiveId:\n By default, a ticket's EffectiveId is the same as its ID. RT supports the\n ability to merge tickets together. When you merge a ticket into\n another one, RT sets the first ticket's EffectiveId to the second\n ticket's ID. RT uses this data to quickly look up which ticket\n you're really talking about when you reference a merged ticket.\n\n\nI googled the page http://wiki.postgresql.org/wiki/Cross_Columns_Stats\n\nMaybe I identified the already documented problem. What I can do with this\nsituation? Some workaround?\n\nThanks in advance for any suggestions.\nBest Regards\n-- \nZito\n", "msg_date": "Wed, 13 Apr 2011 01:23:43 +0200", "msg_from": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]>", "msg_from_op": true, "msg_subject": "poor execution plan because column dependence" }, { "msg_contents": "Zito,\n\nUsing psql log in as the database owner and run \"analyze verbose\". Happiness will ensue.\n\nAlso, when requesting help with a query its important to state the database version (\"select version();\") and what, if any, configuration changes you have made in postgresql.conf. Listing ony the ones that have changed is sufficient.\n\nFinally, the wiki has some good information on the care and feeding of a PostgreSQL database:\n\nhttp://wiki.postgresql.org/wiki/Introduction_to_VACUUM,_ANALYZE,_EXPLAIN,_and_COUNT\n\n\n\nBob Lunney\n\n--- On Tue, 4/12/11, Václav Ovsík <[email protected]> wrote:\n\n> From: Václav Ovsík <[email protected]>\n> Subject: [PERFORM] poor execution plan because column dependence\n> To: [email protected]\n> Date: Tuesday, April 12, 2011, 7:23 PM\n> Hi,\n> I have done migration of the Request Tracker 3.8.9\n> (http://requesttracker.wikia.com/wiki/HomePage) from\n> Mysql to\n> PostgreSQL in testing environment.\n> The RT schema used can be viewed at\n> https://github.com/bestpractical/rt/blob/3.8-trunk/etc/schema.Pg.\n> I have added full text search on table Attachments based on\n> trigrams\n> (and still experimenting with it), but is is not\n> interesting for the\n> problem (the problem is not caused by it directly).\n> The full text search alone works quite good. 
A user testing\n> a new RT instance\n> reported a poor performance problem with a bit more complex\n> query (more\n> conditions resulting in table joins).\n> Queries are constructed by module DBIx::SearchBuilder.\n> The problematic query logged:\n> \n> rt=# EXPLAIN ANALYZE SELECT DISTINCT  main.* FROM\n> Tickets main JOIN Transactions Transactions_1  ON (\n> Transactions_1.ObjectId = main.id ) JOIN Attachments\n> Attachments_2  ON ( Attachments_2.TransactionId =\n> Transactions_1.id )  WHERE (Transactions_1.ObjectType =\n> 'RT::Ticket') AND (main.Status != 'deleted') AND\n> (main.Status = 'resolved' AND main.LastUpdated >\n> '2008-12-31 23:00:00' AND main.Created > '2005-12-31\n> 23:00:00' AND main.Queue = '15' AND  (\n> Attachments_2.trigrams @@ text_to_trgm_tsquery('uir') AND\n> Attachments_2.Content ILIKE '%uir%' ) ) AND (main.Type =\n> 'ticket') AND (main.EffectiveId = main.id)  ORDER BY\n> main.id ASC;\n>                \n>                \n>                \n>                \n>                \n>                \n>                \n>                \n>                \n>                \n>            QUERY\n> PLAN               \n>                \n>                \n>                \n>                \n>                \n>                \n>                \n>                \n>                \n>             \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique  (cost=23928.60..23928.67 rows=1 width=162)\n> (actual time=5201.139..5207.965 rows=649 loops=1)\n>    ->  Sort \n> (cost=23928.60..23928.61 rows=1 width=162) (actual\n> time=5201.137..5201.983 rows=5280 loops=1)\n>          Sort Key:\n> main.effectiveid, main.issuestatement, main.resolution,\n> main.owner, main.subject, main.initialpriority,\n> main.finalpriority, main.priority, main.timeestimated,\n> main.timeworked, main.timeleft, main.told, main.starts,\n> main.started, main.due, main.resolved, main.lastupdatedby,\n> main.lastupdated, main.creator, main.created, main.disabled\n>          Sort Method: \n> quicksort  Memory: 1598kB\n>          ->  Nested\n> Loop  (cost=0.00..23928.59 rows=1 width=162) (actual\n> time=10.060..5120.834 rows=5280 loops=1)\n>            \n>    ->  Nested Loop \n> (cost=0.00..10222.38 rows=1734 width=166) (actual\n> time=8.702..1328.970 rows=417711 loops=1)\n>                \n>      ->  Seq Scan on tickets\n> main  (cost=0.00..5687.88 rows=85 width=162) (actual\n> time=8.258..94.012 rows=25410 loops=1)\n>                \n>            Filter:\n> (((status)::text <> 'deleted'::text) AND (lastupdated\n> > '2008-12-31 23:00:00'::timestamp without time zone) AND\n> (created > '2005-12-31 23:00:00'::timestamp without time\n> zone) AND (effectiveid = id) AND (queue = 15) AND\n> ((type)::text = 'ticket'::text) AND ((status)::text =\n> 'resolved'::text))\n>                \n>      ->  Index Scan using\n> transactions1 on transactions transactions_1 \n> (cost=0.00..53.01 rows=27 width=8) (actual time=0.030..0.039\n> rows=16 loops=25410)\n>                \n>            Index Cond:\n> (((transactions_1.objecttype)::text = 'RT::Ticket'::text)\n> AND (transactions_1.objectid = main.effectiveid))\n>            \n>    ->  Index Scan using attachments2\n> on attachments attachments_2  
(cost=0.00..7.89 rows=1\n> width=4) (actual time=0.008..0.009 rows=0 loops=417711)\n>                \n>      Index Cond:\n> (attachments_2.transactionid = transactions_1.id)\n>                \n>      Filter: ((attachments_2.trigrams @@\n> '''uir'''::tsquery) AND (attachments_2.content ~~*\n> '%uir%'::text))\n> Total runtime: 5208.149 ms\n> (14 rows)\n> \n> The above times are for already cached data (repeated\n> query).\n> I think the execution plan is poor. Better would be to\n> filter table attachments\n> at first and then join the rest. The reason is a bad\n> estimate on number of rows\n> returned from table tickets (85 estimated -> 25410 in\n> the reality).\n> Eliminating sub-condition...\n> \n> \n> rt=# explain analyze select * from tickets where\n> effectiveid = id;\n>                \n>                \n>                \n>   QUERY PLAN           \n>                \n>                \n>       \n> --------------------------------------------------------------------------------------------------------------\n> Seq Scan on tickets  (cost=0.00..4097.40 rows=530\n> width=162) (actual time=0.019..38.130 rows=101869 loops=1)\n>    Filter: (effectiveid = id)\n> Total runtime: 54.318 ms\n> (3 rows)\n> \n> Estimated 530 rows, but reality is 101869 rows.\n> \n> The problem is the strong dependance between id and\n> effectiveid. The RT\n> documentation says:\n> \n>     EffectiveId:\n>     By default, a ticket's EffectiveId is the\n> same as its ID. RT supports the\n>     ability to merge tickets together. When you\n> merge a ticket into\n>     another one, RT sets the first ticket's\n> EffectiveId to the second\n>     ticket's ID. RT uses this data to quickly\n> look up which ticket\n>     you're really talking about when you\n> reference a merged ticket.\n> \n> \n> I googled the page http://wiki.postgresql.org/wiki/Cross_Columns_Stats\n> \n> Maybe I identified the already documented problem. What I\n> can do with this\n> situation? Some workaround?\n> \n> Thanks in advance for any suggestions.\n> Best Regards\n> -- \n> Zito\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Tue, 12 Apr 2011 17:14:29 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor execution plan because column dependence" }, { "msg_contents": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]> writes:\n> I think the execution plan is poor. Better would be to filter table attachments\n> at first and then join the rest. The reason is a bad estimate on number of rows\n> returned from table tickets (85 estimated -> 25410 in the reality).\n> ...\n> The problem is the strong dependance between id and effectiveid. The RT\n> documentation says:\n\n> EffectiveId:\n> By default, a ticket's EffectiveId is the same as its ID. RT supports the\n> ability to merge tickets together. When you merge a ticket into\n> another one, RT sets the first ticket's EffectiveId to the second\n> ticket's ID. RT uses this data to quickly look up which ticket\n> you're really talking about when you reference a merged ticket.\n\n> I googled the page http://wiki.postgresql.org/wiki/Cross_Columns_Stats\n\n> Maybe I identified the already documented problem. What I can do with this\n> situation? 
Some workaround?\n\nYeah, that main.EffectiveId = main.id clause is going to be\nunderestimated by a factor of about 200, which is most though not all of\nyour rowcount error for that table. Not sure whether you can do much\nabout it, if the query is coming from a query generator that you can't\nchange. If you can change it, try replacing main.EffectiveId = main.id\nwith the underlying function, eg if they're integers use\nint4eq(main.EffectiveId, main.id). This will bypass the overoptimistic\nestimator for the \"=\" operator and get you a default selectivity\nestimate of (IIRC) 0.3333. Which is still off, but only by 3x not 200x,\nand that should be close enough to get a decent plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Apr 2011 20:52:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor execution plan because column dependence " }, { "msg_contents": "Dear Bob,\n\nOn Tue, Apr 12, 2011 at 05:14:29PM -0700, Bob Lunney wrote:\n> Zito,\n> \n> Using psql log in as the database owner and run \"analyze verbose\". Happiness will ensue.\n\nUnfortunately not. I ran \"analyze\" with different values\ndefault_statistics_target till 1000 as first tries always with the same\nproblem described. I returned the value to the default 100 at the end:\n\n> Also, when requesting help with a query its important to state the\n> database version (\"select version();\") and what, if any, configuration\n> changes you have made in postgresql.conf. Listing ony the ones that\n> have changed is sufficient.\n\nYou are right. I red about this, but after reading, analyzing,\nexperimenting finally forgot to mention this basic information :(. The reason\nwas I didn't feel to be interesting now also probably. The problem is\nplanner I am afraid.\nApplication and PostgreSQL is running on KVM virtual machine hosting Debian\nGNU/Linux Squeeze. \"select version();\" returns:\n\n'PostgreSQL 8.4.7 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit'\n\nChanged interesting parameters in postgresql.conf:\n\nmax_connections = 48\nshared_buffers = 1024MB\nwork_mem = 32MB\nmaintenance_work_mem = 256MB\ncheckpoint_segments = 24\neffective_cache_size = 2048MB\nlog_min_duration_statement = 500\n\nThe virtual machine is the only one currently running on iron Dell\nPowerEdge R710, 2 x CPU Xeon L5520 @ 2.27GHz (quad-core), 32GiB RAM.\n\nPostgreSQL package installed is 8.4.7-0squeeze2.\n\nThe VM has allocated 6GiB RAM and 2 CPU.\n\n\nOne of my first hope was maybe a newer PostgreSQL series 9, can\nbehaves better. I installed a second virtual machine with Debian\nGNU/Linux Sid and PostgreSQL package version 9.0.3-1. The result was the\nsame.\n\n\n> Finally, the wiki has some good information on the care and feeding of a PostgreSQL database:\n> \n> http://wiki.postgresql.org/wiki/Introduction_to_VACUUM,_ANALYZE,_EXPLAIN,_and_COUNT\n\nI red this already.\nThanks\n-- \nZito\n", "msg_date": "Wed, 13 Apr 2011 09:55:37 +0200", "msg_from": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor execution plan because column dependence" }, { "msg_contents": "Dear Tom,\n\nOn Tue, Apr 12, 2011 at 08:52:15PM -0400, Tom Lane wrote:\n>.. \n> Yeah, that main.EffectiveId = main.id clause is going to be\n> underestimated by a factor of about 200, which is most though not all of\n> your rowcount error for that table. 
Not sure whether you can do much\n> about it, if the query is coming from a query generator that you can't\n> change. If you can change it, try replacing main.EffectiveId = main.id\n> with the underlying function, eg if they're integers use\n> int4eq(main.EffectiveId, main.id). This will bypass the overoptimistic\n> estimator for the \"=\" operator and get you a default selectivity\n> estimate of (IIRC) 0.3333. Which is still off, but only by 3x not 200x,\n> and that should be close enough to get a decent plan.\n\nGreat idea!\n\nrt=# EXPLAIN ANALYZE SELECT DISTINCT main.* FROM Tickets main JOIN Transactions Transactions_1 ON ( Transactions_1.ObjectId = main.id ) JOIN Attachments Attachments_2 ON ( Attachments_2.TransactionId = Transactions_1.id ) WHERE (Transactions_1.ObjectType = 'RT::Ticket') AND (main.Status != 'deleted') AND (main.Status = 'resolved' AND main.LastUpdated > '2008-12-31 23:00:00' AND main.Created > '2005-12-31 23:00:00' AND main.Queue = '15' AND ( Attachments_2.trigrams @@ text_to_trgm_tsquery('uir') AND Attachments_2.Content ILIKE '%uir%' ) ) AND (main.Type = 'ticket') AND int4eq(main.EffectiveId, main.id) ORDER BY main.id ASC;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=37504.61..37505.00 rows=6 width=162) (actual time=1377.087..1383.844 rows=649 loops=1)\n -> Sort (cost=37504.61..37504.62 rows=6 width=162) (actual time=1377.085..1377.973 rows=5280 loops=1)\n Sort Key: main.id, main.effectiveid, main.issuestatement, main.resolution, main.owner, main.subject, main.initialpriority, main.finalpriority, main.priority, main.timeestimated, main.timeworked, main.timeleft, main.told, main.starts, main.started, main.due, main.resolved, main.lastupdatedby, main.lastupdated, main.creator, main.created, main.disabled\n Sort Method: quicksort Memory: 1598kB\n -> Nested Loop (cost=7615.47..37504.53 rows=6 width=162) (actual time=13.678..1322.292 rows=5280 loops=1)\n -> Nested Loop (cost=7615.47..37179.22 rows=74 width=4) (actual time=5.670..1266.703 rows=15593 loops=1)\n -> Bitmap Heap Scan on attachments attachments_2 (cost=7615.47..36550.26 rows=74 width=4) (actual time=5.658..1196.160 rows=15593 loops=1)\n Recheck Cond: (trigrams @@ '''uir'''::tsquery)\n Filter: (content ~~* '%uir%'::text)\n -> Bitmap Index Scan on attachments_textsearch (cost=0.00..7615.45 rows=8016 width=0) (actual time=3.863..3.863 rows=15972 loops=1)\n Index Cond: (trigrams @@ '''uir'''::tsquery)\n -> Index Scan using transactions_pkey on transactions transactions_1 (cost=0.00..8.49 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=15593)\n Index Cond: (transactions_1.id = attachments_2.transactionid)\n Filter: ((transactions_1.objecttype)::text = 'RT::Ticket'::text)\n -> Index Scan using tickets5 on tickets main (cost=0.00..4.38 rows=1 width=162) (actual time=0.003..0.003 rows=0 loops=15593)\n Index Cond: (main.id = transactions_1.objectid)\n Filter: (((main.status)::text <> 'deleted'::text) AND (main.lastupdated > '2008-12-31 23:00:00'::timestamp without time zone) AND (main.created > '2005-12-31 23:00:00'::timestamp without time zone) AND int4eq(main.effectiveid, main.id) AND (main.queue = 15) AND ((main.type)::text = 'ticket'::text) AND 
((main.status)::text = 'resolved'::text))\n Total runtime: 1384.038 ms\n(18 rows)\n\nExecution plan desired! :)\n\nIndexes:\n \"tickets_pkey\" PRIMARY KEY, btree (id)\n \"tickets1\" btree (queue, status)\n \"tickets2\" btree (owner)\n \"tickets3\" btree (effectiveid)\n \"tickets4\" btree (id, status)\n \"tickets5\" btree (id, effectiveid)\n\nInteresting the original index tickets5 is still used for\nint4eq(main.effectiveid, main.id), no need to build a different.\nGreat!\n\nI think no problem to do this small hack into the SearchBuilder. I did\nalready one for full text search so there will be two hacks :).\n\nThanks very much.\nBest Regards\n-- \nZito\n", "msg_date": "Wed, 13 Apr 2011 10:21:39 +0200", "msg_from": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor execution plan because column dependence" }, { "msg_contents": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]> writes:\n> On Tue, Apr 12, 2011 at 08:52:15PM -0400, Tom Lane wrote:\n>> ... If you can change it, try replacing main.EffectiveId = main.id\n>> with the underlying function, eg if they're integers use\n>> int4eq(main.EffectiveId, main.id). This will bypass the overoptimistic\n>> estimator for the \"=\" operator and get you a default selectivity\n>> estimate of (IIRC) 0.3333. Which is still off, but only by 3x not 200x,\n>> and that should be close enough to get a decent plan.\n\n> Great idea!\n\n> Interesting the original index tickets5 is still used for\n> int4eq(main.effectiveid, main.id), no need to build a different.\n\nWell, no, it won't be. This hack is entirely dependent on the fact that\nthe optimizer mostly works with operator expressions, and is blind to\nthe fact that the underlying functions are really the same thing.\n(Which is something I'd like to see fixed someday, but in the meantime\nit gives you an escape hatch.) If you use the int4eq() construct in a\ncontext where you'd like to see it transformed into an index qual, it\nwon't be. For this particular case that doesn't matter because there's\nno use in using an index for that clause anyway. But you'll need to be\nvery careful that your changes in the query generator don't result in\nusing int4eq() in any contexts other than the \"main.EffectiveId=main.id\"\ncheck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Apr 2011 12:24:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor execution plan because column dependence " }, { "msg_contents": "On Wed, Apr 13, 2011 at 12:24:06PM -0400, Tom Lane wrote:\n> > Interesting the original index tickets5 is still used for\n> > int4eq(main.effectiveid, main.id), no need to build a different.\n> \n> Well, no, it won't be. This hack is entirely dependent on the fact that\n> the optimizer mostly works with operator expressions, and is blind to\n> the fact that the underlying functions are really the same thing.\n> (Which is something I'd like to see fixed someday, but in the meantime\n> it gives you an escape hatch.) If you use the int4eq() construct in a\n> context where you'd like to see it transformed into an index qual, it\n> won't be. For this particular case that doesn't matter because there's\n> no use in using an index for that clause anyway. 
But you'll need to be\n> very careful that your changes in the query generator don't result in\n> using int4eq() in any contexts other than the \"main.EffectiveId=main.id\"\n> check.\n\nSorry I'm not certain understand your paragraph completely...\n\nI perfectly understand the fact that change from\n\tA = B\tinto\tint4eq(A, B)\nstopped bad estimate and execution plan is corrected, but that can\nchange someday in the future.\n\nI'm not certain about your sentence touching int4eq() and index. The\nexecution plan as show in my previous mail contains information about\nusing index tickets5:\n\n...\n -> Index Scan using tickets5 on tickets main (cost=0.00..4.38 rows=1 width=162) (actual time=0.006..0.006 rows=0 loops=15593)\n Index Cond: (main.id = transactions_1.objectid)\n Filter: (((main.status)::text <> 'deleted'::text) AND (main.lastupdated > '2008-12-31 23:00:00'::timestamp without time zone) AND (main.created > '2005-12-31 23:00:00'::timestamp without time zone) AND int4eq(main.effectiveid, main.id) AND (main.queue = 15) AND ((main.type)::text = 'ticket'::text) AND ((main.status)::text = 'resolved'::text))\n...\n\n\nFilter condition contains int4eq(main.effectiveid, main.id) and tickets5\nis: \"tickets5\" btree (id, effectiveid)\n\nThat means tickets5 index was used for int4eq(main.effectiveid, main.id).\nIs it right? Or am I something missing?\n\nWell the index will not be used generally probably, because of\nselectivity of int4eq() you mention (33%). The planner thinks it is\nbetter to use seq scan then. I tried this now.\n\nI did hack for this particular case only:\n\n\ndiff --git a/local/lib/DBIx/SearchBuilder.pm b/local/lib/DBIx/SearchBuilder.pm\nindex f3ee1e1..9e3a6a6 100644\n--- a/local/lib/DBIx/SearchBuilder.pm\n+++ b/local/lib/DBIx/SearchBuilder.pm\n@@ -1040,7 +1040,9 @@ sub _CompileGenericRestrictions {\n $result .= ' '. $entry . ' ';\n }\n else {\n- $result .= join ' ', @{$entry}{qw(field op value)};\n+ my $term = join ' ', @{$entry}{qw(field op value)};\n+ $term =~ s/^(main|Tickets_\\d+)\\.(EffectiveId) = (\\1)\\.(id)$/int4eq($1.$2, $3.$4)/i;\n+ $result .= $term;\n }\n }\n $result .= ')';\n\n\nIt works as expected.\nThanks\nBest Regards\n-- \nZito\n", "msg_date": "Thu, 14 Apr 2011 10:11:52 +0200", "msg_from": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor execution plan because column dependence" }, { "msg_contents": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]> writes:\n> I'm not certain about your sentence touching int4eq() and index. The\n> execution plan as show in my previous mail contains information about\n> using index tickets5:\n\n> -> Index Scan using tickets5 on tickets main (cost=0.00..4.38 rows=1 width=162) (actual time=0.006..0.006 rows=0 loops=15593)\n> Index Cond: (main.id = transactions_1.objectid)\n> Filter: (((main.status)::text <> 'deleted'::text) AND (main.lastupdated > '2008-12-31 23:00:00'::timestamp without time zone) AND (main.created > '2005-12-31 23:00:00'::timestamp without time zone) AND int4eq(main.effectiveid, main.id) AND (main.queue = 15) AND ((main.type)::text = 'ticket'::text) AND ((main.status)::text = 'resolved'::text))\n\n> That means tickets5 index was used for int4eq(main.effectiveid, main.id).\n> Is it right? 
Or am I something missing?\n\nNo, the clause that's being used with the index is\n\tmain.id = transactions_1.objectid\nThe \"filter condition\" is just along for the ride --- it doesn't matter\nwhat sort of expressions are in there, so long as they only use\nvariables available at this point in the plan. But if you had coded\nthat clause as\n\tint4eq(main.id, transactions_1.objectid)\nit would have been unable to create this plan at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Apr 2011 10:10:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor execution plan because column dependence " }, { "msg_contents": "Dear Tom,\n\nOn Thu, Apr 14, 2011 at 10:10:44AM -0400, Tom Lane wrote:\n> =?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]> writes:\n> > I'm not certain about your sentence touching int4eq() and index. The\n> > execution plan as show in my previous mail contains information about\n> > using index tickets5:\n> \n> > -> Index Scan using tickets5 on tickets main (cost=0.00..4.38 rows=1 width=162) (actual time=0.006..0.006 rows=0 loops=15593)\n> > Index Cond: (main.id = transactions_1.objectid)\n> > Filter: (((main.status)::text <> 'deleted'::text) AND (main.lastupdated > '2008-12-31 23:00:00'::timestamp without time zone) AND (main.created > '2005-12-31 23:00:00'::timestamp without time zone) AND int4eq(main.effectiveid, main.id) AND (main.queue = 15) AND ((main.type)::text = 'ticket'::text) AND ((main.status)::text = 'resolved'::text))\n> \n> > That means tickets5 index was used for int4eq(main.effectiveid, main.id).\n> > Is it right? Or am I something missing?\n> \n> No, the clause that's being used with the index is\n> \tmain.id = transactions_1.objectid\n> The \"filter condition\" is just along for the ride --- it doesn't matter\n> what sort of expressions are in there, so long as they only use\n> variables available at this point in the plan. But if you had coded\n> that clause as\n> \tint4eq(main.id, transactions_1.objectid)\n> it would have been unable to create this plan at all.\n\nThanks you for the explanation and the patience with me. I have red the\nchapter \"Multicolumn Indexes\" in the Pg doc and discover new things for\nme. The planner can use multicolumn index with an index leftmost field\nalone - I missed this. I understand things a bit better now.\nThanks!\nBest Regards\n-- \nZito\n", "msg_date": "Fri, 15 Apr 2011 09:59:26 +0200", "msg_from": "=?iso-8859-1?Q?V=E1clav_Ovs=EDk?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor execution plan because column dependence" } ]
[ { "msg_contents": "Sorry for resurrecting this thread, but this has been in my outbox for months and I think it is important: On Oct 27, 2010, at 12:56 PM, Tom Lane wrote: > Scott Carey writes: > > Why does hashjoin behave poorly when the inner relation is not > > uniformly distributed and the outer is? > Because a poorly distributed inner relation leads to long hash chains. > In the very worst case, all the keys are on the same hash chain and it > degenerates to a nested-loop join. (There is an assumption in the > costing code that the longer hash chains also tend to get searched more > often, which maybe doesn't apply if the outer rel is flat, but it's not > obvious that it's safe to not assume that.) I disagree. Either 1: The estimator is wrong or 2: The hash data structure is flawed. A pathological skew case (all relations with the same key), should be _cheaper_ to probe. There should be only _one_ entry in the hash (for the one key), and that entry will be a list of all relations matching the key. Therefore, hash probes will either instantly fail to match on an empty bucket, fail to match the one key with one compare, or match the one key and join on the matching list. In particular for anti-join, high skew should be the best case scenario. A hash structure that allows multiple entries per key is inappropriate for skewed data, because it is not O(n). One that has one entry per key remains O(n) for all skew. Furthermore, the hash buckets and # of entries is proportional to n_distinct in this case, and smaller and more cache and memory friendly to probe. > Not really. It's still searching a long hash chain; maybe it will find > an exact match early in the chain, or maybe not. It's certainly not > *better* than antijoin with a well-distributed inner rel. There shouldn't be long hash chains. A good hash function + proper bucket count + one entry per key = no long chains. > Although the > point is moot, anyway, since if it's an antijoin there is only one > candidate for which rel to put on the outside. You can put either relation on the outside with an anti-join, but would need a different algorithm and cost estimator if done the other way around. Construct a hash on the join key, that keeps a list of relations per key, iterate over the other relation, and remove the key and corresponding list from the hash when there is a match, when complete the remaining items in the hash are the result of the join (also already grouped by the key). It could be terminated early if all entries are removed. This would be useful if the hash was small, the other side of the hash too large to fit in memory, and alternative was a massive sort on the other relation. Does the hash cost estimator bias towards smaller hashes due to hash probe cost increasing with hash size due to processor caching effects? Its not quite O(n) due to caching effects. > regards, tom lane\n\nSorry for resurrecting this thread, but this has been in my outbox for \nmonths and I think it is important:\n\nOn Oct 27, 2010, at 12:56 PM, Tom Lane wrote:\n\n\n> Scott Carey writes:\n> > Why does hashjoin behave poorly when the inner relation is not\n\n> > uniformly distributed and the outer is?\n\n\n\n> Because a poorly distributed inner relation leads to long hash chains.\n> In the very worst case, all the keys are on the same hash chain and it\n> degenerates to a nested-loop join. 
(There is an assumption in the\n> costing code that the longer hash chains also tend to get searched more\n> often, which maybe doesn't apply if the outer rel is flat, but it's not\n> obvious that it's safe to not assume that.)\n\n\n\nI disagree. Either\n1: The estimator is wrong\nor\n2: The hash data structure is flawed.\n\nA pathological skew case (all relations with the same key), should be \n_cheaper_ to probe. There should be only _one_ entry in the hash (for \nthe one key), and that entry will be a list of all relations matching the \nkey. Therefore, hash probes will either instantly fail to match on an \nempty bucket, fail to match the one key with one compare, or match the one \nkey and join on the matching list.\n\nIn particular for anti-join, high skew should be the best case scenario.\n\nA hash structure that allows multiple entries per key is inappropriate for \nskewed data, because it is not O(n). One that has one entry per key \nremains O(n) for all skew. Furthermore, the hash buckets and # of entries \nis proportional to n_distinct in this case, and smaller and more cache and \nmemory friendly to probe.\n\n\n\n> Not really. It's still searching a long hash chain; maybe it will find\n> an exact match early in the chain, or maybe not. It's certainly not\n> *better* than antijoin with a well-distributed inner rel. \n\nThere shouldn't be long hash chains. A good hash function + proper bucket \ncount + one entry per key = no long chains.\n\n\n> Although the\n> point is moot, anyway, since if it's an antijoin there is only one\n> candidate for which rel to put on the outside.\n\nYou can put either relation on the outside with an anti-join, but would \nneed a different algorithm and cost estimator if done the other way \naround. Construct a hash on the join key, that keeps a list of relations \nper key, iterate over the other relation, and remove the key and \ncorresponding list from the hash when there is a match, when complete the \nremaining items in the hash are the result of the join (also already \ngrouped by the key). It could be terminated early if all entries are \nremoved. \nThis would be useful if the hash was small, the other side of the hash too \nlarge to fit in memory, and alternative was a massive sort on the other \nrelation.\n\nDoes the hash cost estimator bias towards smaller hashes due to hash probe \ncost increasing with hash size due to processor caching effects? Its not \nquite O(n) due to caching effects.\n\n\n\n> regards, tom lane", "msg_date": "Wed, 13 Apr 2011 10:07:53 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HashJoin order, hash the large or small table?\n\tPostgres likes to hash the big one, why?" }, { "msg_contents": "New email-client nightmares! Fixed below. I think.\n-------------\n\nSorry for resurrecting this thread, but this has been in my outbox for\nmonths and I think it is important:\n\n>On Oct 27, 2010, at 12:56 PM, Tom Lane wrote:\n>\n>\n>> Scott Carey writes:\n>>Why does hashjoin behave poorly when the inner relation is not\n>\n>>uniformly distributed and the outer is?\n>\n>\n>\n> Because a poorly distributed inner relation leads to long hash chains.\n> In the very worst case, all the keys are on the same hash chain and it\n> degenerates to a nested-loop join. 
(There is an assumption in the\n> costing code that the longer hash chains also tend to get searched more\n> often, which maybe doesn't apply if the outer rel is flat, but it's not\n> obvious that it's safe to not assume that.)\n>\n\nI disagree. Either\n1: The estimator is wrong\nor\n2: The hash data structure is flawed.\n\nA pathological skew case (all relations with the same key), should be\n_cheaper_ to probe. There should be only _one_ entry in the hash (for\nthe one key), and that entry will be a list of all relations matching the\nkey. Therefore, hash probes will either instantly fail to match on an\nempty bucket, fail to match the one key with one compare, or match the one\nkey and join on the matching list.\n\nIn particular for anti-join, high skew should be the best case scenario.\n\nA hash structure that allows multiple entries per key is inappropriate for\nskewed data, because it is not O(n). One that has one entry per key\nremains O(n) for all skew. Furthermore, the hash buckets and # of entries\nis proportional to n_distinct in this case, and smaller and more cache and\nmemory friendly to probe.\n\n>Not really. It's still searching a long hash chain; maybe it will find\n> an exact match early in the chain, or maybe not. It's certainly not\n> *better* than antijoin with a well-distributed inner rel.\n\nThere shouldn't be long hash chains. A good hash function + proper bucket\ncount + one entry per key = no long chains.\n\n\n> Although the\n> point is moot, anyway, since if it's an antijoin there is only one\n> candidate for which rel to put on the outside.\n\nYou can put either relation on the outside with an anti-join, but would\nneed a different algorithm and cost estimator if done the other way\naround. Construct a hash on the join key, that keeps a list of relations\nper key, iterate over the other relation, and remove the key and\ncorresponding list from the hash when there is a match, when complete the\nremaining items in the hash are the result of the join (also already\ngrouped by the key). It could be terminated early if all entries are\nremoved. \nThis would be useful if the hash was small, the other side of the hash too\nlarge to fit in memory, and alternative was a massive sort on the other\nrelation.\n\nDoes the hash cost estimator bias towards smaller hashes due to hash probe\ncost increasing with hash size due to processor caching effects? Its not\nquite O(n) due to caching effects.\n\n>\n>> regards, tom lane\n\n", "msg_date": "Wed, 13 Apr 2011 10:22:40 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HashJoin order, hash the large or small table?\n\tPostgres likes to hash the big one, why?" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n>> On Oct 27, 2010, at 12:56 PM, Tom Lane wrote:\n>> Because a poorly distributed inner relation leads to long hash chains.\n>> In the very worst case, all the keys are on the same hash chain and it\n>> degenerates to a nested-loop join.\n\n> A pathological skew case (all relations with the same key), should be\n> _cheaper_ to probe.\n\nI think you're missing the point, which is that all the hash work is\njust pure overhead in such a case (and it is most definitely not\nzero-cost overhead). 
You might as well just do a nestloop join.\nHashing is only beneficial to the extent that it allows a smaller subset\nof the inner relation to be compared to each outer-relation tuple.\nSo I think biasing against skew-distributed inner relations is entirely\nappropriate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Apr 2011 13:35:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HashJoin order,\n\thash the large or small table? Postgres likes to hash the big one,\n\twhy?" }, { "msg_contents": "\n\nOn 4/13/11 10:35 AM, \"Tom Lane\" <[email protected]> wrote:\n\n>Scott Carey <[email protected]> writes:\n>>> On Oct 27, 2010, at 12:56 PM, Tom Lane wrote:\n>>> Because a poorly distributed inner relation leads to long hash chains.\n>>> In the very worst case, all the keys are on the same hash chain and it\n>>> degenerates to a nested-loop join.\n>\n>> A pathological skew case (all relations with the same key), should be\n>> _cheaper_ to probe.\n>\n>I think you're missing the point, which is that all the hash work is\n>just pure overhead in such a case (and it is most definitely not\n>zero-cost overhead). You might as well just do a nestloop join.\n>Hashing is only beneficial to the extent that it allows a smaller subset\n>of the inner relation to be compared to each outer-relation tuple.\n>So I think biasing against skew-distributed inner relations is entirely\n>appropriate.\n\nNo it is not pure overhead, and nested loops is far slower. The only way\nit is the same is if there is only _one_ hash bucket! And that would be a\nbug...\nIn the pathological skew case:\n\nExample: 1,000,000 outer relations. 10,000 inner relations, all with one\nkey.\n\nNested loops join:\n10 billion compares.\n\nHashjoin with small inner relation hashed with poor hash data structure:\n1. 10,000 hash functions to build the hash (10,000 'puts').\n2. 1,000,000 hash functions to probe (1,000,000 'gets').\n3. Only keys that fall in the same bucket trigger a compare. Assume 100\nhash buckets (any less is a bug, IMO) and skew such that the bucket is 10x\nmore likely than average to be hit. 100,000 hit the bucket. Those that\nmatch are just like nested loops -- this results in 1 billion compares.\nAll other probes hit an empty bucket and terminate without a compare.\nTotal: 1.01 million hash functions and bucket seeks, 0.01 of which are\nhash 'puts', + 1 billion compares\n\n\nHashjoin with 'one entry per key; entry value is list of matching\nrelations' data structure:\n1. 10,000 hash functions to build the hash (10,000 'puts').\n2. 1,000,000 hash functions to probe (1,000,000 'gets').\n3. Only keys that fall in the same bucket trigger a compare. Assume 100\nhash buckets and enough skew so that the bucket is 10x as likely to be\nhit. 100,000 hit bucket. Those that match only do a compare against one\nkey -- this results in 100,000 compares.\nTotal: 1.01 million hash functions and bucket seeks, 0.01 of which are\nslightly more expensive hash 'puts', + 0.1 million compares\n\n\n\n10 billion compares is much more expensive than either hash scenario. 
If\na hash function is 5x the cost of a compare, and a hash 'put' 2x a 'get'\nthen the costs are about:\n\n10 billion,\n1.006 billion,\n~6 million\n\nThe cost of the actual join output is significant (pairing relations and\ncreating new relations for output) but close to constant in all three.\n\n\n\nIn both the 'hash the big one' and 'hash the small one' case you have to\ncalculate the hash and seek into the hash table the same number of times.\n10,000 hash calculations and 'puts' + 1,000,000 hash calculations and\n'gets', versus 1,000,000 hash 'puts' and 10,000 'gets'.\nBut in one case that table is smaller and more cache efficient, making\nthose gets and puts cheaper.\n\nWhich is inner versus outer changes the number of buckets, which can alter\nthe number of expected compares, but that can be controlled for benefit --\nthe ratio of keys to buckets can be controlled. If you choose the smaller\nrelation, you can afford to overcompensate with more buckets, resulting in\nmore probes on empty buckets and thus fewer compares.\n\nAdditionally, a hash structure that only has one entry per key can greatly\nreduce the number of compares and make hashjoin immune to skew from the\ncost perspective. It also makes it so that choosing the smaller relation\nover the big one to hash is always better provided the number of buckets\nis chosen well.\n\n\n>\n> regards, tom lane\n\n", "msg_date": "Wed, 13 Apr 2011 11:27:21 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HashJoin order, hash the large or small table?\n\tPostgres likes to hash the big one, why?" }, { "msg_contents": "On Wed, Apr 13, 2011 at 1:22 PM, Scott Carey <[email protected]> wrote:\n> A pathological skew case (all relations with the same key), should be\n> _cheaper_ to probe.   There should be only _one_ entry in the hash (for\n> the one key), and that entry will be a list of all relations matching the\n> key.  Therefore, hash probes will either instantly fail to match on an\n> empty bucket, fail to match the one key with one compare, or match the one\n> key and join on the matching list.\n>\n> In particular for anti-join, high skew should be the best case scenario.\n\nI think this argument may hold some water for an anti-join, and maybe\nfor a semi-join, but it sure doesn't seem right for any kind of join\nthat has to iterate over all matches (rather than just the first one);\nthat is, inner, left, right, or full.\n\n> A hash structure that allows multiple entries per key is inappropriate for\n> skewed data, because it is not O(n).  One that has one entry per key\n> remains O(n) for all skew.  Furthermore, the hash buckets and # of entries\n> is proportional to n_distinct in this case, and smaller and more cache and\n> memory friendly to probe.\n\nI don't think this argument is right. The hash table is sized for a\nload factor significantly less than one, so if there are multiple\nentries in a bucket, it is fairly likely that they are all for the\nsame key. Granted, we have to double-check the keys to figure that\nout; but I believe that the data structure you are proposing would\nrequire similar comparisons. The only difference is that they'd be\nrequired when building the hash table, rather than when probing it.\n\n> You can put either relation on the outside with an anti-join, but would\n> need a different algorithm and cost estimator if done the other way\n> around.  
Construct a hash on the join key, that keeps a list of relations\n> per key, iterate over the other relation, and remove the key and\n> corresponding list from the hash when there is a match, when complete the\n> remaining items in the hash are the result of the join (also already\n> grouped by the key).  It could be terminated early if all entries are\n> removed.\n> This would be useful if the hash was small, the other side of the hash too\n> large to fit in memory, and alternative was a massive sort on the other\n> relation.\n\nThis would be a nice extension of commit\nf4e4b3274317d9ce30de7e7e5b04dece7c4e1791.\n\n> Does the hash cost estimator bias towards smaller hashes due to hash probe\n> cost increasing with hash size due to processor caching effects?  Its not\n> quite O(n) due to caching effects.\n\nI don't think we account for that (and I'm not convinced we need to).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 11 May 2011 23:21:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HashJoin order, hash the large or small table? Postgres\n\tlikes to hash the big one, why?" } ]
[ { "msg_contents": "hi.\n\n>I think you're missing the point, which is that all the hash work is\n>just pure overhead in such a case (and it is most definitely not\n>zero-cost overhead). You might as well just do a nestloop join.\n>Hashing is only beneficial to the extent that it allows a smaller subset\n>of the inner relation to be compared to each outer-relation tuple.\n>So I think biasing against skew-distributed inner relations is entirely\n>appropriate.\n\n\nScanning smaller relation first is better with cursors.\nFirst rows from query are returned faster in this case.\nMaybe add this optimization for cursors only?\n\n\n\n------------\npasman\n", "msg_date": "Fri, 15 Apr 2011 17:15:06 +0200", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HashJoin order, hash the large or small table? Postgres likes to\n\thash the big one, why?" } ]
[ { "msg_contents": "We are experiencing a problem with our query plans when using a range \nquery in Postgresql 8.3. The query we are executing attempts to select \nthe minimum primary key id after a certain date. Our date columns are \nbigint's holding a unix epoch representation of the date. We have an \nindex on the primary key and the date column.\n\nFor the following query just specified the predicate modificationDate >= ?\n\nexplain SELECT min(messageID) FROM Message WHERE modificationDate >= \n1302627793988;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Result (cost=2640.96..2640.97 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..2640.96 rows=1 width=8)\n -> Index Scan using message_pk on message \n(cost=0.00..3298561.09 rows=1249 width=8)\n Filter: ((messageid IS NOT NULL) AND (modificationdate \n >= 1302627793988::bigint))\n(5 rows)\n\nFor some reason it is deciding to scan the primary key column of the \ntable. This results in scanning the entire table which is huge (10 \nmillion records).\n\nHowever, if we specify a fake upper bound then the planner will \ncorrectly use the date column index:\n\nexplain SELECT min(messageID) FROM Message WHERE modificationDate >= \n1302627793988 and modificationDate < 9999999999999999;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=9.64..9.65 rows=1 width=8)\n -> Index Scan using jvmssg_mdate_idx on message (cost=0.00..9.64 \nrows=1 width=8)\n Index Cond: ((modificationdate >= 1302627793988::bigint) AND \n(modificationdate < 9999999999999999::bigint))\n(3 rows)\n\nWe have carried out all the usual maintenance tasks. We have increase \nthe statistics_target on both indexes to the maximum (1000) and \nperformed a vacuum analyze on the table. Our resource configurations are \nvery good since this is our production server.\n\nInterestingly this does not appear to happen with exactly the same \ndatabase when using 8.4. Instead we get the correct plan without having \nto add the upper bound.\n\nHere is the full description of the the table. It contains upwards of 10 \nmillion rows.\n\n Table \"public.message\"\n Column | Type | Modifiers\n------------------+------------------------+-----------\n messageid | bigint | not null\n parentmessageid | bigint |\n threadid | bigint | not null\n containertype | integer | not null\n containerid | bigint | not null\n userid | bigint |\n subject | character varying(255) |\n body | text |\n modvalue | integer | not null\n rewardpoints | integer | not null\n creationdate | bigint | not null\n modificationdate | bigint | not null\n status | integer | not null\nIndexes:\n \"message_pk\" PRIMARY KEY, btree (messageid)\n \"jvmssg_cdate_idx\" btree (creationdate)\n \"jvmssg_cidctmd_idx\" btree (containerid, containertype, \nmodificationdate)\n \"jvmssg_mdate_idx\" btree (modificationdate)\n \"jvmssg_mdvle_idx\" btree (modvalue)\n \"jvmssg_prntid_idx\" btree (parentmessageid)\n \"jvmssg_thrd_idx\" btree (threadid)\n \"jvmssg_usrid_idx\" btree (userid)\nReferenced by:\n TABLE \"answer\" CONSTRAINT \"answer_mid_fk\" FOREIGN KEY (messageid) \nREFERENCES message(messageid)\n TABLE \"messageprop\" CONSTRAINT \"jmp_msgid_fk\" FOREIGN KEY \n(messageid) REFERENCES message(messageid)\n\n\nAny insight into this would be greatly appreciated. We are not able to \nupgrade our databases to 8.4. 
We are reluctant to re-write all our range \nqueries if possible.\n\n\n-m\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 15 Apr 2011 10:17:32 -0700", "msg_from": "Mark Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Bad Query Plan with Range Query" }, { "msg_contents": "On Fri, Apr 15, 2011 at 10:17:32AM -0700, Mark Williams wrote:\n> We are experiencing a problem with our query plans when using a range query \n> in Postgresql 8.3. The query we are executing attempts to select the \n> minimum primary key id after a certain date. Our date columns are bigint's \n> holding a unix epoch representation of the date. We have an index on the \n> primary key and the date column.\n>\n> For the following query just specified the predicate modificationDate >= ?\n>\n> explain SELECT min(messageID) FROM Message WHERE modificationDate >= \n> 1302627793988;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Result (cost=2640.96..2640.97 rows=1 width=0)\n> InitPlan\n> -> Limit (cost=0.00..2640.96 rows=1 width=8)\n> -> Index Scan using message_pk on message \n> (cost=0.00..3298561.09 rows=1249 width=8)\n> Filter: ((messageid IS NOT NULL) AND (modificationdate >= \n> 1302627793988::bigint))\n> (5 rows)\n>\n> For some reason it is deciding to scan the primary key column of the table. \n> This results in scanning the entire table which is huge (10 million \n> records).\n>\n> However, if we specify a fake upper bound then the planner will correctly \n> use the date column index:\n>\n> explain SELECT min(messageID) FROM Message WHERE modificationDate >= \n> 1302627793988 and modificationDate < 9999999999999999;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=9.64..9.65 rows=1 width=8)\n> -> Index Scan using jvmssg_mdate_idx on message (cost=0.00..9.64 \n> rows=1 width=8)\n> Index Cond: ((modificationdate >= 1302627793988::bigint) AND \n> (modificationdate < 9999999999999999::bigint))\n> (3 rows)\n>\n> We have carried out all the usual maintenance tasks. We have increase the \n> statistics_target on both indexes to the maximum (1000) and performed a \n> vacuum analyze on the table. Our resource configurations are very good \n> since this is our production server.\n>\n> Interestingly this does not appear to happen with exactly the same database \n> when using 8.4. Instead we get the correct plan without having to add the \n> upper bound.\n>\n> Here is the full description of the the table. 
It contains upwards of 10 \n> million rows.\n>\n> Table \"public.message\"\n> Column | Type | Modifiers\n> ------------------+------------------------+-----------\n> messageid | bigint | not null\n> parentmessageid | bigint |\n> threadid | bigint | not null\n> containertype | integer | not null\n> containerid | bigint | not null\n> userid | bigint |\n> subject | character varying(255) |\n> body | text |\n> modvalue | integer | not null\n> rewardpoints | integer | not null\n> creationdate | bigint | not null\n> modificationdate | bigint | not null\n> status | integer | not null\n> Indexes:\n> \"message_pk\" PRIMARY KEY, btree (messageid)\n> \"jvmssg_cdate_idx\" btree (creationdate)\n> \"jvmssg_cidctmd_idx\" btree (containerid, containertype, \n> modificationdate)\n> \"jvmssg_mdate_idx\" btree (modificationdate)\n> \"jvmssg_mdvle_idx\" btree (modvalue)\n> \"jvmssg_prntid_idx\" btree (parentmessageid)\n> \"jvmssg_thrd_idx\" btree (threadid)\n> \"jvmssg_usrid_idx\" btree (userid)\n> Referenced by:\n> TABLE \"answer\" CONSTRAINT \"answer_mid_fk\" FOREIGN KEY (messageid) \n> REFERENCES message(messageid)\n> TABLE \"messageprop\" CONSTRAINT \"jmp_msgid_fk\" FOREIGN KEY (messageid) \n> REFERENCES message(messageid)\n>\n>\n> Any insight into this would be greatly appreciated. We are not able to \n> upgrade our databases to 8.4. We are reluctant to re-write all our range \n> queries if possible.\n>\n>\n> -m\n>\n\nHere is the fix that was added to 8.4+:\n\nhttp://archives.postgresql.org/pgsql-committers/2010-01/msg00021.php\n\nI think you are stuck with one of those options so if upgrading\nis not available, then re-writing the range queries wins by a landslide. :)\n\nRegards,\nKen\n", "msg_date": "Fri, 15 Apr 2011 12:38:05 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Query Plan with Range Query" }, { "msg_contents": "Mark Williams <[email protected]> wrote:\n \n> explain SELECT min(messageID) FROM Message\n> WHERE modificationDate >= 1302627793988;\n \n> For some reason it is deciding to scan the primary key column of\n> the table. This results in scanning the entire table\n \nNo, it scans until it finds the first row where modificationDate >= \n1302627793988, at which point the scan is done because it's doing an\nascending scan on what you want the min() of. You might have a clue\nthat the first such row will be ten million rows into the scan, but\nthe optimizer doesn't know that. It's assuming that rows which meet\nthat condition are scattered randomly through the primary key range.\nIt thinks that it will, on average, need to scan 1249 rows to find a\nmatch.\n \nThe patch Ken referenced causes the alternative to be assigned a\nmore accurate (and lower) cost, which tips the scales in favor of\nthat plan -- at least for the case you've tried; but this seems to\nme to be another case related to the correlation of values. It's a\nnew and different form of it, but it seems at least somewhat\nrelated. It might be a good example for those working on\nmulti-column statistics to keep in mind.\n \n-Kevin\n", "msg_date": "Fri, 15 Apr 2011 12:54:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Query Plan with Range Query" }, { "msg_contents": "Thanks for the response guys. There is something else which confuses me. 
\nIf I re-write the query like this:\n\nexplain SELECT messageID FROM Message WHERE modificationDate >= \n1302627793988 ORDER BY modificationDate LIMIT 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.97 rows=1 width=16)\n -> Index Scan using jvmssg_mdate_idx on message \n(cost=0.00..3705.59 rows=1249 width=16)\n Index Cond: (modificationdate >= 1302627793988::bigint)\n(3 rows)\n\nI also get a better plan. However, this is not always the case. On some \nother instances we still get a sequential scan on the primary key.\n\n\n\n\nOn 04/15/2011 10:54 AM, Kevin Grittner wrote:\n> Mark Williams<[email protected]> wrote:\n>\n>> explain SELECT min(messageID) FROM Message\n>> WHERE modificationDate>= 1302627793988;\n>\n>> For some reason it is deciding to scan the primary key column of\n>> the table. This results in scanning the entire table\n>\n> No, it scans until it finds the first row where modificationDate>=\n> 1302627793988, at which point the scan is done because it's doing an\n> ascending scan on what you want the min() of. You might have a clue\n> that the first such row will be ten million rows into the scan, but\n> the optimizer doesn't know that. It's assuming that rows which meet\n> that condition are scattered randomly through the primary key range.\n> It thinks that it will, on average, need to scan 1249 rows to find a\n> match.\n>\n> The patch Ken referenced causes the alternative to be assigned a\n> more accurate (and lower) cost, which tips the scales in favor of\n> that plan -- at least for the case you've tried; but this seems to\n> me to be another case related to the correlation of values. It's a\n> new and different form of it, but it seems at least somewhat\n> related. It might be a good example for those working on\n> multi-column statistics to keep in mind.\n>\n> -Kevin\n\n", "msg_date": "Fri, 15 Apr 2011 11:06:51 -0700", "msg_from": "Mark Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Query Plan with Range Query" }, { "msg_contents": "Mark Williams <[email protected]> wrote:\n \n> If I re-write the query like this:\n> \n> explain SELECT messageID FROM Message WHERE modificationDate >= \n> 1302627793988 ORDER BY modificationDate LIMIT 1;\n \n> I also get a better plan.\n \nYeah, but it's not necessarily the same value. Do you want the\nminimum messageID where modificationDate >= 1302627793988 or do you\nwant the messageID of some row (possibly of many) with the minimum\nmodificationDate where modificationDate >= 1302627793988?\n \nSince you're asking for a logically different value with that query,\nit's not surprising it uses a different plan.\n \n-Kevin\n", "msg_date": "Fri, 15 Apr 2011 13:13:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Query Plan with Range Query" }, { "msg_contents": "Whoops,\n\nI meant this query (ordering my messageid):\n\nSELECT messageID FROM Message WHERE modificationDate>= 1302627793988 ORDER BY messageID LIMIT 1;\n\n\nSometimes this gives the better plan. But not always.\n\n\n\nOn 04/15/2011 11:13 AM, Kevin Grittner wrote:\n> Mark Williams<[email protected]> wrote:\n>\n>> If I re-write the query like this:\n>>\n>> explain SELECT messageID FROM Message WHERE modificationDate>=\n>> 1302627793988 ORDER BY modificationDate LIMIT 1;\n>\n>> I also get a better plan.\n>\n> Yeah, but it's not necessarily the same value. 
Do you want the\n> minimum messageID where modificationDate>= 1302627793988 or do you\n> want the messageID of some row (possibly of many) with the minimum\n> modificationDate where modificationDate>= 1302627793988?\n>\n> Since you're asking for a logically different value with that query,\n> it's not surprising it uses a different plan.\n>\n> -Kevin\n\n", "msg_date": "Fri, 15 Apr 2011 15:29:10 -0700", "msg_from": "Mark Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad Query Plan with Range Query" } ]
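A heavier-handed workaround sometimes used on releases without the costing fix Ken points to is to hide the aggregate behind a subquery with OFFSET 0. The OFFSET clause keeps the subquery from being flattened, so the min()-to-index-scan transformation on the primary key can no longer be chosen and the range condition gets evaluated first. Table and column names below come from the thread itself; whether the inner scan actually uses jvmssg_mdate_idx still depends on how selective the range is, so this is a sketch to test, not a guaranteed plan:

    SELECT min(messageid)
      FROM (SELECT messageid
              FROM message
             WHERE modificationdate >= 1302627793988
            OFFSET 0) AS recent;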
[ { "msg_contents": "Hi List,\nI am using PostgreSQL 9.0.3 and I have a need to dump only the selective\ndata from partial list of tables of a database. Is there a straight way to\ndo it with pg_dump or any alternative work around to suggest here?!\n\nSethu Prasad. G.\n\nHi List,I am using PostgreSQL 9.0.3 and I have a need to dump only the selective data from partial list of tables of a database. Is there a straight way to do it with pg_dump or any alternative work around to suggest here?!\nSethu Prasad. G.", "msg_date": "Mon, 18 Apr 2011 17:05:22 +0200", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": true, "msg_subject": "Is there a way to selective dump of records in Postgres 9.0.3?" }, { "msg_contents": "This probably isn't the right place to ask that question but you may as well\ntry `pg_dump -t PATTERN`. Man pg_dump for more information on how to form\nthat pattern.\n\nOn Mon, Apr 18, 2011 at 11:05 AM, Sethu Prasad <[email protected]>wrote:\n\n> Hi List,\n> I am using PostgreSQL 9.0.3 and I have a need to dump only the selective\n> data from partial list of tables of a database. Is there a straight way to\n> do it with pg_dump or any alternative work around to suggest here?!\n>\n> Sethu Prasad. G.\n>\n>\n\nThis probably isn't the right place to ask that question but you may as well try `pg_dump -t PATTERN`.  Man pg_dump for more information on how to form that pattern.On Mon, Apr 18, 2011 at 11:05 AM, Sethu Prasad <[email protected]> wrote:\nHi List,I am using PostgreSQL 9.0.3 and I have a need to dump only the selective data from partial list of tables of a database. Is there a straight way to do it with pg_dump or any alternative work around to suggest here?!\n\nSethu Prasad. G.", "msg_date": "Mon, 18 Apr 2011 11:11:26 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a way to selective dump of records in Postgres 9.0.3?" }, { "msg_contents": "On Mon, Apr 18, 2011 at 8:11 AM, Nikolas Everett <[email protected]> wrote:\n\n> This probably isn't the right place to ask that question but you may as\n> well try `pg_dump -t PATTERN`. Man pg_dump for more information on how to\n> form that pattern.\n>\n>\n> On Mon, Apr 18, 2011 at 11:05 AM, Sethu Prasad <[email protected]>wrote:\n>\n>> Hi List,\n>> I am using PostgreSQL 9.0.3 and I have a need to dump only the selective\n>> data from partial list of tables of a database. Is there a straight way to\n>> do it with pg_dump or any alternative work around to suggest here?!\n>>\n>\nOr if you need partial data from one table - a WHERE clause - then you can\ndo:\n\nCOPY (select * from whatever where column=value) TO '/tmp/dump.csv' WITH CSV\nHEADER\n\nin combination with\n\npg_dump -f whatever.sql -s -t whatever db\n\nto dump the DDL for the 'whatever' table into whatever.sql.\n\nhttp://www.postgresql.org/docs/current/static/sql-copy.html\n\nIf it is a lot of data, you'll want to edit the whatever.sql file to remove\nthe CREATE INDEX statements until after you've loaded the table and then\ndepeneding upon how many indexes there are and how many rows you havem you\nmay want to parallelize the CREATE INDEX statements by running them in\nparallel in multiple psql sessions (and possibly with an artificially large\nmaintenance_work_mem if that speeds things up)\n\nOn Mon, Apr 18, 2011 at 8:11 AM, Nikolas Everett <[email protected]> wrote:\nThis probably isn't the right place to ask that question but you may as well try `pg_dump -t PATTERN`.  
Man pg_dump for more information on how to form that pattern.\nOn Mon, Apr 18, 2011 at 11:05 AM, Sethu Prasad <[email protected]> wrote:\nHi List,I am using PostgreSQL 9.0.3 and I have a need to dump only the selective data from partial list of tables of a database. Is there a straight way to do it with pg_dump or any alternative work around to suggest here?!\nOr if you need partial data from one table - a WHERE clause - then you can do: COPY (select * from whatever where column=value) TO '/tmp/dump.csv' WITH CSV HEADER \nin combination with pg_dump -f whatever.sql -s -t whatever dbto dump the DDL for the 'whatever' table into whatever.sql.\nhttp://www.postgresql.org/docs/current/static/sql-copy.htmlIf it is a lot of data, you'll want to edit the whatever.sql file to remove the CREATE INDEX statements until after you've loaded the table and then depeneding upon how many indexes there are and how many rows you havem you may want to parallelize the CREATE INDEX statements by running them in parallel in multiple psql sessions (and possibly with an artificially large maintenance_work_mem if that speeds things up)", "msg_date": "Mon, 18 Apr 2011 09:01:44 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a way to selective dump of records in Postgres 9.0.3?" } ]
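A minimal end-to-end sketch of the approach described above, for one table and one filter. The table name (orders), column (created) and file paths are placeholders, not the original poster's schema; the \copy variant runs in psql on the client side, so it needs neither superuser rights nor server-side file access:

    -- schema only, from the shell:
    --   pg_dump -s -t orders sourcedb > orders_ddl.sql

    -- data subset, written server-side (superuser required for the file path):
    COPY (SELECT * FROM orders WHERE created >= '2011-01-01')
      TO '/tmp/orders_subset.csv' WITH CSV HEADER;

    -- or client-side from psql:
    --   \copy (SELECT * FROM orders WHERE created >= '2011-01-01') TO 'orders_subset.csv' WITH CSV HEADER

    -- after loading orders_ddl.sql into the target database:
    COPY orders FROM '/tmp/orders_subset.csv' WITH CSV HEADER;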
[ { "msg_contents": "Hi all:\n\nAn application running against a postgres 8.4.5 database under CentOS\n5.5 uses cursors (I think via SqlAlchemy). To look for database\nperformance issues I log any query that takes > 2 seconds to complete.\n\nI am seeing:\n\n 2011-04-16 00:55:33 UTC user@database(3516): LOG: duration:\n 371954.811 ms statement: FETCH FORWARD 1 FROM c_2aaaaaaeea50_a08\n\nWhile I obviously have a problem here, is there any way to log the\nactual select associated with the cursor other than logging all\nstatements?\n\nAlso once I have the select statement, does the fact that is is\nassociated with a fetch/cursor change the steps I should take in\ntuning it compared to somebody just issuing a normal select?\n\nThanks for any ideas.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n", "msg_date": "Mon, 18 Apr 2011 18:12:41 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": true, "msg_subject": "Assessing performance of fetches" }, { "msg_contents": "John Rouillard <[email protected]> writes:\n> I am seeing:\n\n> 2011-04-16 00:55:33 UTC user@database(3516): LOG: duration:\n> 371954.811 ms statement: FETCH FORWARD 1 FROM c_2aaaaaaeea50_a08\n\n> While I obviously have a problem here, is there any way to log the\n> actual select associated with the cursor other than logging all\n> statements?\n\nCan't think of one :-(\n\n> Also once I have the select statement, does the fact that is is\n> associated with a fetch/cursor change the steps I should take in\n> tuning it compared to somebody just issuing a normal select?\n\nThe planner does treat cursor queries a bit different from plain\nqueries, putting more emphasis on getting the first rows sooner.\nIf you want to be sure you're getting the truth about what's happening,\ntry\n\tEXPLAIN [ANALYZE] DECLARE c CURSOR FOR SELECT ...\nrather than just\n\tEXPLAIN [ANALYZE] SELECT ...\nOther than that, it's the same as tuning a regular query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Apr 2011 15:41:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assessing performance of fetches " } ]
[ { "msg_contents": "I browsed the faq and looked at PostgreSQL performance books but I\ncould not find the obvious:\nHow to configure a read-only database server?\n\nI have a single-disk virtual Linux system and a read-only dataset\nwhich is exposed to internet and completely replaced from time to\ntime.\n\nThis is what I found so far:\n\n* Disabling autovacuum daemon.\n* Setting postgresql.conf parameters:\n fsync=off\n synchronous_commit=off\n full_page_writes=off\n\n* For the session:\n SET transaction_read_only TO FALSE;\n SET TRANSACTION READ ONLY;\n\n* What about wal_level and archive_mode?\n\n=> Any comments on speeding up/optimizing such database server?\n\nYours, Stefan\n", "msg_date": "Tue, 19 Apr 2011 00:08:38 +0200", "msg_from": "Stefan Keller <[email protected]>", "msg_from_op": true, "msg_subject": "How to configure a read-only database server?" }, { "msg_contents": "hi,\n\nPerhaps in postgresql.conf :\n default_transaction_read_only\n\nregards\n\nphilippe\n\n\nLe 19/04/2011 00:08, Stefan Keller a écrit :\n> I browsed the faq and looked at PostgreSQL performance books but I\n> could not find the obvious:\n> How to configure a read-only database server?\n>\n> I have a single-disk virtual Linux system and a read-only dataset\n> which is exposed to internet and completely replaced from time to\n> time.\n>\n> This is what I found so far:\n>\n> * Disabling autovacuum daemon.\n> * Setting postgresql.conf parameters:\n> fsync=off\n> synchronous_commit=off\n> full_page_writes=off\n>\n> * For the session:\n> SET transaction_read_only TO FALSE;\n> SET TRANSACTION READ ONLY;\n>\n> * What about wal_level and archive_mode?\n>\n> => Any comments on speeding up/optimizing such database server?\n>\n> Yours, Stefan\n>\n\n", "msg_date": "Tue, 19 Apr 2011 09:45:56 +0200", "msg_from": "philippe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to configure a read-only database server?" }, { "msg_contents": "On Tue, Apr 19, 2011 at 12:08 AM, Stefan Keller <[email protected]> wrote:\n> I browsed the faq and looked at PostgreSQL performance books but I\n> could not find the obvious:\n> How to configure a read-only database server?\n>\n> I have a single-disk virtual Linux system and a read-only dataset\n> which is exposed to internet and completely replaced from time to\n> time.\n>\n> This is what I found so far:\n>\n> * Disabling autovacuum daemon.\n\nI guess this will give you only small benefits as the daemon won't\nfind any tables with modifications.\n\n> * Setting postgresql.conf parameters:\n>   fsync=off\n>   synchronous_commit=off\n\nSince you don't commit changes the effect of this might be small as well.\n\n>   full_page_writes=off\n>\n> * For the session:\n>   SET transaction_read_only TO FALSE;\n\nDid you mean \"TRUE\"?\n\n>   SET TRANSACTION READ ONLY;\n\nWhat about\n\nALTER DATABASE x SET default_transaction_read_only = on;\n\n?\n\n> * What about wal_level and archive_mode?\n>\n> => Any comments on speeding up/optimizing such database server?\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n", "msg_date": "Tue, 19 Apr 2011 10:57:49 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to configure a read-only database server?" }, { "msg_contents": "On 04/18/2011 06:08 PM, Stefan Keller wrote:\n> * What about wal_level and archive_mode?\n> \n\nPresumably you don't care about either of these. 
wal_level=minimal, \narchive_mode=off.\n\nThe other non-obvious thing you should do in this situation is do all \nthe database maintenance in one big run after the data is loaded, \nsomething like:\n\nVACUUM FREEZE ANALYZE;\n\nOtherwise you will still have some trickle of write-activity going on, \nnot always efficiently, despite being in read-only mode. It's because \nof what's referred to as Hint Bits: \nhttp://wiki.postgresql.org/wiki/Hint_Bits\n\nVACUUMing everything will clean those us, and freezing everything makes \nsure there's no old transactions to concerned about that might kick off \nanti-wraparound autovacuum.\n\nThe only other thing you probably want to do is set checkpoint_segments \nto a big number. Shouldn't matter normally, but when doing this freeze \noperation it will help that execute quickly. You want a lower \nmaintenance_work_mem on a read-only system than the master too, possibly \na higher shared_buffers as well. It's all pretty subtle beyond the big \nparameters you already identified.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 19 Apr 2011 10:30:48 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to configure a read-only database server?" }, { "msg_contents": "On Apr 18, 2011, at 6:08 PM, Stefan Keller <[email protected]> wrote:\n> I browsed the faq and looked at PostgreSQL performance books but I\n> could not find the obvious:\n> How to configure a read-only database server?\n> \n> I have a single-disk virtual Linux system and a read-only dataset\n> which is exposed to internet and completely replaced from time to\n> time.\n> \n> This is what I found so far:\n> \n> * Disabling autovacuum daemon.\n> * Setting postgresql.conf parameters:\n> fsync=off\n> synchronous_commit=off\n> full_page_writes=off\n\nAll of those speed up writes. I don't know that they will make any difference at all on a read-only workload.\n\n> * What about wal_level and archive_mode?\n\nSame with these.\n\n> \n\n...Robert", "msg_date": "Sat, 23 Apr 2011 12:10:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to configure a read-only database server?" }, { "msg_contents": "AFAIK it helps at least bulk loading my data every other time.\n\nSo I'm confused and backup again: Given a single-disk virtual Linux\nsystem and a 'read-only' dataset, which is exposed to the internet and\ncompletely replaced from time to time, and expecting SELECT queries\nincluding joins, sorts, equality and range (sub-)queries...\n\n=> What are the suggested postgresql.conf and session parameters for\nsuch a \"read-only database\" to \"Whac-A-Mole\" (i.e. to consider :->)?\n\nStefan\n\n2011/4/23 Robert Haas <[email protected]>:\n> On Apr 18, 2011, at 6:08 PM, Stefan Keller <[email protected]> wrote:\n>> I browsed the faq and looked at PostgreSQL performance books but I\n>> could not find the obvious:\n>> How to configure a read-only database server?\n>>\n>> I have a single-disk virtual Linux system and a read-only dataset\n>> which is exposed to internet and completely replaced from time to\n>> time.\n>>\n>> This is what I found so far:\n>>\n>> * Disabling autovacuum daemon.\n>> * Setting postgresql.conf parameters:\n>>   fsync=off\n>>   synchronous_commit=off\n>>   full_page_writes=off\n>\n> All of those speed up writes. 
I don't know that they will make any difference at all on a read-only workload.\n>\n>> * What about wal_level and archive_mode?\n>\n> Same with these.\n>\n>>\n>\n> ...Robert\n", "msg_date": "Sun, 24 Apr 2011 11:38:53 +0200", "msg_from": "Stefan Keller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to configure a read-only database server?" }, { "msg_contents": "Dne 24.4.2011 11:38, Stefan Keller napsal(a):\n> AFAIK it helps at least bulk loading my data every other time.\n\nYes, but this thread was about setting the DB for read-only workload, so\nthose settings were a bit strange.\n\n> So I'm confused and backup again: Given a single-disk virtual Linux\n> system and a 'read-only' dataset, which is exposed to the internet and\n> completely replaced from time to time, and expecting SELECT queries\n> including joins, sorts, equality and range (sub-)queries...\n> \n> => What are the suggested postgresql.conf and session parameters for\n> such a \"read-only database\" to \"Whac-A-Mole\" (i.e. to consider :->)?\n\nWhat database size are we talking about? Does that fit into RAM or not?\n\nIf not, set large shared buffers and effective cache size appropriately.\n\nIf it fits into memory, you could lower the random_page_cost (but this\nshould be handled by the DB). Or you could create a ramdisk and use it\nto store the data (in this case lowering random_page_cost makes much\nmore sense).\n\nregards\nTomas\n", "msg_date": "Sun, 24 Apr 2011 14:37:17 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to configure a read-only database server?" } ]
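Pulling the advice in this thread together into one sketch. The database name (gisdata) and the memory figures are placeholders for a low-gigabyte-RAM machine -- starting points to measure against, not tuned values:

    -- once, immediately after each bulk reload:
    VACUUM FREEZE ANALYZE;

    -- make read-only the default for every new session:
    ALTER DATABASE gisdata SET default_transaction_read_only = on;

    -- and in postgresql.conf (shared_buffers needs a restart):
    --   shared_buffers = 512MB
    --   effective_cache_size = 2GB
    --   random_page_cost = 2.0   -- lower it further only if the data really is cached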
[ { "msg_contents": "2 kingston V+100 500GB\nSoft RAID1 (md)\nioscheduler [noop]\next3\nLinux pro-cdn1 2.6.26-2-amd64 #1 SMP Tue Jan 25 05:59:43 UTC 2011\nx86_64 GNU/Linux\nFilesystem Size Used Avail Use% Mounted on\n/dev/md4 452G 301G 128G 71% /home/ssd\n--------\n\nRandom 8KB read/write with 1% write\n./pgiosim -w1 -a1 -v -b 1000000 /home/ssd/big.1\nWrite Mode: 1%\nStallcheck at 1.000000\nVerbose\nUnknown units of blocks\nArg: 1\nRead 1000000 blocks\nAdded /home/ssd/big.1\n\n-------\n\n3.20%, 32036 read, 300 written, 25625.08kB/sec 3203.14 iops\n5.33%, 21283 read, 198 written, 17026.31kB/sec 2128.29 iops\n7.47%, 21356 read, 245 written, 17084.73kB/sec 2135.59 iops\n9.62%, 21511 read, 234 written, 17208.72kB/sec 2151.09 iops\n11.78%, 21591 read, 216 written, 17271.71kB/sec 2158.96 iops\n14.08%, 23032 read, 245 written, 18425.52kB/sec 2303.19 iops\n16.53%, 24527 read, 228 written, 19621.52kB/sec 2452.69 iops\n18.89%, 23535 read, 225 written, 18827.91kB/sec 2353.49 iops\n21.19%, 23003 read, 229 written, 18402.34kB/sec 2300.29 iops\n23.40%, 22139 read, 211 written, 17711.13kB/sec 2213.89 iops\n25.66%, 22628 read, 225 written, 18102.33kB/sec 2262.79 iops\n27.86%, 21983 read, 238 written, 17586.32kB/sec 2198.29 iops\n30.14%, 22823 read, 211 written, 18258.31kB/sec 2282.29 iops\n32.44%, 22975 read, 240 written, 18379.92kB/sec 2297.49 iops\n34.83%, 23870 read, 214 written, 19095.92kB/sec 2386.99 iops\n37.24%, 24129 read, 213 written, 19303.10kB/sec 2412.89 iops\n39.49%, 22450 read, 210 written, 17959.92kB/sec 2244.99 iops\n41.77%, 22827 read, 235 written, 18261.53kB/sec 2282.69 iops\n43.98%, 22138 read, 218 written, 17710.30kB/sec 2213.79 iops\n46.31%, 23293 read, 241 written, 18634.30kB/sec 2329.29 iops\n48.86%, 25422 read, 258 written, 20337.52kB/sec 2542.19 iops\n51.06%, 22091 read, 222 written, 17672.72kB/sec 2209.09 iops\n53.46%, 23970 read, 215 written, 19175.93kB/sec 2396.99 iops\n55.80%, 23359 read, 224 written, 18687.12kB/sec 2335.89 iops\n58.04%, 22472 read, 232 written, 17977.24kB/sec 2247.16 iops\n60.34%, 22981 read, 230 written, 18384.72kB/sec 2298.09 iops\n62.17%, 18228 read, 192 written, 14580.33kB/sec 1822.54 iops\n64.60%, 24336 read, 229 written, 19465.85kB/sec 2433.23 iops\n66.89%, 22912 read, 210 written, 18329.52kB/sec 2291.19 iops\n69.06%, 21677 read, 231 written, 17341.54kB/sec 2167.69 iops\n71.28%, 22255 read, 210 written, 17803.91kB/sec 2225.49 iops\n73.68%, 23928 read, 243 written, 19142.30kB/sec 2392.79 iops\n75.90%, 22255 read, 205 written, 17803.93kB/sec 2225.49 iops\n78.17%, 22641 read, 233 written, 18112.72kB/sec 2264.09 iops\n80.50%, 23328 read, 235 written, 18662.29kB/sec 2332.79 iops\n82.84%, 23379 read, 230 written, 18703.11kB/sec 2337.89 iops\n84.90%, 20670 read, 236 written, 16535.95kB/sec 2066.99 iops\n86.91%, 20018 read, 222 written, 16012.14kB/sec 2001.52 iops\n89.24%, 23321 read, 235 written, 18654.39kB/sec 2331.80 iops\n91.56%, 23224 read, 227 written, 18579.13kB/sec 2322.39 iops\n94.05%, 24880 read, 262 written, 19903.93kB/sec 2487.99 iops\n96.40%, 23549 read, 205 written, 18839.14kB/sec 2354.89 iops\n98.80%, 23956 read, 230 written, 19164.73kB/sec 2395.59 iops\n\n------\n./pgiosim -w10 -a1 -v -b 1000000 /home/ssd/big.1\nWrite Mode: 10%\nStallcheck at 1.000000\nVerbose\nUnknown units of blocks\nArg: 1\nRead 1000000 blocks\nAdded /home/ssd/big.1\n\n1.62%, 16226 read, 1642 written, 12979.00kB/sec 1622.37 iops\n1.67%, 433 read, 38 written, 346.40kB/sec 43.30 iops\n2.95%, 12839 read, 1282 written, 10271.06kB/sec 1283.88 iops\n2.95%, 3 read, 0 written, 
2.40kB/sec 0.30 iops\n3.50%, 5500 read, 548 written, 4399.83kB/sec 549.98 iops\n4.95%, 14524 read, 1468 written, 11619.12kB/sec 1452.39 iops\n4.95%, 3 read, 1 written, 2.40kB/sec 0.30 iops\n4.95%, 8 read, 0 written, 6.40kB/sec 0.80 iops\n6.20%, 12471 read, 1241 written, 9976.67kB/sec 1247.08 iops\n6.20%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n7.63%, 14272 read, 1445 written, 11417.26kB/sec 1427.16 iops\n7.63%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n7.65%, 263 read, 22 written, 210.40kB/sec 26.30 iops\n8.65%, 9930 read, 990 written, 7943.83kB/sec 992.98 iops\n8.67%, 268 read, 24 written, 214.40kB/sec 26.80 iops\n9.30%, 6296 read, 621 written, 5036.78kB/sec 629.60 iops\n9.83%, 5233 read, 541 written, 4186.35kB/sec 523.29 iops\n10.75%, 9222 read, 960 written, 7377.56kB/sec 922.20 iops\n10.80%, 506 read, 52 written, 404.80kB/sec 50.60 iops\n11.74%, 9417 read, 933 written, 7533.53kB/sec 941.69 iops\n11.77%, 314 read, 29 written, 251.20kB/sec 31.40 iops\n12.56%, 7906 read, 793 written, 6324.78kB/sec 790.60 iops\n13.37%, 8052 read, 830 written, 6441.52kB/sec 805.19 iops\n13.40%, 309 read, 29 written, 247.20kB/sec 30.90 iops\n14.01%, 6116 read, 635 written, 4892.73kB/sec 611.59 iops\n14.71%, 6994 read, 675 written, 5595.18kB/sec 699.40 iops\n15.83%, 11205 read, 1188 written, 8953.19kB/sec 1119.15 iops\n15.90%, 651 read, 68 written, 520.80kB/sec 65.10 iops\n16.33%, 4355 read, 490 written, 3480.77kB/sec 435.10 iops\n17.14%, 8098 read, 777 written, 6478.33kB/sec 809.79 iops\n17.14%, 6 read, 0 written, 4.80kB/sec 0.60 iops\n18.70%, 15603 read, 1622 written, 12482.33kB/sec 1560.29 iops\n18.70%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n18.71%, 34 read, 5 written, 27.20kB/sec 3.40 iops\n19.29%, 5829 read, 595 written, 4663.08kB/sec 582.89 iops\n20.81%, 15209 read, 1477 written, 12167.16kB/sec 1520.90 iops\n20.81%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n20.82%, 61 read, 8 written, 48.80kB/sec 6.10 iops\n21.98%, 11665 read, 1144 written, 9331.42kB/sec 1166.43 iops\n21.98%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n22.78%, 7988 read, 817 written, 6389.86kB/sec 798.73 iops\n23.12%, 3364 read, 346 written, 2690.97kB/sec 336.37 iops\n23.49%, 3746 read, 357 written, 2996.69kB/sec 374.59 iops\n24.67%, 11720 read, 1137 written, 9375.97kB/sec 1172.00 iops\n24.67%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n25.50%, 8365 read, 808 written, 6691.96kB/sec 836.49 iops\n25.69%, 1865 read, 190 written, 1491.33kB/sec 186.42 iops\n26.14%, 4462 read, 424 written, 3569.58kB/sec 446.20 iops\n27.07%, 9377 read, 928 written, 7494.42kB/sec 936.80 iops\n27.58%, 5103 read, 532 written, 4082.37kB/sec 510.30 iops\n28.54%, 9537 read, 918 written, 7619.92kB/sec 952.49 iops\n28.70%, 1600 read, 151 written, 1279.99kB/sec 160.00 iops\n29.99%, 12901 read, 1247 written, 10307.61kB/sec 1288.45 iops\n30.12%, 1339 read, 139 written, 1071.19kB/sec 133.90 iops\n30.16%, 421 read, 41 written, 336.75kB/sec 42.09 iops\n30.60%, 4368 read, 486 written, 3494.38kB/sec 436.80 iops\n32.16%, 15580 read, 1585 written, 12463.80kB/sec 1557.97 iops\n32.16%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n32.19%, 330 read, 21 written, 263.97kB/sec 33.00 iops\n32.92%, 7302 read, 748 written, 5841.50kB/sec 730.19 iops\n33.95%, 10339 read, 1030 written, 8269.88kB/sec 1033.74 iops\n33.96%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n34.32%, 3673 read, 374 written, 2938.35kB/sec 367.29 iops\n35.66%, 13395 read, 1335 written, 10715.92kB/sec 1339.49 iops\n35.66%, 9 read, 3 written, 7.20kB/sec 0.90 iops\n36.05%, 3890 read, 389 written, 3111.96kB/sec 388.99 iops\n37.19%, 
11383 read, 1104 written, 9097.74kB/sec 1137.22 iops\n37.19%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n38.62%, 14295 read, 1511 written, 11421.06kB/sec 1427.63 iops\n38.62%, 6 read, 0 written, 4.80kB/sec 0.60 iops\n38.62%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n39.20%, 5831 read, 579 written, 4664.78kB/sec 583.10 iops\n40.25%, 10453 read, 1006 written, 8355.83kB/sec 1044.48 iops\n40.25%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n41.72%, 14710 read, 1501 written, 11767.59kB/sec 1470.95 iops\n41.72%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n41.72%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n42.87%, 11472 read, 1144 written, 9177.50kB/sec 1147.19 iops\n42.87%, 5 read, 0 written, 4.00kB/sec 0.50 iops\n44.08%, 12076 read, 1233 written, 9660.75kB/sec 1207.59 iops\n44.08%, 4 read, 1 written, 3.20kB/sec 0.40 iops\n44.45%, 3684 read, 364 written, 2947.17kB/sec 368.40 iops\n45.71%, 12684 read, 1339 written, 10147.10kB/sec 1268.39 iops\n45.71%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n46.00%, 2823 read, 284 written, 2256.73kB/sec 282.09 iops\n46.41%, 4116 read, 423 written, 3292.78kB/sec 411.60 iops\n47.54%, 11360 read, 1121 written, 9087.86kB/sec 1135.98 iops\n47.59%, 451 read, 52 written, 360.75kB/sec 45.09 iops\n48.25%, 6646 read, 680 written, 5316.07kB/sec 664.51 iops\n49.53%, 12717 read, 1281 written, 10173.34kB/sec 1271.67 iops\n49.53%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n49.84%, 3107 read, 311 written, 2484.65kB/sec 310.58 iops\n50.81%, 9737 read, 961 written, 7789.56kB/sec 973.69 iops\n50.81%, 4 read, 1 written, 3.20kB/sec 0.40 iops\n51.69%, 8811 read, 934 written, 7047.89kB/sec 880.99 iops\n51.82%, 1281 read, 129 written, 1024.53kB/sec 128.07 iops\n52.89%, 10674 read, 1034 written, 8539.07kB/sec 1067.38 iops\n52.89%, 14 read, 3 written, 11.20kB/sec 1.40 iops\n53.41%, 5235 read, 532 written, 4187.99kB/sec 523.50 iops\n54.08%, 6711 read, 679 written, 5368.75kB/sec 671.09 iops\n54.94%, 8535 read, 878 written, 6827.91kB/sec 853.49 iops\n54.94%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n55.65%, 7086 read, 682 written, 5668.57kB/sec 708.57 iops\n57.23%, 15799 read, 1561 written, 12639.04kB/sec 1579.88 iops\n57.23%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n57.23%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n57.83%, 6043 read, 615 written, 4833.90kB/sec 604.24 iops\n59.43%, 16034 read, 1657 written, 12827.16kB/sec 1603.39 iops\n59.43%, 5 read, 0 written, 4.00kB/sec 0.50 iops\n59.44%, 8 read, 1 written, 6.40kB/sec 0.80 iops\n60.81%, 13705 read, 1397 written, 10963.65kB/sec 1370.46 iops\n60.81%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n60.84%, 377 read, 30 written, 301.55kB/sec 37.69 iops\n61.58%, 7402 read, 760 written, 5921.57kB/sec 740.20 iops\n62.26%, 6764 read, 679 written, 5411.12kB/sec 676.39 iops\n63.18%, 9188 read, 870 written, 7346.94kB/sec 918.37 iops\n63.77%, 5951 read, 554 written, 4760.63kB/sec 595.08 iops\n63.77%, 5 read, 0 written, 4.00kB/sec 0.50 iops\n64.71%, 9386 read, 973 written, 7508.68kB/sec 938.59 iops\n64.71%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n65.87%, 11559 read, 1142 written, 9246.80kB/sec 1155.85 iops\n65.91%, 406 read, 41 written, 324.79kB/sec 40.60 iops\n67.53%, 16184 read, 1634 written, 12929.73kB/sec 1616.22 iops\n67.53%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n67.53%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n68.66%, 11289 read, 1124 written, 9031.13kB/sec 1128.89 iops\n68.66%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n70.12%, 14665 read, 1482 written, 11731.93kB/sec 1466.49 iops\n70.13%, 6 read, 1 written, 4.80kB/sec 0.60 iops\n70.17%, 489 read, 53 written, 
391.20kB/sec 48.90 iops\n70.61%, 4311 read, 438 written, 3448.77kB/sec 431.10 iops\n72.32%, 17185 read, 1741 written, 13747.94kB/sec 1718.49 iops\n72.32%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n72.32%, 1 read, 0 written, 0.80kB/sec 0.10 iops\n72.97%, 6433 read, 653 written, 5141.42kB/sec 642.68 iops\n73.55%, 5830 read, 584 written, 4663.93kB/sec 582.99 iops\n74.72%, 11705 read, 1188 written, 9355.64kB/sec 1169.45 iops\n74.72%, 1 read, 0 written, 0.80kB/sec 0.10 iops\n75.18%, 4561 read, 388 written, 3643.86kB/sec 455.48 iops\n75.50%, 3230 read, 292 written, 2583.90kB/sec 322.99 iops\n76.88%, 13816 read, 1427 written, 11052.76kB/sec 1381.60 iops\n76.88%, 3 read, 1 written, 2.40kB/sec 0.30 iops\n77.86%, 9736 read, 1007 written, 7788.77kB/sec 973.60 iops\n78.00%, 1424 read, 138 written, 1139.19kB/sec 142.40 iops\n78.67%, 6737 read, 675 written, 5389.53kB/sec 673.69 iops\n79.50%, 8296 read, 816 written, 6636.43kB/sec 829.55 iops\n79.50%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n80.44%, 9361 read, 924 written, 7488.75kB/sec 936.09 iops\n80.44%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n81.66%, 12259 read, 1217 written, 9807.12kB/sec 1225.89 iops\n81.66%, 3 read, 1 written, 2.40kB/sec 0.30 iops\n82.72%, 10554 read, 1106 written, 8441.93kB/sec 1055.24 iops\n82.95%, 2329 read, 224 written, 1863.19kB/sec 232.90 iops\n82.99%, 408 read, 39 written, 326.40kB/sec 40.80 iops\n84.07%, 10723 read, 1090 written, 8578.37kB/sec 1072.30 iops\n84.07%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n85.10%, 10379 read, 1028 written, 8303.15kB/sec 1037.89 iops\n85.12%, 167 read, 14 written, 133.59kB/sec 16.70 iops\n86.31%, 11925 read, 1117 written, 9539.95kB/sec 1192.49 iops\n86.31%, 4 read, 1 written, 3.20kB/sec 0.40 iops\n86.66%, 3452 read, 352 written, 2761.42kB/sec 345.18 iops\n87.77%, 11113 read, 1088 written, 8890.26kB/sec 1111.28 iops\n87.77%, 3 read, 1 written, 2.40kB/sec 0.30 iops\n88.91%, 11434 read, 1081 written, 9147.12kB/sec 1143.39 iops\n88.91%, 5 read, 0 written, 4.00kB/sec 0.50 iops\n89.75%, 8341 read, 802 written, 6672.71kB/sec 834.09 iops\n90.23%, 4835 read, 491 written, 3867.94kB/sec 483.49 iops\n91.12%, 8910 read, 941 written, 7127.93kB/sec 890.99 iops\n91.12%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n91.71%, 5909 read, 567 written, 4726.76kB/sec 590.85 iops\n92.15%, 4316 read, 415 written, 3452.78kB/sec 431.60 iops\n93.44%, 12960 read, 1299 written, 10367.83kB/sec 1295.98 iops\n93.44%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n93.73%, 2857 read, 292 written, 2285.59kB/sec 285.70 iops\n95.35%, 16170 read, 1658 written, 12935.95kB/sec 1616.99 iops\n95.35%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n95.35%, 5 read, 0 written, 4.00kB/sec 0.50 iops\n96.65%, 13060 read, 1274 written, 10447.89kB/sec 1305.99 iops\n96.65%, 5 read, 0 written, 4.00kB/sec 0.50 iops\n97.46%, 8065 read, 853 written, 6451.07kB/sec 806.38 iops\n97.57%, 1158 read, 120 written, 926.40kB/sec 115.80 iops\n98.70%, 11221 read, 1113 written, 8975.70kB/sec 1121.96 iops\n98.70%, 42 read, 4 written, 33.60kB/sec 4.20 iops\n99.85%, 11480 read, 1170 written, 9180.01kB/sec 1147.50 iops\n99.85%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Tue, 19 Apr 2011 11:15:30 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql random io test with 2 SSD Kingston V+100 500GB in\n\t(software) Raid1" }, { "msg_contents": "Sorry, it's not 2x512GB in Raid1 but 4x256GB in raid10\n\n-- \nLaurent \"ker2x\" 
Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Tue, 19 Apr 2011 11:28:45 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100 500GB in\n\t(software) Raid1" }, { "msg_contents": "On 04/19/2011 05:15 AM, Laurent Laborde wrote:\n> 2 kingston V+100 500GB\n> \n\nThanks for the performance report. The V+100 is based on a Toshiba \nT6UG1XBG controller, and it doesn't have any durable cache from either a \nbattery or capacitor. As such, putting a database on that drive is very \nrisky. You can expect the database to be corrupted during an unusual \npower outage event. See http://wiki.postgresql.org/wiki/Reliable_Writes \nfor more information.\n\nAt this point most people considering one of Kingston's drives for a \ndatabase would be better off getting an Intel 320 series drive, which is \naround the same price but doesn't have this issue.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 19 Apr 2011 08:07:41 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "On Tue, Apr 19, 2011 at 2:07 PM, Greg Smith <[email protected]> wrote:\n> On 04/19/2011 05:15 AM, Laurent Laborde wrote:\n>>\n>> 2 kingston V+100 500GB\n\n4x250GB in Raid10 (see my 2nd post)\n\n> Thanks for the performance report.  The V+100 is based on a Toshiba T6UG1XBG\n> controller, and it doesn't have any durable cache from either a battery or\n> capacitor.  As such, putting a database on that drive is very risky.  You\n> can expect the database to be corrupted during an unusual power outage\n> event.  
See http://wiki.postgresql.org/wiki/Reliable_Writes for more\n> information.\n>\n> At this point most people considering one of Kingston's drives for a\n> database would be better off getting an Intel 320 series drive, which is\n> around the same price but doesn't have this issue.\n\nIf we use them (unlikely), recovery in case of power outage isn't a\nproblem, as we will use it on slave database (using Slony-I) that can\nbe created/destroyed at will.\nAnd, anyway, our slave have fsync=off so the battery won't change\nanything in case of power outage :)\n\ni am currently testing on a single V+100 250GB (without raid).\nReport will follow soon :)\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Tue, 19 Apr 2011 14:36:42 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "1 SSD Kingston V+100 250GB, no raid.\n\n/home/pgiosim-0.5/pgiosim -w1 -a1 -v -b 1000000 /home/ssd/big1\nWrite Mode: 1%\nStallcheck at 1.000000\nVerbose\nUnknown units of blocks\nArg: 1\nRead 1000000 blocks\nAdded /home/ssd/big1\n3.57%, 35720 read, 365 written, 28567.73kB/sec 3570.97 iops\n6.14%, 25684 read, 276 written, 20519.77kB/sec 2564.97 iops\n8.17%, 20270 read, 219 written, 16215.94kB/sec 2026.99 iops\n9.66%, 14937 read, 131 written, 11945.84kB/sec 1493.23 iops\n12.91%, 32494 read, 327 written, 25995.08kB/sec 3249.38 iops\n14.06%, 11508 read, 118 written, 9206.33kB/sec 1150.79 iops\n16.09%, 20292 read, 187 written, 16233.55kB/sec 2029.19 iops\n17.57%, 14817 read, 141 written, 11853.49kB/sec 1481.69 iops\n19.62%, 20515 read, 201 written, 16411.94kB/sec 2051.49 iops\n21.90%, 22794 read, 214 written, 18222.39kB/sec 2277.80 iops\n23.92%, 20207 read, 197 written, 16160.23kB/sec 2020.03 iops\n26.11%, 21812 read, 213 written, 17427.32kB/sec 2178.42 iops\n28.29%, 21852 read, 213 written, 17475.40kB/sec 2184.43 iops\n30.73%, 24416 read, 234 written, 19507.42kB/sec 2438.43 iops\n32.46%, 17298 read, 183 written, 13833.15kB/sec 1729.14 iops\n33.25%, 7863 read, 87 written, 6290.35kB/sec 786.29 iops\n35.67%, 24229 read, 213 written, 19383.12kB/sec 2422.89 iops\n37.71%, 20397 read, 208 written, 16317.50kB/sec 2039.69 iops\n39.61%, 19022 read, 200 written, 15217.51kB/sec 1902.19 iops\n41.63%, 20190 read, 202 written, 16151.85kB/sec 2018.98 iops\n44.00%, 23651 read, 266 written, 18913.60kB/sec 2364.20 iops\n45.30%, 13066 read, 112 written, 10452.69kB/sec 1306.59 iops\n47.37%, 20697 read, 218 written, 16557.55kB/sec 2069.69 iops\n49.75%, 23726 read, 217 written, 18980.50kB/sec 2372.56 iops\n51.55%, 18087 read, 170 written, 14469.56kB/sec 1808.69 iops\n53.47%, 19194 read, 193 written, 15355.08kB/sec 1919.39 iops\n55.30%, 18250 read, 205 written, 14599.93kB/sec 1824.99 iops\n57.00%, 16999 read, 160 written, 13599.09kB/sec 1699.89 iops\n58.79%, 17912 read, 180 written, 14329.56kB/sec 1791.19 iops\n61.76%, 29694 read, 318 written, 23753.91kB/sec 2969.24 iops\n62.96%, 12039 read, 113 written, 9631.16kB/sec 1203.90 iops\n65.67%, 27048 read, 273 written, 21609.48kB/sec 2701.18 iops\n67.00%, 13305 read, 130 written, 10639.63kB/sec 1329.95 iops\n69.22%, 22229 read, 227 written, 17783.07kB/sec 2222.88 iops\n71.13%, 19062 read, 170 written, 15249.52kB/sec 1906.19 iops\n72.06%, 9299 read, 97 written, 7437.79kB/sec 929.72 iops\n74.31%, 22492 read, 202 written, 17986.09kB/sec 2248.26 iops\n76.66%, 23493 read, 219 written, 18768.77kB/sec 2346.10 
iops\n78.75%, 20979 read, 209 written, 16775.76kB/sec 2096.97 iops\n80.68%, 19305 read, 194 written, 15428.97kB/sec 1928.62 iops\n83.05%, 23670 read, 222 written, 18927.19kB/sec 2365.90 iops\n84.59%, 15391 read, 169 written, 12299.46kB/sec 1537.43 iops\n86.32%, 17246 read, 166 written, 13796.73kB/sec 1724.59 iops\n88.33%, 20133 read, 201 written, 16106.22kB/sec 2013.28 iops\n89.98%, 16561 read, 172 written, 13248.30kB/sec 1656.04 iops\n92.81%, 28298 read, 252 written, 22627.87kB/sec 2828.48 iops\n94.85%, 20388 read, 198 written, 16308.57kB/sec 2038.57 iops\n96.75%, 18974 read, 178 written, 15179.09kB/sec 1897.39 iops\n98.45%, 16956 read, 190 written, 13564.73kB/sec 1695.59 iops\n\n-------------\n\n/home/pgiosim-0.5/pgiosim -w10 -a1 -v -b 1000000 /home/ssd/big1\nWrite Mode: 10%\nStallcheck at 1.000000\nVerbose\nUnknown units of blocks\nArg: 1\nRead 1000000 blocks\nAdded /home/ssd/big1\n2.01%, 20122 read, 1978 written, 16097.57kB/sec 2012.20 iops\n2.01%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n2.01%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n2.01%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n3.82%, 18036 read, 1779 written, 14428.73kB/sec 1803.59 iops\n4.03%, 2175 read, 209 written, 1739.98kB/sec 217.50 iops\n4.03%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n4.03%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n4.04%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n4.04%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n5.62%, 15804 read, 1614 written, 12643.13kB/sec 1580.39 iops\n5.62%, 3 read, 2 written, 2.40kB/sec 0.30 iops\n5.62%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n5.62%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n5.62%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n6.86%, 12414 read, 1264 written, 9931.17kB/sec 1241.40 iops\n6.86%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n6.86%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n7.18%, 3213 read, 343 written, 2570.39kB/sec 321.30 iops\n8.34%, 11563 read, 1215 written, 9250.36kB/sec 1156.30 iops\n8.34%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n8.34%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n8.34%, 3 read, 1 written, 2.40kB/sec 0.30 iops\n8.64%, 3055 read, 276 written, 2443.98kB/sec 305.50 iops\n10.57%, 19227 read, 1947 written, 15381.53kB/sec 1922.69 iops\n10.57%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n10.57%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n10.57%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n10.57%, 2 read, 1 written, 1.60kB/sec 0.20 iops\n10.57%, 7 read, 1 written, 5.60kB/sec 0.70 iops\n11.32%, 7488 read, 752 written, 5990.38kB/sec 748.80 iops\n11.32%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n13.32%, 20043 read, 1968 written, 16034.36kB/sec 2004.29 iops\n13.32%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n13.32%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n13.32%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n13.32%, 4 read, 1 written, 3.20kB/sec 0.40 iops\n13.32%, 5 read, 0 written, 4.00kB/sec 0.50 iops\n15.12%, 17970 read, 1878 written, 14375.96kB/sec 1796.99 iops\n15.12%, 4 read, 3 written, 3.20kB/sec 0.40 iops\n15.12%, 5 read, 1 written, 4.00kB/sec 0.50 iops\n^CCTRL-C Interrupt - stopping\n!%*@#\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Tue, 19 Apr 2011 14:49:32 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Laurent Laborde\n> Sent: Tuesday, April 
19, 2011 8:37 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] postgresql random io test with 2 SSD Kingston\n> V+100 500GB in (software) Raid1\n> \n> If we use them (unlikely), recovery in case of power outage isn't a\n> problem, as we will use it on slave database (using Slony-I) that can\n> be created/destroyed at will.\n> And, anyway, our slave have fsync=off so the battery won't change\n> anything in case of power outage :)\n\nAre these on the same UPS? If so, you have a failure case that could cause you to lose everything.\n\nBrad.\n\n", "msg_date": "Tue, 19 Apr 2011 13:21:04 +0000", "msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "On Tue, Apr 19, 2011 at 3:21 PM, Nicholson, Brad (Toronto, ON, CA)\n<[email protected]> wrote:\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-performance-\n>> [email protected]] On Behalf Of Laurent Laborde\n>> Sent: Tuesday, April 19, 2011 8:37 AM\n>> To: [email protected]\n>> Subject: Re: [PERFORM] postgresql random io test with 2 SSD Kingston\n>> V+100 500GB in (software) Raid1\n>>\n>> If we use them (unlikely), recovery in case of power outage isn't a\n>> problem, as we will use it on slave database (using Slony-I) that can\n>> be created/destroyed at will.\n>> And, anyway, our slave have fsync=off so the battery won't change\n>> anything in case of power outage :)\n>\n> Are these on the same UPS?  If so, you have a failure case that could cause you to lose everything.\n\nOh, not at all.\nWe're doing balancing/switch/failover between 2 different datacenter.\nWe can maintain (somewhat degraded) operation if one of the datacenter fail :)\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Tue, 19 Apr 2011 15:44:09 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "\nOn Apr 19, 2011, at 8:49 AM, Laurent Laborde wrote:\n\n> Write Mode: 10%\n> Stallcheck at 1.000000\n> Verbose\n> Unknown units of blocks\n> Arg: 1\n> Read 1000000 blocks\n> Added /home/ssd/big1\n> 2.01%, 20122 read, 1978 written, 16097.57kB/sec 2012.20 iops\n> 2.01%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n> 2.01%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n> 2.01%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n> 3.82%, 18036 read, 1779 written, 14428.73kB/sec 1803.59 iops\n> 4.03%, 2175 read, 209 written, 1739.98kB/sec 217.50 iops\n> 4.03%, 3 read, 0 written, 2.40kB/sec 0.30 iops\n> 4.03%, 2 read, 0 written, 1.60kB/sec 0.20 iops\n> 4.04%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n> 4.04%, 4 read, 0 written, 3.20kB/sec 0.40 iops\n\nThe performance here looks like the old jmicron based ssds that had \nabsolutely abysmal performance - the intel x25s do not suffer like \nthis. The x25's however suffer from the power durability Greg has \nmentioned. (And they will eventually need to be security erase'd to \nrestore performance - you'll start getting major write stalls). Looks \nlike you were on the cusp of stalling here.\n\nbtw, yay pgiosim! 
:)\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Wed, 20 Apr 2011 08:39:07 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "On Wed, Apr 20, 2011 at 2:39 PM, Jeff <[email protected]> wrote:\n>\n> The performance here looks like the old jmicron based ssds that had\n> absolutely abysmal performance - the intel x25s do not suffer like this. The\n> x25's however suffer from the power durability Greg has mentioned.  (And\n> they will eventually need to be security erase'd to restore performance -\n> you'll start getting major write stalls). Looks like you were on the cusp of\n> stalling here.\n\nA review of the V+100 on the excellent anandtech :\nhttp://www.anandtech.com/show/4010/kingston-ssdnow-v-plus-100-review\n\n> btw, yay pgiosim! :)\n\nyay \\o/\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Wed, 20 Apr 2011 16:01:22 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "On 04/20/2011 09:01 AM, Laurent Laborde wrote:\n\n> A review of the V+100 on the excellent anandtech :\n> http://www.anandtech.com/show/4010/kingston-ssdnow-v-plus-100-review\n\nThat's horrifying. 4.9MB/s random writes? 19.7MB/s random reads? That's \nat least an order of magnitude lower than other SSDs of that generation. \nI can't imagine that would be very good for database usage patterns by \ncomparison. Especially with that aggressive garbage collection.\n\nI mean... an old Indilinx OCZ Vertex has better performance than that.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Wed, 20 Apr 2011 10:40:13 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "On Wed, Apr 20, 2011 at 5:40 PM, Shaun Thomas <[email protected]> wrote:\n> On 04/20/2011 09:01 AM, Laurent Laborde wrote:\n>\n>> A review of the V+100 on the excellent anandtech :\n>> http://www.anandtech.com/show/4010/kingston-ssdnow-v-plus-100-review\n>\n> That's horrifying. 4.9MB/s random writes? 19.7MB/s random reads? That's at\n> least an order of magnitude lower than other SSDs of that generation. I\n> can't imagine that would be very good for database usage patterns by\n> comparison. Especially with that aggressive garbage collection.\n>\n> I mean... 
an old Indilinx OCZ Vertex has better performance than that\n\nWe just orderer 2 Corsair C300 240GB to compare performance and see if\nthe difference is as huge as claimed on anandtech's benchmark :)\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Thu, 21 Apr 2011 10:02:07 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" }, { "msg_contents": "Am 19.04.2011 11:15, schrieb Laurent Laborde:\n> Soft RAID1 (md)\n> ext3\n\nWe have experimented a bit with Postgres and ext3 (with and without Linux \nsoftware RAID1) and have found that since somewhere after 2.6.18, it has been \nprohibitively slow and causing high latencies during buffer flushes. You will \nprobably see a significant improvement with ext4 (mkfs.ext4, not just remount as \next4, which is also possible).\n\nAlso, you need to make sure that your blocks are properly aligned with SSDs, \nthat might explain low random I/O performance. See \nhttp://www.nuclex.org/blog/personal/80-aligning-an-ssd-on-linux and \nhttp://www.ocztechnologyforum.com/forum/showthread.php?54379-Linux-Tips-tweaks-and-alignment&p=373226&viewfull=1#post373226 \nfor example.\n\nRegards,\n Marinos\n", "msg_date": "Fri, 22 Apr 2011 23:43:58 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql random io test with 2 SSD Kingston V+100\n\t500GB in (software) Raid1" } ]
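On the alignment point: one quick way to check an existing install (the device name is just an example) is to list the partition start sectors; a start that is a multiple of 2048 512-byte sectors (1 MiB) is aligned for the usual SSD page and erase-block sizes, while the old DOS default of 63 is not:

    fdisk -lu /dev/sda    # prints partition boundaries in 512-byte sectors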
[ { "msg_contents": "Hi,\n\nI'm using PostgreSQL 8.4 (and also 8.3).\n\nA partial index like this:\nCREATE INDEX table2_field1_idx\n ON table2 (field1)\n WHERE NOT field1 ISNULL;\n\nWill not be used when select one record from 100K records:\n\nexplain select * from table2 where field1 = 256988\n'Seq Scan on table2 (cost=0.00..1693.01 rows=1 width=4)'\n' Filter: (field1 = 256988)'\n\nBut it WILL be used like this:\n\nexplain select * from table2 where field1 = 256988 and not field1 isnull\n'Index Scan using table2_field1_idx on table2 (cost=0.00..8.28 rows=1\nwidth=4)'\n' Index Cond: (field1 = 256988)'\n\n\nBut, when i change the index from\"NOT field1 ISNULL \" to \"field1 NOTNULL\",\nthen the index WILL be used in both queries:\n\nexplain select * from table1 where field1 = 256988\n'Index Scan using table1_field1_idx on table1 (cost=0.00..8.28 rows=1\nwidth=4)'\n' Index Cond: (field1 = 256988)'\n\n'Index Scan using table1_field1_idx on table1 (cost=0.00..8.28 rows=1\nwidth=4)'\n' Index Cond: (field1 = 256988)'\n' Filter: (NOT (field1 IS NULL))'\n\n\nAny ideas why this might be?\n\n\nCheers,\n\nWBL\n\nCode below:\n\n--drop table table1;\ncreate table table1(field1 integer);\nCREATE INDEX table1_field1_idx\n ON table1 (field1)\n WHERE field1 NOTNULL;\ninsert into table1 values(null);\ninsert into table1 select generate_series(1,100000);\n\nvacuum analyze table1;\n\nexplain select * from table1 where field1 = 256988\nexplain select * from table1 where field1 = 256988 and not field1 isnull\n\n\n--drop table table2;\ncreate table table2(field1 integer);\nCREATE INDEX table2_field1_idx\n ON table2 (field1)\n WHERE NOT field1 ISNULL;\ninsert into table2 values(null);\ninsert into table2 select generate_series(1,100000);\n\nvacuum analyze table2;\n\nexplain select * from table2 where field1 = 256988\nexplain select * from table2 where field1 = 256988 and not field1 isnull\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nHi,I'm using PostgreSQL 8.4 (and also 8.3).A partial index like this:CREATE INDEX table2_field1_idx  ON table2 (field1)\n WHERE NOT field1 ISNULL;Will not be used when select one record from 100K records:explain select * from table2 where field1 = 256988'Seq Scan on table2  (cost=0.00..1693.01 rows=1 width=4)'\n'  Filter: (field1 = 256988)'But it WILL be used like this:explain select * from table2 where field1 = 256988 and not field1 isnull'Index Scan using table2_field1_idx on table2  (cost=0.00..8.28 rows=1 width=4)'\n'  Index Cond: (field1 = 256988)'But, when i change the index from\"NOT field1 ISNULL \" to \"field1 NOTNULL\", then the index WILL be used in both queries:\nexplain select * from table1 where field1 = 256988'Index Scan using table1_field1_idx on table1  (cost=0.00..8.28 rows=1 width=4)''  Index Cond: (field1 = 256988)'\n'Index Scan using table1_field1_idx on table1  (cost=0.00..8.28 rows=1 width=4)''  Index Cond: (field1 = 256988)''  Filter: (NOT (field1 IS NULL))'\nAny ideas why this might be?Cheers,WBLCode below:--drop table table1;\ncreate table table1(field1 integer);CREATE INDEX table1_field1_idx  ON table1 (field1)  WHERE field1 NOTNULL;insert into table1 values(null);insert into table1 select generate_series(1,100000);\nvacuum analyze table1;explain select * from table1 where field1 = 256988explain select * from table1 where field1 = 256988 and not field1 isnull\n--drop table table2;create table table2(field1 integer);CREATE INDEX table2_field1_idx  ON table2 
(field1)  WHERE NOT field1 ISNULL;insert into table2 values(null);\ninsert into table2 select generate_series(1,100000);vacuum analyze table2;explain select * from table2 where field1 = 256988explain select * from table2 where field1 = 256988 and not field1 isnull\n-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw", "msg_date": "Wed, 20 Apr 2011 10:46:59 +0200", "msg_from": "Willy-Bas Loos <[email protected]>", "msg_from_op": true, "msg_subject": "not using partial index" }, { "msg_contents": "Willy-Bas Loos <[email protected]> writes:\n> [ NOT field1 ISNULL is not seen as equivalent to field1 IS NOT NULL ]\n\n> Any ideas why this might be?\n\nThe planner does not spend an infinite number of cycles on trying to\nmake different expressions look alike.\n\nAs it happens, 9.1 does know this equivalence, as a byproduct of\nhttp://git.postgresql.org/gitweb?p=postgresql.git&a=commitdiff&h=220e45bf325b061b8dbd7451f87cedc07da61706\nBut I don't consider it a bug that older versions don't do it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Apr 2011 09:25:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not using partial index " } ]
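Given Tom's explanation, the practical fix on 8.3/8.4 is to spell the index predicate in the form the planner already recognizes -- field1 IS NOT NULL (which the parser treats the same as NOTNULL) -- instead of NOT field1 ISNULL. Using the test tables from the example above:

    DROP INDEX table2_field1_idx;
    CREATE INDEX table2_field1_idx ON table2 (field1) WHERE field1 IS NOT NULL;
    ANALYZE table2;

    EXPLAIN SELECT * FROM table2 WHERE field1 = 256988;
    -- now matches the table1 behaviour shown earlier: an index scan on the partial index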
[ { "msg_contents": "Hi,\nwe are facing a performance issue on Postgres 8.4, the CPU reaches 100% \nwith less than 50 simultaneous users.\n\nWe were thinking to migrate the HR system from Oracle to Postgres but now \nthat we have those big performance problems on relatively small \napplications, we are questioning this choice.\n\nThe machine configuration, dedicated to Postgres is as follow:\nRAM: 3GB\nCPU: Intel Xeon CPU 3.2 Ghz\n\nCould you please provide us some support regarding this issue?\n\nRegards\n\n\n\nThis e-mail and any attachment are confidential and intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, please telephone or email the sender and delete this message and any attachment from your system. Unauthorized publication, use, dissemination, forwarding, printing or copying of this e-mail and its associated attachments is strictly prohibited.\n\nhttp://disclaimer.carrefour.com/\n\nLet's respect the environment together. Only print this message if necessary", "msg_date": "Wed, 20 Apr 2011 17:55:25 +0300", "msg_from": "Allen Sooredoo <[email protected]>", "msg_from_op": true, "msg_subject": "%100 CPU on Windows Server 2003" }, { "msg_contents": "Hello\n\nplease, can you attach a value of shadow_buffers and work_mem from config\nfile?\n\nWindows are very sensitive on memory setting. There must be lot of memory\njust for MS Windows.\n\nRegards\n\nPavel Stehule\n\n2011/4/20 Allen Sooredoo <[email protected]>\n\n> Hi,\n> we are facing a performance issue on Postgres 8.4, the CPU reaches 100%\n> with less than 50 simultaneous users.\n>\n> We were thinking to migrate the HR system from Oracle to Postgres but now\n> that we have those big performance problems on relatively small\n> applications, we are questioning this choice.\n>\n> The machine configuration, dedicated to Postgres is as follow:\n> RAM: 3GB\n> CPU: Intel Xeon CPU 3.2 Ghz\n>\n> Could you please provide us some support regarding this issue?\n>\n> Regards\n>\n>\n>\n> *This e-mail and any attachment are confidential and intended solely for\n> the use of the individual to whom it is addressed. If you are not the\n> intended recipient, please telephone or email the sender and delete this\n> message and any attachment from your system. Unauthorized publication, use,\n> dissemination, forwarding, printing or copying of this e-mail and its\n> associated attachments is strictly prohibited.\n> http://disclaimer.carrefour.com\n> Let's respect the environment together. Only print this message if\n> necessary *\n>\n\nHelloplease, can you attach a value of shadow_buffers and work_mem from config file?Windows are very sensitive on memory setting. There must be lot of memory just for MS Windows.RegardsPavel Stehule\n2011/4/20 Allen Sooredoo <[email protected]>\nHi,\nwe are facing a performance issue on\nPostgres 8.4, the CPU reaches 100% with less than 50 simultaneous users.\n\nWe were thinking to migrate the HR system\nfrom Oracle to Postgres but now that we have those big performance problems\non relatively small applications, we are questioning this choice.\n\nThe machine configuration, dedicated\nto Postgres is as follow:\nRAM: 3GB\nCPU: Intel Xeon CPU 3.2 Ghz\n\nCould you please provide us some support\nregarding this issue?\n\nRegards\n\n\n\nThis e-mail and any attachment are confidential and intended solely \n for the use of the individual to whom it is addressed. 
If you are not the \n intended recipient, please telephone or email the sender and delete this message \n and any attachment from your system. Unauthorized publication, use, dissemination, \n forwarding, printing or copying of this e-mail and its associated attachments \n is strictly prohibited.\n\n http://disclaimer.carrefour.com \n\nLet's respect the environment together. Only print this message if necessary", "msg_date": "Thu, 21 Apr 2011 08:01:54 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: %100 CPU on Windows Server 2003" }, { "msg_contents": "On Wed, Apr 20, 2011 at 10:55 AM, Allen Sooredoo <\[email protected]> wrote:\n\n> Hi,\n> we are facing a performance issue on Postgres 8.4, the CPU reaches 100%\n> with less than 50 simultaneous users.\n>\n> We were thinking to migrate the HR system from Oracle to Postgres but now\n> that we have those big performance problems on relatively small\n> applications, we are questioning this choice.\n>\n> The machine configuration, dedicated to Postgres is as follow:\n> RAM: 3GB\n> CPU: Intel Xeon CPU 3.2 Ghz\n>\n> Could you please provide us some support regarding this issue?\n>\n\nProbably, but you'll need to provide more details. It is likely that you\nhave some queries that need to be tuned, and possibly some settings that\nneed to be changed. But without some information about what is using up all\nthe CPU time, it's hard to speculate as to what the problem might be.\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nYou might also want to read this article:\n\nhttp://www.linux.com/learn/tutorials/394523-configuring-postgresql-for-pretty-good-performance\n\nI've also found that log_min_duration_statement is pretty useful for\nfiguring out where the CPU time is going - it helps you find your\nlong-running queries.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Apr 20, 2011 at 10:55 AM, Allen Sooredoo <[email protected]> wrote:\nHi,\nwe are facing a performance issue on\nPostgres 8.4, the CPU reaches 100% with less than 50 simultaneous users.\n\nWe were thinking to migrate the HR system\nfrom Oracle to Postgres but now that we have those big performance problems\non relatively small applications, we are questioning this choice.\n\nThe machine configuration, dedicated\nto Postgres is as follow:\nRAM: 3GB\nCPU: Intel Xeon CPU 3.2 Ghz\n\nCould you please provide us some support\nregarding this issue?\nProbably, but you'll need to provide more details.  It is likely that you have some queries that need to be tuned, and possibly some settings that need to be changed.  But without some information about what is using up all the CPU time, it's hard to speculate as to what the problem might be.\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problemsYou might also want to read this article:http://www.linux.com/learn/tutorials/394523-configuring-postgresql-for-pretty-good-performance\nI've also found that log_min_duration_statement is pretty useful for figuring out where the CPU time is going - it helps you find your long-running queries.-- Robert HaasEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 25 Apr 2011 19:44:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: %100 CPU on Windows Server 2003" } ]
[ { "msg_contents": "All,\n\nApparently our CE is unable to deal with even moderately complex\nexpressions. For example, given a CE check constraint of:\n\n \"chk_start\" CHECK (start >= '2011-01-31 00:00:00-05'::timestamp with\ntime zone AND start < '2011-03-01 00:00:00-05'::timestamp with time zone)\n\nPostgreSQL CE is unable to figure out not to scan this partition for a\nquery which contains the following filter condition:\n\n WHERE start >= '2010-11-01'::timestamptz\n AND start < ('2010-11-30'::timestamptz + '1\nday'::interval)::timestamptz\n\nEven though it can figure out this one:\n\n WHERE call_start >= '2010-11-01'::timestamptz\n AND call_start < '2010-12-01'::timestamptz\n\nI understand why now() is a problem for CE, but I'd expect that it could\nat least handle a simple expression with immutable outputs.\n\nWe need a new form of partitioning ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Wed, 20 Apr 2011 16:49:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Constraint exclusion can't process simple constant expressions?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> I understand why now() is a problem for CE, but I'd expect that it could\n> at least handle a simple expression with immutable outputs.\n\ntimestamptz + interval is not immutable --- in fact, the particular\nexample you give (ts + '1 day') is certainly dependent on timezone\nsetting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Apr 2011 20:48:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion can't process simple constant expressions? " }, { "msg_contents": "Tom,\n\n> timestamptz + interval is not immutable --- in fact, the particular\n> example you give (ts + '1 day') is certainly dependent on timezone\n> setting.\n\nWhy not? Given that the time zone will be the same for both the\ntimestamptz and the interval, how would the result not be immutable?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Wed, 20 Apr 2011 18:58:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint exclusion can't process simple constant\n expressions?" }, { "msg_contents": "On 21 April 2011 11:58, Josh Berkus <[email protected]> wrote:\n>> timestamptz + interval is not immutable --- in fact, the particular\n>> example you give (ts + '1 day') is certainly dependent on timezone\n>> setting.\n>\n> Why not?  Given that the time zone will be the same for both the\n> timestamptz and the interval, how would the result not be immutable?\n>\n\n\"IMMUTABLE indicates that the function cannot modify the database and\nalways returns the same result when given the same argument values\"\n\nEmphasis on \"always\". If the result of the function, given the same\nargument values, can be different after a SET, then it doesn't qualify\nfor immutability. At least, that's my understanding.\n\nCheers,\nBJ\n", "msg_date": "Thu, 21 Apr 2011 12:05:29 +1000", "msg_from": "Brendan Jurd <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion can't process simple constant expressions?" }, { "msg_contents": "\n> Emphasis on \"always\". If the result of the function, given the same\n> argument values, can be different after a SET, then it doesn't qualify\n> for immutability. At least, that's my understanding.\n\nHmmmm. 
But within the context of the query plan itself, the results of\nthat expression are going to be constant. That is, for a given query\nexecution, it's always going to be the same comparison.\n\nSo this goes back to my original assertion that CE can't be fixed ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Wed, 20 Apr 2011 19:13:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint exclusion can't process simple constant\n expressions?" }, { "msg_contents": "On 21 April 2011 12:13, Josh Berkus <[email protected]> wrote:\n>> Emphasis on \"always\".  If the result of the function, given the same\n>> argument values, can be different after a SET, then it doesn't qualify\n>> for immutability.  At least, that's my understanding.\n>\n> Hmmmm.  But within the context of the query plan itself, the results of\n> that expression are going to be constant.  That is, for a given query\n> execution, it's always going to be the same comparison.\n>\n\nYou may be thinking of the STABLE volatility level. It requires that\nthe results of the function are the same for the same inputs, within\nthe same transaction.\n\n\"STABLE indicates that the function cannot modify the database, and\nthat within a single table scan it will consistently return the same\nresult for the same argument values, but that its result could change\nacross SQL statements. This is the appropriate selection for functions\nwhose results depend on database lookups, parameter variables (such as\nthe current time zone), etc.\"\n\nCheers,\nBJ\n", "msg_date": "Thu, 21 Apr 2011 12:17:11 +1000", "msg_from": "Brendan Jurd <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion can't process simple constant expressions?" }, { "msg_contents": "\n> You may be thinking of the STABLE volatility level. It requires that\n> the results of the function are the same for the same inputs, within\n> the same transaction.\n\nRight. But CE will only pay attention to immutable values, not stable\nones, AFAICT.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Wed, 20 Apr 2011 19:18:23 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint exclusion can't process simple constant\n expressions?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> timestamptz + interval is not immutable --- in fact, the particular\n>> example you give (ts + '1 day') is certainly dependent on timezone\n>> setting.\n\n> Why not? Given that the time zone will be the same for both the\n> timestamptz and the interval, how would the result not be immutable?\n\nThe reason it depends on the timezone is that the result varies if\n\"plus one day\" means crossing a DST boundary.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Apr 2011 22:44:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion can't process simple constant expressions? " }, { "msg_contents": "On Thu, Apr 21, 2011 at 4:05 AM, Brendan Jurd <[email protected]> wrote:\n>\n> \"IMMUTABLE indicates that the function cannot modify the database and\n> always returns the same result when given the same argument values\"\n>\n> Emphasis on \"always\".  If the result of the function, given the same\n> argument values, can be different after a SET, then it doesn't qualify\n> for immutability.  
At least, that's my understanding.\n\nThat's a ridiculous use of the word \"Immutable\"\n\nIn any CS class, the timezone would be an implicit input to the\nfunction. So it would be immutable in *that* sense (it also takes\ntimezone into consideration).\n\nPerhaps the optimizer should take contextual information that cannot\nchange inside a query as input too.\n", "msg_date": "Thu, 21 Apr 2011 09:30:36 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion can't process simple constant expressions?" }, { "msg_contents": "On Thu, Apr 21, 2011 at 9:30 AM, Claudio Freire <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 4:05 AM, Brendan Jurd <[email protected]> wrote:\n>>\n>> \"IMMUTABLE indicates that the function cannot modify the database and\n>> always returns the same result when given the same argument values\"\n>>\n>> Emphasis on \"always\".  If the result of the function, given the same\n>> argument values, can be different after a SET, then it doesn't qualify\n>> for immutability.  At least, that's my understanding.\n>\n> That's a ridiculous use of the word \"Immutable\"\n>\n> In any CS class, the timezone would be an implicit input to the\n> function. So it would be immutable in *that* sense (it also takes\n> timezone into consideration).\n>\n> Perhaps the optimizer should take contextual information that cannot\n> change inside a query as input too.\n>\n\nIn any case, the point is that the CE check (which is what CE cares\nabout) is indeed immutable in the PG sense.\n\nIf it is instantiated with a STABLE expression, it would still be\nequivalent to IMMUTABLE within the transaction - which is what CE\ncares about.\n\nAm I missing something?\n", "msg_date": "Thu, 21 Apr 2011 09:34:47 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion can't process simple constant expressions?" }, { "msg_contents": "Claudio,\n\n> Am I missing something?\n\nYes, prepared statements.\n\nThis whole issue arises because CE is implemented purely on the planner\nlevel. The executor can treat Immutable and Stable functions as the\nsame; the planner cannot, AFAIK.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Thu, 21 Apr 2011 10:36:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint exclusion can't process simple constant\n expressions?" } ]
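The planner's view of this is visible directly in the catalogs: the '+' operator between timestamptz and interval resolves to a function marked stable rather than immutable, which is exactly why constraint exclusion will not fold it into a constant at plan time. A quick way to check:

SELECT o.oprname, p.proname, p.provolatile
FROM pg_operator o
JOIN pg_proc p ON p.oid = o.oprcode
WHERE o.oprname = '+'
  AND o.oprleft  = 'timestamptz'::regtype
  AND o.oprright = 'interval'::regtype;
-- provolatile comes back 's' (stable): the result can change with the timezone
-- setting, so the planner refuses to reduce ts + '1 day' to a plan-time constant.

The practical workaround remains the one in the query that already prunes correctly: compute the boundary timestamp beforehand, in the application or a wrapper, and hand it to the query as a plain literal.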
[ { "msg_contents": "So sometime along yellow brick firmware road HP changed (and maybe your\nvendor did too) the output of what happens when the write cache is off due\nto failed batteries attached to the card/cache. (and no they don't always\nbeep with a self test in case someone happens to be walking near your cage,\nand yes you will be wondering why batteries attached to these things are not\nhot swappable.)\n\nSo yeah -- even if you were pretty sure some code had properly checked with\nsay nagios checks (or whatever) and were monitoring them... if someone\nhasn't replaced a dead battery in a while you should probably be wondering\nwhy. \n\nNow go forth and test/ your HA/DR plans, check you raid and UPS batteries,\ntest your monitoring (again). \n\n-M\n\n", "msg_date": "Wed, 20 Apr 2011 20:14:35 -0600", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": true, "msg_subject": "rant ? check the BBWC" } ]
[ { "msg_contents": "Is there anyone that could help me understand why all of a sudden with\nno noticeable change in data, no change in hardware, no change in OS,\nI'm seeing postmaster getting killed by oom_killer?\n\nThe dmesg shows that swap has not been touched free and total are the\nsame, so this system is not running out of total memory per say.\n\nI keep thinking it's something to do with lowmem vs highmem 32bit vs\n64 bit, but again no changes and I'm getting hit nightly on 2\ndifferent servers (running slon, so switched over and same thing, even\ndisabled memory over commit and still got nailed.\n\nIs there anyone familiar with this or could take a look at the dmesg\noutput (off list) and decipher it for me?\n\nthis is a Fedora 12 system, 2.6.32.23-170. I've been reading and\nappears this is yet another fedora bug, but so far I have not found\nany concrete evidence on how to fix it.\n\nFedora 12\n32gig memory, 8 proc\npostgres 8.4.4, slony 1.20\n5 gigs of swap (never hit it!)\n\nThanks\nTory\n", "msg_date": "Thu, 21 Apr 2011 01:28:45 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "oom_killer" }, { "msg_contents": "Funny concidence, I was just reading up a blog post on postgres an OOM killer.\n\nhttp://gentooexperimental.org/~patrick/weblog/archives/2011-04.html#e2011-04-20T21_58_37.txt\n\nHope this helps.\n\n2011/4/21 Tory M Blue <[email protected]>:\n> Is there anyone that could help me understand why all of a sudden with\n> no noticeable change in data, no change in hardware, no change in OS,\n> I'm seeing postmaster getting killed by oom_killer?\n>\n> The dmesg shows that swap has not been touched free and total are the\n> same, so this system is not running out of total memory per say.\n>\n> I keep thinking it's something to do with lowmem vs highmem 32bit vs\n> 64 bit, but again no changes and I'm getting hit nightly on 2\n> different servers (running slon, so switched over and same thing, even\n> disabled memory over commit and still got nailed.\n>\n> Is there anyone familiar with this or could take a look at the dmesg\n> output (off list) and decipher it for me?\n>\n> this is a Fedora 12 system, 2.6.32.23-170. I've been reading and\n> appears this is yet another fedora bug, but so far I have not found\n> any concrete evidence on how to fix it.\n>\n> Fedora 12\n> 32gig memory, 8 proc\n> postgres 8.4.4, slony 1.20\n> 5 gigs of swap (never hit it!)\n>\n> Thanks\n> Tory\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 21 Apr 2011 14:37:11 +0200", "msg_from": "yoshi watanabe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "* Tory M Blue ([email protected]) wrote:\n> Is there anyone that could help me understand why all of a sudden with\n> no noticeable change in data, no change in hardware, no change in OS,\n> I'm seeing postmaster getting killed by oom_killer?\n\nYou would really be best off just turning off the oom_killer.. 
Of\ncourse, you should probably also figure out what process is actually\nchewing through your memory to the point that the OOM killer is getting\nrun.\n\n> The dmesg shows that swap has not been touched free and total are the\n> same, so this system is not running out of total memory per say.\n\nThere's probably something else that's trying to grab all the memory and\nthen tries to use it and PG ends up getting nailed because the kernel\nover-attributes memory to it. You should be looking for that other\nprocess..\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 21 Apr 2011 08:48:51 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost <[email protected]> wrote:\n>\n> There's probably something else that's trying to grab all the memory and\n> then tries to use it and PG ends up getting nailed because the kernel\n> over-attributes memory to it.  You should be looking for that other\n> process..\n\nNot only that, you probably should set up your oom killer not to kill\npostmaster. Ever.\n", "msg_date": "Thu, 21 Apr 2011 14:53:35 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 2:53 PM, Claudio Freire <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost <[email protected]> wrote:\n>>\n>> There's probably something else that's trying to grab all the memory and\n>> then tries to use it and PG ends up getting nailed because the kernel\n>> over-attributes memory to it.  You should be looking for that other\n>> process..\n>\n> Not only that, you probably should set up your oom killer not to kill\n> postmaster. Ever.\n>\n\nHere: http://developer.postgresql.org/pgdocs/postgres/kernel-resources.html\n", "msg_date": "Thu, 21 Apr 2011 15:02:44 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue <[email protected]> wrote:\n> Is there anyone that could help me understand why all of a sudden with\n> no noticeable change in data, no change in hardware, no change in OS,\n> I'm seeing postmaster getting killed by oom_killer?\n>\n> The dmesg shows that swap has not been touched free and total are the\n> same, so this system is not running out of total memory per say.\n>\n> I keep thinking it's something to do with lowmem vs highmem 32bit vs\n> 64 bit, but again no changes and I'm getting hit nightly on 2\n> different servers (running slon, so switched over and same thing, even\n> disabled memory over commit and still got nailed.\n>\n> Is there anyone familiar with this or could take a look at the dmesg\n> output (off list) and decipher it for me?\n>\n> this is a Fedora 12 system, 2.6.32.23-170. I've been reading and\n> appears this is yet another fedora bug, but so far I have not found\n> any concrete evidence on how to fix it.\n>\n> Fedora 12\n> 32gig memory, 8 proc\n> postgres 8.4.4, slony 1.20\n> 5 gigs of swap (never hit it!)\n\ncurious: using 32/64 bit postgres? 
what are your postgresql.conf\nmemory settings?\n\nmerlin\n", "msg_date": "Thu, 21 Apr 2011 09:27:40 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 7:27 AM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue <[email protected]> wrote:\n\n>> Fedora 12\n>> 32gig memory, 8 proc\n>> postgres 8.4.4, slony 1.20\n>> 5 gigs of swap (never hit it!)\n>\n> curious: using 32/64 bit postgres? what are your postgresql.conf\n> memory settings?\n>\n> merlin\n>\n\n32bit\n32gb\nPAE kernel\n\n# - Checkpoints -\ncheckpoint_segments = 100\nmax_connections = 300\nshared_buffers = 2500MB # min 128kB or max_connections*16kB\nmax_prepared_transactions = 0\nwork_mem = 100MB\nmaintenance_work_mem = 128MB\nfsync = on\n\nthanks\nTory\n", "msg_date": "Thu, 21 Apr 2011 08:50:54 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 5:53 AM, Claudio Freire <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost <[email protected]> wrote:\n>>\n>> There's probably something else that's trying to grab all the memory and\n>> then tries to use it and PG ends up getting nailed because the kernel\n>> over-attributes memory to it.  You should be looking for that other\n>> process..\n>\n> Not only that, you probably should set up your oom killer not to kill\n> postmaster. Ever.\n\nYa did that last night setting it to a -17 ya.\n\nand to the other user stating I should disable oom_killer all together,\n\nYa of setting vm.overcommit to 2 and the ratio to 0 doesn't disable\nit, I don't know what else to do. out of memory is out of memory, but\nif swap is not being touched, I can't tell you what the heck this\nfedora team is doing/thinking\n\nTory\n", "msg_date": "Thu, 21 Apr 2011 08:54:38 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 5:50 PM, Tory M Blue <[email protected]> wrote:\n> # - Checkpoints -\n> checkpoint_segments = 100\n> max_connections = 300\n> shared_buffers = 2500MB       # min 128kB or max_connections*16kB\n> max_prepared_transactions = 0\n> work_mem = 100MB\n> maintenance_work_mem = 128MB\n> fsync = on\n\nThat's an unrealistic setting for a 32-bit system, which can only\naddress 3GB of memory per process.\n\nYou take away 2500MB for shared buffers, that leaves you only 500M for\ndata, some of which is code.\n\nThere's no way PG can operate with 100MB work_mem llike that.\n\nEither decrease shared_buffers, or get a 64-bit system.\n", "msg_date": "Thu, 21 Apr 2011 17:57:55 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 8:57 AM, Claudio Freire <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 5:50 PM, Tory M Blue <[email protected]> wrote:\n>> # - Checkpoints -\n>> checkpoint_segments = 100\n>> max_connections = 300\n>> shared_buffers = 2500MB       # min 128kB or max_connections*16kB\n>> max_prepared_transactions = 0\n>> work_mem = 100MB\n>> maintenance_work_mem = 128MB\n>> fsync = on\n>\n> That's an unrealistic setting for a 32-bit system, which can only\n> address 3GB of memory per process.\n>\n> You take away 2500MB for shared buffers, that leaves you only 500M for\n> data, some of which is code.\n>\n> There's no way PG can operate 
with 100MB work_mem llike that.\n>\n> Either decrease shared_buffers, or get a 64-bit system.\n\nWhile I don't mind the occasional slap of reality. This configuration\nhas run for 4+ years. It's possible that as many other components each\nfedora release is worse then the priors.\n\nThe Os has changed 170 days ago from fc6 to f12, but the postgres\nconfiguration has been the same, and umm no way it can operate, is so\nblack and white, especially when it has ran performed well with a\ndecent sized data set for over 4 years.\n\nThis is not the first time I've posted configs to this list over the\nlast few years and not once has anyone pointed this shortcoming out or\nsaid this will never work.\n\nWhile i'm still a newb when it comes to postgres performance tuning, I\ndon't generally see things in black and white. And again zero swap is\nbeing used but oom_killer is being called??\n\nBut if I remove\n", "msg_date": "Thu, 21 Apr 2011 09:15:51 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 6:15 PM, Tory M Blue <[email protected]> wrote:\n> While I don't mind the occasional slap of reality. This configuration\n> has run for 4+ years. It's possible that as many other components each\n> fedora release is worse then the priors.\n\nI'd say you've been lucky.\nYou must be running overnight report queries that didn't run before,\nand that require more sorting memory than usual. Or... I dunno... but\nsomething did change.\n", "msg_date": "Thu, 21 Apr 2011 18:37:39 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 11:15 AM, Tory M Blue <[email protected]> wrote:\n\n> While I don't mind the occasional slap of reality. This configuration\n> has run for 4+ years. It's possible that as many other components each\n> fedora release is worse then the priors.\n\nHow many of those 300 max connections do you generally use? If you've\nalways used a handful, or you've used more but they weren't memory\nhungry then you've been lucky.\n\nwork_mem is how much memory postgresql can allocate PER sort or hash\ntype operation. Each connection can do that more than once. A\ncomplex query can do it dozens of times. Can you see that going from\n20 to 200 connections and increasing complexity can result in memory\nusage going from a few megabytes to something like 200 connections *\n100Megabytes per sort * 3 sorts = 60Gigabytes.\n\n> The Os has changed 170 days ago from fc6 to f12, but the postgres\n> configuration has been the same, and umm no way it can operate, is so\n> black and white, especially when it has ran performed well with a\n> decent sized data set for over 4 years.\n\nJust because you've been walking around with a gun pointing at your\nhead without it going off does not mean walking around with a gun\npointing at your head is a good idea.\n", "msg_date": "Thu, 21 Apr 2011 15:04:00 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 1:04 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 11:15 AM, Tory M Blue <[email protected]> wrote:\n>\n>> While I don't mind the occasional slap of reality. This configuration\n>> has run for 4+ years. 
It's possible that as many other components each\n>> fedora release is worse then the priors.\n>\n> How many of those 300 max connections do you generally use?  If you've\n> always used a handful, or you've used more but they weren't memory\n> hungry then you've been lucky.\n\nmax of 45\n\n> work_mem is how much memory postgresql can allocate PER sort or hash\n> type operation.  Each connection can do that more than once.  A\n> complex query can do it dozens of times.  Can you see that going from\n> 20 to 200 connections and increasing complexity can result in memory\n> usage going from a few megabytes to something like 200 connections *\n> 100Megabytes per sort * 3 sorts = 60Gigabytes.\n>\n>> The Os has changed 170 days ago from fc6 to f12, but the postgres\n>> configuration has been the same, and umm no way it can operate, is so\n>> black and white, especially when it has ran performed well with a\n>> decent sized data set for over 4 years.\n>\n> Just because you've been walking around with a gun pointing at your\n> head without it going off does not mean walking around with a gun\n> pointing at your head is a good idea.\n\n\nYes that is what I gathered. It's good information and I'm always open\nto a smack if I learn something, which in this case I did.\n\nWe were already working on moving to 64bit, but again the oom_killer\npopping up without the system even attempting to use swap is what has\ncaused me some pause.\n\nThanks again\nTory\n", "msg_date": "Thu, 21 Apr 2011 13:08:00 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 3:04 PM, Scott Marlowe <[email protected]> wrote:\n> Just because you've been walking around with a gun pointing at your\n> head without it going off does not mean walking around with a gun\n> pointing at your head is a good idea.\n\n+1\n", "msg_date": "Thu, 21 Apr 2011 15:08:01 -0500", "msg_from": "J Sisson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 3:08 PM, Tory M Blue <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 1:04 PM, Scott Marlowe <[email protected]> wrote:\n>> On Thu, Apr 21, 2011 at 11:15 AM, Tory M Blue <[email protected]> wrote:\n>>\n>>> While I don't mind the occasional slap of reality. This configuration\n>>> has run for 4+ years. It's possible that as many other components each\n>>> fedora release is worse then the priors.\n>>\n>> How many of those 300 max connections do you generally use?  If you've\n>> always used a handful, or you've used more but they weren't memory\n>> hungry then you've been lucky.\n>\n> max of 45\n>\n>> work_mem is how much memory postgresql can allocate PER sort or hash\n>> type operation.  Each connection can do that more than once.  A\n>> complex query can do it dozens of times.  
Can you see that going from\n>> 20 to 200 connections and increasing complexity can result in memory\n>> usage going from a few megabytes to something like 200 connections *\n>> 100Megabytes per sort * 3 sorts = 60Gigabytes.\n>>\n>>> The Os has changed 170 days ago from fc6 to f12, but the postgres\n>>> configuration has been the same, and umm no way it can operate, is so\n>>> black and white, especially when it has ran performed well with a\n>>> decent sized data set for over 4 years.\n>>\n>> Just because you've been walking around with a gun pointing at your\n>> head without it going off does not mean walking around with a gun\n>> pointing at your head is a good idea.\n>\n>\n> Yes that is what I gathered. It's good information and I'm always open\n> to a smack if I learn something, which in this case I did.\n>\n> We were already working on moving to 64bit, but again the oom_killer\n> popping up without the system even attempting to use swap is what has\n> caused me some pause.\n\nI think this might have been the 32 bit address space biting you. But\nthat's just a guess. Or the OS was running out of something other\nthan just plain memory, like file handles or something. But I'm not\nthat familiar with OOM killer as it's one of the things I tend to shut\noff when building a pg server. I also turn off swap and zone_reclaim\nmode.\n", "msg_date": "Thu, 21 Apr 2011 15:11:51 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 3:08 PM, Tory M Blue <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 1:04 PM, Scott Marlowe <[email protected]> wrote:\n>> On Thu, Apr 21, 2011 at 11:15 AM, Tory M Blue <[email protected]> wrote:\n>>\n>>> While I don't mind the occasional slap of reality. This configuration\n>>> has run for 4+ years. It's possible that as many other components each\n>>> fedora release is worse then the priors.\n>>\n>> How many of those 300 max connections do you generally use?  If you've\n>> always used a handful, or you've used more but they weren't memory\n>> hungry then you've been lucky.\n>\n> max of 45\n>\n>> work_mem is how much memory postgresql can allocate PER sort or hash\n>> type operation.  Each connection can do that more than once.  A\n>> complex query can do it dozens of times.  Can you see that going from\n>> 20 to 200 connections and increasing complexity can result in memory\n>> usage going from a few megabytes to something like 200 connections *\n>> 100Megabytes per sort * 3 sorts = 60Gigabytes.\n>>\n>>> The Os has changed 170 days ago from fc6 to f12, but the postgres\n>>> configuration has been the same, and umm no way it can operate, is so\n>>> black and white, especially when it has ran performed well with a\n>>> decent sized data set for over 4 years.\n>>\n>> Just because you've been walking around with a gun pointing at your\n>> head without it going off does not mean walking around with a gun\n>> pointing at your head is a good idea.\n>\n>\n> Yes that is what I gathered. It's good information and I'm always open\n> to a smack if I learn something, which in this case I did.\n>\n> We were already working on moving to 64bit, but again the oom_killer\n> popping up without the system even attempting to use swap is what has\n> caused me some pause.\n\nYour shared_buffers is way way to high...you have dangerously\noversubscribed this system. I would consider dropping down to\n256-512mb. Yeah, you have PAE but that only helps so much. 
Your\nserver can only address so much memory and you allocated a huge chunk\nof it right off the bat.\n\nAlso, you might want to consider connection pooler to keep your\n#backends down, especially if you need to keep work_mem high.\n\nmerlin\n", "msg_date": "Thu, 21 Apr 2011 15:12:37 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "2011/4/21 Tory M Blue <[email protected]>:\n> On Thu, Apr 21, 2011 at 7:27 AM, Merlin Moncure <[email protected]> wrote:\n>> On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue <[email protected]> wrote:\n>\n>>> Fedora 12\n>>> 32gig memory, 8 proc\n>>> postgres 8.4.4, slony 1.20\n>>> 5 gigs of swap (never hit it!)\n>>\n>> curious: using 32/64 bit postgres? what are your postgresql.conf\n>> memory settings?\n>>\n>> merlin\n>>\n>\n> 32bit\n> 32gb\n> PAE kernel\n>\n> # - Checkpoints -\n> checkpoint_segments = 100\n> max_connections = 300\n> shared_buffers = 2500MB       # min 128kB or max_connections*16kB\n> max_prepared_transactions = 0\n> work_mem = 100MB\n> maintenance_work_mem = 128MB\n> fsync = on\n>\n\nI didn't understand what value you set for vm.overcommit parameters.\nCan you give it and the values in /proc/meminfo, the interesting one\nare \"Commit*\" ?\n\nIf you have strict rules(overcommit=2), then your current kernel\nconfig may need some love : the commit_limit is probably too low\nbecause you have a small swap partition. One way is to change :\nvm.overcommit_ratio.\nBy default it should be something like 21GB (0.5*32+5) of\ncommit_limit, and you probably want 32GB :)\n\nMaybe you have some minor changes in your install or application usage\nand you just hit the limit.\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 22 Apr 2011 13:03:23 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Fri, Apr 22, 2011 at 4:03 AM, Cédric Villemain\n<[email protected]> wrote:\n> 2011/4/21 Tory M Blue <[email protected]>:\n>> On Thu, Apr 21, 2011 at 7:27 AM, Merlin Moncure <[email protected]> wrote:\n>>> On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue <[email protected]> wrote:\n>>\n>>>> Fedora 12\n>>>> 32gig memory, 8 proc\n>>>> postgres 8.4.4, slony 1.20\n>>>> 5 gigs of swap (never hit it!)\n>>>\n>>> curious: using 32/64 bit postgres? what are your postgresql.conf\n>>> memory settings?\n>>>\n>>> merlin\n>>>\n>>\n>> 32bit\n>> 32gb\n>> PAE kernel\n>>\n>> # - Checkpoints -\n>> checkpoint_segments = 100\n>> max_connections = 300\n>> shared_buffers = 2500MB       # min 128kB or max_connections*16kB\n>> max_prepared_transactions = 0\n>> work_mem = 100MB\n>> maintenance_work_mem = 128MB\n>> fsync = on\n>>\n>\n> I didn't understand what value you set for vm.overcommit parameters.\n> Can you give it and the values in /proc/meminfo, the interesting one\n> are \"Commit*\" ?\n>\n> If you have strict rules(overcommit=2), then your current kernel\n> config may need some love : the commit_limit is probably too low\n> because you have a small swap partition. 
One way is to change :\n> vm.overcommit_ratio.\n> By default it should be something like 21GB (0.5*32+5) of\n> commit_limit, and you probably want 32GB :)\n>\n> Maybe you have some minor changes in your install or application usage\n> and you just hit the limit.\n\nThanks Cedric\n\nthe sysctl vm's are\n\n# 04/17/2011 to keep overcommit memory in check\nvm.overcommit_memory = 2\nvm.overcommit_ratio = 0\n\nCommitLimit: 4128760 kB\nCommitted_AS: 2380408 kB\n\n\nYa I do think my swap space is biting us, (but again just starting to\ngrasp that my swap space which has not grown with the continued\naddition of memory). I am just not starting to learn that the swap\ndoes need to be properly sized whether it's being used or not. I\nfigured it would use the swap and it would run out, but sounds like\nthe system takes the size into consideration and just decides not to\nuse it.\n\nI appreciate the totally no postgres responses with this.\n\nThanks\nTory\n", "msg_date": "Fri, 22 Apr 2011 09:27:16 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "Tory M Blue <[email protected]> wrote:\n \n> I appreciate the totally no postgres responses with this.\n \nI didn't understand that. What do you mean?\n \n-Kevin\n", "msg_date": "Fri, 22 Apr 2011 11:34:03 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Fri, Apr 22, 2011 at 9:34 AM, Kevin Grittner\n<[email protected]> wrote:\n> Tory M Blue <[email protected]> wrote:\n>\n>> I appreciate the totally no postgres responses with this.\n>\n> I didn't understand that.  What do you mean?\n>\n> -Kevin\n\nI meant that when starting to talk about kernel commit limits/ etc,\nit's not really postgres centric, but you folks are still assisting me\nwith this. So thanks, some could say take it to Linux kernels, even\nthough this is killing postgres.\n\nTory\n", "msg_date": "Fri, 22 Apr 2011 09:37:23 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "2011/4/22 Tory M Blue <[email protected]>:\n> On Fri, Apr 22, 2011 at 4:03 AM, Cédric Villemain\n> <[email protected]> wrote:\n>> 2011/4/21 Tory M Blue <[email protected]>:\n>>> On Thu, Apr 21, 2011 at 7:27 AM, Merlin Moncure <[email protected]> wrote:\n>>>> On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue <[email protected]> wrote:\n>>>\n>>>>> Fedora 12\n>>>>> 32gig memory, 8 proc\n>>>>> postgres 8.4.4, slony 1.20\n>>>>> 5 gigs of swap (never hit it!)\n>>>>\n>>>> curious: using 32/64 bit postgres? what are your postgresql.conf\n>>>> memory settings?\n>>>>\n>>>> merlin\n>>>>\n>>>\n>>> 32bit\n>>> 32gb\n>>> PAE kernel\n>>>\n>>> # - Checkpoints -\n>>> checkpoint_segments = 100\n>>> max_connections = 300\n>>> shared_buffers = 2500MB       # min 128kB or max_connections*16kB\n>>> max_prepared_transactions = 0\n>>> work_mem = 100MB\n>>> maintenance_work_mem = 128MB\n>>> fsync = on\n>>>\n>>\n>> I didn't understand what value you set for vm.overcommit parameters.\n>> Can you give it and the values in /proc/meminfo, the interesting one\n>> are \"Commit*\" ?\n>>\n>> If you have strict rules(overcommit=2), then your current kernel\n>> config may need some love : the commit_limit is probably too low\n>> because you have a small swap partition. 
One way is to change :\n>> vm.overcommit_ratio.\n>> By default it should be something like 21GB (0.5*32+5) of\n>> commit_limit, and you probably want 32GB :)\n>>\n>> Maybe you have some minor changes in your install or application usage\n>> and you just hit the limit.\n>\n> Thanks Cedric\n>\n> the sysctl vm's are\n>\n> # 04/17/2011 to keep overcommit memory in check\n> vm.overcommit_memory = 2\n> vm.overcommit_ratio = 0\n>\n> CommitLimit:     4128760 kB\n> Committed_AS:    2380408 kB\n\nAre you sure it is a PAE kernel ? You look limited to 4GB.\n\nI don't know atm if overcommit_ratio=0 has a special meaning, else I\nwould suggest to update it to something like 40% (the default), but\n60% should still be safe (60% of 32GB + 5GB)\n\n>\n>\n> Ya I do think my swap space is biting us, (but again just starting to\n> grasp that my swap space which has not grown with the continued\n> addition of memory). I am just not starting to learn that the swap\n> does need to be properly sized whether it's being used or not. I\n> figured it would use the swap and it would run out, but sounds like\n> the system takes the size into consideration and just decides not to\n> use it.\n>\n> I appreciate the totally no postgres responses with this.\n>\n> Thanks\n> Tory\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 22 Apr 2011 18:45:25 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "2011/4/22 Cédric Villemain <[email protected]>:\n> 2011/4/22 Tory M Blue <[email protected]>:\n>> On Fri, Apr 22, 2011 at 4:03 AM, Cédric Villemain\n>> <[email protected]> wrote:\n>>> 2011/4/21 Tory M Blue <[email protected]>:\n>>>> On Thu, Apr 21, 2011 at 7:27 AM, Merlin Moncure <[email protected]> wrote:\n>>>>> On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue <[email protected]> wrote:\n>>>>\n>>>>>> Fedora 12\n>>>>>> 32gig memory, 8 proc\n>>>>>> postgres 8.4.4, slony 1.20\n>>>>>> 5 gigs of swap (never hit it!)\n>>>>>\n>>>>> curious: using 32/64 bit postgres? what are your postgresql.conf\n>>>>> memory settings?\n>>>>>\n>>>>> merlin\n>>>>>\n>>>>\n>>>> 32bit\n>>>> 32gb\n>>>> PAE kernel\n>>>>\n>>>> # - Checkpoints -\n>>>> checkpoint_segments = 100\n>>>> max_connections = 300\n>>>> shared_buffers = 2500MB       # min 128kB or max_connections*16kB\n>>>> max_prepared_transactions = 0\n>>>> work_mem = 100MB\n>>>> maintenance_work_mem = 128MB\n>>>> fsync = on\n>>>>\n>>>\n>>> I didn't understand what value you set for vm.overcommit parameters.\n>>> Can you give it and the values in /proc/meminfo, the interesting one\n>>> are \"Commit*\" ?\n>>>\n>>> If you have strict rules(overcommit=2), then your current kernel\n>>> config may need some love : the commit_limit is probably too low\n>>> because you have a small swap partition. One way is to change :\n>>> vm.overcommit_ratio.\n>>> By default it should be something like 21GB (0.5*32+5) of\n>>> commit_limit, and you probably want 32GB :)\n>>>\n>>> Maybe you have some minor changes in your install or application usage\n>>> and you just hit the limit.\n>>\n>> Thanks Cedric\n>>\n>> the sysctl vm's are\n>>\n>> # 04/17/2011 to keep overcommit memory in check\n>> vm.overcommit_memory = 2\n>> vm.overcommit_ratio = 0\n>>\n>> CommitLimit:     4128760 kB\n>> Committed_AS:    2380408 kB\n>\n> Are you sure it is a PAE kernel ? 
You look limited to 4GB.\n>\n> I don't know atm if overcommit_ratio=0 has a special meaning, else I\n> would suggest to update it to something like 40% (the default), but\n\ndefault being 50 ...\n\n> 60% should still be safe (60% of 32GB + 5GB)\n>\n>>\n>>\n>> Ya I do think my swap space is biting us, (but again just starting to\n>> grasp that my swap space which has not grown with the continued\n>> addition of memory). I am just not starting to learn that the swap\n>> does need to be properly sized whether it's being used or not. I\n>> figured it would use the swap and it would run out, but sounds like\n>> the system takes the size into consideration and just decides not to\n>> use it.\n>>\n>> I appreciate the totally no postgres responses with this.\n>>\n>> Thanks\n>> Tory\n>>\n>\n>\n>\n> --\n> Cédric Villemain               2ndQuadrant\n> http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 22 Apr 2011 18:46:01 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Fri, Apr 22, 2011 at 9:46 AM, Cédric Villemain\n<[email protected]> wrote:\n> 2011/4/22 Cédric Villemain <[email protected]>:\n\n\n>> Are you sure it is a PAE kernel ? You look limited to 4GB.\n>>\n>> I don't know atm if overcommit_ratio=0 has a special meaning, else I\n>> would suggest to update it to something like 40% (the default), but\n>\n> default being 50 ...\n>\n>> 60% should still be safe (60% of 32GB + 5GB)\n\n2.6.32.23-170.fc12.i686.PAE , so it says. Okay so instead of dropping\nto 2-0 with the overcommit and ratio settings, I should look at\nmatching the ratio more to what I actually have in swap? Sorry but\nknee jerk reaction when I got hit by the oom_killer twice.\n\nTory\n", "msg_date": "Fri, 22 Apr 2011 10:18:36 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Thu, Apr 21, 2011 at 1:28 AM, Tory M Blue <[email protected]> wrote:\n> this is a Fedora 12 system, 2.6.32.23-170. I've been reading and\n> appears this is yet another fedora bug, but so far I have not found\n> any concrete evidence on how to fix it.\n\nIf it's a \"fedora\" bug, it's most likely related to the kernel where\nthe OOM-killer lives which really makes it more of a kernel bug than a\nfedora bug as fedora kernels generally track upstream very closely.\n\nGiven that both the version of Fedora you're using is no longer\nsupported, at a minimum you should be running F-13 (or preferably F-14\nsince F-13 will lose maintenance in appx 2 months). If you have to\nstay on F-12 you might at least try building the latest\n2.6.32-longterm kernel which is up to version 2.6.32.39.\n\nAll that said - have you tried tracking memory usage of the machine\nleading up to OOM killer events?\n\n-Dave\n", "msg_date": "Fri, 22 Apr 2011 11:15:57 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Fri, Apr 22, 2011 at 11:15 AM, David Rees <[email protected]> wrote:\n> On Thu, Apr 21, 2011 at 1:28 AM, Tory M Blue <[email protected]> wrote:\n>> this is a Fedora 12 system, 2.6.32.23-170. 
I've been reading and\n>> appears this is yet another fedora bug, but so far I have not found\n>> any concrete evidence on how to fix it.\n>\n> If it's a \"fedora\" bug, it's most likely related to the kernel where\n> the OOM-killer lives which really makes it more of a kernel bug than a\n> fedora bug as fedora kernels generally track upstream very closely.\n>\n> Given that both the version of Fedora you're using is no longer\n> supported, at a minimum you should be running F-13 (or preferably F-14\n> since F-13 will lose maintenance in appx 2 months).  If you have to\n> stay on F-12 you might at least try building the latest\n> 2.6.32-longterm kernel which is up to version 2.6.32.39.\n>\n> All that said - have you tried tracking memory usage of the machine\n> leading up to OOM killer events?\n>\n\nThanks David and I have and in fact I do see spikes that would cause\nmy system to run out of memory, but one thing I'm struggling with is\nmy system always runs at the limit. It's the nature of linux to take\nall the memory and manage it. The larger hurdle is why no swap is ever\nused, it's there, but the system never uses it. even the oom killer\nshows that I have the full 5gb of swap available, yet nothing is using\nis. I want want want to see swap being used!\n\nIf I run a script to do a bunch of malocs and hold I can see the\nsystem use up available memory then lay the smack down on my swap\nbefore oom is invoked.\n\nSo I'm starting to think in the meantime, while I rebuild, I need to\nmake sure I've got my postgres/kernel params in a good place. my ratio\nof 0 still allows oom_killer, but I've removed postgres from being\ntargeted by oom_killer now. I should still set the overcommit ratio\ncorrect for my 32gb 4-5gb swap system, but having a hard time wrapping\nmy head around that setting.\n\nTory\n", "msg_date": "Fri, 22 Apr 2011 11:22:39 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Fri, Apr 22, 2011 at 9:45 AM, Cédric Villemain\n<[email protected]> wrote:\n\n>> CommitLimit:     4128760 kB\n>> Committed_AS:    2380408 kB\n>\n> Are you sure it is a PAE kernel ? You look limited to 4GB.\n\nFigured that the Commitlimit is actually the size of swap, so on one\nserver it's 4gb and the other it's 5gb.\n\nSo still need to figure out with 32gig of ram and 4 to 5gig swap, what\nmy overcommit ratio should be.\n\nTory\n", "msg_date": "Fri, 22 Apr 2011 12:02:59 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" }, { "msg_contents": "2011/4/22 Tory M Blue <[email protected]>:\n> On Fri, Apr 22, 2011 at 9:45 AM, Cédric Villemain\n> <[email protected]> wrote:\n>\n>>> CommitLimit:     4128760 kB\n>>> Committed_AS:    2380408 kB\n>>\n>> Are you sure it is a PAE kernel ? 
You look limited to 4GB.\n>\n> Figured that the Commitlimit is actually the size of swap, so on one\n> server it's 4gb and the other it's 5gb.\n>\n> So still need to figure out with 32gig of ram and 4 to 5gig swap, what\n> my overcommit ratio should be.\n\nat least the default value of 50, probably more, up to you to adjust.\nYou should have something ok with 50, given that it used to work well\nuntil now with 0 (so you'll have 21GB of commitable memory )\n\n>\n> Tory\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 22 Apr 2011 21:18:49 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Fri, Apr 22, 2011 at 6:45 PM, Cédric Villemain\n<[email protected]> wrote:\n> Are you sure it is a PAE kernel ? You look limited to 4GB.\n\nIf my memory/knowledge serves me right, PAE doesn't remove that limit.\nPAE allows more processes, and they can use more memory together, but\none process alone has to live within an addressable range, and that is\nstill 4GB, mandated by the 32-bit addressable space when operating in\nlinear addressing mode.\n\nBut linux kernels usually reserve 1GB for kernel stuff (buffers and\nthat kind of stuff), so the addressable portion for processes is 3GB.\n\nTake away 2.5GB of shared buffers, and you only leave 0.5G for general\ndata and code.\n\nReally, lowering shared_buffers will probably be a solution. Moving to\n64 bits would be a better one.\n", "msg_date": "Sat, 23 Apr 2011 01:19:19 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Apr 22, 2011, at 2:22 PM, Tory M Blue <[email protected]> wrote:\n> Thanks David and I have and in fact I do see spikes that would cause\n> my system to run out of memory, but one thing I'm struggling with is\n> my system always runs at the limit. It's the nature of linux to take\n> all the memory and manage it. \n\nOne thing to watch is the size of the filesystem cache. Generally as the system comes under memory pressure you will see the cache shrink. Not sure what is happening on your system, but typically when it gets down to some minimal size, that's when the swapping starts.\n\n...Robert", "msg_date": "Sat, 23 Apr 2011 15:24:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: oom_killer" }, { "msg_contents": "On Sat, Apr 23, 2011 at 12:24 PM, Robert Haas <[email protected]> wrote:\n\n> One thing to watch is the size of the filesystem cache. Generally as the system comes under memory pressure you will see the cache shrink. Not sure what is happening on your system, but typically when it gets down to some minimal size, that's when the swapping starts.\n>\n> ...Robert\n\nThanks everyone, I've tuned the system in the tune of overcommit 2 and\nratio of 80% this makes my commit look like:\nCommitLimit: 31694880 kB\nCommitted_AS: 2372084 kB\n\n\nSo with 32G of system memory and 4gb cache so far it's running okay,\nno ooms in the last 2 days and the DB is performing well again. I've\nalso dropped the shared buffers to 2gb, that gives me 1 gb for data\netc. 
I'll test with smaller 1.5gb if need be.\n\nI've already started the 64bit process, I've got to test if slon will\nreplicate between a 32bit and 64 bit system, if the postgres/slon\nversions are the same (slon being the key here). If this works, I will\nbe able to do the migration to 64bit that much easier, if not well,\nya that changes the scheme a ton.\n\nThanks for the all the assistance in this, it's really appreciated\n\nTory\n", "msg_date": "Sat, 23 Apr 2011 21:22:22 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: oom_killer" } ]
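Pulling the numbers in this thread together: with vm.overcommit_memory = 2 the kernel refuses new allocations once Committed_AS reaches CommitLimit, and CommitLimit is swap plus overcommit_ratio percent of physical RAM. That is why a 32GB box with 4-5GB of swap and a ratio of 0 hit the wall around 4GB while swap stayed untouched. A sketch of the arithmetic and of the settings the thread converged on (illustrative, check against your own /proc/meminfo):

# CommitLimit = SwapTotal + (overcommit_ratio / 100) * MemTotal
#   ratio = 0  : ~4GB swap + 0% of 32GB   = ~4GB   (the 4128760 kB limit reported, swap never used)
#   ratio = 80 : ~4.6GB swap + 80% of 32GB = ~30GB (the 31694880 kB limit reported afterwards)
vm.overcommit_memory = 2
vm.overcommit_ratio = 80

On the PostgreSQL side the same arithmetic applies per backend: each sort or hash node can take up to work_mem, so Scott's worst case of hundreds of connections times 100MB times a few sorts each dwarfs both the commit limit and a 32-bit backend's roughly 3GB address space. Shielding the postmaster by writing -17 to its /proc/<pid>/oom_adj (oom_score_adj on newer kernels), as the kernel-resources page linked above describes, keeps the OOM killer away from it but does nothing about the underlying overcommit pressure.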
[ { "msg_contents": "This is a good one :)\n\nHere is a \"brief\" description of our issue(Postgres 9.0): \n\nTables:\nmain fact table:\nTable \"public.parent_fact\"\n Column | Type | \n----------------------+-----------------------------+-----------\n etime | date | not null\n pcamp_id | integer | \n location_id | integer | \n impressions | bigint | \n clicks | int\n\nthis table partitioned by etime.\n\nWe are trying to build a report, which has last week numbers alongside with this \nweek numbers. For example: if today is Wednesday, I want to compare daily \nnumbers from last week 3 days (mon through wed) with this week 3 days(mon \nthrough wed).\n\nTo accomplish that, we've decided to build a transformation table, which has two \ncolumns:\n\n Table \"public.trans_last_week\"\n Column | Type | Modifiers \n----------+-----------------------------+-----------\n etime | date | \n lw_etime | date |\n\nSo for each date(etime), we have lw_etime, which is essentially etime-7 days.\n\nHere is the first query, which performs fine:\n\nselect a11.location_id AS location_id,\n a11.pcamp_id AS pcamp_id,\n sum(a11.clicks)\nfrom parent_fact a11\nwhere a11.etime between '2011-14-18' and '2011-04-20'\ngroup by a11.location_id,\n a11.pcamp_id\n\neverything is good there - it calculates numbers from the current week and goes \nto only 3 partitions to aggregate numbers. \n\nHere is the second query:\n\nselect a11.location_id AS location_id,\n a11.pcamp_id AS pcamp_id,\n sum(a11.clicks)\nfrom parent_fact a11\n join trans_last_week a12\n on (a11.etime = a12.lw_etime)\nwhere a12.etime between '2011-14-18' and '2011-04-20'\ngroup by a11.location_id,\n a11.pcamp_id\n\n\nHere it scans through all partitions in the parent_fact table and runs 3-4 times \nslower.\n\nWhat was noticed, that the only case when Postgres is actually going to execute \nthe query against the right partitions is query #1. \n\nIs that by design? Second query join, will also result in 3 days(3 partitions) \n\nThis query (#3) also scans all partitions:\n\nselect a11.location_id AS location_id,\n a11.pcamp_id AS pcamp_id,\n sum(a11.clicks)\nfrom parent_fact a11\nwhere a11.etime in (select a12.etime from trans_last_week a12 \nwhere a11.etime = a12.lw_etime)\ngroup by a11.location_id,\n a11.pcamp_id\n\n\nThank you!\n\n\n\n\n\n\nThis is a good one :)\n\nHere is a \"brief\" description of our issue(Postgres 9.0): \n\nTables:\nmain fact table:\nTable \"public.parent_fact\"\n        Column        |            Type             |  \n----------------------+-----------------------------+-----------\n etime                | date | not null\n pcamp_id             | integer                     | \n location_id          | integer                     | \n impressions          | bigint                      | \n clicks              | int\n\nthis table partitioned by etime.\n\nWe are trying to build a report, which has last week numbers alongside with this week numbers. 
For example: if today is Wednesday, I want to compare daily numbers from last week 3 days (mon through wed) with this week 3 days(mon through wed).\n\nTo accomplish that, we've decided to build a transformation table, which has two columns:\n\n Table \"public.trans_last_week\"\n  Column  |            Type             | Modifiers \n----------+-----------------------------+-----------\n etime    | date | \n lw_etime | date |\n\nSo for each date(etime), we have lw_etime, which is essentially etime-7 days.\n\nHere is the first query, which performs fine:\n\nselect    a11.location_id AS location_id,\n    a11.pcamp_id AS  pcamp_id,\n    sum(a11.clicks)\nfrom    parent_fact    a11\nwhere    a11.etime between '2011-14-18' and '2011-04-20'\ngroup  by    a11.location_id,\n    a11.pcamp_id\n\neverything is good there -  it calculates numbers from the current week and goes to only 3 partitions to aggregate numbers. \n\nHere is the second query:\n\nselect    a11.location_id AS location_id,\n    a11.pcamp_id AS  pcamp_id,\n    sum(a11.clicks)\nfrom    parent_fact    a11\n    join    trans_last_week    a12\n      on     (a11.etime = a12.lw_etime)\nwhere    a12.etime between '2011-14-18' and '2011-04-20'\ngroup  by    a11.location_id,\n    a11.pcamp_id\n\n\nHere it scans through all partitions in the parent_fact table and runs 3-4 times slower.\n\nWhat was noticed, that the only case when Postgres is actually going to execute the query against the right partitions is query #1. \n\nIs that by design? Second query join, will also result in 3 days(3 partitions) \n\nThis query (#3) also scans all partitions:\n\nselect    a11.location_id AS location_id,\n    a11.pcamp_id AS  pcamp_id,\n    sum(a11.clicks)\nfrom    parent_fact    a11\nwhere    a11.etime in (select a12.etime from trans_last_week    a12 where a11.etime = a12.lw_etime)\ngroup  by    a11.location_id,\n    a11.pcamp_id\n\n\nThank you!", "msg_date": "Thu, 21 Apr 2011 18:26:18 -0700 (PDT)", "msg_from": "Paul Pierce <[email protected]>", "msg_from_op": true, "msg_subject": "Issue with partition elimination" }, { "msg_contents": "On 4/21/11 6:26 PM, Paul Pierce wrote:\n> What was noticed, that the only case when Postgres is actually going to execute \n> the query against the right partitions is query #1. \n> \n> Is that by design? Second query join, will also result in 3 days(3 partitions) \n\nPartition elimination currently can only handle constants and\nexpressions which are equivalent to constants. It will not filter on\nJoins successfully.\n\nThis will improve somewhat in 9.1, possibly enough to fix your case.\nPlease test this on 9.1a5 and see how well it works, and give us feedback.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Mon, 25 Apr 2011 11:06:47 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with partition elimination" } ]
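Until the planner can prune partitions through a join like this, the usual workaround is to resolve the last-week dates before the statement is planned, either in the application or with a preliminary lookup against trans_last_week, and pass them in as plain constants. A sketch of that shape, assuming the partitions of parent_fact carry CHECK constraints on etime itself (and reading the '2011-14-18' literal above as a typo for '2011-04-18'):

select a11.location_id AS location_id,
       a11.pcamp_id AS pcamp_id,
       sum(a11.clicks)
from parent_fact a11
where a11.etime between ('2011-04-18'::date - 7) and ('2011-04-20'::date - 7)
group by a11.location_id,
         a11.pcamp_id;

Subtracting an integer from a date is immutable, so the planner folds both bounds to constants and can exclude the untouched partitions, just as it does for query #1.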
[ { "msg_contents": "Hi all\n\nI'm trying to track down the causes of an application crash and reviewing PG\nlogs I'm seeing this:\n\n2011-04-22 06:00:16 CEST LOG: checkpoint complete: wrote 140 buffers\n(3.4%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=27.937\ns, sync=1.860 s, total=29.906 s\n2011-04-22 06:04:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:05:20 CEST LOG: checkpoint complete: wrote 161 buffers\n(3.9%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=32.188\ns, sync=0.890 s, total=33.157 s\n2011-04-22 06:09:20 CEST ERROR: update or delete on table \"automezzo\"\nviolates foreign key constraint \"fk77e102615cffe609\" on table \"carico\"\n2011-04-22 06:09:20 CEST DETAIL: Key (id)=(13237) is still referenced from\ntable \"carico\".\n2011-04-22 06:09:20 CEST STATEMENT: delete from Automezzo where id=$1\n2011-04-22 06:09:20 CEST ERROR: current transaction is aborted, commands\nignored until end of transaction block\n2011-04-22 06:09:20 CEST STATEMENT: SELECT key from CONFIGURAZIONE LIMIT 0\n2011-04-22 06:09:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:10:15 CEST LOG: checkpoint complete: wrote 137 buffers\n(3.3%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=27.406\ns, sync=0.781 s, total=28.360 s\n2011-04-22 06:14:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:15:14 CEST LOG: checkpoint complete: wrote 136 buffers\n(3.3%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=27.078\ns, sync=0.797 s, total=27.984 s\n2011-04-22 06:19:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:20:20 CEST LOG: checkpoint complete: wrote 164 buffers\n(4.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=32.938\ns, sync=0.594 s, total=33.625 s\n2011-04-22 06:24:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:25:20 CEST LOG: checkpoint complete: wrote 160 buffers\n(3.9%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=31.875\ns, sync=0.953 s, total=32.922 s\n2011-04-22 06:29:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:30:14 CEST LOG: checkpoint complete: wrote 130 buffers\n(3.2%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=25.922\ns, sync=0.937 s, total=26.937 s\n2011-04-22 06:34:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:35:15 CEST LOG: checkpoint complete: wrote 132 buffers\n(3.2%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=26.578\ns, sync=1.250 s, total=27.984 s\n2011-04-22 06:39:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:40:15 CEST LOG: checkpoint complete: wrote 136 buffers\n(3.3%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=27.250\ns, sync=0.984 s, total=28.453 s\n2011-04-22 06:44:47 CEST LOG: checkpoint starting: time\n2011-04-22 06:51:41 CEST LOG: checkpoint complete: wrote 108 buffers\n(2.6%); 0 transaction log file(s) added, 0 removed, 0 recycled;\nwrite=409.007 s, sync=4.672 s, total=414.070 s\n2011-04-22 06:55:42 CEST LOG: could not receive data from client: No\nconnection could be made because the target machine actively refused it.\n\n<goes on like the above until application restart>\n\n\nNow, I'm fairly new with PG and checkpoint internals but even after reading\naround (PG docs,\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm and\nvarious other posts from this ml) I haven't been able to figure out the\nfollowing:\n\n-given the configuration attached (which is basically a vanilla one) and the\nnumber of buffers written at each execution, are these 
execution times\nnormal or above average? \n-in the case of the execution that overruns past the timeout, what are the\nimplications wrt the client application? \n-AFAIU client connections are basically stalled during checkpoints. Is it\nreasonable to infer that the fact that the application blocking on a\ngetConnection() might be related to checkpoints being executed?\n-considering some tuning on the PG side, should I try increasing\ncheckpoint_timeout and rising checkpoint_completion_target to lessen the\nimpact of IO on the client or should I shorten the period so there's less\nstuff to write? from the number of buffers written on average I'd assume the\nfirst option is the one to go for but I might miss some bit of reasoning\nhere...\n\nThe server both PG and the application server (Tomcat) are running on is a\nStratus ft2300 machine, which I think is setup to do RAID1. I've read about\nRAID5 not being a wise setup for disks hosting PG, what about RAID1?\n\nPostgres version is 8.3\nhttp://postgresql.1045698.n5.nabble.com/file/n4332601/postgresql.conf\npostgresql.conf \n\nAny help will be greatly appreciated:) Thanks, and sorry for the long post.\n\ncheers\nFrancesco\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Checkpoint-execution-overrun-impact-tp4332601p4332601.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Fri, 22 Apr 2011 02:21:35 -0700 (PDT)", "msg_from": "drvillo <[email protected]>", "msg_from_op": true, "msg_subject": "Checkpoint execution overrun impact?" }, { "msg_contents": "On Fri, Apr 22, 2011 at 5:21 AM, drvillo <[email protected]> wrote:\n> -given the configuration attached (which is basically a vanilla one) and the\n> number of buffers written at each execution, are these execution times\n> normal or above average?\n\nThey seem fine. Remember that the write is deliberately spread out;\nit's not as if the system couldn't write out 130-160 8k blocks in less\nthan 30 s.\n\n> -in the case of the execution that overruns past the timeout, what are the\n> implications wrt the client application?\n\nNot sure what you are referring to here.\n\n> -AFAIU client connections are basically stalled during checkpoints. Is it\n> reasonable to infer that the fact that the application blocking on a\n> getConnection() might be related to checkpoints being executed?\n> -considering some tuning on the PG side, should I try increasing\n> checkpoint_timeout and rising checkpoint_completion_target to lessen the\n> impact of IO on the client or should I shorten the period so there's less\n> stuff to write? from the number of buffers written on average I'd assume the\n> first option is the one to go for but I might miss some bit of reasoning\n> here...\n\nI'm a bit puzzled by all of this because the logs you posted seem to\nreflect a system under very light load. Each checkpoint is writing no\nmore than 4% of shared_buffers and the sync phases are generally\ncompleting in less than one second. I don't see why that would be\ncausing stalls.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 11 May 2011 23:36:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint execution overrun impact?" 
}, { "msg_contents": "drvillo wrote:\n> -given the configuration attached (which is basically a vanilla one) and the\n> number of buffers written at each execution, are these execution times\n> normal or above average? \n> \n\nGiven the configuration attached, most of them are normal. One problem \nmay be that your vanilla configuration has checkpoint_segments set to \n3. There is some logic in the checkpoint code to try and spread \ncheckpoint writes out over a longer period of time. The intention is \nfor a slower write spread to disrupt concurrent client activity less. \nIt doesn't work all that well unless you give it some more segments to \nwork with.\n\nAlso, with the default setting for shared_buffers, you are doing a lot \nmore redundant writes than you should be. The following postgresql.conf \nchanges should improve things for you:\n\nshared_buffers=256MB\ncheckpoint_segments=10\nwal_buffers=16MB\n\nYou may have to adjust your kernel shared memory memory settings for \nthat to work. See \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for an \nintro to these and the other common parameters you should consider \nadjusting.\n\n> -in the case of the execution that overruns past the timeout, what are the\n> implications wrt the client application? \n> \n\nThere really aren't any in the database. The server will immediately \nbegin another checkpoint. Some additional disk space is used. So long \nas the server doesn't run out of disk space from that, clients shouldn't \ncare.\n\n\n> -AFAIU client connections are basically stalled during checkpoints. Is it\n> reasonable to infer that the fact that the application blocking on a\n> getConnection() might be related to checkpoints being executed?\n> \n\nIt can be. What I suspect is happening during the bad one:\n\n2011-04-22 06:51:41 CEST LOG: checkpoint complete: wrote 108 buffers\n(2.6%); 0 transaction log file(s) added, 0 removed, 0 recycled;\nwrite=409.007 s, sync=4.672 s, total=414.070 s\n2011-04-22 06:55:42 CEST LOG: could not receive data from client: No\nconnection could be made because the target machine actively refused it.\n\n\nIs that something is happening on the disks of the server that keeps the \ndatabase from being able to write efficiently during this checkpoint. \nIt then slows the checkpoint so much that clients are timing out.\n\nThe tuning changes I suggested will lower the total amount of I/O the \nserver does between checkpoints, which will mean there is less \ninformation in the OS cache to write out when the checkpoint comes. \nThat may help, if the problem is really in the database.\n\n> -considering some tuning on the PG side, should I try increasing\n> checkpoint_timeout and rising checkpoint_completion_target to lessen the\n> impact of IO on the client or should I shorten the period so there's less\n> stuff to write? from the number of buffers written on average I'd assume the\n> first option is the one to go for but I might miss some bit of reasoning\n> here...\n> \n\nYour problems are likely because the operating system cache is getting \nfilled with something that is slowing checkpoints down. Maybe it's the \nregular database writes during the five minutes between checkpoints; \nmaybe it's something else running on the server. Whatever is happening, \nyou're unlikely to make it better by adjusting how often they happen. \nEither get the database to write less between checkpoints (like the \nchanges I suggested), or figure out what else is doing the writes. 
I \nsuspect they are coming from outside the database, only because if you \nreally had high write activity on this server you'd also be having \ncheckpoints more frequently, too.\n\n\n> I've read about\n> RAID5 not being a wise setup for disks hosting PG, what about RAID1?\n> \n\nThe problem with RAID5 is that it lowers write performance of a larger \nnumber of disks so it's potentially no better than a single drive. \nRAID1 is essentially a single drive, too. You may discover you're just \nrunning over what one drive can do. Something odd does seem to be going \non though. Normally in your situation I would try to find some system \ndowntime and test the read/write speed of the drives, look for issues \nthere. As Robert said already, you shouldn't be running this slowly \nunless there's something going wrong.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 12 May 2011 22:10:32 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint execution overrun impact?" } ]
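A quick way to check whether these checkpoints are being driven by checkpoint_timeout or by running out of checkpoint_segments, on 8.3 and later, is to sample pg_stat_bgwriter, for example:

select checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
from pg_stat_bgwriter;

Taking that snapshot twice, a few hours apart, and comparing: a climbing checkpoints_req count means checkpoints are being forced by segment exhaustion, which supports raising checkpoint_segments as suggested above, while a buffers_backend figure that dwarfs buffers_checkpoint suggests ordinary backends are doing much of the writing themselves, which points back at shared_buffers being too small.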
[ { "msg_contents": "\nHi all:\n\nI realize this is slightly off topic, but it is an issue of concern with\nthe use of ssd's. We are setting up a storage server under solaris\nusing ZFS. We have a couple of ssd's 2 x 160GB Intel X25-M MLC SATA\nacting as the zil (write journal) and are trying to see if it is safe\nto use for a power fail situation.\n\nOur testing (10 runs) hasn't shown any data loss, but I am not sure\nour testing has been running long enough or is valid, so I hoped the\npeople here who have tested an ssd for data loss may have some\nguidance.\n\nThe testing method is to copy a bunch of files over NFS to the server\nwith the zil. When the copy is running along, pull the power to the\nserver. The NFS client will stop and if the client got a message that\nblock X was written safely to the zil, it will continue writing with\nblock x+1. After the server comes back up and the copies\nresume/finish, the files are checksummed. If block X went missing, the\nchecksums will fail and we will have our proof.\n\nWe are looking at how to max out the writes to the SSD on the theory\nthat we need to fill the dram buffer on the SSD and get it saturated\nenough such that it can't flush data to permanent storage as fast as\nthe data is coming in. (I.E. make it a writeback with a longer delay\nso it's more likely to drop data.)\n\nDoes anybody have any comments or testing methodologies that don't\ninvolve using an actual postgres instance?\n\nThanks for your help.\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n", "msg_date": "Fri, 22 Apr 2011 14:04:17 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": true, "msg_subject": "OT (slightly) testing for data loss on an SSD drive due to power\n\tfailure" }, { "msg_contents": "On 04/22/2011 10:04 AM, John Rouillard wrote:\n> We have a couple of ssd's 2 x 160GB Intel X25-M MLC SATA\n> acting as the zil (write journal) and are trying to see if it is safe\n> to use for a power fail situation.\n> \n\nWell, the quick answer is \"no\". I've lost several weekends of my life \nto recovering information from databases stored on those drives, after \nthey were corrupted in a crash.\n\n> The testing method is to copy a bunch of files over NFS to the server\n> with the zil. When the copy is running along, pull the power to the\n> server. The NFS client will stop and if the client got a message that\n> block X was written safely to the zil, it will continue writing with\n> block x+1. After the server comes back up and the copies\n> resume/finish, the files are checksummed. If block X went missing, the\n> checksums will fail and we will have our proof.\n> \n\nInterestingly, you have reinvented parts of the standard script for \ntesting for data loss, diskchecker.pl: \nhttp://brad.livejournal.com/2116715.html\n\nYou can get a few thousand commits per second using that program, which \nis enough to fill the drive buffer such that a power pull should \nsometimes lose something. I don't think you can do a proper test here \nusing NFS; you really need something that is executing fsync calls \ndirectly in the same pattern a database server will.\n\nZFS is more resilient than most filesystems at avoiding file \ncorruption in this case. 
But you should still be able to find some \nmissing transactions that are sitting in the drive cache.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 22 Apr 2011 22:48:04 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT (slightly) testing for data loss on an SSD drive\n\tdue to power failure" } ]
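For a database-side version of that check, exercising fsync the way Greg describes rather than NFS copies, a minimal sketch of the same idea diskchecker.pl implements might look like this (table and column names are only illustrative):

create table fsync_test (
    id bigserial primary key,
    noted timestamptz not null default now()
);

-- client loop: run each insert in its own transaction, with synchronous_commit
-- left on, and record every id the server acknowledges as committed
insert into fsync_test default values returning id;

-- after pulling power and restarting, every acknowledged id must still exist
select max(id) from fsync_test;

If max(id) after recovery is ever lower than the last id the client saw committed, the drive acknowledged an fsync it had not actually made durable.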
[ { "msg_contents": "\n\nWilly-Bas Loos <[email protected]> wrote:\n\n>Hi,\n>\n>I'm using PostgreSQL 8.4 (and also 8.3).\n>\n>A partial index like this:\n>CREATE INDEX table2_field1_idx\n> ON table2 (field1)\n> WHERE NOT field1 ISNULL;\n>\n>Will not be used when select one record from 100K records:\n>\n>explain select * from table2 where field1 = 256988\n>'Seq Scan on table2 (cost=0.00..1693.01 rows=1 width=4)'\n>' Filter: (field1 = 256988)'\n>\n>But it WILL be used like this:\n>\n>explain select * from table2 where field1 = 256988 and not field1 isnull\n>'Index Scan using table2_field1_idx on table2 (cost=0.00..8.28 rows=1\n>width=4)'\n>' Index Cond: (field1 = 256988)'\n>\n>\n>But, when i change the index from\"NOT field1 ISNULL \" to \"field1 NOTNULL\",\n>then the index WILL be used in both queries:\n>\n>explain select * from table1 where field1 = 256988\n>'Index Scan using table1_field1_idx on table1 (cost=0.00..8.28 rows=1\n>width=4)'\n>' Index Cond: (field1 = 256988)'\n>\n>'Index Scan using table1_field1_idx on table1 (cost=0.00..8.28 rows=1\n>width=4)'\n>' Index Cond: (field1 = 256988)'\n>' Filter: (NOT (field1 IS NULL))'\n>\n>\n>Any ideas why this might be?\n>\n>\n>Cheers,\n>\n>WBL\n>\n>Code below:\n>\n>--drop table table1;\n>create table table1(field1 integer);\n>CREATE INDEX table1_field1_idx\n> ON table1 (field1)\n> WHERE field1 NOTNULL;\n>insert into table1 values(null);\n>insert into table1 select generate_series(1,100000);\n>\n>vacuum analyze table1;\n>\n>explain select * from table1 where field1 = 256988\n>explain select * from table1 where field1 = 256988 and not field1 isnull\n>\n>\n>--drop table table2;\n>create table table2(field1 integer);\n>CREATE INDEX table2_field1_idx\n> ON table2 (field1)\n> WHERE NOT field1 ISNULL;\n>insert into table2 values(null);\n>insert into table2 select generate_series(1,100000);\n>\n>vacuum analyze table2;\n>\n>explain select * from table2 where field1 = 256988\n>explain select * from table2 where field1 = 256988 and not field1 isnull\n>\n>\n>-- \n>\"Patriotism is the conviction that your country is superior to all others\n>because you were born in it.\" -- George Bernard Shaw\n", "msg_date": "Sat, 23 Apr 2011 10:48:35 -0500", "msg_from": "Henry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: not using partial index" } ]
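Given the tests quoted above, the simplest fix for table2 appears to be spelling the index predicate the same way as table1, since NOTNULL is just nonstandard shorthand for IS NOT NULL and that is the form the planner proved from the equality condition:

drop index table2_field1_idx;
create index table2_field1_idx on table2 (field1) where field1 is not null;
analyze table2;
explain select * from table2 where field1 = 256988;

With the predicate written as field1 is not null, the plain equality query should be able to use the partial index without adding the explicit not-null clause to every query.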
[ { "msg_contents": "Not sure if this is the right list...but:\n\nDisclaimer: I realize this is comparing apples to oranges. I'm not\ntrying to start a database flame-war. I just want to say thanks to\nthe PostgreSQL developers who make my life easier.\n\nI manage thousands of databases (PostgreSQL, SQL Server, and MySQL),\nand this past weekend we had a massive power surge that knocked out\ntwo APC cabinets. Quite a few machines rebooted (and management is\ntaking a new look at the request for newer power cabinets heh).\nTalking theory is one thing, predicting results is another...and yet\nthe only thing that counts is \"what happens when 'worst-case-scenario'\nbecomes reality?\"\n\nLong story short, every single PostgreSQL machine survived the failure\nwith *zero* data corruption. I had a few issues with SQL Server\nmachines, and virtually every MySQL machine has required data cleanup\nand table scans and tweaks to get it back to \"production\" status.\n\nI was really impressed...you guys do amazing work. Thank you.\n", "msg_date": "Mon, 25 Apr 2011 14:30:59 -0500", "msg_from": "J Sisson <[email protected]>", "msg_from_op": true, "msg_subject": "Time to put theory to the test?" }, { "msg_contents": "On Mon, Apr 25, 2011 at 12:30 PM, J Sisson <[email protected]> wrote:\n> machines, and virtually every MySQL machine has required data cleanup\n> and table scans and tweaks to get it back to \"production\" status.\n\nTip from someone that manages thousands of MySQL servers: Use InnoDB\nwhen using MySQL. Using a crash unsafe product will yield undesirable\nresults when a server crashes. It is also faster for many use cases.\n\nInnoDB is crash safe. It is just that simple.\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Mon, 25 Apr 2011 20:04:57 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" }, { "msg_contents": "On 26/04/11 15:04, Rob Wultsch wrote:\n> On Mon, Apr 25, 2011 at 12:30 PM, J Sisson<[email protected]> wrote:\n>> machines, and virtually every MySQL machine has required data cleanup\n>> and table scans and tweaks to get it back to \"production\" status.\n> Tip from someone that manages thousands of MySQL servers: Use InnoDB\n> when using MySQL. Using a crash unsafe product will yield undesirable\n> results when a server crashes. It is also faster for many use cases.\n>\n> InnoDB is crash safe. It is just that simple.\n>\n+1\n\nOr even switch to the Mariadb fork and use the crash safe Aria engine \n[1] instead of Myisam if you must be without transactions. That has the \nadditional benefit of getting out from under the \"Big O\" - which is \nalways nice!\n\nCheers\n\nMark\n\n[1] I have not personally tested that Aria is crash safe, but can attest \nthat Innodb certainly is.\n\n\n\n\n\n\n On 26/04/11 15:04, Rob Wultsch wrote:\n \nOn Mon, Apr 25, 2011 at 12:30 PM, J Sisson <[email protected]> wrote:\n\n\nmachines, and virtually every MySQL machine has required data cleanup\nand table scans and tweaks to get it back to \"production\" status.\n\n\n\nTip from someone that manages thousands of MySQL servers: Use InnoDB\nwhen using MySQL. Using a crash unsafe product will yield undesirable\nresults when a server crashes. It is also faster for many use cases.\n\nInnoDB is crash safe. It is just that simple.\n\n\n\n+1\n\n Or even switch to the Mariadb fork and use the crash safe Aria\n engine [1] instead of Myisam if you must be without\n transactions. 
That has the additional benefit of getting out\n from under the \"Big O\" - which is always nice!\n\n Cheers\n\n Mark\n\n [1] I have not personally tested that Aria is crash safe, but\n can attest that Innodb certainly is.", "msg_date": "Tue, 26 Apr 2011 18:30:51 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" }, { "msg_contents": "On Mon, Apr 25, 2011 at 10:04 PM, Rob Wultsch <[email protected]> wrote:\n> Tip from someone that manages thousands of MySQL servers: Use InnoDB\n> when using MySQL.\n\nGranted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my\nknowledge of MySQL, but if InnoDB has such amazing benefits as being\ncrash safe, and even speed increases in some instances, why isn't\nInnoDB default? I suppose the real issue is that I prefer software\nthat gives me safe defaults that I can adjust towards the \"unsafe\" end\nas far as I'm comfortable with, rather than starting off in la-la land\nand working back towards sanity.\n\nI'll concede that the issues we had with MySQL were self-inflicted for\nusing MyISAM. Thanks for pointing this out. Time to go get my\nknowledge of MySQL up to par with my knowledge of PostgreSQL...\n", "msg_date": "Tue, 26 Apr 2011 09:13:17 -0500", "msg_from": "J Sisson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Time to put theory to the test?" }, { "msg_contents": "J Sisson <[email protected]> wrote:\n> Rob Wultsch <[email protected]> wrote:\n>> Tip from someone that manages thousands of MySQL servers: Use\n>> InnoDB when using MySQL.\n> \n> Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses\n> my knowledge of MySQL, but if InnoDB has such amazing benefits as\n> being crash safe, and even speed increases in some instances, why\n> isn't InnoDB default?\n \nBecause it's not as fast as the unsafe ISAM implementation for most\nbenchmarks.\n \nThere is one minor gotcha in InnoDB (unless it's been fixed since\n2008): the release of locks is not atomic with the persistence of\nthe data in the write-ahead log (which makes it S2PL but not SS2PL).\nSo it is possible for another connection to see data that won't be\nthere after crash recovery. This is justified as an optimization.\nPersonally, I would prefer not to see data from other transactions\nuntil it has actually been successfully committed.\n \n-Kevin\n", "msg_date": "Tue, 26 Apr 2011 09:58:49 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" }, { "msg_contents": "On Tue, Apr 26, 2011 at 8:13 AM, J Sisson <[email protected]> wrote:\n> On Mon, Apr 25, 2011 at 10:04 PM, Rob Wultsch <[email protected]> wrote:\n>> Tip from someone that manages thousands of MySQL servers: Use InnoDB\n>> when using MySQL.\n>\n> Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my\n> knowledge of MySQL, but if InnoDB has such amazing benefits as being\n> crash safe, and even speed increases in some instances, why isn't\n> InnoDB default?  I suppose the real issue is that I prefer software\n> that gives me safe defaults that I can adjust towards the \"unsafe\" end\n> as far as I'm comfortable with, rather than starting off in la-la land\n> and working back towards sanity.\n\nBecause for many read heavy workloads myisam is still faster. Note\nthat even if you use innodb tables, your system catalogs are stored in\nmyisam. 
The Drizzle project aims to fix such things, but I'd assume\nthey're a little ways from full production ready status.\n", "msg_date": "Tue, 26 Apr 2011 09:02:08 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" }, { "msg_contents": "\n\n---- Original message ----\n>Date: Tue, 26 Apr 2011 09:13:17 -0500\n>From: [email protected] (on behalf of J Sisson <[email protected]>)\n>Subject: Re: [PERFORM] Time to put theory to the test? \n>To: Rob Wultsch <[email protected]>\n>Cc: \"[email protected]\" <[email protected]>\n>\n>On Mon, Apr 25, 2011 at 10:04 PM, Rob Wultsch <[email protected]> wrote:\n>> Tip from someone that manages thousands of MySQL servers: Use InnoDB\n>> when using MySQL.\n>\n>Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my\n>knowledge of MySQL, but if InnoDB has such amazing benefits as being\n>crash safe, and even speed increases in some instances, why isn't\n>InnoDB default? \n\nbecause it is. recently.\nhttp://dev.mysql.com/doc/refman/5.5/en/innodb-default-se.html\n\n", "msg_date": "Tue, 26 Apr 2011 11:20:33 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" }, { "msg_contents": "J Sisson <[email protected]> writes:\n> Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my\n> knowledge of MySQL, but if InnoDB has such amazing benefits as being\n> crash safe, and even speed increases in some instances, why isn't\n> InnoDB default?\n\nIt *is* default in the most recent versions (5.5 and up). They saw\nthe light eventually. I wonder whether being bought out by Oracle\nhad something to do with that attitude adjustment ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Apr 2011 11:51:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test? " }, { "msg_contents": "On Tue, Apr 26, 2011 at 17:51, Tom Lane <[email protected]> wrote:\n> J Sisson <[email protected]> writes:\n>> Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my\n>> knowledge of MySQL, but if InnoDB has such amazing benefits as being\n>> crash safe, and even speed increases in some instances, why isn't\n>> InnoDB default?\n>\n> It *is* default in the most recent versions (5.5 and up).  They saw\n> the light eventually.  I wonder whether being bought out by Oracle\n> had something to do with that attitude adjustment ...\n\nOracle has owned innodb for quite some time. MySQL didn't want to make\nthemselves dependant on an Oracle controlled technology. That argument\ncertainly went away when Oracle bought them - and I think that was the\nmain reason. Not the \"oracle mindset\" or anything like that...\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Tue, 26 Apr 2011 17:54:14 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" 
}, { "msg_contents": "On Tue, Apr 26, 2011 at 09:58:49AM -0500, Kevin Grittner wrote:\n> J Sisson <[email protected]> wrote:\n> > Rob Wultsch <[email protected]> wrote:\n> >> Tip from someone that manages thousands of MySQL servers: Use\n> >> InnoDB when using MySQL.\n> > \n> > Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses\n> > my knowledge of MySQL, but if InnoDB has such amazing benefits as\n> > being crash safe, and even speed increases in some instances, why\n> > isn't InnoDB default?\n> \n> Because it's not as fast as the unsafe ISAM implementation for most\n> benchmarks.\n> \n> There is one minor gotcha in InnoDB (unless it's been fixed since\n> 2008): the release of locks is not atomic with the persistence of\n> the data in the write-ahead log (which makes it S2PL but not SS2PL).\n> So it is possible for another connection to see data that won't be\n> there after crash recovery. This is justified as an optimization.\n> Personally, I would prefer not to see data from other transactions\n> until it has actually been successfully committed.\n> \n> -Kevin\n> \n\nIn addition, their fulltext indexing only works with MyISAM tables.\n\nKen\n", "msg_date": "Tue, 26 Apr 2011 12:04:21 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" }, { "msg_contents": "J,\n\n> Long story short, every single PostgreSQL machine survived the failure\n> with *zero* data corruption. I had a few issues with SQL Server\n> machines, and virtually every MySQL machine has required data cleanup\n> and table scans and tweaks to get it back to \"production\" status.\n\nCan I quote you on this? I'll need name/company.\n\nAnd, thank you for posting that regardless ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Tue, 26 Apr 2011 10:57:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to put theory to the test?" } ]
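For anyone in the original poster's position, a rough MySQL-side sketch (MySQL syntax, not PostgreSQL) for finding the MyISAM tables that could be moved to InnoDB; the table name in the ALTER is only a placeholder:

select table_schema, table_name
from information_schema.tables
where engine = 'MyISAM'
  and table_schema not in ('mysql', 'information_schema');

alter table some_table engine = InnoDB;

The mysql system schema is left out because, on the versions discussed in this thread, its catalog tables have to stay MyISAM, which is the caveat Scott raises above about system catalogs.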
[ { "msg_contents": "Hi,\n\nI am using PostgreSQL 9.0. There is a salutations table with 44 rows,\nand a contacts table with more than a million rows. The contacts table\nhas a nullable (only 0.002% null) salutation_id column, referencing\nsalutations.id.\n\nWith this query:\n\nSELECT\n salutations.id,\n salutations.name,\n salutations.description,\n EXISTS (\n SELECT 1\n FROM contacts\n WHERE salutations.id = contacts.salutation_id\n ) AS in_use\nFROM salutations\n\nI have to reduce random_page_cost from 4 to 2 to force index scan.\n\nEXPLAIN ANALYSE output with random_page_cost = 4:\n\n Seq Scan on salutations (cost=0.00..50.51 rows=44 width=229) (actual\ntime=0.188..3844.037 rows=44 loops=1)\n SubPlan 1\n -> Seq Scan on contacts (cost=0.00..64578.41 rows=57906\nwidth=0) (actual time=87.358..87.358 rows=1 loops=44)\n Filter: ($0 = salutation_id)\n Total runtime: 3844.113 ms\n\nEXPLAIN ANALYSE output with random_page_cost = 4, enable_seqscan = 0:\n\n Seq Scan on salutations (cost=10000000000.00..10000000095.42 rows=44\nwidth=229) (actual time=0.053..0.542 rows=44 loops=1)\n SubPlan 1\n -> Index Scan using ix_contacts_salutation_id on contacts\n(cost=0.00..123682.07 rows=57906 width=0) (actual time=0.011..0.011\nrows=1 loops=44)\n Index Cond: ($0 = salutation_id)\n Total runtime: 0.592 ms\n\nEXPLAIN ANALYSE output with random_page_cost = 2:\n\n Seq Scan on salutations (cost=0.00..48.87 rows=44 width=229) (actual\ntime=0.053..0.541 rows=44 loops=1)\n SubPlan 1\n -> Index Scan using ix_contacts_salutation_id on contacts\n(cost=0.00..62423.45 rows=57906 width=0) (actual time=0.011..0.011\nrows=1 loops=44)\n Index Cond: ($0 = salutation_id)\n Total runtime: 0.594 ms\n\nSo, index scan wins by a very small margin over sequential scan after\nthe tuning. I am a bit puzzled because index scan is more than 3000\ntimes faster in this case, but the estimated costs are about the same.\nDid I do something wrong?\n\nRegards,\nYap\n", "msg_date": "Tue, 26 Apr 2011 17:49:05 +0800", "msg_from": "Sok Ann Yap <[email protected]>", "msg_from_op": true, "msg_subject": "reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Sok Ann Yap <[email protected]> wrote:\n \n> So, index scan wins by a very small margin over sequential scan\n> after the tuning. I am a bit puzzled because index scan is more\n> than 3000 times faster in this case, but the estimated costs are\n> about the same. Did I do something wrong?\n \nTuning is generally needed to get best performance from PostgreSQL. \nNeeding to reduce random_page_cost is not unusual in situations\nwhere a good portion of the active data is in cache (between\nshared_buffers and the OS cache). Please show us your overall\nconfiguration and give a description of the hardware (how many of\nwhat kind of cores, how much RAM, what sort of storage system). 
The\nconfiguration part can be obtained by running the query on this page\nand pasting the result into your next post:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \nThere are probably some other configuration adjustments you could do\nto ensure that good plans are chosen.\n \n-Kevin\n", "msg_date": "Tue, 26 Apr 2011 16:37:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force\n\t index scan" }, { "msg_contents": "On Wed, Apr 27, 2011 at 5:37 AM, Kevin Grittner\n<[email protected]> wrote:\n> Sok Ann Yap <[email protected]> wrote:\n>\n>> So, index scan wins by a very small margin over sequential scan\n>> after the tuning. I am a bit puzzled because index scan is more\n>> than 3000 times faster in this case, but the estimated costs are\n>> about the same. Did I do something wrong?\n>\n> Tuning is generally needed to get best performance from PostgreSQL.\n> Needing to reduce random_page_cost is not unusual in situations\n> where a good portion of the active data is in cache (between\n> shared_buffers and the OS cache).  Please show us your overall\n> configuration and give a description of the hardware (how many of\n> what kind of cores, how much RAM, what sort of storage system).  The\n> configuration part can be obtained by running the query on this page\n> and pasting the result into your next post:\n>\n> http://wiki.postgresql.org/wiki/Server_Configuration\n>\n> There are probably some other configuration adjustments you could do\n> to ensure that good plans are chosen.\n>\n> -Kevin\n>\n\nHere's the configuration (this is just a low end laptop):\n\n name |\n current_setting\n----------------------------+-------------------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.0.4 on x86_64-pc-linux-gnu,\ncompiled by GCC x86_64-pc-linux-gnu-gcc (Gentoo 4.5.2 p1.0, pie-0.4.5)\n4.5.2, 64-bit\n checkpoint_segments | 16\n default_statistics_target | 10000\n effective_cache_size | 512MB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_destination | syslog\n log_min_duration_statement | 0\n maintenance_work_mem | 256MB\n max_connections | 100\n max_stack_depth | 2MB\n port | 5432\n random_page_cost | 4\n server_encoding | UTF8\n shared_buffers | 256MB\n silent_mode | on\n TimeZone | Asia/Kuala_Lumpur\n wal_buffers | 1MB\n work_mem | 32MB\n(20 rows)\n\nThe thing is, the query I posted was fairly simple (I think), and\nPostgreSQL should be able to choose the 3000+ times faster index scan\nwith the default random_page_cost of 4. If I need to reduce it to 2\nwhen using a 5.4k rpm slow disk, what is random_page_cost = 4 good\nfor?\n\n(Sorry for the double message, I forgot to CC the list in the first one)\n", "msg_date": "Wed, 27 Apr 2011 07:23:42 +0800", "msg_from": "Sok Ann Yap <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Tue, Apr 26, 2011 at 4:37 PM, Kevin Grittner\n<[email protected]> wrote:\n> Sok Ann Yap <[email protected]> wrote:\n>\n>> So, index scan wins by a very small margin over sequential scan\n>> after the tuning. I am a bit puzzled because index scan is more\n>> than 3000 times faster in this case, but the estimated costs are\n>> about the same. 
Did I do something wrong?\n>\n> Tuning is generally needed to get best performance from PostgreSQL.\n> Needing to reduce random_page_cost is not unusual in situations\n> where a good portion of the active data is in cache (between\n> shared_buffers and the OS cache).  Please show us your overall\n> configuration and give a description of the hardware (how many of\n> what kind of cores, how much RAM, what sort of storage system).  The\n> configuration part can be obtained by running the query on this page\n> and pasting the result into your next post:\n>\n> http://wiki.postgresql.org/wiki/Server_Configuration\n>\n> There are probably some other configuration adjustments you could do\n> to ensure that good plans are chosen.\n\nThe very first thing to check is effective_cache_size and to set it to\na reasonable value.\n\nmerlin\n", "msg_date": "Tue, 26 Apr 2011 20:04:13 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Wed, Apr 27, 2011 at 3:04 AM, Merlin Moncure <[email protected]> wrote:\n> The very first thing to check is effective_cache_size and to set it to\n> a reasonable value.\n>\n\nThe problem there, I think, is that the planner is doing a full join,\ninstead of a semi-join.\n", "msg_date": "Wed, 27 Apr 2011 09:22:31 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Wed, Apr 27, 2011 at 9:22 AM, Claudio Freire <[email protected]> wrote:\n> The problem there, I think, is that the planner is doing a full join,\n> instead of a semi-join.\n\nOr, rather, computing cost as if it was a full join. I'm not sure why.\n", "msg_date": "Wed, 27 Apr 2011 09:23:32 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Tue, Apr 26, 2011 at 9:04 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Apr 26, 2011 at 4:37 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> Sok Ann Yap <[email protected]> wrote:\n>>\n>>> So, index scan wins by a very small margin over sequential scan\n>>> after the tuning. I am a bit puzzled because index scan is more\n>>> than 3000 times faster in this case, but the estimated costs are\n>>> about the same. Did I do something wrong?\n>>\n>> Tuning is generally needed to get best performance from PostgreSQL.\n>> Needing to reduce random_page_cost is not unusual in situations\n>> where a good portion of the active data is in cache (between\n>> shared_buffers and the OS cache).  Please show us your overall\n>> configuration and give a description of the hardware (how many of\n>> what kind of cores, how much RAM, what sort of storage system).  The\n>> configuration part can be obtained by running the query on this page\n>> and pasting the result into your next post:\n>>\n>> http://wiki.postgresql.org/wiki/Server_Configuration\n>>\n>> There are probably some other configuration adjustments you could do\n>> to ensure that good plans are chosen.\n>\n> The very first thing to check is effective_cache_size and to set it to\n> a reasonable value.\n\nActually, effective_cache_size has no impact on costing except when\nplanning a nested loop with inner index scan. So, a query against a\nsingle table can never benefit from changing that setting. 
Kevin's\nsuggestion of adjusting seq_page_cost and random_page_cost is the way\nto go.\n\nWe've talked in the past (and I still think it's a good idea, but\nhaven't gotten around to doing anything about it) about adjusting the\nplanner to attribute to each relation the percentage of its pages\nwhich we believe we'll find in cache. Although many complicated ideas\nfor determining that percentage have been proposed, my favorite one is\nfairly simple: assume that small relations will be mostly or entirely\ncached, and that big ones won't be. Allow the administrator to\noverride the result on a per-relation basis. It's difficult to\nimagine a situation where the planner should assume that a relation\nwith only handful of pages isn't going to be cached. Even if it\nisn't, as soon as someone begins accessing it, it will be.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 13 May 2011 13:44:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Robert Haas <[email protected]> wrote:\n \n> We've talked in the past (and I still think it's a good idea, but\n> haven't gotten around to doing anything about it) about adjusting\n> the planner to attribute to each relation the percentage of its\n> pages which we believe we'll find in cache. Although many\n> complicated ideas for determining that percentage have been\n> proposed, my favorite one is fairly simple: assume that small\n> relations will be mostly or entirely cached, and that big ones\n> won't be. Allow the administrator to override the result on a\n> per-relation basis. It's difficult to imagine a situation where\n> the planner should assume that a relation with only handful of\n> pages isn't going to be cached. Even if it isn't, as soon as\n> someone begins accessing it, it will be.\n \nSimple as the heuristic is, I bet it would be effective. While one\ncan easily construct a synthetic case where it falls down, the ones\nI can think of aren't all that common, and you are suggesting an\noverride mechanism.\n \n-Kevin\n", "msg_date": "Fri, 13 May 2011 12:54:52 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force\n\t index scan" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Apr 26, 2011 at 9:04 PM, Merlin Moncure <[email protected]> wrote:\n>> The very first thing to check is effective_cache_size and to set it to\n>> a reasonable value.\n\n> Actually, effective_cache_size has no impact on costing except when\n> planning a nested loop with inner index scan. So, a query against a\n> single table can never benefit from changing that setting.\n\nThat's flat out wrong. 
It does affect the cost estimate for plain\nindexscan (and bitmap indexscan) plans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 May 2011 15:20:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan " }, { "msg_contents": "On Fri, May 13, 2011 at 3:20 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Tue, Apr 26, 2011 at 9:04 PM, Merlin Moncure <[email protected]> wrote:\n>>> The very first thing to check is effective_cache_size and to set it to\n>>> a reasonable value.\n>\n>> Actually, effective_cache_size has no impact on costing except when\n>> planning a nested loop with inner index scan.  So, a query against a\n>> single table can never benefit from changing that setting.\n>\n> That's flat out wrong.  It does affect the cost estimate for plain\n> indexscan (and bitmap indexscan) plans.\n\n<rereads code>\n\nOK, I agree. I obviously misinterpreted this code the last time I read it.\n\nI guess maybe the reason why it didn't matter for the OP is that - if\nthe size of the index page in pages is smaller than the pro-rated\nfraction of effective_cache_size allowed to the index - then the exact\nvalue doesn't affect the answer.\n\nI apparently need to study this code more.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 13 May 2011 16:04:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "\n> I guess maybe the reason why it didn't matter for the OP is that - if\n> the size of the index page in pages is smaller than the pro-rated\n> fraction of effective_cache_size allowed to the index - then the exact\n> value doesn't affect the answer.\n> \n> I apparently need to study this code more.\n\nFWIW: random_page_cost is meant to be the ratio between the cost of\nlooking up a single row as and index lookup, and the cost of looking up\nthat same row as part of a larger sequential scan. For specific\nstorage, that coefficient should be roughly the same regardless of the\ntable size. So if your plan for optimization involves manipulating RPC\nfor anything other than a change of storage, you're Doing It Wrong.\n\nInstead, we should be fixing the formulas these are based on and leaving\nRPC alone.\n\nFor any data page, there are actually four costs associated with each\ntuple lookup, per:\n\nin-memory/seq\t| on disk/seq\n----------------+----------------\nin-memory/random| on disk/random\n\n(yes, there's actually more for bitmapscan etc. but the example holds)\n\nFor any given tuple lookup, then, you can assign a cost based on where\nyou think that tuple falls in that quadrant map. Since this is all\nprobability-based, you'd be assigning a cost as a mixed % of in-memory\nand on-disk costs. 
Improvements in accuracy of this formula would come\nthrough improvements in accuracy in predicting if a particular data page\nwill be in memory.\n\nThis is what the combination of random_page_cost and\neffective_cache_size ought to supply, but I don't think it does, quite.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Fri, 13 May 2011 13:13:41 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "2011/5/13 Josh Berkus <[email protected]>:\n>\n>> I guess maybe the reason why it didn't matter for the OP is that - if\n>> the size of the index page in pages is smaller than the pro-rated\n>> fraction of effective_cache_size allowed to the index - then the exact\n>> value doesn't affect the answer.\n>>\n>> I apparently need to study this code more.\n>\n> FWIW: random_page_cost is meant to be the ratio between the cost of\n> looking up a single row as and index lookup, and the cost of looking up\n> that same row as part of a larger sequential scan.  For specific\n> storage, that coefficient should be roughly the same regardless of the\n> table size.  So if your plan for optimization involves manipulating RPC\n> for anything other than a change of storage, you're Doing It Wrong.\n>\n> Instead, we should be fixing the formulas these are based on and leaving\n> RPC alone.\n>\n> For any data page, there are actually four costs associated with each\n> tuple lookup, per:\n>\n> in-memory/seq   | on disk/seq\n> ----------------+----------------\n> in-memory/random| on disk/random\n\nit lacks some more theorical like sort_page/temp_page : those are\nbased on a ratio of seq_page_cost and random_page_cost or a simple\nseq_page_cost (when working out of work_mem)\n\nmemory access is accounted with some 0.1 in some place AFAIR.\n(and memory random/seq is the same at the level of estimations we do)\n\n>\n> (yes, there's actually more for bitmapscan etc.  but the example holds)\n\n(if I read correctly the sources, for this one there is a linear\napproach to ponderate the cost between random_page cost and\nseq_page_cost on the heap page fetch plus the Mackert and Lohman\nformula, if needed, in its best usage : predicting what should be in\ncache *because* of the current query execution, not because of the\ncurrent status of the page cache)\n\n>\n> For any given tuple lookup, then, you can assign a cost based on where\n> you think that tuple falls in that quadrant map.  Since this is all\n> probability-based, you'd be assigning a cost as a mixed % of in-memory\n> and on-disk costs.  
Improvements in accuracy of this formula would come\n> through improvements in accuracy in predicting if a particular data page\n> will be in memory.\n>\n> This is what the combination of random_page_cost and\n> effective_cache_size ought to supply, but I don't think it does, quite.\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 13 May 2011 22:51:08 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Fri, May 13, 2011 at 4:13 PM, Josh Berkus <[email protected]> wrote:\n> Instead, we should be fixing the formulas these are based on and leaving\n> RPC alone.\n>\n> For any data page, there are actually four costs associated with each\n> tuple lookup, per:\n\nAll true. I suspect that in practice the different between random and\nsequential memory page costs is small enough to be ignorable, although\nof course I might be wrong. I've never seen a database that was fully\ncached in memory where it was necessary to set\nrandom_page_cost>seq_page_cost to get good plans -- no doubt partly\nbecause even if the pages were consecutive on disk, there's no reason\nto suppose they would be so in memory, and we certainly wouldn't know\none way or the other at planning time. But I agree we should add a\ncached_page_cost as part of all this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 13 May 2011 22:20:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Sat, May 14, 2011 at 3:13 AM, Josh Berkus <[email protected]> wrote:\n\n> This is what the combination of random_page_cost and\n> effective_cache_size ought to supply, but I don't think it does, quite.\n\nI think random_page_cost causes problems because I need to combine\ndisk random access time, which I can measure, with a guesstimate of\nthe disk cache hit rate. It would be lovely if these two variables\nwere separate. It would be even lovelier if the disk cache hit rate\ncould be probed at run time and didn't need setting at all, but I\nsuspect that isn't possible on some platforms.\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n", "msg_date": "Sun, 15 May 2011 10:49:02 +0700", "msg_from": "Stuart Bishop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Robert,\n\n> All true. I suspect that in practice the different between random and\n> sequential memory page costs is small enough to be ignorable, although\n> of course I might be wrong. \n\nThis hasn't been my experience, although I have not carefully measured\nit. In fact, there's good reason to suppose that, if you were selecting\n50% of more of a table, sequential access would still be faster even for\nan entirely in-memory table.\n\nAs a parallel to our development, Redis used to store all data as linked\nlists, making every object lookup effectively a random lookup. 
They\nfound that even with a database which is pinned in memory, creating a\ndata page structure (they call it \"ziplists\") and supporting sequential\nscans was up to 10X faster for large lists.\n\nSo I would assume that there is still a coefficient difference between\nseeks and scans in memory until proven otherwise.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Sun, 15 May 2011 11:08:53 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "Stuart,\n\n> I think random_page_cost causes problems because I need to combine\n> disk random access time, which I can measure, with a guesstimate of\n> the disk cache hit rate.\n\nSee, that's wrong. Disk cache hit rate is what effective_cache_size\n(ECS) is for.\n\nReally, there's several factors which should be going into the planner's\nestimates to determine a probability of a table being cached:\n\n* ratio between total database size and ECS\n* ratio between table size and ECS\n* ratio between index size and ECS\n* whether the table is \"hot\" or not\n* whether the index is \"hot\" or not\n\nThe last two statistics are critically important for good estimation,\nand they are not things we currently collect. By \"hot\" I mean: is this\na relation which is accessed several times per minute/hour and is thus\nlikely to be in the cache when we need it? Currently, we have no way of\nknowing that.\n\nWithout \"hot\" statistics, we're left with guessing based on size, which\nresults in bad plans for small tables in large databases which are\naccessed infrequently.\n\nMind you, for large tables it would be even better to go beyond that and\nactually have some knowledge of which disk pages might be in cache.\nHowever, I think that's beyond feasibility for current software/OSes.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Sun, 15 May 2011 11:20:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "On Sun, May 15, 2011 at 2:08 PM, Josh Berkus <[email protected]> wrote:\n>> All true.  I suspect that in practice the different between random and\n>> sequential memory page costs is small enough to be ignorable, although\n>> of course I might be wrong.\n>\n> This hasn't been my experience, although I have not carefully measured\n> it.  In fact, there's good reason to suppose that, if you were selecting\n> 50% of more of a table, sequential access would still be faster even for\n> an entirely in-memory table.\n>\n> As a parallel to our development, Redis used to store all data as linked\n> lists, making every object lookup effectively a random lookup.  They\n> found that even with a database which is pinned in memory, creating a\n> data page structure (they call it \"ziplists\") and supporting sequential\n> scans was up to 10X faster for large lists.\n>\n> So I would assume that there is still a coefficient difference between\n> seeks and scans in memory until proven otherwise.\n\nWell, anything's possible. 
But I wonder whether the effects you are\ndescribing might result from a reduction in the *number* of pages\naccessed rather than a change in the access pattern.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 15 May 2011 16:32:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "2011/5/15 Josh Berkus <[email protected]>:\n> Stuart,\n>\n>> I think random_page_cost causes problems because I need to combine\n>> disk random access time, which I can measure, with a guesstimate of\n>> the disk cache hit rate.\n>\n> See, that's wrong. Disk cache hit rate is what effective_cache_size\n> (ECS) is for.\n>\n> Really, there's several factors which should be going into the planner's\n> estimates to determine a probability of a table being cached:\n>\n> * ratio between total database size and ECS\n> * ratio between table size and ECS\n> * ratio between index size and ECS\n> * whether the table is \"hot\" or not\n> * whether the index is \"hot\" or not\n>\n> The last two statistics are critically important for good estimation,\n> and they are not things we currently collect.  By \"hot\" I mean: is this\n> a relation which is accessed several times per minute/hour and is thus\n> likely to be in the cache when we need it?  Currently, we have no way of\n> knowing that.\n>\n> Without \"hot\" statistics, we're left with guessing based on size, which\n> results in bad plans for small tables in large databases which are\n> accessed infrequently.\n>\n> Mind you, for large tables it would be even better to go beyond that and\n> actually have some knowledge of which\n\n*which* ?\n do you mean 'area' of the tables ?\n\n> disk pages might be in cache.\n> However, I think that's beyond feasibility for current software/OSes.\n\nmaybe not :) mincore is available in many OSes, and windows have\noptions to get those stats too.\n\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Sun, 15 May 2011 23:45:55 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On 16/05/11 05:45, C�dric Villemain wrote:\n> 2011/5/15 Josh Berkus <[email protected]>:\n>> disk pages might be in cache.\n>> However, I think that's beyond feasibility for current software/OSes.\n> \n> maybe not :) mincore is available in many OSes, and windows have\n> options to get those stats too.\n\nAFAIK, mincore() is only useful for mmap()ed files and for finding out\nif it's safe to access certain blocks of memory w/o risking triggering\nheavy swapping.\n\nIt doesn't provide any visibility into the OS's block device / file\nsystem caches; you can't ask it \"how much of this file is cached in RAM\"\nor \"is this range of blocks in this file cached in RAM\".\n\nEven if you could, it's hard to see how an approach that relied on\nasking the OS through system calls about the cache state when planning\nevery query could be fast enough to be viable.\n\n--\nCraig Ringer\n", "msg_date": 
"Mon, 16 May 2011 09:05:25 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "Craig Ringer wrote:\n> AFAIK, mincore() is only useful for mmap()ed files and for finding out\n> if it's safe to access certain blocks of memory w/o risking triggering\n> heavy swapping.\n>\n> It doesn't provide any visibility into the OS's block device / file\n> system caches; you can't ask it \"how much of this file is cached in RAM\"\n> or \"is this range of blocks in this file cached in RAM\".\n> \n\nYou should try out pgfincore if you think this can't be done!\n\n> Even if you could, it's hard to see how an approach that relied on\n> asking the OS through system calls about the cache state when planning\n> every query could be fast enough to be viable.\n> \n\nYou can't do it in real-time. You don't necessarily want that to even \nif it were possible; too many possibilities for nasty feedback loops \nwhere you always favor using some marginal index that happens to be in \nmemory, and therefore never page in things that would be faster once \nthey're read. The only reasonable implementation that avoids completely \nunstable plans is to scan this data periodically and save some \nstatistics on it--the way ANALYZE does--and then have that turn into a \nplanner input.\n\nThe related secondary idea of just making assumptions about small \ntables/indexes, too, may be a useful heuristic to layer on top of this. \nThere's a pile of ideas here that all seem reasonable both in terms of \nmodeling real-world behavior and as things that could be inserted into \nthe optimizer. As usual, I suspect that work is needs to be followed by \na giant testing exercise though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sun, 15 May 2011 21:18:06 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "On 2011-05-16 03:18, Greg Smith wrote:\n> You can't do it in real-time. You don't necessarily want that to\n> even if it were possible; too many possibilities for nasty feedback\n> loops where you always favor using some marginal index that happens\n> to be in memory, and therefore never page in things that would be\n> faster once they're read. The only reasonable implementation that\n> avoids completely unstable plans is to scan this data periodically\n> and save some statistics on it--the way ANALYZE does--and then have\n> that turn into a planner input.\n\nWould that be feasible? Have process collecting the data every now-and-then\nprobably picking some conservative-average function and feeding\nit into pg_stats for each index/relation?\n\nTo me it seems like a robust and fairly trivial way to to get better \nnumbers. The\nfear is that the OS-cache is too much in flux to get any stable numbers out\nof it.\n\n-- \nJesper\n\n\n\n\n\n\n\n\n On 2011-05-16 03:18, Greg Smith wrote:\n> You can't do it in real-time.\n You don't necessarily want that to\n > even if it were possible; too many possibilities for nasty\n feedback\n > loops where you always favor using some marginal index that\n happens\n > to be in memory, and therefore never page in things that\n would be\n > faster once they're read. 
The only reasonable implementation\n that\n > avoids completely unstable plans is to scan this data\n periodically\n > and save some statistics on it--the way ANALYZE does--and\n then have\n > that turn into a planner input.\n\n Would that be feasible? Have process collecting the data every\n now-and-then\n probably picking some conservative-average function and feeding\n it into pg_stats for each index/relation? \n\n To me it seems like a robust and fairly trivial way to to get better\n numbers. The \n fear is that the OS-cache is too much in flux to get any stable\n numbers out\n of it.  \n\n -- \n Jesper", "msg_date": "Mon, 16 May 2011 06:41:58 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "On 2011-05-16 06:41, Jesper Krogh wrote:\n> On 2011-05-16 03:18, Greg Smith wrote:\n>> You can't do it in real-time. You don't necessarily want that to\n>> even if it were possible; too many possibilities for nasty feedback\n>> loops where you always favor using some marginal index that happens\n>> to be in memory, and therefore never page in things that would be\n>> faster once they're read. The only reasonable implementation that\n>> avoids completely unstable plans is to scan this data periodically\n>> and save some statistics on it--the way ANALYZE does--and then have\n>> that turn into a planner input.\n>\n> Would that be feasible? Have process collecting the data every \n> now-and-then\n> probably picking some conservative-average function and feeding\n> it into pg_stats for each index/relation?\n>\n> To me it seems like a robust and fairly trivial way to to get better \n> numbers. The\n> fear is that the OS-cache is too much in flux to get any stable \n> numbers out\n> of it.\n\nOk, it may not work as well with index'es, since having 1% in cache may very\nwell mean that 90% of all requested blocks are there.. for tables in should\nbe more trivial.\n\n-- \nJesper\n", "msg_date": "Mon, 16 May 2011 06:49:20 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "On Mon, May 16, 2011 at 12:49 AM, Jesper Krogh <[email protected]> wrote:\n>> To me it seems like a robust and fairly trivial way to to get better\n>> numbers. The\n>> fear is that the OS-cache is too much in flux to get any stable numbers\n>> out\n>> of it.\n>\n> Ok, it may not work as well with index'es, since having 1% in cache may very\n> well mean that 90% of all requested blocks are there.. for tables in should\n> be more trivial.\n\nTables can have hot spots, too. Consider a table that holds calendar\nreservations. Reservations can be inserted, updated, deleted. But\ntypically, the most recent data will be what is most actively\nmodified, and the older data will be relatively more (though not\ncompletely) static, and less frequently accessed. 
Such examples are\ncommon in many real-world applications.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 16 May 2011 10:34:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, May 16, 2011 at 12:49 AM, Jesper Krogh <[email protected]> wrote:\n>> Ok, it may not work as well with index'es, since having 1% in cache may very\n>> well mean that 90% of all requested blocks are there.. for tables in should\n>> be more trivial.\n\n> Tables can have hot spots, too. Consider a table that holds calendar\n> reservations. Reservations can be inserted, updated, deleted. But\n> typically, the most recent data will be what is most actively\n> modified, and the older data will be relatively more (though not\n> completely) static, and less frequently accessed. Such examples are\n> common in many real-world applications.\n\nYes. I'm not convinced that measuring the fraction of a table or index\nthat's in cache is really going to help us much. Historical cache hit\nrates might be useful, but only to the extent that the incoming query\nhas a similar access pattern to those in the (recent?) past. It's not\nan easy problem.\n\nI almost wonder if we should not try to measure this at all, but instead\nlet the DBA set a per-table or per-index number to use, analogous to the\noverride we added recently for column n-distinct statistics ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 May 2011 11:46:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan " }, { "msg_contents": "On Sun, May 15, 2011 at 9:49 PM, Jesper Krogh <[email protected]> wrote:\n>\n> Ok, it may not work as well with index'es, since having 1% in cache may very\n> well mean that 90% of all requested blocks are there.. for tables in should\n> be more trivial.\n\nWhy would the index have a meaningful hot-spot unless the underlying\ntable had one as well? (Of course the root block will be a hot-spot,\nbut certainly not 90% of all requests)\n\nCheers,\n\nJeff\n", "msg_date": "Mon, 16 May 2011 09:45:28 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Sun, May 15, 2011 at 9:49 PM, Jesper Krogh <[email protected]> wrote:\n>> Ok, it may not work as well with index'es, since having 1% in cache may very\n>> well mean that 90% of all requested blocks are there.. for tables in should\n>> be more trivial.\n\n> Why would the index have a meaningful hot-spot unless the underlying\n> table had one as well? (Of course the root block will be a hot-spot,\n> but certainly not 90% of all requests)\n\nThe accesses to an index are far more likely to be clustered than the\naccesses to the underlying table, because the index is organized in a\nway that's application-meaningful and the table not so much. 
Continuing\nthe earlier example of a timestamp column, accesses might preferentially\nhit near the right end of the index while the underlying rows are all\nover the table.\n\nIOW, hot spots measured at the row level and hot spots measured at the\npage level could very easily be different between table and index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 May 2011 13:24:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan " }, { "msg_contents": "> The accesses to an index are far more likely to be clustered than the\n> accesses to the underlying table, because the index is organized in a\n> way that's application-meaningful and the table not so much.\n\nSo, to clarify, are you saying that if query were actually requesting\nrows uniformly random, then there would be no reason to suspect that\nindex accesses would have hotspots? It seems like the index structure\n( ie, the top node in b-trees ) could also get in the way.\n\nBest,\nNathan\n", "msg_date": "Mon, 16 May 2011 12:10:37 -0700", "msg_from": "Nathan Boley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Nathan Boley <[email protected]> writes:\n>> The accesses to an index are far more likely to be clustered than the\n>> accesses to the underlying table, because the index is organized in a\n>> way that's application-meaningful and the table not so much.\n\n> So, to clarify, are you saying that if query were actually requesting\n> rows uniformly random, then there would be no reason to suspect that\n> index accesses would have hotspots? It seems like the index structure\n> ( ie, the top node in b-trees ) could also get in the way.\n\nThe upper nodes would tend to stay in cache, yes, but we already assume\nthat in the index access cost model, in a kind of indirect way: the\nmodel only considers leaf-page accesses in the first place ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 May 2011 15:41:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan " }, { "msg_contents": "On May 16, 2011, at 10:46 AM, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> On Mon, May 16, 2011 at 12:49 AM, Jesper Krogh <[email protected]> wrote:\n>>> Ok, it may not work as well with index'es, since having 1% in cache may very\n>>> well mean that 90% of all requested blocks are there.. for tables in should\n>>> be more trivial.\n> \n>> Tables can have hot spots, too. Consider a table that holds calendar\n>> reservations. Reservations can be inserted, updated, deleted. But\n>> typically, the most recent data will be what is most actively\n>> modified, and the older data will be relatively more (though not\n>> completely) static, and less frequently accessed. Such examples are\n>> common in many real-world applications.\n> \n> Yes. I'm not convinced that measuring the fraction of a table or index\n> that's in cache is really going to help us much. Historical cache hit\n> rates might be useful, but only to the extent that the incoming query\n> has a similar access pattern to those in the (recent?) past. 
It's not\n> an easy problem.\n> \n> I almost wonder if we should not try to measure this at all, but instead\n> let the DBA set a per-table or per-index number to use, analogous to the\n> override we added recently for column n-distinct statistics ...\n\nI think the challenge there would be how to define the scope of the hot-spot. Is it the last X pages? Last X serial values? Something like correlation?\n\nHmm... it would be interesting if we had average relation access times for each stats bucket on a per-column basis; that would give the planner a better idea of how much IO overhead there would be for a given WHERE clause.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Tue, 17 May 2011 14:19:02 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan " }, { "msg_contents": "Jim Nasby wrote:\n> I think the challenge there would be how to define the scope of the hot-spot. Is it the last X pages? Last X serial values? Something like correlation?\n>\n> Hmm... it would be interesting if we had average relation access times for each stats bucket on a per-column basis; that would give the planner a better idea of how much IO overhead there would be for a given WHERE clause\n\nYou've already given one reasonable first answer to your question here. \nIf you defined a usage counter for each histogram bucket, and \nincremented that each time something from it was touched, that could \nlead to a very rough way to determine access distribution. Compute a \nratio of the counts in those buckets, then have an estimate of the total \ncached percentage; multiplying the two will give you an idea how much of \nthat specific bucket might be in memory. It's not perfect, and you need \nto incorporate some sort of aging method to it (probably weighted \naverage based), but the basic idea could work.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 18 May 2011 23:00:31 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "On Wed, May 18, 2011 at 11:00 PM, Greg Smith <[email protected]> wrote:\n> Jim Nasby wrote:\n>> I think the challenge there would be how to define the scope of the\n>> hot-spot. Is it the last X pages? Last X serial values? Something like\n>> correlation?\n>>\n>> Hmm... it would be interesting if we had average relation access times for\n>> each stats bucket on a per-column basis; that would give the planner a\n>> better idea of how much IO overhead there would be for a given WHERE clause\n>\n> You've already given one reasonable first answer to your question here.  If\n> you defined a usage counter for each histogram bucket, and incremented that\n> each time something from it was touched, that could lead to a very rough way\n> to determine access distribution.  Compute a ratio of the counts in those\n> buckets, then have an estimate of the total cached percentage; multiplying\n> the two will give you an idea how much of that specific bucket might be in\n> memory.  
It's not perfect, and you need to incorporate some sort of aging\n> method to it (probably weighted average based), but the basic idea could\n> work.\n\nMaybe I'm missing something here, but it seems like that would be\nnightmarishly slow. Every time you read a tuple, you'd have to look\nat every column of the tuple and determine which histogram bucket it\nwas in (or, presumably, which MCV it is, since those aren't included\nin working out the histogram buckets). That seems like it would slow\ndown a sequential scan by at least 10x.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 19 May 2011 10:53:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On May 19, 2011, at 9:53 AM, Robert Haas wrote:\n> On Wed, May 18, 2011 at 11:00 PM, Greg Smith <[email protected]> wrote:\n>> Jim Nasby wrote:\n>>> I think the challenge there would be how to define the scope of the\n>>> hot-spot. Is it the last X pages? Last X serial values? Something like\n>>> correlation?\n>>> \n>>> Hmm... it would be interesting if we had average relation access times for\n>>> each stats bucket on a per-column basis; that would give the planner a\n>>> better idea of how much IO overhead there would be for a given WHERE clause\n>> \n>> You've already given one reasonable first answer to your question here. If\n>> you defined a usage counter for each histogram bucket, and incremented that\n>> each time something from it was touched, that could lead to a very rough way\n>> to determine access distribution. Compute a ratio of the counts in those\n>> buckets, then have an estimate of the total cached percentage; multiplying\n>> the two will give you an idea how much of that specific bucket might be in\n>> memory. It's not perfect, and you need to incorporate some sort of aging\n>> method to it (probably weighted average based), but the basic idea could\n>> work.\n> \n> Maybe I'm missing something here, but it seems like that would be\n> nightmarishly slow. Every time you read a tuple, you'd have to look\n> at every column of the tuple and determine which histogram bucket it\n> was in (or, presumably, which MCV it is, since those aren't included\n> in working out the histogram buckets). That seems like it would slow\n> down a sequential scan by at least 10x.\n\nYou definitely couldn't do it real-time. But you might be able to copy the tuple somewhere and have a background process do the analysis.\n\nThat said, it might be more productive to know what blocks are available in memory and use correlation to guesstimate whether a particular query will need hot or cold blocks. Or perhaps we create a different structure that lets you track the distribution of each column linearly through the table; something more sophisticated than just using correlation.... perhaps something like indicating which stats bucket was most prevalent in each block/range of blocks in a table. That information would allow you to estimate exactly what blocks in the table you're likely to need...\n--\nJim C. 
Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Thu, 19 May 2011 13:39:58 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Thu, May 19, 2011 at 2:39 PM, Jim Nasby <[email protected]> wrote:\n> On May 19, 2011, at 9:53 AM, Robert Haas wrote:\n>> On Wed, May 18, 2011 at 11:00 PM, Greg Smith <[email protected]> wrote:\n>>> Jim Nasby wrote:\n>>>> I think the challenge there would be how to define the scope of the\n>>>> hot-spot. Is it the last X pages? Last X serial values? Something like\n>>>> correlation?\n>>>>\n>>>> Hmm... it would be interesting if we had average relation access times for\n>>>> each stats bucket on a per-column basis; that would give the planner a\n>>>> better idea of how much IO overhead there would be for a given WHERE clause\n>>>\n>>> You've already given one reasonable first answer to your question here.  If\n>>> you defined a usage counter for each histogram bucket, and incremented that\n>>> each time something from it was touched, that could lead to a very rough way\n>>> to determine access distribution.  Compute a ratio of the counts in those\n>>> buckets, then have an estimate of the total cached percentage; multiplying\n>>> the two will give you an idea how much of that specific bucket might be in\n>>> memory.  It's not perfect, and you need to incorporate some sort of aging\n>>> method to it (probably weighted average based), but the basic idea could\n>>> work.\n>>\n>> Maybe I'm missing something here, but it seems like that would be\n>> nightmarishly slow.  Every time you read a tuple, you'd have to look\n>> at every column of the tuple and determine which histogram bucket it\n>> was in (or, presumably, which MCV it is, since those aren't included\n>> in working out the histogram buckets).  That seems like it would slow\n>> down a sequential scan by at least 10x.\n>\n> You definitely couldn't do it real-time. But you might be able to copy the tuple somewhere and have a background process do the analysis.\n>\n> That said, it might be more productive to know what blocks are available in memory and use correlation to guesstimate whether a particular query will need hot or cold blocks. Or perhaps we create a different structure that lets you track the distribution of each column linearly through the table; something more sophisticated than just using correlation.... perhaps something like indicating which stats bucket was most prevalent in each block/range of blocks in a table. That information would allow you to estimate exactly what blocks in the table you're likely to need...\n\nWell, all of that stuff sounds impractically expensive to me... but I\njust work here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 19 May 2011 16:07:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "2011/5/19 Jim Nasby <[email protected]>:\n> On May 19, 2011, at 9:53 AM, Robert Haas wrote:\n>> On Wed, May 18, 2011 at 11:00 PM, Greg Smith <[email protected]> wrote:\n>>> Jim Nasby wrote:\n>>>> I think the challenge there would be how to define the scope of the\n>>>> hot-spot. Is it the last X pages? Last X serial values? Something like\n>>>> correlation?\n>>>>\n>>>> Hmm... 
it would be interesting if we had average relation access times for\n>>>> each stats bucket on a per-column basis; that would give the planner a\n>>>> better idea of how much IO overhead there would be for a given WHERE clause\n>>>\n>>> You've already given one reasonable first answer to your question here.  If\n>>> you defined a usage counter for each histogram bucket, and incremented that\n>>> each time something from it was touched, that could lead to a very rough way\n>>> to determine access distribution.  Compute a ratio of the counts in those\n>>> buckets, then have an estimate of the total cached percentage; multiplying\n>>> the two will give you an idea how much of that specific bucket might be in\n>>> memory.  It's not perfect, and you need to incorporate some sort of aging\n>>> method to it (probably weighted average based), but the basic idea could\n>>> work.\n>>\n>> Maybe I'm missing something here, but it seems like that would be\n>> nightmarishly slow.  Every time you read a tuple, you'd have to look\n>> at every column of the tuple and determine which histogram bucket it\n>> was in (or, presumably, which MCV it is, since those aren't included\n>> in working out the histogram buckets).  That seems like it would slow\n>> down a sequential scan by at least 10x.\n>\n> You definitely couldn't do it real-time. But you might be able to copy the tuple somewhere and have a background process do the analysis.\n>\n> That said, it might be more productive to know what blocks are available in memory and use correlation to guesstimate whether a particular query will need hot or cold blocks. Or perhaps we create a different structure that lets you track the distribution of each column linearly through the table; something more sophisticated than just using correlation.... perhaps something like indicating which stats bucket was most prevalent in each block/range of blocks in a table. That information would allow you to estimate exactly what blocks in the table you're likely to need...\n\nThose are very good ideas I would get in mind for vacuum/checkpoint\ntasks: if you are able to know hot and cold data, then order it in the\nsegments of the relation. But making it work at the planner level\nlooks hard. I am not opposed to the idea, but no idea how to do it\nright now.\n\n> --\n> Jim C. Nasby, Database Architect                   [email protected]\n> 512.569.9461 (cell)                         http://jim.nasby.net\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 20 May 2011 00:27:35 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "\n> Well, all of that stuff sounds impractically expensive to me... but I\n> just work here.\n\nI'll point out that the simple version, which just checks for hot tables\nand indexes, would improve estimates greatly and be a LOT less\ncomplicated than these proposals. 
Certainly having some form of\nblock-based or range-based stats would be better, but it also sounds\nhard enough to maybe never get done.\n\nHaving user-accessible \"hot\" stats would also be useful to DBAs.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Mon, 23 May 2011 12:08:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" }, { "msg_contents": "On Mon, May 23, 2011 at 3:08 PM, Josh Berkus <[email protected]> wrote:\n>\n>> Well, all of that stuff sounds impractically expensive to me... but I\n>> just work here.\n>\n> I'll point out that the simple version, which just checks for hot tables\n> and indexes, would improve estimates greatly and be a LOT less\n> complicated than these proposals.\n\nI realize I'm sounding like a broken record here, but as far as I can\ntell there is absolutely zero evidence that that would be better. I'm\nsure you're in good company thinking so, but the list of things that\ncould skew (or should I say, screw) the estimates is long and painful;\nand if those estimates are wrong, you'll end up with something that is\nboth worse and less predictable than the status quo. First, I haven't\nseen a shred of hard evidence that the contents of the buffer cache or\nOS cache are stable enough to be relied upon, and we've repeatedly\ndiscussed workloads where that might not be true. Has anyone done a\nsystematic study of this on a variety real production systems? If so,\nthe results haven't been posted here, at least not that I can recall.\nSecond, even if we were willing to accept that we could obtain\nrelatively stable and accurate measurements of this data, who is to\nsay that basing plans on it would actually result in an improvement in\nplan quality? That may seem obvious, but I don't think it is. The\nproposed method is a bit like trying to determine the altitude of a\nhot air balloon by throwing the ballast over the side and timing how\nlong it takes to hit the ground. Executing plans that are based on\nthe contents of the cache will change the contents of the cache, which\nwill in turn change the plans. The idea that we can know, without any\nexperimentation, how that's going to shake out, seems to me to be an\nexercise in unjustified optimism of the first order.\n\nSorry to sound grumpy and pessimistic, but I really think we're\nletting our enthusiasm get way, way ahead of the evidence.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 23 May 2011 22:44:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Hello.\n\nAs of me, all this \"hot\" thing really looks like uncertain and dynamic \nenough.\nTwo things that I could directly use right now (and they are needed in \npair) are:\n1)Per-table/index/database bufferpools (split shared buffer into parts, \nallow to specify which index/table/database goes where)\n2)Per-table/index cost settings\n\nIf I had this, I could allocate specific bufferpools for tables/indexes \nthat MUST be hot in memory and set low costs for this specific tables.\nP.S. 
A third thing that would be great to have alongside these two is a \"Load on\nstartup\" flag to automatically populate the bufferpools with a fast sequential\nread, but that can be easily emulated with a statement.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Tue, 24 May 2011 12:12:34 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index\n scan" } ]
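A minimal sketch of the "emulated with a statement" warm-up mentioned just above, using an invented table name (hot_table) and an equally invented primary-key index name. A plain sequential scan mostly warms the OS cache rather than shared_buffers, because large scans go through a small ring buffer, so the first statement is only an approximation; on later releases (9.4 and up) the pg_prewarm extension loads a specific relation, index included, straight into shared_buffers.

-- crude warm-up of a (hypothetical) hot table: touches every heap page
SELECT count(*) FROM hot_table;

-- with the pg_prewarm extension available (PostgreSQL 9.4 and later):
CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('hot_table');       -- heap pages into shared_buffers
SELECT pg_prewarm('hot_table_pkey');  -- index pages into shared_buffers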
[ { "msg_contents": "I'm running pgsql on an m1.large EC2 instance with 7.5gb available memory. \n\nThe free command shows 7gb of free+cached. My understand from the docs is that I should dedicate 1.75gb to shared_buffers (25%) and set effective_cache_size to 7gb. \n\nIs this correct? I'm running 64-bit Ubuntu 10.10, e.g. \n\nLinux ... 2.6.35-28-virtual #50-Ubuntu SMP Fri Mar 18 19:16:26 UTC 2011 x86_64 GNU/Linux\n\n\tThanks, Joel\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n", "msg_date": "Tue, 26 Apr 2011 16:15:42 +0100", "msg_from": "Joel Reymont <[email protected]>", "msg_from_op": true, "msg_subject": "tuning on ec2" }, { "msg_contents": "On Tue, Apr 26, 2011 at 11:15 AM, Joel Reymont <[email protected]> wrote:\n> I'm running pgsql on an m1.large EC2 instance with 7.5gb available memory.\n>\n> The free command shows 7gb of free+cached. My understand from the docs is that I should dedicate 1.75gb to shared_buffers (25%) and set effective_cache_size to 7gb.\n\nSounds like a reasonable starting point. You could certainly fiddle\naround a bit - especially with shared_buffers - to see if some other\nsetting works better, but that should be in the ballpark.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 11 May 2011 23:38:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning on ec2" }, { "msg_contents": "\n> Sounds like a reasonable starting point. You could certainly fiddle\n> around a bit - especially with shared_buffers - to see if some other\n> setting works better, but that should be in the ballpark.\n\nI tend to set it a bit higher on EC2 to discourage the VM from\novercommitting memory I need. So, I'd do 2.5GB for that one.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n", "msg_date": "Thu, 12 May 2011 17:10:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning on ec2" } ]
[ { "msg_contents": "Folks,\n\nI'm trying to optimize the following query that performs KL Divergence [1]. As you can see the distance function operates on vectors of 150 floats. \n\nThe query takes 12 minutes to run on an idle (apart from pgsql) EC2 m1 large instance with 2 million documents in the docs table. The CPU is pegged at 100% during this time. I need to be able to both process concurrent distance queries and otherwise use the database.\n\nI have the option of moving this distance calculation off of PG but are there other options?\n\nIs there anything clearly wrong that I'm doing here?\n\nWould it speed things up to make the float array a custom data type backed by C code?\n\n\tThanks in advance, Joel\n\n[1] http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence\n\n---\n\nCREATE DOMAIN topics AS float[150];\nCREATE DOMAIN doc_id AS varchar(64);\n\nCREATE TABLE docs\n(\n id serial,\n doc_id doc_id NOT NULL PRIMARY KEY,\n topics topics NOT NULL\n);\n\nCREATE OR REPLACE FUNCTION docs_within_distance(vec topics, threshold float) \nRETURNS TABLE(id doc_id, distance float) AS $$\nBEGIN\n\tRETURN QUERY\n SELECT * \n FROM (SELECT doc_id, (SELECT sum(vec[i] * ln(vec[i] / topics[i])) \n FROM generate_subscripts(topics, 1) AS i\n WHERE topics[i] > 0) AS distance\n FROM docs) AS tab\n WHERE tab.distance <= threshold;\nEND;\n$$ LANGUAGE plpgsql;\n\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n", "msg_date": "Tue, 26 Apr 2011 16:16:19 +0100", "msg_from": "Joel Reymont <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing a cpu-heavy query" } ]
[ { "msg_contents": "Sok Ann Yap wrote:\n> Kevin Grittner wrote:\n \n>> Please show us your overall configuration and give a description\n>> of the hardware (how many of what kind of cores, how much RAM,\n>> what sort of storage system).\n \n> Here's the configuration (this is just a low end laptop):\n \n> version | PostgreSQL 9.0.4 on x86_64-pc-linux-gnu,\n> compiled by GCC x86_64-pc-linux-gnu-gcc (Gentoo 4.5.2 p1.0,\n> pie-0.4.5) 4.5.2, 64-bit\n> checkpoint_segments | 16\n> default_statistics_target | 10000\n \nUsually overkill. If this didn't help, you should probably change it\nback.\n \n> effective_cache_size | 512MB\n> lc_collate | en_US.UTF-8\n> lc_ctype | en_US.UTF-8\n> listen_addresses | *\n> log_destination | syslog\n> log_min_duration_statement | 0\n> maintenance_work_mem | 256MB\n> max_connections | 100\n \nYou probably don't need this many connections.\n \n> max_stack_depth | 2MB\n> port | 5432\n> random_page_cost | 4\n> server_encoding | UTF8\n> shared_buffers | 256MB\n> silent_mode | on\n> TimeZone | Asia/Kuala_Lumpur\n> wal_buffers | 1MB\n> work_mem | 32MB\n> (20 rows)\n \nIt's hard to recommend other changes without knowing the RAM on the\nsystem. How many of what kind of CPUs would help, too.\n \n> The thing is, the query I posted was fairly simple (I think), and\n> PostgreSQL should be able to choose the 3000+ times faster index\n> scan with the default random_page_cost of 4.\n \nIt picks the plan with the lowest estimated cost. If it's not\npicking the best plan, that's usually an indication that you need to\nadjust cost factors so that estimates better model the actual costs.\n \n> If I need to reduce it to 2 when using a 5.4k rpm slow disk, what\n> is random_page_cost = 4 good for?\n \nIt's good for large databases with a lot of physical disk I/O. In\nfact, in some of those cases, it needs to be higher. In your test,\nthe numbers indicate that everything was cached in RAM. That makes\nthe effective cost very low.\n \nAlso, the odds are that you have more total cache space between the\nshared_buffers and the OS cache than the effective_cache_size\nsetting, so the optimizer doesn't expect the number of cache hits\nyou're getting on index usage.\n \n-Kevin\n", "msg_date": "Wed, 27 Apr 2011 07:40:58 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force\n\t index scan" }, { "msg_contents": "On Wed, Apr 27, 2011 at 8:40 PM, Kevin Grittner\n<[email protected]> wrote:\n> Sok Ann Yap  wrote:\n>> Kevin Grittner  wrote:\n>\n>>> Please show us your overall configuration and give a description\n>>> of the hardware (how many of what kind of cores, how much RAM,\n>>> what sort of storage system).\n>\n>> Here's the configuration (this is just a low end laptop):\n>\n>> version | PostgreSQL 9.0.4 on x86_64-pc-linux-gnu,\n>> compiled by GCC x86_64-pc-linux-gnu-gcc (Gentoo 4.5.2 p1.0,\n>> pie-0.4.5) 4.5.2, 64-bit\n>> checkpoint_segments | 16\n>> default_statistics_target | 10000\n>\n> Usually overkill.  
If this didn't help, you should probably change it\n> back.\n>\n>> effective_cache_size | 512MB\n>> lc_collate | en_US.UTF-8\n>> lc_ctype | en_US.UTF-8\n>> listen_addresses | *\n>> log_destination | syslog\n>> log_min_duration_statement | 0\n>> maintenance_work_mem | 256MB\n>> max_connections | 100\n>\n> You probably don't need this many connections.\n>\n>> max_stack_depth | 2MB\n>> port | 5432\n>> random_page_cost | 4\n>> server_encoding | UTF8\n>> shared_buffers | 256MB\n>> silent_mode | on\n>> TimeZone | Asia/Kuala_Lumpur\n>> wal_buffers | 1MB\n>> work_mem | 32MB\n>> (20 rows)\n>\n> It's hard to recommend other changes without knowing the RAM on the\n> system.  How many of what kind of CPUs would help, too.\n>\n>> The thing is, the query I posted was fairly simple (I think), and\n>> PostgreSQL should be able to choose the 3000+ times faster index\n>> scan with the default random_page_cost of 4.\n>\n> It picks the plan with the lowest estimated cost.  If it's not\n> picking the best plan, that's usually an indication that you need to\n> adjust cost factors so that estimates better model the actual costs.\n>\n>> If I need to reduce it to 2 when using a 5.4k rpm slow disk, what\n>> is random_page_cost = 4 good for?\n>\n> It's good for large databases with a lot of physical disk I/O.  In\n> fact, in some of those cases, it needs to be higher.  In your test,\n> the numbers indicate that everything was cached in RAM.  That makes\n> the effective cost very low.\n>\n> Also, the odds are that you have more total cache space between the\n> shared_buffers and the OS cache than the effective_cache_size\n> setting, so the optimizer doesn't expect the number of cache hits\n> you're getting on index usage.\n>\n> -Kevin\n>\n\nThanks for the tips and explanation. I wrongly assumed the\nrandom_page_cost value is independent from caching.\n\nNow, let's go back to the original query:\n\nSELECT\n salutations.id,\n salutations.name,\n EXISTS (\n SELECT 1\n FROM contacts\n WHERE salutations.id = contacts.salutation_id\n ) AS in_use\nFROM salutations\n\nIf I split up the query, i.e. running this once:\n\nSELECT\n salutations.id,\n salutations.name\nFROM salutations\n\nand then running this 44 times, once for each row:\n\nSELECT\n EXISTS (\n SELECT 1\n FROM contacts\n WHERE contacts.salutation_id = ?\n ) AS in_use\n\nI can see that PostgreSQL will smartly pick the best plan, i.e. 
for\ncommon salutations (Madam, Ms, etc), it will do sequential scan, while\nfor salutations that are rarely used or not used at all, it will do\nindex scan.\n\nAnyway, the overhead of spawning 44 extra queries means that it is\nstill better off for me to stick with the original query and tune\nPostgreSQL to choose index scan.\n", "msg_date": "Thu, 28 Apr 2011 06:34:28 +0800", "msg_from": "Sok Ann Yap <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "Sok Ann Yap <[email protected]> wrote:\n \n> Anyway, the overhead of spawning 44 extra queries means that it is\n> still better off for me to stick with the original query and tune\n> PostgreSQL to choose index scan.\n \nMaybe, but what is *best* for you is to tune PostgreSQL so that your\ncosts are accurately modeled, at which point it will automatically\npick the best plan for most or all of your queries without you\nneeding to worry about it.\n \nIf you set your effective_cache_size to the sum of shared_buffers\nand what your OS reports as cache after you've been running a while,\nthat will help the optimizer know what size index fits in RAM, and\nwill tend to encourage index use. If the active portion of your\ndata is heavily cached, you might want to set random_page_cost and\nseq_page_cost to the same value, and make that value somewhere in\nthe 0.1 to 0.05 range. If you have moderate caching, using 1 and 2\ncan be good.\n \nIf you're still not getting reasonable plans, please post again with\nmore information about your hardware along with the query and its\nEXPLAIN ANALYZE output.\n \n-Kevin\n", "msg_date": "Wed, 27 Apr 2011 18:23:36 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force\n\t index scan" }, { "msg_contents": "On Thu, Apr 28, 2011 at 7:23 AM, Kevin Grittner\n<[email protected]> wrote:\n> Sok Ann Yap <[email protected]> wrote:\n>\n>> Anyway, the overhead of spawning 44 extra queries means that it is\n>> still better off for me to stick with the original query and tune\n>> PostgreSQL to choose index scan.\n>\n> Maybe, but what is *best* for you is to tune PostgreSQL so that your\n> costs are accurately modeled, at which point it will automatically\n> pick the best plan for most or all of your queries without you\n> needing to worry about it.\n>\n> If you set your effective_cache_size to the sum of shared_buffers\n> and what your OS reports as cache after you've been running a while,\n> that will help the optimizer know what size index fits in RAM, and\n> will tend to encourage index use.  If the active portion of your\n> data is heavily cached, you might want to set random_page_cost and\n> seq_page_cost to the same value, and make that value somewhere in\n> the 0.1 to 0.05 range.  
If you have moderate caching, using 1 and 2\n> can be good.\n>\n> If you're still not getting reasonable plans, please post again with\n> more information about your hardware along with the query and its\n> EXPLAIN ANALYZE output.\n>\n> -Kevin\n>\n\nI understand the need to tune PostgreSQL properly for my use case.\nWhat I am curious about is, for the data set I have, under what\ncircumstances (hardware/workload/cache status/etc) would a sequential\nscan really be faster than an index scan for that particular query?\n\nTo simulate a scenario when nothing is cached, I stopped PostgreSQL,\ndropped all system cache (sync; echo 3 > /proc/sys/vm/drop_caches),\nrestarted PostgreSQL, and ran the query. A sequential scan run took\n13.70 seconds, while an index scan run took 0.34 seconds, which is\nstill 40 times faster.\n\nAlso, I tried increasing effective_cache_size from 512MB to 3GB (the\ndatabase size is 2+GB), and it still favor sequential scan. The\nestimated costs did not change at all.\n", "msg_date": "Thu, 28 Apr 2011 08:19:01 +0800", "msg_from": "Sok Ann Yap <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Wed, Apr 27, 2011 at 5:19 PM, Sok Ann Yap <[email protected]> wrote:\n\n> On Thu, Apr 28, 2011 at 7:23 AM, Kevin Grittner\n>\n>\n> I understand the need to tune PostgreSQL properly for my use case.\n> What I am curious about is, for the data set I have, under what\n> circumstances (hardware/workload/cache status/etc) would a sequential\n> scan really be faster than an index scan for that particular query?\n>\n\nPossibly none on your hardware - if the index is likely to be in memory\nalong with the actual table rows. In which case, the cost for index scan\n(random page cost) should be made much closer to the cost for sequential\naccess. It looks like the planner must use the same strategy on each\niteration of the loop - it can't do index scan for some values and\nsequential scan for others, so it must be computing the cost as\nsequential_cost * (number of entries(44)) versus random_cost * (number of\nentries). If random page cost is unreasonably high, it's not hard to see\nhow it could wind up looking more expensive to the planner, causing it to\nchoose the sequential scan for each loop iteration. If it were able to\nchange strategy on each iteration, it would be able to accurately assess\ncost for each iteration and choose the correct strategy for that value. As\nsoon as you set the costs closer to actual cost for your system, postgres\ndoes make the correct choice. If there weren't enough memory that postgres\ncould be 'sure' that the index would remain in cache at least for the\nduration of all 44 iterations due to high workload, it is easy to see how\nthe index scan might become significantly more expensive than the sequential\nscan, since the index scan must also load the referenced page from the table\n- postgres cannot get values directly from the index.\n\n\n> To simulate a scenario when nothing is cached, I stopped PostgreSQL,\n> dropped all system cache (sync; echo 3 > /proc/sys/vm/drop_caches),\n> restarted PostgreSQL, and ran the query. A sequential scan run took\n> 13.70 seconds, while an index scan run took 0.34 seconds, which is\n> still 40 times faster.\n>\n> Also, I tried increasing effective_cache_size from 512MB to 3GB (the\n> database size is 2+GB), and it still favor sequential scan. 
The\n> estimated costs did not change at all.\n>\n\nGreg Smith had this to say in a another thread on this same subject:\n\neffective_cache_size probably doesn't do as much as you suspect. It is used\nfor one of the computations for whether an index is small enough that it can\nlikely be read into memory efficiently. It has no impact on caching\ndecisions outside of that.\n\n\nThis is why the cost for random page access must be fairly accurate.Even if\nthe index is in memory, *it still needs to access the page of data in the\ntable referenced by the index*, which is why the cost of random access must\nbe accurate. That cost is a factor of both the performance of your storage\ninfrastructure and the cache hit rate and can't really be computed by the\ndatabase on the fly. You seem to be looking at the data which exposes the\nfact that random page access is fast and wondering why postgres isn't doing\nthe right thing when postgres isn't doing the right thing precisely because\nit doesn't know that random page access is fast. Since you don't have\nparticularly fast storage infrastructure, this is likely a function of cache\nhit rate, so you must factor in eventual load on the db when setting this\nvalue. While it may be fast in a lightly loaded test environment, those\nrandom page accesses will get much more expensive when competing with other\nconcurrent disk access.\n\nThere's another thread currently active on this list (it started on April\n12) with subject \"Performance\" which contains this explanation of what is\ngoing on and why you need to tune these parameters independently of\neffective_cache_size:\n\nWhen the planner decides what execution plan to use,\nit computes a 'virtual cost' for different plans and then chooses the\ncheapest one.\n\nDecreasing 'random_page_cost' decreases the expected cost of plans\ninvolving index scans, so that at a certain point it seems cheaper than\na plan using sequential scans etc.\n\nYou can see this when using EXPLAIN - do it with the original cost\nvalues, then change the values (for that session only) and do the\nEXPLAIN only. You'll see how the execution plan suddenly changes and\nstarts to use index scans.\n\nThe problem with random I/O is that it's usually much more expensive\nthan sequential I/O as the drives need to seek etc. The only case when\nrandom I/O is just as cheap as sequential I/O is when all the data is\ncached in memory, because within RAM there's no difference between\nrandom and sequential access (right, that's why it's called Random\nAccess Memory).\n\nSo in the previous post setting both random_page_cost and seq_page_cost\nto the same value makes sense, because when the whole database fits into\nthe memory, there's no difference and index scans are favorable.\n\nIn this case (the database is much bigger than the available RAM) this\nno longer holds - index scans hit the drives, resulting in a lot of\nseeks etc. So it's a serious performance killer ...\n\n\nNote: I'm not a postgres developer, so I don't often contribute to these\nthreads for fear of communicating misinformation. 
I'm sure someone will\nspeak up if I got it wrong, but I happened to read that other thread this\nafternoon and I wasn't sure anyone else would bring it up, so I chimed in.\n Take my input with a grain of salt until confirmed by someone with more\nknowledge of postgres internals than I possess.\n\n--sam\n", "msg_date": "Thu, 28 Apr 2011 04:32:46 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" }, { "msg_contents": "On Wed, Apr 27, 2011 at 5:19 PM, Sok Ann Yap <[email protected]> wrote:\n>\n> I understand the need to tune PostgreSQL properly for my use case.\n> What I am curious about is, for the data set I have, under what\n> circumstances (hardware/workload/cache status/etc) would a sequential\n> scan really be faster than an index scan for that particular query?\n\n\nThe sequential scan on contacts can be terminated as soon as the first\nmatching row is found. 
If each block of the contacts table contains\none example of each salutation, then the inner sequential scan will\nalways be very short, and faster than an index scan.\n\nI can engineer this to be the case by populating the table like this:\n\ninsert into contacts select (generate_series%44+1)::int from\ngenerate_series (1,1000000);\n\nHere I get the seq scan being 2.6ms while the index scan is 5.6ms.\n\nPredicting how far the inner scan needs to go would be quite\ndifficult, and I don't know how the system will do it.\n\nHowever, when I create and populate simple tables based on your\ndescription, I get the index scan being the lower estimated cost. So\nthe tables I built are not sufficient to study the matter in detail.\n\n\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 28 Apr 2011 08:56:10 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reducing random_page_cost from 4 to 2 to force index scan" } ]
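A hedged sketch of the per-session experiment this thread keeps coming back to, assuming the salutations/contacts schema posted earlier; SET LOCAL confines the cost changes to one transaction, so nothing else running against the database is affected.

BEGIN;
SET LOCAL seq_page_cost = 1;
SET LOCAL random_page_cost = 2;   -- or lower still when the working set is mostly cached
EXPLAIN ANALYZE
SELECT salutations.id,
       salutations.name,
       EXISTS (SELECT 1
               FROM contacts
               WHERE salutations.id = contacts.salutation_id) AS in_use
FROM salutations;
ROLLBACK;

Comparing the actual times reported under different cost settings (and against a synthetic contacts population like Jeff's) shows which plan really wins on the hardware at hand instead of guessing.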
[ { "msg_contents": "Hi All,\n\nI am a new comer on postgres world and now using it for some serious (at\nleast for me) projects. I have a need where I am running some analytical +\naggregate functions on data where ordering is done on Date type column.\n\n From my initial read on documentation I believe internally a date type is\nrepresented by integer type of data. This makes me wonder would it make any\ngood to create additional column of Integer type and update it as data gets\nadded and use this integer column for all ordering purposes for my sqls - or\nshould I not hasitate using Date type straight into my sql for ordering?\n\nBetter yet, is there anyway I can verify impact of ordering on Date type vs.\nInteger type, apart from using \\timing and explain plan?\n\n\nThanks for sharing your insights.\n-DP.\n\nHi All,I am a new comer on postgres world and now using it for some serious (at least for me)  projects. I have a need where I am running some analytical + aggregate functions on data where ordering is done on Date type column.\nFrom my initial read on documentation I believe internally a date type is represented by integer type of data. This makes me wonder would it make any good to create additional  column of Integer type and update it as data gets added and use this integer column for all ordering purposes for my sqls - or should I not hasitate using Date type straight into my sql for ordering? \nBetter yet, is there anyway I can verify impact of ordering on Date type vs. Integer type, apart from using \\timing and explain plan?Thanks for sharing your insights.\n-DP.", "msg_date": "Wed, 27 Apr 2011 11:28:19 -0400", "msg_from": "Dhimant Patel <[email protected]>", "msg_from_op": true, "msg_subject": "Query Performance with Indexes on Integer type vs. Date type." }, { "msg_contents": "Dhimant Patel <[email protected]> writes:\n> From my initial read on documentation I believe internally a date type is\n> represented by integer type of data. This makes me wonder would it make any\n> good to create additional column of Integer type and update it as data gets\n> added and use this integer column for all ordering purposes for my sqls - or\n> should I not hasitate using Date type straight into my sql for ordering?\n\nDon't overcomplicate things. Comparison of dates is just about as fast as\ncomparison of integers, anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Apr 2011 12:11:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance with Indexes on Integer type vs. Date type. " }, { "msg_contents": "> This makes me wonder would it make any good to create additional column of Integer type and update it as data gets added and use this integer column for all ordering purposes for my sqls - or should I not hasitate using Date type straight into my sql for ordering?\n\nKeep in mind what Michael A. Jackson (among others) had to say on\nthis: \"The First Rule of Program Optimization: Don't do it. The Second\nRule of Program Optimization (for experts only!): Don't do it yet.\"\nFor one thing, adding an extra column to your data would mean more\ndata you need to cram in the cache as you query, so even if the *raw*\ninteger versus date ordering is faster, the \"optimization\" could still\nbe a net loss due to the fatter tuples. 
If you're willing to live with\n*only* integer-based dates, that could help, but that seems\nexceptionally painful and not worth considering unless you run into\ntrouble.\n\n> Better yet, is there anyway I can verify impact of ordering on Date type vs. Integer type, apart from using \\timing and explain plan?\n\nRemember to use explain analyze (and not just explain) when validating\nthese sorts of things. Explain is really just a guess. Also remember\nto ensure that stats are up to date before you test this.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Wed, 27 Apr 2011 09:12:32 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance with Indexes on Integer type vs. Date type." }, { "msg_contents": "Dhimant Patel <[email protected]> wrote:\n \n> I am a new comer on postgres world and now using it for some\n> serious (at least for me) projects. I have a need where I am\n> running some analytical + aggregate functions on data where\n> ordering is done on Date type column.\n> \n> From my initial read on documentation I believe internally a date\n> type is represented by integer type of data. This makes me wonder\n> would it make any good to create additional column of Integer type\n> and update it as data gets added and use this integer column for\n> all ordering purposes for my sqls - or should I not hasitate using\n> Date type straight into my sql for ordering?\n \nI doubt that this will improve performance, particularly if you ever\nwant to see your dates formatted as dates.\n \n> Better yet, is there anyway I can verify impact of ordering on\n> Date type vs. Integer type, apart from using \\timing and explain\n> plan?\n \nYou might be better off just writing the code in the most natural\nway, using the date type for dates, and then asking about any\nqueries which aren't performing as you hope they would. Premature\noptimization is often counter-productive. If you really want to do\nsome benchmarking of relative comparison speeds, though, see the\ngenerate_series function -- it can be good at generating test tables\nfor such things.\n \n-Kevin\n", "msg_date": "Wed, 27 Apr 2011 11:17:50 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance with Indexes on Integer type\n\t vs. Date type." }, { "msg_contents": "On Thu, Apr 28, 2011 at 12:17 AM, Kevin Grittner\n<[email protected]> wrote:\n>\n> Dhimant Patel <[email protected]> wrote:\n>\n> > I am a new comer on postgres world and now using it for some\n> > serious (at least for me)  projects. I have a need where I am\n> > running some analytical + aggregate functions on data where\n> > ordering is done on Date type column.\n> >\n> > From my initial read on documentation I believe internally a date\n> > type is represented by integer type of data. This makes me wonder\n> > would it make any good to create additional column of Integer type\n> > and update it as data gets added and use this integer column for\n> > all ordering purposes for my sqls - or should I not hasitate using\n> > Date type straight into my sql for ordering?\n>\n> I doubt that this will improve performance, particularly if you ever\n> want to see your dates formatted as dates.\n>\n> > Better yet, is there anyway I can verify impact of ordering on\n> > Date type vs. 
Integer type, apart from using \\timing and explain\n> > plan?\n>\n> You might be better off just writing the code in the most natural\n> way, using the date type for dates, and then asking about any\n> queries which aren't performing as you hope they would.  Premature\n> optimization is often counter-productive.  If you really want to do\n> some benchmarking of relative comparison speeds, though, see the\n> generate_series function -- it can be good at generating test tables\n> for such things.\n\n\n\n\nThere is a lot of really good advice here already. I'll just add one thought.\n\nIf the dates in your tables are static based only on creation (as in\nonly a CREATE_DATE, which will never be modified per row like a\nMODIFY_DATE for each record), then your thought might have made sense.\nBut in that case you can already use the ID field if you have one?\n\nIn most real world cases however the DATE field will likely be storing\nan update time as well. Which would make your thought about numbering\nwith integers pointless.\n", "msg_date": "Thu, 28 Apr 2011 00:46:08 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance with Indexes on Integer type vs. Date type." }, { "msg_contents": "Thanks for all valuable insights. I decided to drop the idea of adding\nadditional column and\nwill just rely on Date column for all ordering.\n\nTom - thanks for clear answer on the issue I was concerned about.\nMaciek,Kevin -\nthanks for ideas, hint on generate_series() - I will have to go through cpl\nof times of postgres documentation before I will have better grasp of all\navailable tools but this forum is very valuable.\n\n\n-DP.\n\n\nOn Wed, Apr 27, 2011 at 12:46 PM, Phoenix Kiula <[email protected]>wrote:\n\n> On Thu, Apr 28, 2011 at 12:17 AM, Kevin Grittner\n> <[email protected]> wrote:\n> >\n> > Dhimant Patel <[email protected]> wrote:\n> >\n> > > I am a new comer on postgres world and now using it for some\n> > > serious (at least for me) projects. I have a need where I am\n> > > running some analytical + aggregate functions on data where\n> > > ordering is done on Date type column.\n> > >\n> > > From my initial read on documentation I believe internally a date\n> > > type is represented by integer type of data. This makes me wonder\n> > > would it make any good to create additional column of Integer type\n> > > and update it as data gets added and use this integer column for\n> > > all ordering purposes for my sqls - or should I not hasitate using\n> > > Date type straight into my sql for ordering?\n> >\n> > I doubt that this will improve performance, particularly if you ever\n> > want to see your dates formatted as dates.\n> >\n> > > Better yet, is there anyway I can verify impact of ordering on\n> > > Date type vs. Integer type, apart from using \\timing and explain\n> > > plan?\n> >\n> > You might be better off just writing the code in the most natural\n> > way, using the date type for dates, and then asking about any\n> > queries which aren't performing as you hope they would. Premature\n> > optimization is often counter-productive. If you really want to do\n> > some benchmarking of relative comparison speeds, though, see the\n> > generate_series function -- it can be good at generating test tables\n> > for such things.\n>\n>\n>\n>\n> There is a lot of really good advice here already. 
I'll just add one\n> thought.\n>\n> If the dates in your tables are static based only on creation (as in\n> only a CREATE_DATE, which will never be modified per row like a\n> MODIFY_DATE for each record), then your thought might have made sense.\n> But in that case you can already use the ID field if you have one?\n>\n> In most real world cases however the DATE field will likely be storing\n> an update time as well. Which would make your thought about numbering\n> with integers pointless.\n>\n\nThanks for all valuable insights. I decided to drop the idea of adding additional column and will just rely on Date column for all ordering.Tom - thanks for clear answer on the issue I was concerned about.\nMaciek,Kevin - thanks for ideas, hint on generate_series() - I will have to go through cpl of times of postgres documentation before I will have better grasp of all available tools but this forum is very valuable.\n-DP.On Wed, Apr 27, 2011 at 12:46 PM, Phoenix Kiula <[email protected]> wrote:\nOn Thu, Apr 28, 2011 at 12:17 AM, Kevin Grittner\n<[email protected]> wrote:\n>\n> Dhimant Patel <[email protected]> wrote:\n>\n> > I am a new comer on postgres world and now using it for some\n> > serious (at least for me)  projects. I have a need where I am\n> > running some analytical + aggregate functions on data where\n> > ordering is done on Date type column.\n> >\n> > From my initial read on documentation I believe internally a date\n> > type is represented by integer type of data. This makes me wonder\n> > would it make any good to create additional column of Integer type\n> > and update it as data gets added and use this integer column for\n> > all ordering purposes for my sqls - or should I not hasitate using\n> > Date type straight into my sql for ordering?\n>\n> I doubt that this will improve performance, particularly if you ever\n> want to see your dates formatted as dates.\n>\n> > Better yet, is there anyway I can verify impact of ordering on\n> > Date type vs. Integer type, apart from using \\timing and explain\n> > plan?\n>\n> You might be better off just writing the code in the most natural\n> way, using the date type for dates, and then asking about any\n> queries which aren't performing as you hope they would.  Premature\n> optimization is often counter-productive.  If you really want to do\n> some benchmarking of relative comparison speeds, though, see the\n> generate_series function -- it can be good at generating test tables\n> for such things.\n\n\n\n\nThere is a lot of really good advice here already. I'll just add one thought.\n\nIf the dates in your tables are static based only on creation (as in\nonly a CREATE_DATE, which will never be modified per row like a\nMODIFY_DATE for each record), then your thought might have made sense.\nBut in that case you can already use the ID field if you have one?\n\nIn most real world cases however the DATE field will likely be storing\nan update time as well. Which would make your thought about numbering\nwith integers pointless.", "msg_date": "Wed, 27 Apr 2011 14:34:48 -0400", "msg_from": "Dhimant Patel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance with Indexes on Integer type vs. Date type." } ]
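A quick way to check the date-versus-integer ordering question directly, along the lines of the generate_series and EXPLAIN ANALYZE suggestions above: build a throwaway table that stores the same value both ways and time the two sorts. The table and column names below (ordering_test, created_date, created_int) are made up purely for illustration.

-- One million rows, with the same value kept as a date and as an integer
CREATE TABLE ordering_test AS
SELECT g AS id,
       DATE '2011-01-01' + (g % 3650) AS created_date,  -- date representation
       (g % 3650)                     AS created_int    -- integer representation of the same value
FROM generate_series(1, 1000000) AS g;

ANALYZE ordering_test;

-- Compare actual run times, not just estimated costs
EXPLAIN ANALYZE SELECT * FROM ordering_test ORDER BY created_date;
EXPLAIN ANALYZE SELECT * FROM ordering_test ORDER BY created_int;

The two sorts should come out close together, which is consistent with the advice above that comparing dates is about as fast as comparing integers.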
[ { "msg_contents": "Dear all\n\nWhen database files are on a VxFS filesystem, performance can be\nsignificantly improved by setting the VX_CONCURRENT cache advisory on\nthe file according to vxfs document,\n\nmy question is: has anyone tested this?\n\n\n#include <sys/fs/vx_ioctl.h>\nioctl(fd, VX_SETCACHE, VX_CONCURRENT);\n\n\nRegards\n\nHSIEN WEN", "msg_date": "Thu, 28 Apr 2011 11:33:44 +0800", "msg_from": "HSIEN-WEN CHU <[email protected]>", "msg_from_op": true, "msg_subject": "VX_CONCURRENT flag on vxfs( 5.1 or later) for performance for\n\tpostgresql?" }, { "msg_contents": "HSIEN-WEN CHU <[email protected]> writes:\n> When database files are on a VxFS filesystem, performance can be\n> significantly improved by setting the VX_CONCURRENT cache advisory on\n> the file according to vxfs document,\n\nPresumably, if whatever behavior this invokes were an unalloyed good,\nthey'd have just made it the default.  The existence of a flag makes\nme suppose that there are some clear application-visible downsides.\nWhat are they?\n\nBTW, please do not cross-post the same question to three different lists.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 28 Apr 2011 09:25:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VX_CONCURRENT flag on vxfs( 5.1 or later) for performance for\n\tpostgresql?" }, { "msg_contents": "On 04/27/2011 11:33 PM, HSIEN-WEN CHU wrote:\n> When database files are on a VxFS filesystem, performance can be\n> significantly improved by setting the VX_CONCURRENT cache advisory on\n> the file according to vxfs document,\n> \n\nThat won't improve performance, and it's not safe either. VX_CONCURRENT \nswitches the filesystem to use direct I/O. That's usually slower for \nPostgreSQL. And it introduces some requirements for both block \nalignment and the application avoiding overlapping writes. PostgreSQL \ndoesn't do either, so I wouldn't expect it to be compatible with \nVX_CONCURRENT.\n\n-- \nGreg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us\n\n\n", "msg_date": "Thu, 28 Apr 2011 20:14:16 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VX_CONCURRENT flag on vxfs( 5.1 or later) for performance\n\tfor postgresql?" } ]
[ { "msg_contents": "How the tables must be ordered in the list of tables in from statement?\n\nHow the tables must be ordered in the list of tables in from statement?", "msg_date": "Thu, 28 Apr 2011 14:50:16 +0530 (IST)", "msg_from": "Rishabh Kumar Jain <[email protected]>", "msg_from_op": true, "msg_subject": "Order of tables" }, { "msg_contents": "On Thu, Apr 28, 2011 at 11:20 AM, Rishabh Kumar Jain <[email protected]\n> wrote:\n\n> How the tables must be ordered in the list of tables in from statement?\n>\n\nTo achieve what? Generally there is no requirement for a particular\nordering of relation names in SQL.\n\nCheers\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\nOn Thu, Apr 28, 2011 at 11:20 AM, Rishabh Kumar Jain <[email protected]> wrote:\nHow the tables must be ordered in the list of tables in from statement?\nTo achieve what?  Generally there is no requirement for a particular ordering of relation names in SQL.Cheersrobert-- remember.guy do |as, often| as.you_can - without endhttp://blog.rubybestpractices.com/", "msg_date": "Thu, 28 Apr 2011 14:30:51 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order of tables" }, { "msg_contents": "On 28.04.2011 12:20, Rishabh Kumar Jain wrote:\n> How the tables must be ordered in the list of tables in from statement?\n\nThere is no difference in performance, if that's what you mean. (If not, \nthen pgsql-novice or pgsql-sql mailing list would've be more appropriate)\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 28 Apr 2011 15:33:06 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order of tables" }, { "msg_contents": "On what relations are explicit joins added?\nI don't know when to add explicit joins.--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Order-of-tables-tp4346077p4358465.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Fri, 29 Apr 2011 03:50:02 -0700 (PDT)", "msg_from": "Rishabh Kumar Jain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order of tables" }, { "msg_contents": "Thanks for previous reply my friend.\nIn what manner are explicit joins added to improve performence?\nAre there some rules for it?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Order-of-tables-tp4346077p4369082.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 3 May 2011 22:23:38 -0700 (PDT)", "msg_from": "Rishabh Kumar Jain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order of tables" }, { "msg_contents": "\nRobert Klemme-2 wrote:\n> \n> On Thu, Apr 28, 2011 at 11:20 AM, Rishabh Kumar Jain\n> &lt;[email protected]\n> &gt; wrote:\n> \n>> How the tables must be ordered in the list of tables in from statement?\n>>\n> \n> To achieve what? 
Generally there is no requirement for a particular\n> ordering of relation names in SQL.\n> \n> Cheers\n> \n> robert\n> \n> -- \n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n> \n\nOk I Understood this but there is one more problem friend\nIn what manner are explicit joins added to improve performence?\nAre there some rules for it?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Order-of-tables-tp4346077p4369085.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 3 May 2011 22:26:10 -0700 (PDT)", "msg_from": "Rishabh Kumar Jain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order of tables" }, { "msg_contents": "\nRobert Klemme-2 wrote:\n> \n> On Thu, Apr 28, 2011 at 11:20 AM, Rishabh Kumar Jain\n> &lt;[email protected]\n> &gt; wrote:\n> \n>> How the tables must be ordered in the list of tables in from statement?\n>>\n> \n> To achieve what? Generally there is no requirement for a particular\n> ordering of relation names in SQL.\n> \n> Cheers\n> \n> robert\n> \n> \n> \n\nOk I Understood this but there is one more problem friend\nIn what manner are explicit joins added to improve performence?\nAre there some rules for it?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Order-of-tables-tp4346077p4369091.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 3 May 2011 22:29:26 -0700 (PDT)", "msg_from": "Rishabh Kumar Jain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order of tables" }, { "msg_contents": "\nHeikki Linnakangas-3 wrote:\n> \n> On 28.04.2011 12:20, Rishabh Kumar Jain wrote:\n>> How the tables must be ordered in the list of tables in from statement?\n> \n> There is no difference in performance, if that's what you mean. (If not, \n> then pgsql-novice or pgsql-sql mailing list would've be more appropriate)\n> \n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\nI understood that now but I have one more query.\nIn what manner are explicit joins added to improve performence?\nAre there some rules for it?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Order-of-tables-tp4346077p4369093.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 3 May 2011 22:30:58 -0700 (PDT)", "msg_from": "Rishabh Kumar Jain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order of tables" } ]
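On the explicit-join question left open above: for plain inner joins the planner treats a comma-separated FROM list and explicit JOIN ... ON syntax the same way, and it chooses the join order itself as long as the number of FROM items stays within join_collapse_limit (8 by default), so rewriting one form into the other is not in itself a performance technique. A small sketch, with a, b and c standing in for real table names:

-- These two spellings are planned the same way
SELECT * FROM a, b, c WHERE b.a_id = a.id AND c.b_id = b.id;

SELECT *
  FROM a
  JOIN b ON b.a_id = a.id
  JOIN c ON c.b_id = b.id;

-- The written JOIN order only starts to constrain the planner if you lower the limit
SET join_collapse_limit = 1;

Outer joins are different: LEFT/RIGHT JOIN ordering can change the result, so the planner is only free to rearrange them where the rewrite is provably equivalent.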
[ { "msg_contents": "Hei: \n\n\nWe have PostgreSQL 8.3 running on Debian Linux server. We built an applicantion using PHP programming language and Postgres database. There are appoximatly 150 users using the software constantly. We had some performance degration before and after some studies we figured out we will need to tune PostgreSQL configurations. \n\n\nWe have 10GB memory and we tuned PostgreSQL as follow:\n- max_connection = 100\n- effective_cache_size = 5GB\n- shared_buffer = 2GB\n- wal_buffer = 30MB\n- work_mem = 50MB\n\nHowever we suffered 2 times server crashes after tunning the configuration. Does anyone have any idea how this can happen?\n\nBR \n\nKevin Wang\n\nHei: We have PostgreSQL 8.3 running on Debian Linux server. We built an applicantion using PHP programming language and Postgres database. There are appoximatly 150 users using the software constantly. We had some performance degration before and after some studies we figured out we will need to tune PostgreSQL configurations. We have 10GB memory and we tuned PostgreSQL as follow:- max_connection = 100- effective_cache_size = 5GB- shared_buffer = 2GB- wal_buffer = 30MB- work_mem = 50MBHowever we suffered 2 times server crashes after tunning the configuration. Does anyone have any idea how this can happen?BR Kevin Wang", "msg_date": "Fri, 29 Apr 2011 00:13:08 -0700 (PDT)", "msg_from": "Qiang Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Will shared_buffers crash a server" }, { "msg_contents": "As for the question in the title, no, if the server starts, shared\nbuffers should not be the reason for a subsequent crash.\n\nIn debian, it is common that the maximum allowed shared memory setting\non your kernel will prevent a server from even starting, but I guess\nthat's not your problem (because it did start).\n", "msg_date": "Fri, 29 Apr 2011 09:42:04 +0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will shared_buffers crash a server" }, { "msg_contents": "Normally under heavy load, a machine could potential stall, even seem to hang, but not crash.\nAnd judging from your description of the situation your only change was in shared memory (IPC shm) usage, right?\nI would advise to immediately run all sorts of (offline/online) hardware tests (especially memtest86) on your machine,\nbefore blaming postgesql for anything.\n\nΣτις Friday 29 April 2011 10:13:08 ο/η Qiang Wang έγραψε:\n> Hei: \n> \n> \n> We have PostgreSQL 8.3 running on Debian Linux server. We built an applicantion using PHP programming language and Postgres database. There are appoximatly 150 users using the software constantly. We had some performance degration before and after some studies we figured out we will need to tune PostgreSQL configurations. \n> \n> \n> We have 10GB memory and we tuned PostgreSQL as follow:\n> - max_connection = 100\n> - effective_cache_size = 5GB\n> - shared_buffer = 2GB\n> - wal_buffer = 30MB\n> - work_mem = 50MB\n> \n> However we suffered 2 times server crashes after tunning the configuration. 
Does anyone have any idea how this can happen?\n> \n> BR \n> \n> Kevin Wang\n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Fri, 29 Apr 2011 09:48:35 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will shared_buffers crash a server" }, { "msg_contents": "On Fri, Apr 29, 2011 at 1:13 AM, Qiang Wang <[email protected]> wrote:\n>\n> We have 10GB memory and we tuned PostgreSQL as follow:\n\n> - max_connection = 100\n> - work_mem = 50MB\n\nYou do know that work_mem is PER SORT right? Not per connection or\nper user or per database. If all 100 of those connections needs to do\na large sort at once (unlikely but possible, especially if under heavy\nload) then you could have pgsql trying to allocate 50000MB. If you\nhave the occasional odd jobs that really need 50MB work_mem then set\nit for a single user or connection and leave the other users in the 1\nto 4MB range until you can be sure you're not running your db server\nout of memory.\n\n> However we suffered 2 times server crashes after tunning the configuration.\n> Does anyone have any idea how this can happen?\n\nLook in your log files for clues. postgresql logs as well as system\nlogs. What exact symptoms, if any, can you tell us of the crash?\n", "msg_date": "Fri, 29 Apr 2011 04:34:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will shared_buffers crash a server" }, { "msg_contents": "What messages did you get in the Postgresql logs?\n\n \n\nWhat other parameters have changed?\n\n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Qiang Wang\nSent: 29 April 2011 08:13\nTo: [email protected]\nSubject: {Spam} [PERFORM] Will shared_buffers crash a server\n\n \n\nHei: \n\n \n\nWe have PostgreSQL 8.3 running on Debian Linux server. We built an\napplicantion using PHP programming language and Postgres database. There\nare appoximatly 150 users using the software constantly. We had some\nperformance degration before and after some studies we figured out we\nwill need to tune PostgreSQL configurations. \n\n \n\nWe have 10GB memory and we tuned PostgreSQL as follow:\n\n- max_connection = 100\n\n- effective_cache_size = 5GB\n\n- shared_buffer = 2GB\n\n- wal_buffer = 30MB\n\n- work_mem = 50MB\n\n \n\nHowever we suffered 2 times server crashes after tunning the\nconfiguration. Does anyone have any idea how this can happen?\n\n \n\nBR \n\nKevin Wang\n\n\n___________________________________________________ \n \nThis email is intended for the named recipient. The information contained \nin it is confidential. You should not copy it for any purposes, nor \ndisclose its contents to any other party. If you received this email \nin error, please notify the sender immediately via email, and delete it from\nyour computer. \n \nAny views or opinions presented are solely those of the author and do not \nnecessarily represent those of the company. \n \nPCI Compliancy: Please note, we do not send or wish to receive banking, credit\nor debit card information by email or any other form of communication. \n\nPlease try our new on-line ordering system at http://www.cromwell.co.uk/ice\n\nCromwell Tools Limited, PO Box 14, 65 Chartwell Drive\nWigston, Leicester LE18 1AT. 
Tel 0116 2888000\nRegistered in England and Wales, Reg No 00986161\nVAT GB 115 5713 87 900\n__________________________________________________\n\n\n\n\n\n\n\n\n\n\n\nWhat messages did you get in the Postgresql logs?\n \nWhat other parameters have changed?\n \n\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Qiang Wang\nSent: 29 April 2011 08:13\nTo: [email protected]\nSubject: {Spam} [PERFORM] Will shared_buffers crash a server\n\n\n \n\n\nHei: \n\n\n \n\n\nWe have\nPostgreSQL 8.3 running on Debian Linux server. We built an applicantion using\nPHP programming language and Postgres database. There are appoximatly 150 users\nusing the software constantly. We had some performance degration before and\nafter some studies we figured out we will need to tune PostgreSQL\nconfigurations. \n\n\n \n\n\nWe have\n10GB memory and we tuned PostgreSQL as follow:\n\n\n-\nmax_connection = 100\n\n\n-\neffective_cache_size = 5GB\n\n\n-\nshared_buffer = 2GB\n\n\n-\nwal_buffer = 30MB\n\n\n-\nwork_mem = 50MB\n\n\n \n\n\nHowever\nwe suffered 2 times server crashes after tunning the configuration. Does anyone\nhave any idea how this can happen?\n\n\n \n\n\nBR \n\n\nKevin\nWang\n\n\n\n\n___________________________________________________ \n\nThis email is intended for the named recipient. The information contained \nin it is confidential. You should not copy it for any purposes, nor \ndisclose its contents to any other party. If you received this email \nin error, please notify the sender immediately via email, and delete\nit from your computer. \n\nAny views or opinions presented are solely those of the author and do not \nnecessarily represent those of the company. \n\nPCI Compliancy: Please note, we do not send or wish to receive banking,\ncredit or debit card information by email or any other form of \ncommunication. \n\nPlease try our new on-line ordering system at http://www.cromwell.co.uk/ice\n\nCromwell Tools Limited, PO Box 14, 65 Chartwell Drive\nWigston, Leicester LE18 1AT. Tel 0116 2888000\nRegistered in England and Wales, Reg No 00986161\nVAT GB 115 5713 87 900\n__________________________________________________", "msg_date": "Fri, 29 Apr 2011 11:37:31 +0100", "msg_from": "\"French, Martin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: {Spam} Will shared_buffers crash a server" }, { "msg_contents": "Qiang Wang <[email protected]> wrote:\n> We have PostgreSQL 8.3 running on Debian Linux server. We built an\n> applicantion using PHP programming language and Postgres database. There are\n> appoximatly 150 users using the software constantly. We had some performance\n> degration before and after some studies we figured out we will need to tune\n> PostgreSQL configurations.\n\n> However we suffered 2 times server crashes after tunning the configuration.\n> Does anyone have any idea how this can happen?\n\nCould you explain in more detail, *how* it crashed?\n\nOn Linux, the first suspect for crashes is usually the OOM\n(out-of-memory) killer. When the kernel thinks it's run out of memory,\nit picks a task and kills it. 
Due to the way PostgreSQL uses shared\nmemory, it's more likely to be killed than other processes.\n\nTo figure out whether you've suffered an OOM kill, run \"dmesg\", you\nwould see something like:\n[2961426.424851] postgres invoked oom-killer: gfp_mask=0x201da,\norder=0, oomkilladj=0\n[2961426.424857] postgres cpuset=/ mems_allowed=0\n[2961426.424861] Pid: 932, comm: postgres Not tainted 2.6.31-22-server\n#65-Ubuntu\n[2961426.424863] Call Trace:\n...\n\nThe first step in solving OOM kills is disabling memory overcommit;\nadd 'vm.overcommit_memory = 0' to /etc/sysctl.conf and run the command\n'echo 0 > /proc/sys/vm/overcommit_memory'\n\nThis doesn't prevent OOM kills entirely, but usually reduces them\nsignificantly, queries will now abort with an \"out of memory\" error if\nthey're responsible for memory exhaustion.\n\nYou can also reduce the chance that PostgreSQL is chosen for killing,\nby changing its oom_adj, documented here:\nhttp://blog.credativ.com/en/2010/03/postgresql-and-linux-memory-management.html\n\nRegards,\nMarti\n", "msg_date": "Sun, 1 May 2011 19:18:05 +0300", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: {Spam} Will shared_buffers crash a server" } ]
[ { "msg_contents": "Can pgpoolAdmin utility handle(administer) more than one pgpool-II custer?\n\nWe have a need of setting up 3 independent postgres clusters. One cluster\nhandling cadastral maps, one handling raster maps and one handling vector\nmaps. Each of these clusters must have a load balancer - EG pgpool-II.\nInternally in each cluster we plan to (and have tested) PostgreSQL(9.03)'s\nown streaming replication. We have installed pgpool-II, and are now\nconfronted with the complicated installation of pgpoolAdmin web-app. Hence\nwe would very much like to have only one pgpoolAdmin instance to govern all\n3 pgpool-II clusters.\n\n(Alternatively we will go for a more complex configuration with PostgresXC.)\n\n(Have tried to post to http://pgsqlpgpool.blogspot.com - with no success)\n\nKindest regards Jørgen Münster-Swendsen\nwww.kms.dk--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pgpoolAdmin-handling-several-pgpool-II-clusters-tp4358647p4358647.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Fri, 29 Apr 2011 05:46:42 -0700 (PDT)", "msg_from": "Jorgen <[email protected]>", "msg_from_op": true, "msg_subject": "pgpoolAdmin handling several pgpool-II clusters" }, { "msg_contents": "> Can pgpoolAdmin utility handle(administer) more than one pgpool-II custer?\n\nNo. pgpoolAdmin only supports one pgpool-II server.\n\n> We have a need of setting up 3 independent postgres clusters. One cluster\n> handling cadastral maps, one handling raster maps and one handling vector\n> maps. Each of these clusters must have a load balancer - EG pgpool-II.\n> Internally in each cluster we plan to (and have tested) PostgreSQL(9.03)'s\n> own streaming replication. We have installed pgpool-II, and are now\n> confronted with the complicated installation of pgpoolAdmin web-app. Hence\n> we would very much like to have only one pgpoolAdmin instance to govern all\n> 3 pgpool-II clusters.\n> \n> (Alternatively we will go for a more complex configuration with PostgresXC.)\n\nBecase pgpoolAdmin is a web application, you could assign a tab to a\npgpoolAdmin.\n\n> (Have tried to post to http://pgsqlpgpool.blogspot.com - with no success)\n\nIt's my personal blog:-) Please post to pgpool-geneal mailing list.\n\nYou can subscribe it from:\nhttp://lists.pgfoundry.org/mailman/listinfo/pgpool-general\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n", "msg_date": "Sat, 30 Apr 2011 10:27:51 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgpoolAdmin handling several pgpool-II clusters" }, { "msg_contents": "> Can pgpoolAdmin utility handle(administer) more than one pgpool-II custer?\n\nNo. pgpoolAdmin only supports one pgpool-II server.\n\n> We have a need of setting up 3 independent postgres clusters. One cluster\n> handling cadastral maps, one handling raster maps and one handling vector\n> maps. Each of these clusters must have a load balancer - EG pgpool-II.\n> Internally in each cluster we plan to (and have tested) PostgreSQL(9.03)'s\n> own streaming replication. We have installed pgpool-II, and are now\n> confronted with the complicated installation of pgpoolAdmin web-app. 
Hence\n> we would very much like to have only one pgpoolAdmin instance to govern all\n> 3 pgpool-II clusters.\n> \n> (Alternatively we will go for a more complex configuration with PostgresXC.)\n\nBecase pgpoolAdmin is a web application, you could assign a tab to a\npgpoolAdmin.\n\n> (Have tried to post to http://pgsqlpgpool.blogspot.com - with no success)\n\nIt's my personal blog:-) Please post to pgpool-geneal mailing list.\n\nYou can subscribe it from:\nhttp://lists.pgfoundry.org/mailman/listinfo/pgpool-general\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n", "msg_date": "Sat, 30 Apr 2011 10:28:53 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgpoolAdmin handling several pgpool-II clusters" }, { "msg_contents": ">No. pgpoolAdmin only supports one pgpool-II server. \n\nWe have installed pgpoolAdmin, and it is a good and easy to use web app. So\nwe now have to choose between 3 x pgpoolAdmin or go for PostgresXC.\n\nJørgen Münster-Swendsen\nwww.kms.dk\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pgpoolAdmin-handling-several-pgpool-II-clusters-tp4358647p4366411.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Mon, 2 May 2011 22:32:34 -0700 (PDT)", "msg_from": "Jorgen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgpoolAdmin handling several pgpool-II clusters" }, { "msg_contents": "On Tue, May 3, 2011 at 07:32, Jorgen <[email protected]> wrote:\n>>No. pgpoolAdmin only supports one pgpool-II server.\n>\n> We have installed pgpoolAdmin, and it is a good and easy to use web app. So\n> we now have to choose between 3 x pgpoolAdmin or go for PostgresXC.\n\nIt's probably not a good idea to choose your clustering technology\nbased on the web interfaces... Running 3 pgpooladmin doesn't seem like\na huge thing.\n\nAnd you should note that PostgresXC is nowhere near production ready.\nIt'll get there, but it's pretty far away.\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Tue, 3 May 2011 09:29:11 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgpoolAdmin handling several pgpool-II clusters" }, { "msg_contents": ">It's probably not a good idea to choose your clustering technology\n>based on the web interfaces... Running 3 pgpooladmin doesn't seem like\n>a huge thing.\n\nNo - our choice will be made based on the performance of the cluster, the\nscalability and finally the amount of work involved in configuration,\nupgrading and administration.\n\nJørgen Münster-Swendsen\nwww.kms.dk\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pgpoolAdmin-handling-several-pgpool-II-clusters-tp4358647p4366664.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 3 May 2011 01:16:16 -0700 (PDT)", "msg_from": "Jorgen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgpoolAdmin handling several pgpool-II clusters" } ]
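Somewhat apart from the choice of admin tool: with three independent clusters each running 9.0 streaming replication behind a load balancer, it can be useful to confirm each node's role from SQL before (and after) wiring it in. A minimal sketch using only standard 9.0 functions; it assumes hot_standby = on so the standbys accept read-only connections.

-- On any node: returns false on the primary, true on a standby
SELECT pg_is_in_recovery();

-- On the primary: current WAL write position
SELECT pg_current_xlog_location();

-- On a standby: how far WAL has been received and replayed
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();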
[ { "msg_contents": "Hi,\nHad a recent conversation with a tech from this company called FUSION-IO.\nThey sell\n io cards designed to replace conventional disks. The cards can be up to 3\nTB in size and apparently\nare installed in closer proximity to the CPU than the disks are. They claim\nperformance boosts several times better than the spinning disks.\n\nJust wondering if anyone has had any experience with this company and these\ncards. We're currently at postgres 8.3.11.\n\nAny insights / recommendations appreciated. thank you,\n\n-- \n\n*Mark Steben\n*Database Administrator\n*@utoRevenue | Autobase | AVV\nThe CRM division of Dominion Dealer Solutions\n*95D Ashley Avenue\nWest Springfield, MA 01089\nt: 413.327.3045\nf: 413.732.1824\nw: www.autorevenue.com\n\nHi, Had a recent conversation with a tech from this company called FUSION-IO. They sell io cards designed to replace conventional disks.  The cards can be up to 3 TB in size and apparentlyare installed in closer proximity to the CPU than the disks are.  They claim\nperformance boosts several times better than the spinning disks.Just wondering if anyone has had any experience with this company and thesecards.  We're currently at postgres 8.3.11.  Any insights / recommendations appreciated.  thank you,\n-- Mark \nStebenDatabase \nAdministrator@utoRevenue  \n|  Autobase  |  AVVThe CRM division of Dominion Dealer \nSolutions95D Ashley \nAvenueWest Springfield,  MA  01089\nt: 413.327.3045f: \n413.732.1824w: www.autorevenue.com", "msg_date": "Fri, 29 Apr 2011 10:24:48 -0400", "msg_from": "Mark Steben <[email protected]>", "msg_from_op": true, "msg_subject": "FUSION-IO io cards" }, { "msg_contents": "On 4/29/2011 10:24 AM, Mark Steben wrote:\n> Hi,\n> Had a recent conversation with a tech from this company called\n> FUSION-IO. They sell\n> io cards designed to replace conventional disks. The cards can be up\n> to 3 TB in size and apparently\n> are installed in closer proximity to the CPU than the disks are. They claim\n> performance boosts several times better than the spinning disks.\n>\n> Just wondering if anyone has had any experience with this company and these\n> cards. We're currently at postgres 8.3.11.\n>\n> Any insights / recommendations appreciated. thank you,\n\nWell, The Woz works there. Not because he needs money, but because he \nthinks they are doing it right.\n", "msg_date": "Fri, 29 Apr 2011 10:33:00 -0400", "msg_from": "Stephen Cook <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "On 29/04/2011 16:24, Mark Steben wrote:\n> Hi,\n> Had a recent conversation with a tech from this company called FUSION-IO.\n> They sell\n> io cards designed to replace conventional disks. The cards can be up to 3\n> TB in size and apparently\n> are installed in closer proximity to the CPU than the disks are. They claim\n> performance boosts several times better than the spinning disks.\n>\n> Just wondering if anyone has had any experience with this company and these\n> cards. We're currently at postgres 8.3.11.\n>\n> Any insights / recommendations appreciated. 
thank you,\n\nThey are actually very fast SSDs; the fact that they come in \"card\" \nformat and not in conventional \"box with plugs\" format is better since \nthey have less communication overhead and electrical interference.\n\nAs far as I've heard, the hardware is as good as they say it is.\n\n\n", "msg_date": "Fri, 29 Apr 2011 16:40:59 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "Fusion SSDs install on PCIe slots, so are limited by slot count. None, so far as I recall, are bootable (although Fusion has been promising that for more than a year). If you've a BCNF schema of moderate size, then any SSD as primary store is a good option; Fusion's are just even faster. If you've got the typical flatfile bloated schema, then while SSD will be faster (if you've got the $$$), PCIe is not likely to have sufficient capacity.\n\nSSD is the reason to refactor to a Dr. Coddian schema. Few have taken the opportunity.\n\nregards,\nRobert\n\n\n---- Original message ----\n>Date: Fri, 29 Apr 2011 10:24:48 -0400\n>From: [email protected] (on behalf of Mark Steben <[email protected]>)\n>Subject: [PERFORM] FUSION-IO io cards \n>To: [email protected]\n>\n> Hi,\n> Had a recent conversation with a tech from this\n> company called FUSION-IO. They sell\n>  io cards designed to replace conventional disks. \n> The cards can be up to 3 TB in size and apparently\n> are installed in closer proximity to the CPU than\n> the disks are.  They claim\n> performance boosts several times better than the\n> spinning disks.\n>\n> Just wondering if anyone has had any experience with\n> this company and these\n> cards.  We're currently at postgres 8.3.11. \n>\n> Any insights / recommendations appreciated.  thank\n> you,\n>\n> --\n>\n> Mark Steben\n> Database Administrator\n> @utoRevenue  |  Autobase  |  AVV\n> The CRM division of Dominion Dealer Solutions\n> 95D Ashley Avenue\n> West Springfield,  MA  01089\n> t: 413.327.3045\n> f: 413.732.1824\n> w: www.autorevenue.com\n", "msg_date": "Fri, 29 Apr 2011 10:45:13 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "On Apr 29, 2011, at 7:24 AM, Mark Steben wrote:\n\n> Hi, \n> Had a recent conversation with a tech from this company called FUSION-IO. They sell\n> io cards designed to replace conventional disks. The cards can be up to 3 TB in size and apparently\n> are installed in closer proximity to the CPU than the disks are. They claim\n> performance boosts several times better than the spinning disks.\n> \n> Just wondering if anyone has had any experience with this company and these\n> cards. We're currently at postgres 8.3.11. \n> \n> Any insights / recommendations appreciated. thank you,\n\nWe have a bunch of their cards, purchased when we were still on 8.1 and were having difficulty with vacuums. (Duh.) They helped out a bunch for that. They're fast, no question about it. Each FusionIO device (they have cards with multiple devices) can do ~100k iops. So that's nifty. \n\nOn the downside, they're also somewhat exotic, in that they need special kernel drivers, so they're not as easy as just buying a bunch of drives. More negatively, they're $$$. And even more negatively, their drivers are inefficient - expect to dedicate a CPU core to doing whatever they need done. \n\nIn the \"still undecided\" category I'm somewhat worried about their longevity. 
They say they overprovision the amount of flash so that burnout isn't a problem, and at least it's not like competitors we've seen, which throttle your writes so that you don't burn out as fast. Of course, the only way to tell how long they'll really last is to use them a long time. We're only about 2 years into them so come back to me in 3 years about this. :) Also, while I would say they seem reliable (they have a supercap and succeeded every power-pull test we did) we just recently we've had some issues which appear to be fio driver-related that effectively brought our server down. Fusion thinks its our kernel parameters, but we're unconvinced, given the length of time we've run with the same kernel settings. I'm not yet ready to say these cards are unreliable, but I'm no longer willing to say they're problem-free, either. I would say, if you're going to buy them, make sure you get a support contract. We didn't, and the support we've gotten so far has not been as responsive and I would have expected from such an expensive product.\n\nOverall, I would recommend them. But just realize you're buying the race car of the storage world, which implies 1) you'll go fast, 2) you'll spend $$$, and 3) you'll have interesting problems most other people do not have.\nOn Apr 29, 2011, at 7:24 AM, Mark Steben wrote:Hi, Had a recent conversation with a tech from this company called FUSION-IO. They sell io cards designed to replace conventional disks.  The cards can be up to 3 TB in size and apparentlyare installed in closer proximity to the CPU than the disks are.  They claim\nperformance boosts several times better than the spinning disks.Just wondering if anyone has had any experience with this company and thesecards.  We're currently at postgres 8.3.11.  Any insights / recommendations appreciated.  thank you,We have a bunch of their cards, purchased when we were still on 8.1 and were having difficulty with vacuums. (Duh.) They helped out a bunch for that. They're fast, no question about it. Each FusionIO device (they have cards with multiple devices) can do ~100k iops. So that's nifty. On the downside, they're also somewhat exotic, in that they need special kernel drivers, so they're not as easy as just buying a bunch of drives. More negatively, they're $$$. And even more negatively, their drivers are inefficient - expect to dedicate a CPU core to doing whatever they need done. In the \"still undecided\" category I'm somewhat worried about their longevity. They say they overprovision the amount of flash so that burnout isn't a problem, and at least it's not like competitors we've seen, which throttle your writes so that you don't burn out as fast. Of course, the only way to tell how long they'll really last is to use them a long time. We're only about 2 years into them so come back to me in 3 years about this. :) Also, while I would say they seem reliable (they have a supercap and succeeded every power-pull test we did) we just recently we've had some issues which appear to be fio driver-related that effectively brought our server down. Fusion thinks its our kernel parameters, but we're unconvinced, given the length of time we've run with the same kernel settings. I'm not yet ready to say these cards are unreliable, but I'm no longer willing to say they're problem-free, either. I would say, if you're going to buy them, make sure you get a support contract. 
We didn't, and the support we've gotten so far has not been as responsive and I would have expected from such an expensive product.Overall, I would recommend them. But just realize you're buying the race car of the storage world, which implies 1) you'll go fast, 2) you'll spend $$$, and 3) you'll have interesting problems most other people do not have.", "msg_date": "Fri, 29 Apr 2011 07:54:14 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "We use FusionIO products for PGSQL. They work in most linux distributions and even have beta FreeBSD drivers for those of us who prefer that OS. They cost a lot, perform really well, and FusionIO has great support for those of us who prefer not to use Windows or OS X, something that many other vendors can't and don't usually care about.\n\n\nTyler Mills\nNetwork Operations Tools Technician\nPavlov Media Inc.\nNOC On Call: 217.841.5045\nNOC main line: 217.353.3059\n________________________________\nFrom: [email protected] [[email protected]] on behalf of Ben Chobot [[email protected]]\nSent: Friday, April 29, 2011 9:54 AM\nTo: Mark Steben\nCc: [email protected]\nSubject: Re: [PERFORM] FUSION-IO io cards\n\nOn Apr 29, 2011, at 7:24 AM, Mark Steben wrote:\n\nHi,\nHad a recent conversation with a tech from this company called FUSION-IO. They sell\n io cards designed to replace conventional disks. The cards can be up to 3 TB in size and apparently\nare installed in closer proximity to the CPU than the disks are. They claim\nperformance boosts several times better than the spinning disks.\n\nJust wondering if anyone has had any experience with this company and these\ncards. We're currently at postgres 8.3.11.\n\nAny insights / recommendations appreciated. thank you,\n\nWe have a bunch of their cards, purchased when we were still on 8.1 and were having difficulty with vacuums. (Duh.) They helped out a bunch for that. They're fast, no question about it. Each FusionIO device (they have cards with multiple devices) can do ~100k iops. So that's nifty.\n\nOn the downside, they're also somewhat exotic, in that they need special kernel drivers, so they're not as easy as just buying a bunch of drives. More negatively, they're $$$. And even more negatively, their drivers are inefficient - expect to dedicate a CPU core to doing whatever they need done.\n\nIn the \"still undecided\" category I'm somewhat worried about their longevity. They say they overprovision the amount of flash so that burnout isn't a problem, and at least it's not like competitors we've seen, which throttle your writes so that you don't burn out as fast. Of course, the only way to tell how long they'll really last is to use them a long time. We're only about 2 years into them so come back to me in 3 years about this. :) Also, while I would say they seem reliable (they have a supercap and succeeded every power-pull test we did) we just recently we've had some issues which appear to be fio driver-related that effectively brought our server down. Fusion thinks its our kernel parameters, but we're unconvinced, given the length of time we've run with the same kernel settings. I'm not yet ready to say these cards are unreliable, but I'm no longer willing to say they're problem-free, either. I would say, if you're going to buy them, make sure you get a support contract. 
We didn't, and the support we've gotten so far has not been as responsive and I would have expected from such an expensive product.\n\nOverall, I would recommend them. But just realize you're buying the race car of the storage world, which implies 1) you'll go fast, 2) you'll spend $$$, and 3) you'll have interesting problems most other people do not have.\n\n\n\n\n\n\n\n\nWe use FusionIO products for PGSQL. They work in most linux distributions and even have beta FreeBSD drivers for those of us who prefer that OS.  They cost a lot, perform really well, and FusionIO has great support for those of us who prefer not to use Windows\n or OS X, something that many other vendors can't and don't usually care about.\n\n\n\n\n\nTyler Mills\nNetwork Operations Tools Technician\nPavlov Media Inc.\nNOC On Call: 217.841.5045\nNOC main line: 217.353.3059\n\n\n\n\n\nFrom: [email protected] [[email protected]] on behalf of Ben Chobot [[email protected]]\nSent: Friday, April 29, 2011 9:54 AM\nTo: Mark Steben\nCc: [email protected]\nSubject: Re: [PERFORM] FUSION-IO io cards\n\n\n\n\n\n\nOn Apr 29, 2011, at 7:24 AM, Mark Steben wrote:\n\nHi, \nHad a recent conversation with a tech from this company called FUSION-IO. They sell\n io cards designed to replace conventional disks.  The cards can be up to 3 TB in size and apparently\nare installed in closer proximity to the CPU than the disks are.  They claim\nperformance boosts several times better than the spinning disks.\n\nJust wondering if anyone has had any experience with this company and these\ncards.  We're currently at postgres 8.3.11.  \n\nAny insights / recommendations appreciated.  thank you,\n\n\n\n\nWe have a bunch of their cards, purchased when we were still on 8.1 and were having difficulty with vacuums. (Duh.) They helped out a bunch for that. They're fast, no question about it. Each FusionIO device (they have cards with multiple devices) can do\n ~100k iops. So that's nifty. \n\n\nOn the downside, they're also somewhat exotic, in that they need special kernel drivers, so they're not as easy as just buying a bunch of drives. More negatively, they're $$$. And even more negatively, their drivers are inefficient - expect to dedicate\n a CPU core to doing whatever they need done. \n\n\n\nIn the \"still undecided\" category I'm somewhat worried about their longevity. They say they overprovision the amount of flash so that burnout isn't a problem, and at least it's not like competitors we've seen, which throttle your writes so that you don't\n burn out as fast. Of course, the only way to tell how long they'll really last is to use them a long time. We're only about 2 years into them so come back to me in 3 years about this. :) Also, while I would say they seem reliable (they have a supercap and\n succeeded every power-pull test we did) we just recently we've had some issues which\nappear to be fio driver-related that effectively brought our server down. Fusion thinks its our kernel parameters, but we're unconvinced, given the length of time we've run with the same kernel settings. I'm not yet ready to say these cards are unreliable,\n but I'm no longer willing to say they're problem-free, either. I would say, if you're going to buy them, make sure you get a support contract. We didn't, and the support we've gotten so far has not been as responsive and I would have expected from such an\n expensive product.\n\n\nOverall, I would recommend them. 
But just realize you're buying the race car of the storage world, which implies 1) you'll go fast, 2) you'll spend $$$, and 3) you'll have interesting problems most other people do not have.", "msg_date": "Fri, 29 Apr 2011 15:20:04 +0000", "msg_from": "Tyler Mills <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "On 04/29/2011 04:54 PM, Ben Chobot wrote:\n> We have a bunch of their cards, purchased when we were still on 8.1 and\n> were having difficulty with vacuums. (Duh.) They helped out a bunch for\n> that. They're fast, no question about it. Each FusionIO device (they\n> have cards with multiple devices) can do ~100k iops. So that's nifty.\n>\n> On the downside, they're also somewhat exotic, in that they need special\n> kernel drivers, so they're not as easy as just buying a bunch of drives.\n> More negatively, they're $$$. And even more negatively, their drivers\n> are inefficient - expect to dedicate a CPU core to doing whatever they\n> need done.\n\nI would recommend to have a look a Texas Memory Systems for a \ncomparison. FusionIO does a lot of work in software, as Ben noted \ncorrectly, while TMS (their stuff is called RAMSAN) is a more \nall-in-hardware device.\n\nHaven't used TMS myself, but talked to people who do know and their \nexperience with both products is that TMS is problem-free and has a more \ndeterministic performance. And I have in fact benchmarked FusionIO and \nobserved non-deterministic performance, which means performance goes \ndown siginificantly on occasion - probably because some software-based \nhouse-keeping needs to be done.\n\n-- \nJoachim Worringen\nSenior Performance Architect\n\nInternational Algorithmic Trading GmbH\n\n", "msg_date": "Fri, 29 Apr 2011 18:04:17 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "On Fri, Apr 29, 2011 at 10:24 AM, Mark Steben\n<[email protected]> wrote:\n> Just wondering if anyone has had any experience with this company and these\n> cards.  We're currently at postgres 8.3.11.\n\ntd;dr Ask for a sample and test it out for yourself.\n\nI asked for, and received, a sample 80GB unit from Fusion to test out.\nDue to my own time constraints, I did not get to do nearly the testing I wanted\nto perform.\n\nAnecdotally, they are bloody fast. I ran an intense OLTP workload for 8.4 on it,\nand the card far exceeded anything I've seen elsewhere.\n\nI did see the CPU utilization that is mentioned elsewhere in this\nthread, and the\ndrivers are exotic enough ( again repeating things mentioned elsewhere ) that\nI couldn't just load them up on any old linux distro I wanted to.\n\nThe pre-sales engineer assigned to me - Sarit Birzon - was very\nhelpful and nice.\n", "msg_date": "Fri, 29 Apr 2011 12:23:06 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "TMS RAMSAN is a DRAM device. TMS built DRAM SSDs going back decades, but have recently gotten into flash SSDs as well. The DRAM parts are in an order of magnitude more expensive than others' flash SSDs, gig by gig. 
Also, about as fast as off cpu storage gets.\n\nregards,\nRobert\n\n---- Original message ----\n>Date: Fri, 29 Apr 2011 18:04:17 +0200\n>From: [email protected] (on behalf of Joachim Worringen <[email protected]>)\n>Subject: Re: [PERFORM] FUSION-IO io cards \n>To: [email protected]\n>\n>On 04/29/2011 04:54 PM, Ben Chobot wrote:\n>> We have a bunch of their cards, purchased when we were still on 8.1 and\n>> were having difficulty with vacuums. (Duh.) They helped out a bunch for\n>> that. They're fast, no question about it. Each FusionIO device (they\n>> have cards with multiple devices) can do ~100k iops. So that's nifty.\n>>\n>> On the downside, they're also somewhat exotic, in that they need special\n>> kernel drivers, so they're not as easy as just buying a bunch of drives.\n>> More negatively, they're $$$. And even more negatively, their drivers\n>> are inefficient - expect to dedicate a CPU core to doing whatever they\n>> need done.\n>\n>I would recommend to have a look a Texas Memory Systems for a \n>comparison. FusionIO does a lot of work in software, as Ben noted \n>correctly, while TMS (their stuff is called RAMSAN) is a more \n>all-in-hardware device.\n>\n>Haven't used TMS myself, but talked to people who do know and their \n>experience with both products is that TMS is problem-free and has a more \n>deterministic performance. And I have in fact benchmarked FusionIO and \n>observed non-deterministic performance, which means performance goes \n>down siginificantly on occasion - probably because some software-based \n>house-keeping needs to be done.\n>\n>-- \n>Joachim Worringen\n>Senior Performance Architect\n>\n>International Algorithmic Trading GmbH\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 29 Apr 2011 12:52:14 -0400 (EDT)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "Ben Chobot wrote:\n> Also, while I would say they seem reliable (they have a supercap and \n> succeeded every power-pull test we did) we just recently we've had \n> some issues which /appear/ to be fio driver-related that effectively \n> brought our server down. Fusion thinks its our kernel parameters, but \n> we're unconvinced, given the length of time we've run with the same \n> kernel settings. I'm not yet ready to say these cards are unreliable, \n> but I'm no longer willing to say they're problem-free, either.\n\nBen has written a nice summary of the broader experience of everyone \nI've talked to who has deployed Fusion IO. The race car anology is a \ngood one. Fast, minimal concerns about data loss, but occasional quirky \nthings that are frustrating to track down and eliminate. Not much \ntransparency in terms of what it's doing under the hood, which makes \nlong-term reliability a concern too. A particularly regular complaint \nis that there are situations where the card can requite a long \nconsistency check time on system boot after a crash. Nothing lost, but \na long (many minutes) delay before the server is functioning again is \npossible.\n\nThe already mentioned TI RAMSAN at an ever higher price point is also a \npossibility. Another more recent direct competitor to FusionIO's \nproducts comes from Virident: http://www.virident.com/ They seem to be \ndoing everything right to make a FusionIO competitor at the same basic \nprice point. 
They've already released good MySQL performance numbers, \nand they tell me that PostgreSQL ones are done but just not published \nyet; going through validation still. The \"Performance Relative to \nCapacity Used\" graph at \nhttp://www.ssdperformanceblog.com/2010/12/write-performance-on-virident-tachion-card/ \nis one anyone deploying on FusionIO should also be aware of.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 29 Apr 2011 15:03:18 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" }, { "msg_contents": "On 04/29/2011 06:52 PM, [email protected] wrote:\n> TMS RAMSAN is a DRAM device. TMS built DRAM SSDs going back decades,\n> but have recently gotten into flash SSDs as well. The DRAM parts are\n> in an order of magnitude more expensive than others' flash SSDs, gig\n> by gig. Also, about as fast as off cpu storage gets.\n\nTheir naming convention is a bit confusing, but in fact the RamSan boxes \nare available in flash and RAM-based variants:\n\n\"The RamSan-630 offers 10 TB SLC Flash storage, 1,000,000 IOPS (10 GB/s) \nrandom sustained throughput, and just 500 watts power consumption.\"\n\nI was referring to those. Of course, they may be more expensive than \nFusionIO. You get what you pay for (in this case).\n\n-- \nJoachim Worringen\nSenior Performance Architect\n\nInternational Algorithmic Trading GmbH\n\n", "msg_date": "Fri, 29 Apr 2011 21:15:33 +0200", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FUSION-IO io cards" } ]
[ { "msg_contents": "Howdy. We've got a query that takes less than a second unless we add a \n\"order by\" to it, after which it takes 40 seconds. Here's the query:\n\nselect page_number, ps_id, ps_page_id from ps_page where ps_page_id in \n(select ps_page_id from documents_ps_page where document_id in (select \ndocument_id from temp_doc_ids)) order by ps_page_id;\n\nThe parts of the schema used in this query:\n\n Table \"public.ps_page\"\n Column | Type | Modifiers \n\n-------------+---------+--------------------------------------------------------------\n ps_page_id | integer | not null default \nnextval('ps_page_ps_page_id_seq'::regclass)\n ps_id | integer | not null\n page_number | integer | not null\nIndexes:\n \"ps_page_pkey\" PRIMARY KEY, btree (ps_page_id)\n \"ps_page_ps_id_key\" UNIQUE, btree (ps_id, page_number)\n\n Table \"public.documents_ps_page\"\n Column | Type | Modifiers\n-------------+---------+-----------\n document_id | text | not null\n ps_page_id | integer | not null\nIndexes:\n \"documents_ps_page_pkey\" PRIMARY KEY, btree (document_id, ps_page_id)\n \"documents_ps_page_ps_page_id_idx\" btree (ps_page_id)\n\ntemp_doc_ids (temporary table):\n document_id text not null\n\nThe query with the \"order by\" (slow):\n\nexplain analyze select page_number, ps_id, ps_page_id from ps_page where \nps_page_id in (select ps_page_id from documents_ps_page where \ndocument_id in (select document_id from temp_document_ids)) order by \nps_page_id\n Merge Semi Join (cost=212570.02..3164648.31 rows=34398932 \nwidth=12) (actual time=54749.281..54749.295 rows=5 loops=1)\n Merge Cond: (ps_page.ps_page_id = documents_ps_page.ps_page_id)\n -> Index Scan using ps_page_pkey on ps_page \n(cost=0.00..2999686.03 rows=86083592 width=12) (actual \ntime=0.029..36659.393 rows=85591467 loops=1)\n -> Sort (cost=18139.39..18152.52 rows=6255 width=4) (actual \ntime=0.080..0.083 rows=5 loops=1)\n Sort Key: documents_ps_page.ps_page_id\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=26.23..17808.09 rows=6255 width=4) \n(actual time=0.044..0.073 rows=5 loops=1)\n -> HashAggregate (cost=26.23..27.83 rows=200 \nwidth=32) (actual time=0.015..0.017 rows=5 loops=1)\n -> Seq Scan on temp_document_ids \n(cost=0.00..23.48 rows=1310 width=32) (actual time=0.004..0.007 rows=5 \nloops=1)\n -> Index Scan using documents_ps_page_pkey on \ndocuments_ps_page (cost=0.00..88.59 rows=31 width=42) (actual \ntime=0.009..0.010 rows=1 loops=5)\n Index Cond: (documents_ps_page.document_id = \n(temp_document_ids.document_id)::text)\n Total runtime: 54753.028 ms\n\nThe query without the \"order by\" (fast):\n\nproduction=> explain analyze select page_number, ps_id, ps_page_id from \nps_page where ps_page_id in (select ps_page_id from documents_ps_page \nwhere document_id in (select document_id from temp_doc_ids));\n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=17821.42..87598.71 rows=34398932 width=12) (actual \ntime=0.099..0.136 rows=5 loops=1)\n -> HashAggregate (cost=17821.42..17871.46 rows=6255 width=4) \n(actual time=0.083..0.096 rows=5 loops=1)\n -> Nested Loop (cost=26.23..17808.28 rows=6255 width=4) \n(actual time=0.047..0.076 rows=5 loops=1)\n -> HashAggregate (cost=26.23..27.83 rows=200 width=32) \n(actual time=0.014..0.015 rows=5 loops=1)\n -> Seq Scan on temp_doc_ids (cost=0.00..23.48 \nrows=1310 width=32) (actual time=0.005..0.005 rows=5 loops=1)\n -> Index 
Scan using documents_ps_page_pkey on \ndocuments_ps_page (cost=0.00..88.59 rows=31 width=42) (actual \ntime=0.010..0.010 rows=1 loops=5)\n Index Cond: (documents_ps_page.document_id = \ntemp_doc_ids.document_id)\n -> Index Scan using ps_page_pkey on ps_page (cost=0.00..11.14 \nrows=1 width=12) (actual time=0.007..0.007 rows=1 loops=5)\n Index Cond: (ps_page.ps_page_id = documents_ps_page.ps_page_id)\n Total runtime: 0.213 ms\n(10 rows)\n\nWe notice that in all cases, the plans contain some estimated row counts \nthat differ quite a bit from the actual row counts. We tried increasing \n(from 100 to 1,000 and 10,000) the statistics targets for each of the \nindexed columns, one at a time, and analyzing the table/column with each \nchange. This had no effect.\n\nPostgres version 8.4.7 on AMD64, Debian Linux \"wheezy\" (aka \"testing\").\n\nWhere should we look next?\n", "msg_date": "Fri, 29 Apr 2011 11:24:11 -0700", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": true, "msg_subject": "8.4.7, incorrect estimate" }, { "msg_contents": "Wayne Conrad <[email protected]> wrote:\n \n> select page_number, ps_id, ps_page_id from ps_page where\n> ps_page_id in (select ps_page_id from documents_ps_page where\n> document_id in (select document_id from temp_doc_ids)) order by\n> ps_page_id;\n \n> [estimated rows=34398932; actual rows=5]\n \n> We tried increasing (from 100 to 1,000 and 10,000) the statistics\n> targets for each of the indexed columns, one at a time, and\n> analyzing the table/column with each change. This had no effect\n \nOuch.\n \nOut of curiosity, what do you get with?:\n \nexplain analyze\nselect\n page_number,\n ps_id,\n ps_page_id\n from ps_page p\n where exists\n (\n select * from documents_ps_page d\n where d.ps_page_id = p.ps_page_id\n and exists\n (select * from temp_document_ids t\n where t.document_id = d.document_id)\n )\n order by ps_page_id\n;\n \n-Kevin\n", "msg_date": "Fri, 29 Apr 2011 14:12:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.7, incorrect estimate" }, { "msg_contents": "Wayne Conrad <[email protected]> wrote:\n \n> -> Seq Scan on temp_doc_ids \n> (cost=0.00..23.48 rows=1310 width=32)\n> (actual time=0.005..0.005 rows=5 loops=1)\n \nAlso, make sure that you run ANALYZE against your temp table right\nbefore running your query.\n \n-Kevin\n", "msg_date": "Fri, 29 Apr 2011 14:33:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.7, incorrect estimate" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Also, make sure that you run ANALYZE against your temp table right\n> before running your query.\n\nYeah. It's fairly hard to credit that temp_document_ids has any stats\ngiven the way-off estimates for it. Keep in mind that autovacuum\ncan't help you on temp tables: since only your own session can\naccess a temp table, you have to ANALYZE it explicitly if you need\nthe planner to have decent stats about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Apr 2011 16:58:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.7, incorrect estimate " }, { "msg_contents": "Replying to the list this time (oops)...\n\nOn 04/29/11 12:33, Kevin Grittner wrote:\n> Also, make sure that you run ANALYZE against your temp table right\n> before running your query.\n\nI did that, and also added an index to it. 
That had no effect on the \nrun time, but did fix the estimate for the temporary table.\n\nOn 04/29/11 12:12, Kevin Grittner wrote:\n> Out of curiosity, what do you get with?:\n>\n> explain analyze\n> select\n> page_number,\n> ps_id,\n> ps_page_id\n> from ps_page p\n> where exists\n> (\n> select * from documents_ps_page d\n> where d.ps_page_id = p.ps_page_id\n> and exists\n> (select * from temp_document_ids t\n> where t.document_id = d.document_id)\n> )\n> order by ps_page_id\n\n Merge Semi Join (cost=186501.69..107938082.91 rows=29952777 width=12) \n(actual time=242801.828..244572.318 rows=5 loops=1)\n Merge Cond: (p.ps_page_id = d.ps_page_id)\n -> Index Scan using ps_page_pkey on ps_page p \n(cost=0.00..2995637.47 rows=86141904 width=12) (actual \ntime=0.052..64140.510 rows=85401688 loops=1)\n -> Index Scan using documents_ps_page_ps_page_id_idx on \ndocuments_ps_page d (cost=0.00..104384546.06 rows=37358320 width=4) \n(actual time=161483.657..163254.131 rows=5 loops=1)\n Filter: (alternatives: SubPlan 1 or hashed SubPlan 2)\n SubPlan 1\n -> Seq Scan on temp_doc_ids t (cost=0.00..1.35 rows=1 \nwidth=0) (never executed)\n Filter: (document_id = $0)\n SubPlan 2\n -> Seq Scan on temp_doc_ids t (cost=0.00..1.34 rows=5 \nwidth=35) (actual time=0.005..0.007 rows=5 loops=1)\n Total runtime: 244572.432 ms\n(11 rows)\n\n", "msg_date": "Mon, 02 May 2011 06:19:55 -0700", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.7, incorrect estimate" }, { "msg_contents": "Wayne Conrad <[email protected]> writes:\n> On 04/29/11 12:12, Kevin Grittner wrote:\n>> Out of curiosity, what do you get with?:\n>> \n>> explain analyze\n>> select\n>> page_number,\n>> ps_id,\n>> ps_page_id\n>> from ps_page p\n>> where exists\n>> (\n>> select * from documents_ps_page d\n>> where d.ps_page_id = p.ps_page_id\n>> and exists\n>> (select * from temp_document_ids t\n>> where t.document_id = d.document_id)\n>> )\n>> order by ps_page_id\n\n> Merge Semi Join (cost=186501.69..107938082.91 rows=29952777 width=12) \n> (actual time=242801.828..244572.318 rows=5 loops=1)\n> Merge Cond: (p.ps_page_id = d.ps_page_id)\n> -> Index Scan using ps_page_pkey on ps_page p \n> (cost=0.00..2995637.47 rows=86141904 width=12) (actual \n> time=0.052..64140.510 rows=85401688 loops=1)\n> -> Index Scan using documents_ps_page_ps_page_id_idx on \n> documents_ps_page d (cost=0.00..104384546.06 rows=37358320 width=4) \n> (actual time=161483.657..163254.131 rows=5 loops=1)\n> Filter: (alternatives: SubPlan 1 or hashed SubPlan 2)\n> SubPlan 1\n> -> Seq Scan on temp_doc_ids t (cost=0.00..1.35 rows=1 \n> width=0) (never executed)\n> Filter: (document_id = $0)\n> SubPlan 2\n> -> Seq Scan on temp_doc_ids t (cost=0.00..1.34 rows=5 \n> width=35) (actual time=0.005..0.007 rows=5 loops=1)\n> Total runtime: 244572.432 ms\n> (11 rows)\n\n[ pokes at that ... ] I think what you've got here is an oversight in\nthe convert-EXISTS-to-semijoin logic: it pulls up the outer EXISTS but\nfails to recurse on it, which would be needed to convert the lower\nEXISTS into a semijoin as well, which is what's needed in order to get\na non-bogus selectivity estimate for it.\n\nI'll take a look at fixing that, but not sure if it'll be reasonable to\nback-patch or not. 
In the meantime, you need to look into restructuring\nthe query to avoid nesting the EXISTS probes, if possible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 May 2011 11:11:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.7, incorrect estimate " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n> Wayne Conrad <[email protected]> writes:\n \n>> Total runtime: 244572.432 ms\n \n> I'll take a look at fixing that, but not sure if it'll be\n> reasonable to back-patch or not. In the meantime, you need to\n> look into restructuring the query to avoid nesting the EXISTS\n> probes, if possible.\n \nWayne, I think your best bet at this point may be to just (INNER)\nJOIN the three tables, and if there is a possibility of duplicates\nto use SELECT DISTINCT.\n \n-Kevin\n", "msg_date": "Mon, 02 May 2011 11:04:56 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.7, incorrect estimate" }, { "msg_contents": "On 05/02/11 08:11, Tom Lane wrote:\n> Wayne Conrad<[email protected]> writes:\n>> On 04/29/11 12:12, Kevin Grittner wrote:\n>>> Out of curiosity, what do you get with?:\n>>>\n>>> explain analyze\n>>> select\n>>> page_number,\n>>> ps_id,\n>>> ps_page_id\n>>> from ps_page p\n>>> where exists\n>>> (\n>>> select * from documents_ps_page d\n>>> where d.ps_page_id = p.ps_page_id\n>>> and exists\n>>> (select * from temp_document_ids t\n>>> where t.document_id = d.document_id)\n>>> )\n>>> order by ps_page_id\n>\n>> Merge Semi Join (cost=186501.69..107938082.91 rows=29952777 width=12)\n>> (actual time=242801.828..244572.318 rows=5 loops=1)\n>> Merge Cond: (p.ps_page_id = d.ps_page_id)\n>> -> Index Scan using ps_page_pkey on ps_page p\n>> (cost=0.00..2995637.47 rows=86141904 width=12) (actual\n>> time=0.052..64140.510 rows=85401688 loops=1)\n>> -> Index Scan using documents_ps_page_ps_page_id_idx on\n>> documents_ps_page d (cost=0.00..104384546.06 rows=37358320 width=4)\n>> (actual time=161483.657..163254.131 rows=5 loops=1)\n>> Filter: (alternatives: SubPlan 1 or hashed SubPlan 2)\n>> SubPlan 1\n>> -> Seq Scan on temp_doc_ids t (cost=0.00..1.35 rows=1\n>> width=0) (never executed)\n>> Filter: (document_id = $0)\n>> SubPlan 2\n>> -> Seq Scan on temp_doc_ids t (cost=0.00..1.34 rows=5\n>> width=35) (actual time=0.005..0.007 rows=5 loops=1)\n>> Total runtime: 244572.432 ms\n>> (11 rows)\n>\n> [ pokes at that ... ] I think what you've got here is an oversight in\n> the convert-EXISTS-to-semijoin logic: it pulls up the outer EXISTS but\n> fails to recurse on it, which would be needed to convert the lower\n> EXISTS into a semijoin as well, which is what's needed in order to get\n> a non-bogus selectivity estimate for it.\n>\n> I'll take a look at fixing that, but not sure if it'll be reasonable to\n> back-patch or not. In the meantime, you need to look into restructuring\n> the query to avoid nesting the EXISTS probes, if possible.\n>\n> \t\t\tregards, tom lane\n>\n\nTom,\n\nThanks for looking at this. FYI, the same problem occurs when nesting \n\"where ... in (...)\" (see start of thread, or I can repost it if you \nwant). In any case, I can make the problem go away by using another \nlayer of temporary table to avoid the nesting. That's what I'll do for now.\n\nI'm not worried about back-patches to fix this in 8.4. 
We'll be \nupgrading this box to 9 at some point; we'll just pick up any fix when \nit hits 9.\n\nBest Regards,\nWayne Conrad\n", "msg_date": "Mon, 02 May 2011 09:14:41 -0700", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.7, incorrect estimate" } ]
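For anyone hitting the same planner limitation, the two workarounds discussed above look roughly as follows. These are sketches reconstructed from the table definitions quoted earlier in the thread (ps_page, documents_ps_page, temp_doc_ids), not statements taken from it, and the temporary table name temp_ps_page_ids is invented here.

-- Kevin's suggestion: flatten the nested IN/EXISTS into plain joins, with
-- DISTINCT in case a page is referenced by more than one selected document.
SELECT DISTINCT p.page_number, p.ps_id, p.ps_page_id
  FROM temp_doc_ids t
  JOIN documents_ps_page d ON d.document_id = t.document_id
  JOIN ps_page p ON p.ps_page_id = d.ps_page_id
 ORDER BY p.ps_page_id;

-- Wayne's workaround: materialize the inner lookup in another temporary
-- table and ANALYZE it, so the planner works from real row counts rather
-- than the defaults that produced the 34-million-row estimate.
CREATE TEMPORARY TABLE temp_ps_page_ids AS
    SELECT DISTINCT d.ps_page_id
      FROM documents_ps_page d
      JOIN temp_doc_ids t ON t.document_id = d.document_id;
ANALYZE temp_ps_page_ids;

SELECT p.page_number, p.ps_id, p.ps_page_id
  FROM ps_page p
  JOIN temp_ps_page_ids x ON x.ps_page_id = p.ps_page_id
 ORDER BY p.ps_page_id;

Either form gives the planner joins it can estimate directly, which should be enough to avoid the full index scan of ps_page seen in the slow plan.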
[ { "msg_contents": "On 04/30/2011 12:24 AM, Hsien-Wen Chu wrote:\n> I'm little bit confuse why it is not safe. and my question is following.\n>\n> for database application, we need to avoid double cache, PostgreSQL\n> shared_buffer will cache the data, so we do not want to file system to\n> cache the data right?. so the DIRECT IO is better, right?.\n> \n\nNo. There are parts of PostgreSQL that expect the operating system to \ndo write caching. Two examples are the transaction logs and the \nprocessing done by VACUUM. If you eliminate that with direct I/O, the \nslowdown can be much, much larger than what you gain by eliminating \ndouble-buffering on reads.\n\nOn the read side, PostgreSQL also expects that operating system features \nlike read-ahead are working properly. While this does introduce some \ndouble-buffering, the benefits for sequential scans are larger than that \noverhead, too. You may not get the expected read-ahead behavior if you \nuse direct I/O.\n\nDirect I/O is not a magic switch that makes things faster; you have to \nvery specifically write your application to work around what it does, \ngood and bad, before it is expected to improves things. And PostgreSQL \nisn't written that way. It definitely requires OS caching to work well.\n\n> for VXFS, if the we use ioctl(fd,vx_cacheset,vx_concurrent) API,\n> according to the vxfs document, it will hold a shared lock for write\n> operation, but not the exclusive clock, also it is a direct IO,\n> \n\nThere are very specific technical requirements that you must follow when \nusing direct I/O. You don't get direct I/O without also following its \nalignment needs. Read the \"Direct I/O best practices\" section of \nhttp://people.redhat.com/msnitzer/docs/io-limits.txt for a quick intro \nto the subject. And there's this additional set of requirements you \nmention in order for this particular VXFS feature to work, which I can't \neven comment on. But you can be sure PostgreSQL doesn't try to do \neither of those things--it's definitely not aligning for direct I/O. \nHas nothing to do with ACID or the filesystem.\n\nNow, the VXFS implementation may do some tricks that bypass the \nalignment requirements. But even if you got it to work, it would still \nbe slower for anything but some read-only workloads. Double buffering \nis really not that big of a performance problem, you just need to make \nsure you don't set shared_buffers to an extremely large value.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sat, 30 Apr 2011 00:47:49 -0400", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VX_CONCURRENT flag on vxfs( 5.1 or later) for performance for\n\tpostgresql?" }, { "msg_contents": "Hi Mr. Greg Smith\n\nsince the block size is 8k for the default, and it consisted with many\ntuple/line; as my understand, if any tuple/line is changed(maybe\nupdate, insert, delete). the block will be marked as dirty block. and\nthen it will be flashed to disk by bgwriter.\n\nso my question is if the data block(8k) is aligned with the file\nsystem block? if it is aligned with file system block, so what's the\npotential issue make it is not safe for direct io. 
(please assume\nvxfs, vxvm and the disk sector is aligned ).please correct me if any\nincorrect.\n\nthank you very much\nTony\n\n\nOn 4/30/11, Greg Smith <[email protected]> wrote:\n> On 04/30/2011 12:24 AM, Hsien-Wen Chu wrote:\n>> I'm little bit confuse why it is not safe. and my question is following.\n>>\n>> for database application, we need to avoid double cache, PostgreSQL\n>> shared_buffer will cache the data, so we do not want to file system to\n>> cache the data right?. so the DIRECT IO is better, right?.\n>>\n>\n> No. There are parts of PostgreSQL that expect the operating system to\n> do write caching. Two examples are the transaction logs and the\n> processing done by VACUUM. If you eliminate that with direct I/O, the\n> slowdown can be much, much larger than what you gain by eliminating\n> double-buffering on reads.\n>\n> On the read side, PostgreSQL also expects that operating system features\n> like read-ahead are working properly. While this does introduce some\n> double-buffering, the benefits for sequential scans are larger than that\n> overhead, too. You may not get the expected read-ahead behavior if you\n> use direct I/O.\n>\n> Direct I/O is not a magic switch that makes things faster; you have to\n> very specifically write your application to work around what it does,\n> good and bad, before it is expected to improves things. And PostgreSQL\n> isn't written that way. It definitely requires OS caching to work well.\n>\n>> for VXFS, if the we use ioctl(fd,vx_cacheset,vx_concurrent) API,\n>> according to the vxfs document, it will hold a shared lock for write\n>> operation, but not the exclusive clock, also it is a direct IO,\n>>\n>\n> There are very specific technical requirements that you must follow when\n> using direct I/O. You don't get direct I/O without also following its\n> alignment needs. Read the \"Direct I/O best practices\" section of\n> http://people.redhat.com/msnitzer/docs/io-limits.txt for a quick intro\n> to the subject. And there's this additional set of requirements you\n> mention in order for this particular VXFS feature to work, which I can't\n> even comment on. But you can be sure PostgreSQL doesn't try to do\n> either of those things--it's definitely not aligning for direct I/O.\n> Has nothing to do with ACID or the filesystem.\n>\n> Now, the VXFS implementation may do some tricks that bypass the\n> alignment requirements. But even if you got it to work, it would still\n> be slower for anything but some read-only workloads. Double buffering\n> is really not that big of a performance problem, you just need to make\n> sure you don't set shared_buffers to an extremely large value.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n", "msg_date": "Sat, 30 Apr 2011 16:51:11 +0800", "msg_from": "Hsien-Wen Chu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VX_CONCURRENT flag on vxfs( 5.1 or later) for performance for\n\tpostgresql?" }, { "msg_contents": "On Sat, Apr 30, 2011 at 4:51 AM, Hsien-Wen Chu <[email protected]> wrote:\n> since the block size is 8k for the default, and it consisted with many\n> tuple/line; as my understand, if any tuple/line is changed(maybe\n> update, insert, delete). the block will be marked as dirty block. 
and\n> then it will be flashed to disk by bgwriter.\n\nTrue...\n\n> so my question is if the data block(8k) is aligned with the file\n> system block? if it is aligned with file system block, so what's the\n> potential issue make it is not safe for direct io. (please assume\n> vxfs, vxvm and the disk sector is aligned ).please correct me if any\n> incorrect.\n\nIt's not about safety - it's about performance. On a machine with\n64GB of RAM, a typical setting for shared_buffers is 8GB. If you\nstart reading blocks into the PostgreSQL cache - or writing them out\nof the cache - in a way that bypasses the filesystem cache, you're\ngoing to have only 8GB of cache, instead of some much larger amount.\nMore cache = better performance.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 5 May 2011 13:53:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VX_CONCURRENT flag on vxfs( 5.1 or later) for\n\tperformance for postgresql?" }, { "msg_contents": "You should rather consider VxFS tuning - it has an auto-discovery for\nDIRECT I/O according to the block size. Just change this setting to 8K or\n16-32K depending on your workload - then all I/O operations with a\nbigger block size will be executed in DIRECT mode and bypass FS cache\n(which is logical, as usually it'll correspond to a full scan or a seq\nscan of some data), while I/O requests with smaller blocks will remain\ncached, which is very useful as it'll mainly cache random I/O (mainly\nindex access).\n\nWith such a tuning I've got over a 35% performance improvement compared\nto any other state (full DIRECT or fully cached).\n\nRgds,\n-Dimitri\n\nOn 5/5/11, Robert Haas <[email protected]> wrote:\n> On Sat, Apr 30, 2011 at 4:51 AM, Hsien-Wen Chu <[email protected]>\n> wrote:\n>> since the block size is 8k for the default, and it consisted with many\n>> tuple/line; as my understand, if any tuple/line is changed(maybe\n>> update, insert, delete). the block will be marked as dirty block. and\n>> then it will be flashed to disk by bgwriter.\n>\n> True...\n>\n>> so my question is if the data block(8k) is aligned with the file\n>> system block? if it is aligned with file system block, so what's the\n>> potential issue make it is not safe for direct io. (please assume\n>> vxfs, vxvm and the disk sector is aligned ).please correct me if any\n>> incorrect.\n>\n> It's not about safety - it's about performance. On a machine with\n> 64GB of RAM, a typical setting for shared_buffers is 8GB. If you\n> start reading blocks into the PostgreSQL cache - or writing them out\n> of the cache - in a way that bypasses the filesystem cache, you're\n> going to have only 8GB of cache, instead of some much larger amount.\n> More cache = better performance.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 6 May 2011 12:53:07 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VX_CONCURRENT flag on vxfs( 5.1 or later) for\n\tperformance for postgresql?" } ]
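Since the double-buffering concern in this thread comes down to how much of shared_buffers is doing useful work, the pg_buffercache contrib module is a convenient way to look before reaching for direct I/O at all. The query below is a rough sketch rather than anything from the thread; it uses the 8.4-era form of the view, joining on relfilenode, and only resolves relations in the current database.

-- pg_buffercache must be installed in the database first.
SHOW shared_buffers;

-- Top relations by space held in shared_buffers; assumes the default 8kB block size.
SELECT c.relname,
       count(*)            AS buffers,
       count(*) * 8 / 1024 AS approx_mb
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = c.relfilenode
   AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                              WHERE datname = current_database()))
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 10;

If the hot relations already fit comfortably in shared_buffers, the duplication between shared_buffers and the OS cache that Greg and Robert describe costs relatively little, which is the conclusion the thread itself reaches.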