[
{
"msg_contents": "Greetings,\n\nI have several similar queries that are all suffering from a dramatic slow down after upgrading a RDS instance from 9.3 to 10.3. The query time goes from 28 milliseconds to over 70 seconds I could use some help trying to figure out the problem. This is one of the queries:\n\nSELECT \n r.rid as id,\n r.name,\n u._firstlastname as owner\nFROM resource_form r\nJOIN aw_user u ON (u.rid=r.fk_user)\nLEFT JOIN resource_form_user p on (p.fk_form=r.rid)\nWHERE r.fk_user=1 or p.fk_user=1\nORDER BY r.name, r.rid\n\n\nUsing Explain analyze, I get this on 10.3 (https://explain.depesz.com/s/pAdC <https://explain.depesz.com/s/pAdC>):\n\n+------------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+------------------------------------------------------------------------------------------------------------------------------------------------+\n| Sort (cost=201.35..201.42 rows=27 width=68) (actual time=77590.682..77590.683 rows=8 loops=1) |\n| Sort Key: r.name, r.rid |\n| Sort Method: quicksort Memory: 25kB |\n| -> Nested Loop (cost=127.26..200.71 rows=27 width=68) (actual time=0.519..77590.651 rows=8 loops=1) |\n| Join Filter: (r.fk_user = u.rid) |\n| Rows Removed by Join Filter: 1052160 |\n| -> Index Scan using aw_user_rid_key on aw_user u (cost=0.38..8.39 rows=1 width=840) (actual time=0.023..122.397 rows=131521 loops=1) |\n| -> Hash Right Join (cost=126.89..191.84 rows=27 width=40) (actual time=0.004..0.577 rows=8 loops=131521) |\n| Hash Cond: (p.fk_form = r.rid) |\n| Filter: ((r.fk_user = 1) OR (p.fk_user = 1)) |\n| Rows Removed by Filter: 1375 |\n| -> Seq Scan on resource_form_user p (cost=0.00..29.90 rows=1990 width=8) (actual time=0.003..0.203 rows=951 loops=131521) |\n| -> Hash (cost=93.06..93.06 rows=2706 width=40) (actual time=0.461..0.461 rows=550 loops=1) |\n| Buckets: 4096 Batches: 1 Memory Usage: 68kB |\n| -> Seq Scan on resource_form r (cost=0.00..93.06 rows=2706 width=40) (actual time=0.005..0.253 rows=550 loops=1) |\n| Planning time: 0.322 ms |\n| Execution time: 77590.734 ms |\n+------------------------------------------------------------------------------------------------------------------------------------------------+\n\nHere is the explain from 9.3 (https://explain.depesz.com/s/rGRf <https://explain.depesz.com/s/rGRf>):\n\n+-----------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+-----------------------------------------------------------------------------------------------------------------------------------------+\n| Sort (cost=164.49..164.52 rows=10 width=43) (actual time=28.036..28.038 rows=11 loops=1) |\n| Sort Key: r.name, r.rid |\n| Sort Method: quicksort Memory: 25kB |\n| -> Nested Loop (cost=69.23..164.33 rows=10 width=43) (actual time=21.330..27.318 rows=11 loops=1) |\n| -> Hash Right Join (cost=68.81..99.92 rows=10 width=33) (actual time=21.283..27.161 rows=11 loops=1) |\n| Hash Cond: (p.fk_form = r.rid) |\n| Filter: ((r.fk_user = 1) OR (p.fk_user = 1)) |\n| Rows Removed by Filter: 1313 |\n| -> Seq Scan on resource_form_user p (cost=0.00..14.08 rows=908 width=8) (actual time=1.316..6.346 rows=908 loops=1) |\n| -> Hash (cost=62.25..62.25 rows=525 width=33) (actual time=19.927..19.927 rows=527 loops=1) |\n| Buckets: 1024 Batches: 1 Memory Usage: 35kB |\n| -> Seq Scan on resource_form r (cost=0.00..62.25 rows=525 width=33) (actual 
time=1.129..19.540 rows=527 loops=1) |\n| -> Index Scan using aw_user_rid_key on aw_user u (cost=0.42..6.43 rows=1 width=18) (actual time=0.009..0.009 rows=1 loops=11) |\n| Index Cond: (rid = r.fk_user) |\n| Total runtime: 28.171 ms |\n+-----------------------------------------------------------------------------------------------------------------------------------------+\n\nThe plans are very similar, but the results are quite different. In the 10.3 version, I don’t understand why the Hash Right Join is looping through all 131521 user records. Any thoughts?\n\nThank you,\nMichael",
"msg_date": "Thu, 14 Jun 2018 13:02:49 -0500",
"msg_from": "Michael Sacket <[email protected]>",
"msg_from_op": true,
"msg_subject": "Small query plan change, big performance difference"
},
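A quick way to check whether the planner statistics behind those estimates were ever refreshed after the upgrade is to look at pg_stat_user_tables; the query below is a minimal sketch using the table names from the plan above (the view and columns are standard PostgreSQL):

-- When were these tables last analyzed, either manually or by autovacuum?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('resource_form', 'resource_form_user', 'aw_user');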
{
"msg_contents": "Have you run an analyze on all your tables after the upgrade to 10? The\nestimates are way off.\n\nHave you run an analyze on all your tables after the upgrade to 10? The estimates are way off.",
"msg_date": "Thu, 14 Jun 2018 14:24:42 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small query plan change, big performance difference"
},
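For context: pg_upgrade and dump/restore do not carry optimizer statistics across a major-version upgrade, so estimates like the ones above stay wrong until ANALYZE is run. A minimal sketch of the suggested fix, database-wide or limited to the tables from the query:

-- Rebuild planner statistics for every table in the current database.
ANALYZE;

-- Or only the tables involved in the slow query:
ANALYZE resource_form;
ANALYZE resource_form_user;
ANALYZE aw_user;

-- From the shell, vacuumdb can do the same for all databases, e.g.:
--   vacuumdb --all --analyze-in-stages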
{
"msg_contents": "\n> Have you run an analyze on all your tables after the upgrade to 10? The estimates are way off.\n\nThank you. I embarrassingly missed that step. That fixed the problem. In the future, if estimates are way off… I’ll run analyze.\n",
"msg_date": "Thu, 14 Jun 2018 13:49:34 -0500",
"msg_from": "Michael Sacket <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Small query plan change, big performance difference"
}
]
[
{
"msg_contents": "Hi everyone,\n\nwe have a new query that performs badly with specific input parameters. We\nget worst performance when input data is most restrictive. I have partially\nidentified a problem: it always happens when index scan is done in inner\nloop\nand index type is pg_trgm. We also noticed that for simple query\n(\n select * from point where identifier = 'LOWW' vs\n select * from point where identifier LIKE 'LOWW'\n)\nthe difference between btree index and pg_trgm index can be quite high:\n0.009 ms vs 32.0 ms.\n\nWhat I would like to know is whenever query planner is aware that some index\ntypes are more expensive the the others and whenever it can take that into\naccount?\n\nI will describe background first, then give you query and its analysis for\ndifferent parameters and in the end I will write about all required\ninformation\nregarding setup (Postgres version, Schema, metadata, hardware, etc.)\n\nI would like to know whenever this is a bug in query planner or not and what\ncould we do about it.\n\n################################################################################\n# Background\n################################################################################\n\nWe have a database with navigational data for civil aviation.\nCurrent query is working on two tables: point and route.\nPoint represents a navigational point on Earth and route describes a route\nbetween two points.\n\nQuery that we have finds all routes between two set of points. A set is a\ndynamically/loosely defined by pattern given by the user input. So for\nexample\nif user wants to find all routes between international airports in Austria\ntoward London Heathrow, he or she would use 'LOW%' as\n:from_point_identifier\nand 'EGLL' as :to_point_identifier. Please keep in mind that is a simple\ncase,\nand that user is allowed to define search term any way he/she see it fit,\ni.e. '%OW%', 'EG%'.\n\nSELECT\n r.*\nFROM navdata.route r\n INNER JOIN navdata.point op ON r.frompointguid = op.guid\n INNER JOIN navdata.point dp ON r.topointguid = dp.guid\nWHERE\n r.routeidentifier ILIKE :route_identifier\n AND tsrange(r.startvalid, r.endvalid) @> :validity :: TIMESTAMP\n AND (NOT :use_sources :: BOOLEAN OR r.source = ANY (:sources :: VARCHAR\n[]))\n AND CONCAT(op.identifier, '') ILIKE :from_point_identifier\n AND op.type = ANY (:point_types :: VARCHAR [])\n AND tsrange(op.startvalid, op.endvalid) @> :validity :: TIMESTAMP\n AND dp.identifier ILIKE :to_point_identifier :: VARCHAR\n AND dp.type = ANY (:point_types :: VARCHAR [])\n AND tsrange(dp.startvalid, dp.endvalid) @> :validity :: TIMESTAMP\nORDER BY r.routeidentifier\nLIMIT 1000\n\n\nMost of the tables we have follows this layout principle:\n* uid - is primary key\n* guid - is globally unique key (i.e. London Heathrow could for example\n change it identifier EGLL, but our internal guid will stay same)\n* startvalid, endvalid - defines for which period is entry valid. Entires\nwith\n same guid should not have overlapping validity.\n\nWe don't use foreign keys for two reasons:\n* We need to do live migration without downtime. 
Creating a foreign key on\n huge dataset could take quite some time\n* Relationship between entities are defined based on guid and not on uid\n(primary key).\n\n################################################################################\n# Query analysis\n################################################################################\n\n--------------------------------------------------------------------------------\n# Case 1 : We search for all outgoing routes from Vienna International\nAirport\n--------------------------------------------------------------------------------\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nSELECT\n r.*\nFROM navdata.route r\n INNER JOIN navdata.point op ON r.frompointguid = op.guid\n INNER JOIN navdata.point dp ON r.topointguid = dp.guid\nWHERE\n r.routeidentifier ILIKE '%'\n AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n AND op.identifier ILIKE '%LOWW%'\n AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n AND dp.identifier ILIKE '%' :: VARCHAR\n AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\nORDER BY r.routeidentifier\nLIMIT 1000\n\nLimit (cost=666.58..666.58 rows=1 width=349) (actual time=358.466..359.688\nrows=1000 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Buffers: shared hit=29786 read=1\n -> Sort (cost=666.58..666.58 rows=1 width=349) (actual\ntime=358.464..358.942 rows=1000 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Sort Key: r.routeidentifier\n Sort Method: quicksort Memory: 582kB\n Buffers: shared hit=29786 read=1\n -> Nested Loop (cost=149.94..666.57 rows=1 width=349) (actual\ntime=291.681..356.261 rows=1540 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=29786 read=1\n -> Nested Loop (cost=149.51..653.92 rows=1 width=349)\n(actual time=291.652..300.076 rows=1546 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=13331 read=1\n -> Bitmap Heap Scan on navdata.point op\n(cost=5.75..358.28 rows=2 width=16) (actual time=95.933..96.155 rows=1\nloops=1)\n Output: op.uid, op.guid, op.airportguid,\nop.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir,\nop.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid,\nop.endvalid, op.revisionuid, op.source, op.leveltype\n Recheck Cond: 
((op.identifier)::text ~~*\n'%LOWW%'::text)\n Filter: (((op.type)::text = ANY ('{PA}'::text[]))\nAND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time\nzone))\n Rows Removed by Filter: 50\n Heap Blocks: exact=51\n Buffers: shared hit=4974 read=1\n -> Bitmap Index Scan on idx_point_08\n(cost=0.00..5.75 rows=178 width=0) (actual time=95.871..95.871 rows=51\nloops=1)\n Index Cond: ((op.identifier)::text ~~*\n'%LOWW%'::text)\n Buffers: shared hit=4924\n -> Bitmap Heap Scan on navdata.route r\n(cost=143.77..147.80 rows=2 width=349) (actual time=195.711..202.308\nrows=1546 loops=1)\n Output: r.uid, r.routeidentifier,\nr.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation,\nr.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Recheck Cond: ((r.frompointguid = op.guid) AND\n(tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n Filter: ((r.routeidentifier)::text ~~* '%'::text)\n Heap Blocks: exact=1231\n Buffers: shared hit=8357\n -> BitmapAnd (cost=143.77..143.77 rows=2\nwidth=0) (actual time=195.501..195.501 rows=0 loops=1)\n Buffers: shared hit=7126\n -> Bitmap Index Scan on idx_route_02\n(cost=0.00..6.85 rows=324 width=0) (actual time=0.707..0.707 rows=4295\nloops=1)\n Index Cond: (r.frompointguid =\nop.guid)\n Buffers: shared hit=21\n -> Bitmap Index Scan on idx_route_07\n(cost=0.00..135.49 rows=4693 width=0) (actual time=193.881..193.881\nrows=579054 loops=1)\n Index Cond: (tsrange(r.startvalid,\nr.endvalid) @> (now())::timestamp without time zone)\n Buffers: shared hit=7105\n -> Index Scan using cidx_point on navdata.point dp\n(cost=0.43..12.63 rows=1 width=16) (actual time=0.009..0.034 rows=1\nloops=1546)\n Output: dp.uid, dp.guid, dp.airportguid, dp.identifier,\ndp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency,\ndp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid,\ndp.revisionuid, dp.source, dp.leveltype\n Index Cond: (dp.guid = r.topointguid)\n Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND\n((dp.identifier)::text ~~* '%'::text) AND (tsrange(dp.startvalid,\ndp.endvalid) @> (now())::timestamp without time zone))\n Rows Removed by Filter: 7\n Buffers: shared hit=16455\nPlanning time: 4.603 ms\nExecution time: 360.180 ms\n\n* 360 ms. That is quite fine for our standards. 
*\n\n--------------------------------------------------------------------------------\n# Case 2 : We search for all routes between Vienna International Airport and\nLondon Heathrow (here is where trouble begins)\n--------------------------------------------------------------------------------\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nSELECT\n r.*\nFROM navdata.route r\n INNER JOIN navdata.point op ON r.frompointguid = op.guid\n INNER JOIN navdata.point dp ON r.topointguid = dp.guid\nWHERE\n r.routeidentifier ILIKE '%'\n AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n AND op.identifier ILIKE '%LOWW%'\n AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n AND dp.identifier ILIKE '%EGLL%' :: VARCHAR\n AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\nORDER BY r.routeidentifier\nLIMIT 1000\n\n\nLimit (cost=659.57..659.58 rows=1 width=349) (actual\ntime=223118.664..223118.714 rows=36 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Buffers: shared hit=12033194\n -> Sort (cost=659.57..659.58 rows=1 width=349) (actual\ntime=223118.661..223118.681 rows=36 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Sort Key: r.routeidentifier\n Sort Method: quicksort Memory: 35kB\n Buffers: shared hit=12033194\n -> Nested Loop (cost=157.35..659.56 rows=1 width=349) (actual\ntime=4290.975..223118.490 rows=36 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=12033194\n -> Nested Loop (cost=149.32..649.49 rows=1 width=349)\n(actual time=319.717..367.139 rows=2439 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=15788\n -> Bitmap Heap Scan on navdata.point dp\n(cost=5.75..358.28 rows=2 width=16) (actual time=124.922..125.008 rows=1\nloops=1)\n Output: dp.uid, dp.guid, dp.airportguid,\ndp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir,\ndp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid,\ndp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n Recheck Cond: ((dp.identifier)::text ~~*\n'%EGLL%'::text)\n Filter: (((dp.type)::text = ANY ('{PA}'::text[]))\nAND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time\nzone))\n Rows Removed by Filter: 6\n Heap Blocks: exact=7\n Buffers: shared hit=6786\n -> Bitmap Index Scan on 
idx_point_08\n(cost=0.00..5.75 rows=178 width=0) (actual time=124.882..124.882 rows=7\nloops=1)\n Index Cond: ((dp.identifier)::text ~~*\n'%EGLL%'::text)\n Buffers: shared hit=6779\n -> Bitmap Heap Scan on navdata.route r\n(cost=143.57..145.60 rows=1 width=349) (actual time=194.785..237.128\nrows=2439 loops=1)\n Output: r.uid, r.routeidentifier,\nr.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation,\nr.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Recheck Cond: ((r.topointguid = dp.guid) AND\n(tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n Filter: ((r.routeidentifier)::text ~~* '%'::text)\n Heap Blocks: exact=1834\n Buffers: shared hit=9002\n -> BitmapAnd (cost=143.57..143.57 rows=1\nwidth=0) (actual time=194.460..194.460 rows=0 loops=1)\n Buffers: shared hit=7168\n -> Bitmap Index Scan on idx_route_03\n(cost=0.00..6.66 rows=298 width=0) (actual time=2.326..2.326 rows=15148\nloops=1)\n Index Cond: (r.topointguid = dp.guid)\n Buffers: shared hit=63\n -> Bitmap Index Scan on idx_route_07\n(cost=0.00..135.49 rows=4693 width=0) (actual time=190.001..190.001\nrows=579054 loops=1)\n Index Cond: (tsrange(r.startvalid,\nr.endvalid) @> (now())::timestamp without time zone)\n Buffers: shared hit=7105\n -> Bitmap Heap Scan on navdata.point op (cost=8.03..10.06\nrows=1 width=16) (actual time=91.321..91.321 rows=0 loops=2439)\n Output: op.uid, op.guid, op.airportguid, op.identifier,\nop.icaocode, op.name, op.type, op.coordinates, op.fir, op.navaidfrequency,\nop.elevation, op.magneticvariance, op.startvalid, op.endvalid,\nop.revisionuid, op.source, op.leveltype\n Recheck Cond: ((op.guid = r.frompointguid) AND\n((op.identifier)::text ~~* '%LOWW%'::text))\n Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND\n(tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time\nzone))\n Rows Removed by Filter: 0\n Heap Blocks: exact=252\n Buffers: shared hit=12017406\n -> BitmapAnd (cost=8.03..8.03 rows=1 width=0) (actual\ntime=91.315..91.315 rows=0 loops=2439)\n Buffers: shared hit=12017154\n -> Bitmap Index Scan on cidx_point\n(cost=0.00..2.04 rows=6 width=0) (actual time=0.017..0.017 rows=8\nloops=2439)\n Index Cond: (op.guid = r.frompointguid)\n Buffers: shared hit=7518\n -> Bitmap Index Scan on idx_point_08\n(cost=0.00..5.75 rows=178 width=0) (actual time=91.288..91.288 rows=51\nloops=2439)\n Index Cond: ((op.identifier)::text ~~*\n'%LOWW%'::text)\n Buffers: shared hit=12009636\nPlanning time: 5.162 ms\nExecution time: 223118.858 ms\n\n* Please pay attention to index scan on idx_point_08. It takes on average\n91 ms\nand it is executed 2439 times = 221949 ms. That is where we spend most of\nthe\ntime. 
*\n\n--------------------------------------------------------------------------------\n# Case 3 : We again search for all routes between Vienna International\nAirport\nand London Heathrow, but this time I use CONCAT(op.identifier, '') as\noptimization fence.\n--------------------------------------------------------------------------------\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nSELECT\n r.*\nFROM navdata.route r\n INNER JOIN navdata.point op ON r.frompointguid = op.guid\n INNER JOIN navdata.point dp ON r.topointguid = dp.guid\nWHERE\n r.routeidentifier ILIKE '%'\n AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n AND CONCAT(op.identifier, '') ILIKE '%LOWW%'\n AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n AND dp.identifier ILIKE '%EGLL%' :: VARCHAR\n AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\nORDER BY r.routeidentifier\nLIMIT 1000\n\nLimit (cost=662.16..662.17 rows=1 width=349) (actual time=411.756..411.808\nrows=36 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Buffers: shared hit=43025\n -> Sort (cost=662.16..662.17 rows=1 width=349) (actual\ntime=411.755..411.776 rows=36 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Sort Key: r.routeidentifier\n Sort Method: quicksort Memory: 35kB\n Buffers: shared hit=43025\n -> Nested Loop (cost=149.75..662.15 rows=1 width=349) (actual\ntime=316.518..411.656 rows=36 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=43025\n -> Nested Loop (cost=149.32..649.49 rows=1 width=349)\n(actual time=314.704..326.873 rows=2439 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=15788\n -> Bitmap Heap Scan on navdata.point dp\n(cost=5.75..358.28 rows=2 width=16) (actual time=123.267..123.310 rows=1\nloops=1)\n Output: dp.uid, dp.guid, dp.airportguid,\ndp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir,\ndp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid,\ndp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n Recheck Cond: ((dp.identifier)::text ~~*\n'%EGLL%'::text)\n Filter: (((dp.type)::text = ANY ('{PA}'::text[]))\nAND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time\nzone))\n Rows Removed by Filter: 6\n Heap Blocks: exact=7\n Buffers: shared hit=6786\n -> Bitmap 
Index Scan on idx_point_08\n(cost=0.00..5.75 rows=178 width=0) (actual time=123.232..123.232 rows=7\nloops=1)\n Index Cond: ((dp.identifier)::text ~~*\n'%EGLL%'::text)\n Buffers: shared hit=6779\n -> Bitmap Heap Scan on navdata.route r\n(cost=143.57..145.60 rows=1 width=349) (actual time=191.429..201.176\nrows=2439 loops=1)\n Output: r.uid, r.routeidentifier,\nr.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation,\nr.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Recheck Cond: ((r.topointguid = dp.guid) AND\n(tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n Filter: ((r.routeidentifier)::text ~~* '%'::text)\n Heap Blocks: exact=1834\n Buffers: shared hit=9002\n -> BitmapAnd (cost=143.57..143.57 rows=1\nwidth=0) (actual time=191.097..191.097 rows=0 loops=1)\n Buffers: shared hit=7168\n -> Bitmap Index Scan on idx_route_03\n(cost=0.00..6.66 rows=298 width=0) (actual time=2.349..2.349 rows=15148\nloops=1)\n Index Cond: (r.topointguid = dp.guid)\n Buffers: shared hit=63\n -> Bitmap Index Scan on idx_route_07\n(cost=0.00..135.49 rows=4693 width=0) (actual time=186.640..186.640\nrows=579054 loops=1)\n Index Cond: (tsrange(r.startvalid,\nr.endvalid) @> (now())::timestamp without time zone)\n Buffers: shared hit=7105\n -> Index Scan using cidx_point on navdata.point op\n(cost=0.43..12.65 rows=1 width=16) (actual time=0.033..0.033 rows=0\nloops=2439)\n Output: op.uid, op.guid, op.airportguid, op.identifier,\nop.icaocode, op.name, op.type, op.coordinates, op.fir, op.navaidfrequency,\nop.elevation, op.magneticvariance, op.startvalid, op.endvalid,\nop.revisionuid, op.source, op.leveltype\n Index Cond: (op.guid = r.frompointguid)\n Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND\n(concat(op.identifier, '') ~~* '%LOWW%'::text) AND (tsrange(op.startvalid,\nop.endvalid) @> (now())::timestamp without time zone))\n Rows Removed by Filter: 8\n Buffers: shared hit=27237\nPlanning time: 3.381 ms\nExecution time: 411.944 ms\n\n* We are back into acceptable margin. *\n\n################################################################################\n# Postgres version\n################################################################################\n\nPostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1),\ncompiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit\n\n################################################################################\n# Schema\n################################################################################\n\nCurrently, our tables are heavily indexed due to refactoring process and\nneed to\nwork with old and new version of software. 
Once we are finished, lot of\nindexes shell be removed.\n\nCREATE TABLE navdata.point (\n uid uuid NOT NULL,\n guid uuid NULL,\n airportguid uuid NULL,\n identifier varchar(5) NULL,\n icaocode varchar(2) NULL,\n \"name\" varchar(255) NULL,\n \"type\" varchar(2) NULL,\n coordinates geography NULL,\n fir varchar(5) NULL,\n navaidfrequency float8 NULL,\n elevation float8 NULL,\n magneticvariance float8 NULL,\n startvalid timestamp NULL,\n endvalid timestamp NULL,\n revisionuid uuid NULL,\n \"source\" varchar(4) NULL,\n leveltype varchar(1) NULL,\n CONSTRAINT point_pkey PRIMARY KEY (uid)\n)\nWITH (\n OIDS=FALSE\n) ;\nCREATE INDEX cidx_point ON navdata.point USING btree (guid) ;\nCREATE INDEX idx_point_01 ON navdata.point USING btree (identifier, guid) ;\nCREATE INDEX idx_point_03 ON navdata.point USING btree (identifier) ;\nCREATE INDEX idx_point_04 ON navdata.point USING gist (coordinates) WHERE\n(airportguid IS NULL) ;\nCREATE INDEX idx_point_05 ON navdata.point USING btree (identifier\ntext_pattern_ops) ;\nCREATE INDEX idx_point_06 ON navdata.point USING btree (airportguid) ;\nCREATE INDEX idx_point_07 ON navdata.point USING gist (coordinates) ;\nCREATE INDEX idx_point_08 ON navdata.point USING gist (identifier\ngist_trgm_ops) ;\nCREATE INDEX idx_point_09 ON navdata.point USING btree (type) ;\nCREATE INDEX idx_point_10 ON navdata.point USING gist (name gist_trgm_ops) ;\nCREATE INDEX idx_point_11 ON navdata.point USING btree (type, identifier\ntext_pattern_ops) ;\nCREATE INDEX idx_point_12 ON navdata.point USING gist\n(upper((identifier)::text) gist_trgm_ops) ;\nCREATE INDEX idx_point_13 ON navdata.point USING gist (upper((name)::text)\ngist_trgm_ops) ;\nCREATE INDEX idx_point_tmp ON navdata.point USING btree (leveltype) ;\nCREATE INDEX point_validity_idx ON navdata.point USING gist\n(tsrange(startvalid, endvalid)) ;\n\nCREATE TABLE navdata.route (\n uid uuid NOT NULL,\n routeidentifier varchar(3) NULL,\n frompointguid uuid NULL,\n topointguid uuid NULL,\n sidguid uuid NULL,\n starguid uuid NULL,\n routeinformation varchar NULL,\n routetype varchar(5) NULL,\n startvalid timestamp NULL,\n endvalid timestamp NULL,\n revisionuid uuid NULL,\n \"source\" varchar(4) NULL,\n fufi uuid NULL,\n grounddistance_excl_sidstar float8 NULL,\n from_first bool NULL,\n dep_airports varchar NULL,\n dst_airports varchar NULL,\n tag varchar NULL,\n expanded_route_string varchar NULL,\n route_geometry geometry NULL,\n CONSTRAINT route_pkey PRIMARY KEY (uid)\n)\nWITH (\n OIDS=FALSE\n) ;\nCREATE INDEX idx_route_01 ON navdata.route USING btree (uid) ;\nCREATE INDEX idx_route_02 ON navdata.route USING btree (frompointguid) ;\nCREATE INDEX idx_route_03 ON navdata.route USING btree (topointguid) ;\nCREATE INDEX idx_route_04 ON navdata.route USING btree (fufi) ;\nCREATE INDEX idx_route_05 ON navdata.route USING btree (source,\nrouteidentifier, startvalid, endvalid) ;\nCREATE INDEX idx_route_06 ON navdata.route USING gist (routeinformation\ngist_trgm_ops) ;\nCREATE INDEX idx_route_07 ON navdata.route USING gist (tsrange(startvalid,\nendvalid)) ;\nCREATE INDEX idx_route_09 ON navdata.route USING gist (routeidentifier\ngist_trgm_ops) ;\n\n################################################################################\n# Table metadata\n################################################################################\n\nrelname |relpages |reltuples |relallvisible |relkind |relnatts\n|relhassubclass |reloptions |pg_table_size 
|\n--------|---------|----------|--------------|--------|---------|---------------|-----------|--------------|\nroute |36600 |938573 |36595 |r |22\n|false |NULL |299941888 |\npoint |95241 |2156454 |95241 |r |17\n|false |NULL |780460032 |\n\n\n################################################################################\n# History\n################################################################################\n\nThis is a new query, because data layer is being refactored.\n\n################################################################################\n# Hardware\n################################################################################\n\nPostgres is running on virtual machine.\n\n* CPU: 8 cores assigned\n\nprocessor : 7\nvendor_id : AuthenticAMD\ncpu family : 21\nmodel : 2\nmodel name : AMD Opteron(tm) Processor 6380\nstepping : 0\nmicrocode : 0xffffffff\ncpu MHz : 2500.020\ncache size : 2048 KB\nphysical id : 0\nsiblings : 8\ncore id : 7\ncpu cores : 8\napicid : 7\ninitial apicid : 7\nfpu : yes\nfpu_exception : yes\ncpuid level : 13\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov\npat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm\nrep_good nopl extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 popcnt\naes xsave avx f16c hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a\nmisalignsse 3dnowprefetch osvw xop fma4 vmmcall bmi1 arat\nbugs : fxsave_leak sysret_ss_attrs\nbogomips : 4998.98\nTLB size : 1536 4K pages\nclflush size : 64\ncache_alignment : 64\naddress sizes : 42 bits physical, 48 bits virtual\npower management:\n\n\n* Memory: 32 GB\n\n* Disk: Should be ssd, but unfortunattely I don't know which model.\n\n################################################################################\n# bonnie++\n################################################################################\n\nUsing uid:111, gid:118.\nformat_version,bonnie_version,name,concurrency,seed,file_size,io_chunk_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,max_size,min_size,num_dirs,file_chunk_size,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu,putc_latency,put_block_latency,rewrite_latency,getc_latency,get_block_latency,seeks_latency,seq_create_latency,seq_stat_latency,seq_del_latency,ran_create_latency,ran_stat_latency,ran_del_latency\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\n1.97,1.97,v6565testdb01,1,1529491960,63G,,,,133872,20,96641,17,,,469654,41,+++++,+++,,,,,,,,,,,,,,,,,,,2117ms,2935ms,,270ms,4760us,,,,,,\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\n1.97,1.97,v6565testdb01,1,1529491960,63G,,,,190192,26,143595,23,,,457357,37,+++++,+++,,,,,,,,,,,,,,,,,,,595ms,2201ms,,284ms,6110us,,,,,,\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\n1.97,1.97,v6565testdb01,1,1529491960,63G,,,,542936,81,153952,25,,,446369,37,+++++,+++,,,,,,,,,,,,,,,,,,,347ms,3678ms,,101ms,5632us,,,,,,\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 
'em...done...done...done...done...done...\n1.97,1.97,v6565testdb01,1,1529491960,63G,,,,244155,33,157543,26,,,441115,38,16111,495,,,,,,,,,,,,,,,,,,,638ms,2667ms,,195ms,9068us,,,,,,\n\n\n################################################################################\n# Maintenance Setup\n################################################################################\n\nAutovacuum: yes\n\n################################################################################\n# postgresql.conf\n################################################################################\n\nmax_connections = 4096 # (change requires restart)\nshared_buffers = 8GB # (change requires restart)\nhuge_pages = try # on, off, or try\nwork_mem = 4MB # min 64kB\nmaintenance_work_mem = 2GB # min 1MB\ndynamic_shared_memory_type = posix # the default is the first option\nshared_preload_libraries = 'pg_stat_statements'\npg_stat_statements.max = 10000\npg_stat_statements.track = all\nwal_level = replica # minimal, replica, or logical\nwal_buffers = 16MB\nmax_wal_size = 2GB\nmin_wal_size = 1GB\ncheckpoint_completion_target = 0.7\nmax_wal_senders = 4 # max number of walsender processes\nrandom_page_cost = 2.0\neffective_cache_size = 24GB\ndefault_statistics_target = 100 # range 1-10000\n\n################################################################################\n# Statistics\n################################################################################\n\nfrac_mcv |tablename |attname |n_distinct |n_mcv |n_hist |\n--------------|----------|----------------------------|-------------|------|-------|\n |route |uid |-1 | |101 |\n0.969699979 |route |routeidentifier |78 |2 |76 |\n0.44780004 |route |frompointguid |2899 |100 |101 |\n0.441700101 |route |topointguid |3154 |100 |101 |\n0.0368666835 |route |sidguid |2254 |100 |101 |\n0.0418333709 |route |starguid |3182 |100 |101 |\n0.0515667647 |route |routeinformation |-0.335044593 |100 |101 |\n0.0528000034 |route |routetype |3 |3 | |\n0.755399942 |route |startvalid |810 |100 |101 |\n0.962899983 |route |endvalid |22 |3 |19 |\n0.00513333362 |route |revisionuid |-0.809282064 |2 |101 |\n0.97906667 |route |source |52 |4 |48 |\n |route |fufi |0 | | |\n0.00923334155 |route |grounddistance_excl_sidstar |-0.552667081 |100 |101 |\n0.0505000018 |route |from_first |2 |2 | |\n0.0376333408 |route |dep_airports |326 |52 |101 |\n0.0367666557 |route |dst_airports |388 |57 |101 |\n |point |uid |-1 | |101 |\n0.00185333542 |point |guid |-0.164169937 |100 |101 |\n0.0573133379 |point |airportguid |23575 |100 |101 |\n0.175699964 |point |identifier |209296 |1000 |1001 |\n0.754063368 |point |icaocode |254 |41 |101 |\n0.00352332788 |point |name |37853 |100 |101 |\n0.999230027 |point |type |11 |6 |5 |\n |point |coordinates |-1 | | |\n0.607223332 |point |fir |281 |62 |101 |\n0.0247033276 |point |navaidfrequency |744 |100 |101 |\n0.0320866667 |point |elevation |14013 |100 |101 |\n0.0011433335 |point |magneticvariance |-0.587834716 |100 |101 |\n0.978270054 |point |startvalid |35 |12 |23 |\n0.978176594 |point |endvalid |30 |11 |19 |\n0.978123426 |point |revisionuid |62 |12 |50 |\n0.99999994 |point |source |3 |3 | |\n0.777056634 |point |leveltype |7 |7 | |\n\n################################################################################\n\nI am looking forward to your suggestions.\n\nThanks in advance!\n\nSasa Vilic"
|################################################################################# History################################################################################This is a new query, because data layer is being refactored.################################################################################# Hardware################################################################################Postgres is running on virtual machine.* CPU: 8 cores assignedprocessor : 7vendor_id : AuthenticAMDcpu family : 21model : 2model name : AMD Opteron(tm) Processor 6380stepping : 0microcode : 0xffffffffcpu MHz : 2500.020cache size : 2048 KBphysical id : 0siblings : 8core id : 7cpu cores : 8apicid : 7initial apicid : 7fpu : yesfpu_exception : yescpuid level : 13wp : yesflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm rep_good nopl extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw xop fma4 vmmcall bmi1 aratbugs : fxsave_leak sysret_ss_attrsbogomips : 4998.98TLB size : 1536 4K pagesclflush size : 64cache_alignment : 64address sizes : 42 bits physical, 48 bits virtualpower management:* Memory: 32 GB* Disk: Should be ssd, but unfortunattely I don't know which model.################################################################################# bonnie++################################################################################Using uid:111, gid:118.format_version,bonnie_version,name,concurrency,seed,file_size,io_chunk_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,max_size,min_size,num_dirs,file_chunk_size,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu,putc_latency,put_block_latency,rewrite_latency,getc_latency,get_block_latency,seeks_latency,seq_create_latency,seq_stat_latency,seq_del_latency,ran_create_latency,ran_stat_latency,ran_del_latencyWriting intelligently...doneRewriting...doneReading intelligently...donestart 'em...done...done...done...done...done...1.97,1.97,v6565testdb01,1,1529491960,63G,,,,133872,20,96641,17,,,469654,41,+++++,+++,,,,,,,,,,,,,,,,,,,2117ms,2935ms,,270ms,4760us,,,,,,Writing intelligently...doneRewriting...doneReading intelligently...donestart 'em...done...done...done...done...done...1.97,1.97,v6565testdb01,1,1529491960,63G,,,,190192,26,143595,23,,,457357,37,+++++,+++,,,,,,,,,,,,,,,,,,,595ms,2201ms,,284ms,6110us,,,,,,Writing intelligently...doneRewriting...doneReading intelligently...donestart 'em...done...done...done...done...done...1.97,1.97,v6565testdb01,1,1529491960,63G,,,,542936,81,153952,25,,,446369,37,+++++,+++,,,,,,,,,,,,,,,,,,,347ms,3678ms,,101ms,5632us,,,,,,Writing intelligently...doneRewriting...doneReading intelligently...donestart 'em...done...done...done...done...done...1.97,1.97,v6565testdb01,1,1529491960,63G,,,,244155,33,157543,26,,,441115,38,16111,495,,,,,,,,,,,,,,,,,,,638ms,2667ms,,195ms,9068us,,,,,,################################################################################# Maintenance Setup################################################################################Autovacuum: yes################################################################################# postgresql.conf################################################################################max_connections = 4096 # 
(change requires restart)shared_buffers = 8GB # (change requires restart)huge_pages = try # on, off, or trywork_mem = 4MB # min 64kBmaintenance_work_mem = 2GB # min 1MBdynamic_shared_memory_type = posix # the default is the first optionshared_preload_libraries = 'pg_stat_statements'pg_stat_statements.max = 10000pg_stat_statements.track = allwal_level = replica # minimal, replica, or logicalwal_buffers = 16MBmax_wal_size = 2GBmin_wal_size = 1GBcheckpoint_completion_target = 0.7max_wal_senders = 4 # max number of walsender processesrandom_page_cost = 2.0effective_cache_size = 24GBdefault_statistics_target = 100 # range 1-10000################################################################################# Statistics################################################################################frac_mcv |tablename |attname |n_distinct |n_mcv |n_hist |--------------|----------|----------------------------|-------------|------|-------| |route |uid |-1 | |101 |0.969699979 |route |routeidentifier |78 |2 |76 |0.44780004 |route |frompointguid |2899 |100 |101 |0.441700101 |route |topointguid |3154 |100 |101 |0.0368666835 |route |sidguid |2254 |100 |101 |0.0418333709 |route |starguid |3182 |100 |101 |0.0515667647 |route |routeinformation |-0.335044593 |100 |101 |0.0528000034 |route |routetype |3 |3 | |0.755399942 |route |startvalid |810 |100 |101 |0.962899983 |route |endvalid |22 |3 |19 |0.00513333362 |route |revisionuid |-0.809282064 |2 |101 |0.97906667 |route |source |52 |4 |48 | |route |fufi |0 | | |0.00923334155 |route |grounddistance_excl_sidstar |-0.552667081 |100 |101 |0.0505000018 |route |from_first |2 |2 | |0.0376333408 |route |dep_airports |326 |52 |101 |0.0367666557 |route |dst_airports |388 |57 |101 | |point |uid |-1 | |101 |0.00185333542 |point |guid |-0.164169937 |100 |101 |0.0573133379 |point |airportguid |23575 |100 |101 |0.175699964 |point |identifier |209296 |1000 |1001 |0.754063368 |point |icaocode |254 |41 |101 |0.00352332788 |point |name |37853 |100 |101 |0.999230027 |point |type |11 |6 |5 | |point |coordinates |-1 | | |0.607223332 |point |fir |281 |62 |101 |0.0247033276 |point |navaidfrequency |744 |100 |101 |0.0320866667 |point |elevation |14013 |100 |101 |0.0011433335 |point |magneticvariance |-0.587834716 |100 |101 |0.978270054 |point |startvalid |35 |12 |23 |0.978176594 |point |endvalid |30 |11 |19 |0.978123426 |point |revisionuid |62 |12 |50 |0.99999994 |point |source |3 |3 | |0.777056634 |point |leveltype |7 |7 | |################################################################################I am looking forward to your suggestions.Thanks in advance!Sasa Vilic",
"msg_date": "Wed, 20 Jun 2018 15:21:26 +0200",
"msg_from": "Sasa Vilic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query when pg_trgm is in inner lopp"
},
{
"msg_contents": "Is there a reason you used GIST on your pg_trgm indices and not GIN? In my tests and previous posts on here, it nearly always performs worse. Also, did you make sure if it's really SSD and set the random_page_cost accordingly?\n\nMatthew Hall\n\n> On Jun 20, 2018, at 8:21 AM, Sasa Vilic <[email protected]> wrote:\n> \n> Hi everyone,\n> \n> we have a new query that performs badly with specific input parameters. We\n> get worst performance when input data is most restrictive. I have partially\n> identified a problem: it always happens when index scan is done in inner loop\n> and index type is pg_trgm. We also noticed that for simple query\n> (\n> select * from point where identifier = 'LOWW' vs\n> select * from point where identifier LIKE 'LOWW'\n> )\n> the difference between btree index and pg_trgm index can be quite high:\n> 0.009 ms vs 32.0 ms.\n> \n> What I would like to know is whenever query planner is aware that some index\n> types are more expensive the the others and whenever it can take that into\n> account?\n> \n> I will describe background first, then give you query and its analysis for\n> different parameters and in the end I will write about all required information\n> regarding setup (Postgres version, Schema, metadata, hardware, etc.)\n> \n> I would like to know whenever this is a bug in query planner or not and what\n> could we do about it.\n> \n> ################################################################################\n> # Background\n> ################################################################################\n> \n> We have a database with navigational data for civil aviation.\n> Current query is working on two tables: point and route.\n> Point represents a navigational point on Earth and route describes a route \n> between two points.\n> \n> Query that we have finds all routes between two set of points. A set is a\n> dynamically/loosely defined by pattern given by the user input. So for example \n> if user wants to find all routes between international airports in Austria \n> toward London Heathrow, he or she would use 'LOW%' as :from_point_identifier \n> and 'EGLL' as :to_point_identifier. Please keep in mind that is a simple case,\n> and that user is allowed to define search term any way he/she see it fit,\n> i.e. '%OW%', 'EG%'.\n> \n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE :route_identifier\n> AND tsrange(r.startvalid, r.endvalid) @> :validity :: TIMESTAMP\n> AND (NOT :use_sources :: BOOLEAN OR r.source = ANY (:sources :: VARCHAR []))\n> AND CONCAT(op.identifier, '') ILIKE :from_point_identifier\n> AND op.type = ANY (:point_types :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> :validity :: TIMESTAMP\n> AND dp.identifier ILIKE :to_point_identifier :: VARCHAR\n> AND dp.type = ANY (:point_types :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> :validity :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n> \n> \n> Most of the tables we have follows this layout principle:\n> * uid - is primary key\n> * guid - is globally unique key (i.e. London Heathrow could for example \n> change it identifier EGLL, but our internal guid will stay same)\n> * startvalid, endvalid - defines for which period is entry valid. Entires with \n> same guid should not have overlapping validity. 
\n> \n> We don't use foreign keys for two reasons:\n> * We need to do live migration without downtime. Creating a foreign key on \n> huge dataset could take quite some time\n> * Relationship between entities are defined based on guid and not on uid (primary key). \n> \n> ################################################################################\n> # Query analysis\n> ################################################################################\n> \n> --------------------------------------------------------------------------------\n> # Case 1 : We search for all outgoing routes from Vienna International Airport\n> --------------------------------------------------------------------------------\n> \n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE '%'\n> AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n> AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n> AND op.identifier ILIKE '%LOWW%'\n> AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n> AND dp.identifier ILIKE '%' :: VARCHAR\n> AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n> \n> Limit (cost=666.58..666.58 rows=1 width=349) (actual time=358.466..359.688 rows=1000 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=29786 read=1\n> -> Sort (cost=666.58..666.58 rows=1 width=349) (actual time=358.464..358.942 rows=1000 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Sort Key: r.routeidentifier\n> Sort Method: quicksort Memory: 582kB\n> Buffers: shared hit=29786 read=1\n> -> Nested Loop (cost=149.94..666.57 rows=1 width=349) (actual time=291.681..356.261 rows=1540 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=29786 read=1\n> -> Nested Loop (cost=149.51..653.92 rows=1 width=349) (actual time=291.652..300.076 rows=1546 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=13331 read=1\n> -> Bitmap Heap Scan on navdata.point op (cost=5.75..358.28 rows=2 width=16) (actual time=95.933..96.155 rows=1 loops=1)\n> Output: op.uid, op.guid, op.airportguid, op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir, 
op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid, op.endvalid, op.revisionuid, op.source, op.leveltype\n> Recheck Cond: ((op.identifier)::text ~~* '%LOWW%'::text)\n> Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 50\n> Heap Blocks: exact=51\n> Buffers: shared hit=4974 read=1\n> -> Bitmap Index Scan on idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=95.871..95.871 rows=51 loops=1)\n> Index Cond: ((op.identifier)::text ~~* '%LOWW%'::text)\n> Buffers: shared hit=4924\n> -> Bitmap Heap Scan on navdata.route r (cost=143.77..147.80 rows=2 width=349) (actual time=195.711..202.308 rows=1546 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Recheck Cond: ((r.frompointguid = op.guid) AND (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n> Filter: ((r.routeidentifier)::text ~~* '%'::text)\n> Heap Blocks: exact=1231\n> Buffers: shared hit=8357\n> -> BitmapAnd (cost=143.77..143.77 rows=2 width=0) (actual time=195.501..195.501 rows=0 loops=1)\n> Buffers: shared hit=7126\n> -> Bitmap Index Scan on idx_route_02 (cost=0.00..6.85 rows=324 width=0) (actual time=0.707..0.707 rows=4295 loops=1)\n> Index Cond: (r.frompointguid = op.guid)\n> Buffers: shared hit=21\n> -> Bitmap Index Scan on idx_route_07 (cost=0.00..135.49 rows=4693 width=0) (actual time=193.881..193.881 rows=579054 loops=1)\n> Index Cond: (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone)\n> Buffers: shared hit=7105\n> -> Index Scan using cidx_point on navdata.point dp (cost=0.43..12.63 rows=1 width=16) (actual time=0.009..0.034 rows=1 loops=1546)\n> Output: dp.uid, dp.guid, dp.airportguid, dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n> Index Cond: (dp.guid = r.topointguid)\n> Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND ((dp.identifier)::text ~~* '%'::text) AND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 7\n> Buffers: shared hit=16455\n> Planning time: 4.603 ms\n> Execution time: 360.180 ms\n> \n> * 360 ms. That is quite fine for our standards. 
*\n> \n> --------------------------------------------------------------------------------\n> # Case 2 : We search for all routes between Vienna International Airport and\n> London Heathrow (here is where trouble begins)\n> --------------------------------------------------------------------------------\n> \n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE '%'\n> AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n> AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n> AND op.identifier ILIKE '%LOWW%'\n> AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n> AND dp.identifier ILIKE '%EGLL%' :: VARCHAR\n> AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n> \n> \n> Limit (cost=659.57..659.58 rows=1 width=349) (actual time=223118.664..223118.714 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=12033194\n> -> Sort (cost=659.57..659.58 rows=1 width=349) (actual time=223118.661..223118.681 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Sort Key: r.routeidentifier\n> Sort Method: quicksort Memory: 35kB\n> Buffers: shared hit=12033194\n> -> Nested Loop (cost=157.35..659.56 rows=1 width=349) (actual time=4290.975..223118.490 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=12033194\n> -> Nested Loop (cost=149.32..649.49 rows=1 width=349) (actual time=319.717..367.139 rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=15788\n> -> Bitmap Heap Scan on navdata.point dp (cost=5.75..358.28 rows=2 width=16) (actual time=124.922..125.008 rows=1 loops=1)\n> Output: dp.uid, dp.guid, dp.airportguid, dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n> Recheck Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)\n> Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 6\n> Heap Blocks: exact=7\n> Buffers: shared hit=6786\n> -> 
Bitmap Index Scan on idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=124.882..124.882 rows=7 loops=1)\n> Index Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)\n> Buffers: shared hit=6779\n> -> Bitmap Heap Scan on navdata.route r (cost=143.57..145.60 rows=1 width=349) (actual time=194.785..237.128 rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Recheck Cond: ((r.topointguid = dp.guid) AND (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n> Filter: ((r.routeidentifier)::text ~~* '%'::text)\n> Heap Blocks: exact=1834\n> Buffers: shared hit=9002\n> -> BitmapAnd (cost=143.57..143.57 rows=1 width=0) (actual time=194.460..194.460 rows=0 loops=1)\n> Buffers: shared hit=7168\n> -> Bitmap Index Scan on idx_route_03 (cost=0.00..6.66 rows=298 width=0) (actual time=2.326..2.326 rows=15148 loops=1)\n> Index Cond: (r.topointguid = dp.guid)\n> Buffers: shared hit=63\n> -> Bitmap Index Scan on idx_route_07 (cost=0.00..135.49 rows=4693 width=0) (actual time=190.001..190.001 rows=579054 loops=1)\n> Index Cond: (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone)\n> Buffers: shared hit=7105\n> -> Bitmap Heap Scan on navdata.point op (cost=8.03..10.06 rows=1 width=16) (actual time=91.321..91.321 rows=0 loops=2439)\n> Output: op.uid, op.guid, op.airportguid, op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir, op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid, op.endvalid, op.revisionuid, op.source, op.leveltype\n> Recheck Cond: ((op.guid = r.frompointguid) AND ((op.identifier)::text ~~* '%LOWW%'::text))\n> Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 0\n> Heap Blocks: exact=252\n> Buffers: shared hit=12017406\n> -> BitmapAnd (cost=8.03..8.03 rows=1 width=0) (actual time=91.315..91.315 rows=0 loops=2439)\n> Buffers: shared hit=12017154\n> -> Bitmap Index Scan on cidx_point (cost=0.00..2.04 rows=6 width=0) (actual time=0.017..0.017 rows=8 loops=2439)\n> Index Cond: (op.guid = r.frompointguid)\n> Buffers: shared hit=7518\n> -> Bitmap Index Scan on idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=91.288..91.288 rows=51 loops=2439)\n> Index Cond: ((op.identifier)::text ~~* '%LOWW%'::text)\n> Buffers: shared hit=12009636\n> Planning time: 5.162 ms\n> Execution time: 223118.858 ms\n> \n> * Please pay attention to index scan on idx_point_08. It takes on average 91 ms\n> and it is executed 2439 times = 221949 ms. That is where we spend most of the \n> time. 
*\n> \n> --------------------------------------------------------------------------------\n> # Case 3 : We again search for all routes between Vienna International Airport\n> and London Heathrow, but this time I use CONCAT(op.identifier, '') as\n> optimization fence.\n> --------------------------------------------------------------------------------\n> \n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE '%'\n> AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n> AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n> AND CONCAT(op.identifier, '') ILIKE '%LOWW%'\n> AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n> AND dp.identifier ILIKE '%EGLL%' :: VARCHAR\n> AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n> \n> Limit (cost=662.16..662.17 rows=1 width=349) (actual time=411.756..411.808 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=43025\n> -> Sort (cost=662.16..662.17 rows=1 width=349) (actual time=411.755..411.776 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Sort Key: r.routeidentifier\n> Sort Method: quicksort Memory: 35kB\n> Buffers: shared hit=43025\n> -> Nested Loop (cost=149.75..662.15 rows=1 width=349) (actual time=316.518..411.656 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=43025\n> -> Nested Loop (cost=149.32..649.49 rows=1 width=349) (actual time=314.704..326.873 rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=15788\n> -> Bitmap Heap Scan on navdata.point dp (cost=5.75..358.28 rows=2 width=16) (actual time=123.267..123.310 rows=1 loops=1)\n> Output: dp.uid, dp.guid, dp.airportguid, dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n> Recheck Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)\n> Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 6\n> Heap Blocks: exact=7\n> 
Buffers: shared hit=6786\n> -> Bitmap Index Scan on idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=123.232..123.232 rows=7 loops=1)\n> Index Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)\n> Buffers: shared hit=6779\n> -> Bitmap Heap Scan on navdata.route r (cost=143.57..145.60 rows=1 width=349) (actual time=191.429..201.176 rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Recheck Cond: ((r.topointguid = dp.guid) AND (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n> Filter: ((r.routeidentifier)::text ~~* '%'::text)\n> Heap Blocks: exact=1834\n> Buffers: shared hit=9002\n> -> BitmapAnd (cost=143.57..143.57 rows=1 width=0) (actual time=191.097..191.097 rows=0 loops=1)\n> Buffers: shared hit=7168\n> -> Bitmap Index Scan on idx_route_03 (cost=0.00..6.66 rows=298 width=0) (actual time=2.349..2.349 rows=15148 loops=1)\n> Index Cond: (r.topointguid = dp.guid)\n> Buffers: shared hit=63\n> -> Bitmap Index Scan on idx_route_07 (cost=0.00..135.49 rows=4693 width=0) (actual time=186.640..186.640 rows=579054 loops=1)\n> Index Cond: (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone)\n> Buffers: shared hit=7105\n> -> Index Scan using cidx_point on navdata.point op (cost=0.43..12.65 rows=1 width=16) (actual time=0.033..0.033 rows=0 loops=2439)\n> Output: op.uid, op.guid, op.airportguid, op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir, op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid, op.endvalid, op.revisionuid, op.source, op.leveltype\n> Index Cond: (op.guid = r.frompointguid)\n> Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND (concat(op.identifier, '') ~~* '%LOWW%'::text) AND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 8\n> Buffers: shared hit=27237\n> Planning time: 3.381 ms\n> Execution time: 411.944 ms\n> \n> * We are back into acceptable margin. *\n> \n> ################################################################################\n> # Postgres version\n> ################################################################################\n> \n> PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit\n> \n> ################################################################################\n> # Schema\n> ################################################################################\n> \n> Currently, our tables are heavily indexed due to refactoring process and need to\n> work with old and new version of software. 
Once we are finished, lot of \n> indexes shell be removed.\n> \n> CREATE TABLE navdata.point (\n> uid uuid NOT NULL,\n> guid uuid NULL,\n> airportguid uuid NULL,\n> identifier varchar(5) NULL,\n> icaocode varchar(2) NULL,\n> \"name\" varchar(255) NULL,\n> \"type\" varchar(2) NULL,\n> coordinates geography NULL,\n> fir varchar(5) NULL,\n> navaidfrequency float8 NULL,\n> elevation float8 NULL,\n> magneticvariance float8 NULL,\n> startvalid timestamp NULL,\n> endvalid timestamp NULL,\n> revisionuid uuid NULL,\n> \"source\" varchar(4) NULL,\n> leveltype varchar(1) NULL,\n> CONSTRAINT point_pkey PRIMARY KEY (uid)\n> )\n> WITH (\n> OIDS=FALSE\n> ) ;\n> CREATE INDEX cidx_point ON navdata.point USING btree (guid) ;\n> CREATE INDEX idx_point_01 ON navdata.point USING btree (identifier, guid) ;\n> CREATE INDEX idx_point_03 ON navdata.point USING btree (identifier) ;\n> CREATE INDEX idx_point_04 ON navdata.point USING gist (coordinates) WHERE (airportguid IS NULL) ;\n> CREATE INDEX idx_point_05 ON navdata.point USING btree (identifier text_pattern_ops) ;\n> CREATE INDEX idx_point_06 ON navdata.point USING btree (airportguid) ;\n> CREATE INDEX idx_point_07 ON navdata.point USING gist (coordinates) ;\n> CREATE INDEX idx_point_08 ON navdata.point USING gist (identifier gist_trgm_ops) ;\n> CREATE INDEX idx_point_09 ON navdata.point USING btree (type) ;\n> CREATE INDEX idx_point_10 ON navdata.point USING gist (name gist_trgm_ops) ;\n> CREATE INDEX idx_point_11 ON navdata.point USING btree (type, identifier text_pattern_ops) ;\n> CREATE INDEX idx_point_12 ON navdata.point USING gist (upper((identifier)::text) gist_trgm_ops) ;\n> CREATE INDEX idx_point_13 ON navdata.point USING gist (upper((name)::text) gist_trgm_ops) ;\n> CREATE INDEX idx_point_tmp ON navdata.point USING btree (leveltype) ;\n> CREATE INDEX point_validity_idx ON navdata.point USING gist (tsrange(startvalid, endvalid)) ;\n> \n> CREATE TABLE navdata.route (\n> uid uuid NOT NULL,\n> routeidentifier varchar(3) NULL,\n> frompointguid uuid NULL,\n> topointguid uuid NULL,\n> sidguid uuid NULL,\n> starguid uuid NULL,\n> routeinformation varchar NULL,\n> routetype varchar(5) NULL,\n> startvalid timestamp NULL,\n> endvalid timestamp NULL,\n> revisionuid uuid NULL,\n> \"source\" varchar(4) NULL,\n> fufi uuid NULL,\n> grounddistance_excl_sidstar float8 NULL,\n> from_first bool NULL,\n> dep_airports varchar NULL,\n> dst_airports varchar NULL,\n> tag varchar NULL,\n> expanded_route_string varchar NULL,\n> route_geometry geometry NULL,\n> CONSTRAINT route_pkey PRIMARY KEY (uid)\n> )\n> WITH (\n> OIDS=FALSE\n> ) ;\n> CREATE INDEX idx_route_01 ON navdata.route USING btree (uid) ;\n> CREATE INDEX idx_route_02 ON navdata.route USING btree (frompointguid) ;\n> CREATE INDEX idx_route_03 ON navdata.route USING btree (topointguid) ;\n> CREATE INDEX idx_route_04 ON navdata.route USING btree (fufi) ;\n> CREATE INDEX idx_route_05 ON navdata.route USING btree (source, routeidentifier, startvalid, endvalid) ;\n> CREATE INDEX idx_route_06 ON navdata.route USING gist (routeinformation gist_trgm_ops) ;\n> CREATE INDEX idx_route_07 ON navdata.route USING gist (tsrange(startvalid, endvalid)) ;\n> CREATE INDEX idx_route_09 ON navdata.route USING gist (routeidentifier gist_trgm_ops) ;\n> \n> ################################################################################\n> # Table metadata\n> ################################################################################\n> \n> relname |relpages |reltuples |relallvisible |relkind |relnatts |relhassubclass |reloptions 
|pg_table_size |\n> --------|---------|----------|--------------|--------|---------|---------------|-----------|--------------|\n> route |36600 |938573 |36595 |r |22 |false |NULL |299941888 |\n> point |95241 |2156454 |95241 |r |17 |false |NULL |780460032 |\n> \n> \n> ################################################################################\n> # History\n> ################################################################################\n> \n> This is a new query, because data layer is being refactored.\n> \n> ################################################################################\n> # Hardware\n> ################################################################################\n> \n> Postgres is running on virtual machine.\n> \n> * CPU: 8 cores assigned\n> \n> processor : 7\n> vendor_id : AuthenticAMD\n> cpu family : 21\n> model : 2\n> model name : AMD Opteron(tm) Processor 6380\n> stepping : 0\n> microcode : 0xffffffff\n> cpu MHz : 2500.020\n> cache size : 2048 KB\n> physical id : 0\n> siblings : 8\n> core id : 7\n> cpu cores : 8\n> apicid : 7\n> initial apicid : 7\n> fpu : yes\n> fpu_exception : yes\n> cpuid level : 13\n> wp : yes\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm rep_good nopl extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw xop fma4 vmmcall bmi1 arat\n> bugs : fxsave_leak sysret_ss_attrs\n> bogomips : 4998.98\n> TLB size : 1536 4K pages\n> clflush size : 64\n> cache_alignment : 64\n> address sizes : 42 bits physical, 48 bits virtual\n> power management:\n> \n> \n> * Memory: 32 GB\n> \n> * Disk: Should be ssd, but unfortunattely I don't know which model.\n> \n> ################################################################################\n> # bonnie++\n> ################################################################################\n> \n> Using uid:111, gid:118.\n> format_version,bonnie_version,name,concurrency,seed,file_size,io_chunk_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,max_size,min_size,num_dirs,file_chunk_size,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu,putc_latency,put_block_latency,rewrite_latency,getc_latency,get_block_latency,seeks_latency,seq_create_latency,seq_stat_latency,seq_del_latency,ran_create_latency,ran_stat_latency,ran_del_latency\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,133872,20,96641,17,,,469654,41,+++++,+++,,,,,,,,,,,,,,,,,,,2117ms,2935ms,,270ms,4760us,,,,,,\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,190192,26,143595,23,,,457357,37,+++++,+++,,,,,,,,,,,,,,,,,,,595ms,2201ms,,284ms,6110us,,,,,,\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,542936,81,153952,25,,,446369,37,+++++,+++,,,,,,,,,,,,,,,,,,,347ms,3678ms,,101ms,5632us,,,,,,\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 
'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,244155,33,157543,26,,,441115,38,16111,495,,,,,,,,,,,,,,,,,,,638ms,2667ms,,195ms,9068us,,,,,,\n> \n> \n> ################################################################################\n> # Maintenance Setup\n> ################################################################################\n> \n> Autovacuum: yes\n> \n> ################################################################################\n> # postgresql.conf\n> ################################################################################\n> \n> max_connections = 4096 # (change requires restart)\n> shared_buffers = 8GB # (change requires restart)\n> huge_pages = try # on, off, or try\n> work_mem = 4MB # min 64kB\n> maintenance_work_mem = 2GB # min 1MB\n> dynamic_shared_memory_type = posix # the default is the first option\n> shared_preload_libraries = 'pg_stat_statements'\n> pg_stat_statements.max = 10000\n> pg_stat_statements.track = all\n> wal_level = replica # minimal, replica, or logical\n> wal_buffers = 16MB\n> max_wal_size = 2GB\n> min_wal_size = 1GB\n> checkpoint_completion_target = 0.7\n> max_wal_senders = 4 # max number of walsender processes\n> random_page_cost = 2.0\n> effective_cache_size = 24GB\n> default_statistics_target = 100 # range 1-10000\n> \n> ################################################################################\n> # Statistics\n> ################################################################################\n> \n> frac_mcv |tablename |attname |n_distinct |n_mcv |n_hist |\n> --------------|----------|----------------------------|-------------|------|-------|\n> |route |uid |-1 | |101 |\n> 0.969699979 |route |routeidentifier |78 |2 |76 |\n> 0.44780004 |route |frompointguid |2899 |100 |101 |\n> 0.441700101 |route |topointguid |3154 |100 |101 |\n> 0.0368666835 |route |sidguid |2254 |100 |101 |\n> 0.0418333709 |route |starguid |3182 |100 |101 |\n> 0.0515667647 |route |routeinformation |-0.335044593 |100 |101 |\n> 0.0528000034 |route |routetype |3 |3 | |\n> 0.755399942 |route |startvalid |810 |100 |101 |\n> 0.962899983 |route |endvalid |22 |3 |19 |\n> 0.00513333362 |route |revisionuid |-0.809282064 |2 |101 |\n> 0.97906667 |route |source |52 |4 |48 |\n> |route |fufi |0 | | |\n> 0.00923334155 |route |grounddistance_excl_sidstar |-0.552667081 |100 |101 |\n> 0.0505000018 |route |from_first |2 |2 | |\n> 0.0376333408 |route |dep_airports |326 |52 |101 |\n> 0.0367666557 |route |dst_airports |388 |57 |101 |\n> |point |uid |-1 | |101 |\n> 0.00185333542 |point |guid |-0.164169937 |100 |101 |\n> 0.0573133379 |point |airportguid |23575 |100 |101 |\n> 0.175699964 |point |identifier |209296 |1000 |1001 |\n> 0.754063368 |point |icaocode |254 |41 |101 |\n> 0.00352332788 |point |name |37853 |100 |101 |\n> 0.999230027 |point |type |11 |6 |5 |\n> |point |coordinates |-1 | | |\n> 0.607223332 |point |fir |281 |62 |101 |\n> 0.0247033276 |point |navaidfrequency |744 |100 |101 |\n> 0.0320866667 |point |elevation |14013 |100 |101 |\n> 0.0011433335 |point |magneticvariance |-0.587834716 |100 |101 |\n> 0.978270054 |point |startvalid |35 |12 |23 |\n> 0.978176594 |point |endvalid |30 |11 |19 |\n> 0.978123426 |point |revisionuid |62 |12 |50 |\n> 0.99999994 |point |source |3 |3 | |\n> 0.777056634 |point |leveltype |7 |7 | |\n> \n> ################################################################################\n> \n> I am looking forward to your suggestions.\n> \n> Thanks in advance!\n> \n> Sasa Vilic\n> \n\nIs there a reason you used GIST 
on your pg_trgm indices and not GIN? In my tests and previous posts on here, it nearly always performs worse. Also, did you make sure if it's really SSD and set the random_page_cost accordingly?Matthew HallOn Jun 20, 2018, at 8:21 AM, Sasa Vilic <[email protected]> wrote:Hi everyone,we have a new query that performs badly with specific input parameters. Weget worst performance when input data is most restrictive. I have partiallyidentified a problem: it always happens when index scan is done in inner loopand index type is pg_trgm. We also noticed that for simple query( select * from point where identifier = 'LOWW' vs select * from point where identifier LIKE 'LOWW')the difference between btree index and pg_trgm index can be quite high:0.009 ms vs 32.0 ms.What I would like to know is whenever query planner is aware that some indextypes are more expensive the the others and whenever it can take that intoaccount?I will describe background first, then give you query and its analysis fordifferent parameters and in the end I will write about all required informationregarding setup (Postgres version, Schema, metadata, hardware, etc.)I would like to know whenever this is a bug in query planner or not and whatcould we do about it.################################################################################# Background################################################################################We have a database with navigational data for civil aviation.Current query is working on two tables: point and route.Point represents a navigational point on Earth and route describes a route between two points.Query that we have finds all routes between two set of points. A set is adynamically/loosely defined by pattern given by the user input. So for example if user wants to find all routes between international airports in Austria toward London Heathrow, he or she would use 'LOW%' as :from_point_identifier and 'EGLL' as :to_point_identifier. Please keep in mind that is a simple case,and that user is allowed to define search term any way he/she see it fit,i.e. '%OW%', 'EG%'.SELECT r.*FROM navdata.route r INNER JOIN navdata.point op ON r.frompointguid = op.guid INNER JOIN navdata.point dp ON r.topointguid = dp.guidWHERE r.routeidentifier ILIKE :route_identifier AND tsrange(r.startvalid, r.endvalid) @> :validity :: TIMESTAMP AND (NOT :use_sources :: BOOLEAN OR r.source = ANY (:sources :: VARCHAR [])) AND CONCAT(op.identifier, '') ILIKE :from_point_identifier AND op.type = ANY (:point_types :: VARCHAR []) AND tsrange(op.startvalid, op.endvalid) @> :validity :: TIMESTAMP AND dp.identifier ILIKE :to_point_identifier :: VARCHAR AND dp.type = ANY (:point_types :: VARCHAR []) AND tsrange(dp.startvalid, dp.endvalid) @> :validity :: TIMESTAMPORDER BY r.routeidentifierLIMIT 1000Most of the tables we have follows this layout principle:* uid - is primary key* guid - is globally unique key (i.e. London Heathrow could for example change it identifier EGLL, but our internal guid will stay same)* startvalid, endvalid - defines for which period is entry valid. Entires with same guid should not have overlapping validity. We don't use foreign keys for two reasons:* We need to do live migration without downtime. Creating a foreign key on huge dataset could take quite some time* Relationship between entities are defined based on guid and not on uid (primary key). 
################################################################################# Query analysis################################################################################--------------------------------------------------------------------------------# Case 1 : We search for all outgoing routes from Vienna International Airport--------------------------------------------------------------------------------EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)SELECT r.*FROM navdata.route r INNER JOIN navdata.point op ON r.frompointguid = op.guid INNER JOIN navdata.point dp ON r.topointguid = dp.guidWHERE r.routeidentifier ILIKE '%' AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR [])) AND op.identifier ILIKE '%LOWW%' AND op.type = ANY (ARRAY['PA'] :: VARCHAR []) AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP AND dp.identifier ILIKE '%' :: VARCHAR AND dp.type = ANY (ARRAY['PA'] :: VARCHAR []) AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMPORDER BY r.routeidentifierLIMIT 1000Limit (cost=666.58..666.58 rows=1 width=349) (actual time=358.466..359.688 rows=1000 loops=1) Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry Buffers: shared hit=29786 read=1 -> Sort (cost=666.58..666.58 rows=1 width=349) (actual time=358.464..358.942 rows=1000 loops=1) Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry Sort Key: r.routeidentifier Sort Method: quicksort Memory: 582kB Buffers: shared hit=29786 read=1 -> Nested Loop (cost=149.94..666.57 rows=1 width=349) (actual time=291.681..356.261 rows=1540 loops=1) Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry Buffers: shared hit=29786 read=1 -> Nested Loop (cost=149.51..653.92 rows=1 width=349) (actual time=291.652..300.076 rows=1546 loops=1) Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry Buffers: shared hit=13331 read=1 -> Bitmap Heap Scan on navdata.point op (cost=5.75..358.28 rows=2 width=16) (actual time=95.933..96.155 rows=1 loops=1) Output: op.uid, op.guid, op.airportguid, op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir, op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid, op.endvalid, op.revisionuid, op.source, op.leveltype Recheck Cond: ((op.identifier)::text ~~* '%LOWW%'::text) Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time zone)) Rows Removed by Filter: 50 Heap Blocks: exact=51 Buffers: shared hit=4974 read=1 -> Bitmap Index Scan on 
idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=95.871..95.871 rows=51 loops=1) Index Cond: ((op.identifier)::text ~~* '%LOWW%'::text) Buffers: shared hit=4924 -> Bitmap Heap Scan on navdata.route r (cost=143.77..147.80 rows=2 width=349) (actual time=195.711..202.308 rows=1546 loops=1) Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry Recheck Cond: ((r.frompointguid = op.guid) AND (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone)) Filter: ((r.routeidentifier)::text ~~* '%'::text) Heap Blocks: exact=1231 Buffers: shared hit=8357 -> BitmapAnd (cost=143.77..143.77 rows=2 width=0) (actual time=195.501..195.501 rows=0 loops=1) Buffers: shared hit=7126 -> Bitmap Index Scan on idx_route_02 (cost=0.00..6.85 rows=324 width=0) (actual time=0.707..0.707 rows=4295 loops=1) Index Cond: (r.frompointguid = op.guid) Buffers: shared hit=21 -> Bitmap Index Scan on idx_route_07 (cost=0.00..135.49 rows=4693 width=0) (actual time=193.881..193.881 rows=579054 loops=1) Index Cond: (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone) Buffers: shared hit=7105 -> Index Scan using cidx_point on navdata.point dp (cost=0.43..12.63 rows=1 width=16) (actual time=0.009..0.034 rows=1 loops=1546) Output: dp.uid, dp.guid, dp.airportguid, dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid, dp.revisionuid, dp.source, dp.leveltype Index Cond: (dp.guid = r.topointguid) Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND ((dp.identifier)::text ~~* '%'::text) AND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time zone)) Rows Removed by Filter: 7 Buffers: shared hit=16455Planning time: 4.603 msExecution time: 360.180 ms* 360 ms. That is quite fine for our standards. 
--------------------------------------------------------------------------------
# Case 2 : We search for all routes between Vienna International Airport and
London Heathrow (here is where trouble begins)
--------------------------------------------------------------------------------

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT
  r.*
FROM navdata.route r
  INNER JOIN navdata.point op ON r.frompointguid = op.guid
  INNER JOIN navdata.point dp ON r.topointguid = dp.guid
WHERE
  r.routeidentifier ILIKE '%'
  AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP
  AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))
  AND op.identifier ILIKE '%LOWW%'
  AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])
  AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP
  AND dp.identifier ILIKE '%EGLL%' :: VARCHAR
  AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])
  AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP
ORDER BY r.routeidentifier
LIMIT 1000

Limit (cost=659.57..659.58 rows=1 width=349) (actual time=223118.664..223118.714 rows=36 loops=1)
  Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
  Buffers: shared hit=12033194
  ->  Sort (cost=659.57..659.58 rows=1 width=349) (actual time=223118.661..223118.681 rows=36 loops=1)
        Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
        Sort Key: r.routeidentifier
        Sort Method: quicksort Memory: 35kB
        Buffers: shared hit=12033194
        ->  Nested Loop (cost=157.35..659.56 rows=1 width=349) (actual time=4290.975..223118.490 rows=36 loops=1)
              Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
              Buffers: shared hit=12033194
              ->  Nested Loop (cost=149.32..649.49 rows=1 width=349) (actual time=319.717..367.139 rows=2439 loops=1)
                    Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
                    Buffers: shared hit=15788
                    ->  Bitmap Heap Scan on navdata.point dp (cost=5.75..358.28 rows=2 width=16) (actual time=124.922..125.008 rows=1 loops=1)
                          Output: dp.uid, dp.guid, dp.airportguid, dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid, dp.revisionuid, dp.source, dp.leveltype
                          Recheck Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)
                          Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time zone))
                          Rows Removed by Filter: 6
                          Heap Blocks: exact=7
                          Buffers: shared hit=6786
                          ->  Bitmap Index Scan on idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=124.882..124.882 rows=7 loops=1)
                                Index Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)
                                Buffers: shared hit=6779
                    ->  Bitmap Heap Scan on navdata.route r (cost=143.57..145.60 rows=1 width=349) (actual time=194.785..237.128 rows=2439 loops=1)
                          Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
                          Recheck Cond: ((r.topointguid = dp.guid) AND (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))
                          Filter: ((r.routeidentifier)::text ~~* '%'::text)
                          Heap Blocks: exact=1834
                          Buffers: shared hit=9002
                          ->  BitmapAnd (cost=143.57..143.57 rows=1 width=0) (actual time=194.460..194.460 rows=0 loops=1)
                                Buffers: shared hit=7168
                                ->  Bitmap Index Scan on idx_route_03 (cost=0.00..6.66 rows=298 width=0) (actual time=2.326..2.326 rows=15148 loops=1)
                                      Index Cond: (r.topointguid = dp.guid)
                                      Buffers: shared hit=63
                                ->  Bitmap Index Scan on idx_route_07 (cost=0.00..135.49 rows=4693 width=0) (actual time=190.001..190.001 rows=579054 loops=1)
                                      Index Cond: (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone)
                                      Buffers: shared hit=7105
              ->  Bitmap Heap Scan on navdata.point op (cost=8.03..10.06 rows=1 width=16) (actual time=91.321..91.321 rows=0 loops=2439)
                    Output: op.uid, op.guid, op.airportguid, op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir, op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid, op.endvalid, op.revisionuid, op.source, op.leveltype
                    Recheck Cond: ((op.guid = r.frompointguid) AND ((op.identifier)::text ~~* '%LOWW%'::text))
                    Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time zone))
                    Rows Removed by Filter: 0
                    Heap Blocks: exact=252
                    Buffers: shared hit=12017406
                    ->  BitmapAnd (cost=8.03..8.03 rows=1 width=0) (actual time=91.315..91.315 rows=0 loops=2439)
                          Buffers: shared hit=12017154
                          ->  Bitmap Index Scan on cidx_point (cost=0.00..2.04 rows=6 width=0) (actual time=0.017..0.017 rows=8 loops=2439)
                                Index Cond: (op.guid = r.frompointguid)
                                Buffers: shared hit=7518
                          ->  Bitmap Index Scan on idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=91.288..91.288 rows=51 loops=2439)
                                Index Cond: ((op.identifier)::text ~~* '%LOWW%'::text)
                                Buffers: shared hit=12009636
Planning time: 5.162 ms
Execution time: 223118.858 ms

* Please pay attention to index scan on idx_point_08. It takes on average 91 ms
and it is executed 2439 times = 221949 ms. That is where we spend most of the time. *
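
One way to keep the '%LOWW%' lookup out of the inner loop, besides the CONCAT() fence shown in Case 3 below, is to resolve both point filters once, up front. On 9.6 a CTE is always materialized and therefore acts as an optimization fence. This is only a sketch against the same schema, not something benchmarked here:

WITH op AS (
    SELECT guid
    FROM navdata.point
    WHERE identifier ILIKE '%LOWW%'
      AND type = ANY (ARRAY['PA'] :: VARCHAR [])
      AND tsrange(startvalid, endvalid) @> now() :: TIMESTAMP
), dp AS (
    SELECT guid
    FROM navdata.point
    WHERE identifier ILIKE '%EGLL%'
      AND type = ANY (ARRAY['PA'] :: VARCHAR [])
      AND tsrange(startvalid, endvalid) @> now() :: TIMESTAMP
)
SELECT r.*
FROM navdata.route r
  INNER JOIN op ON r.frompointguid = op.guid  -- joins back on the btree-indexed guid columns
  INNER JOIN dp ON r.topointguid = dp.guid
WHERE tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP
ORDER BY r.routeidentifier
LIMIT 1000;

Each CTE runs its pg_trgm scan exactly once, and the join against route can then use idx_route_02/idx_route_03. On PostgreSQL 12 or later the CTEs would need to be written AS MATERIALIZED to keep this fence behaviour.
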
--------------------------------------------------------------------------------
# Case 3 : We again search for all routes between Vienna International Airport
and London Heathrow, but this time I use CONCAT(op.identifier, '') as
optimization fence.
--------------------------------------------------------------------------------

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT
  r.*
FROM navdata.route r
  INNER JOIN navdata.point op ON r.frompointguid = op.guid
  INNER JOIN navdata.point dp ON r.topointguid = dp.guid
WHERE
  r.routeidentifier ILIKE '%'
  AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP
  AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))
  AND CONCAT(op.identifier, '') ILIKE '%LOWW%'
  AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])
  AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP
  AND dp.identifier ILIKE '%EGLL%' :: VARCHAR
  AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])
  AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP
ORDER BY r.routeidentifier
LIMIT 1000

Limit (cost=662.16..662.17 rows=1 width=349) (actual time=411.756..411.808 rows=36 loops=1)
  Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
  Buffers: shared hit=43025
  ->  Sort (cost=662.16..662.17 rows=1 width=349) (actual time=411.755..411.776 rows=36 loops=1)
        Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
        Sort Key: r.routeidentifier
        Sort Method: quicksort Memory: 35kB
        Buffers: shared hit=43025
        ->  Nested Loop (cost=149.75..662.15 rows=1 width=349) (actual time=316.518..411.656 rows=36 loops=1)
              Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
              Buffers: shared hit=43025
              ->  Nested Loop (cost=149.32..649.49 rows=1 width=349) (actual time=314.704..326.873 rows=2439 loops=1)
                    Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
                    Buffers: shared hit=15788
                    ->  Bitmap Heap Scan on navdata.point dp (cost=5.75..358.28 rows=2 width=16) (actual time=123.267..123.310 rows=1 loops=1)
                          Output: dp.uid, dp.guid, dp.airportguid, dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid, dp.revisionuid, dp.source, dp.leveltype
                          Recheck Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)
                          Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND (tsrange(dp.startvalid, dp.endvalid) @> (now())::timestamp without time zone))
                          Rows Removed by Filter: 6
                          Heap Blocks: exact=7
                          Buffers: shared hit=6786
                          ->  Bitmap Index Scan on idx_point_08 (cost=0.00..5.75 rows=178 width=0) (actual time=123.232..123.232 rows=7 loops=1)
                                Index Cond: ((dp.identifier)::text ~~* '%EGLL%'::text)
                                Buffers: shared hit=6779
                    ->  Bitmap Heap Scan on navdata.route r (cost=143.57..145.60 rows=1 width=349) (actual time=191.429..201.176 rows=2439 loops=1)
                          Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar, r.from_first, r.dep_airports, r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry
                          Recheck Cond: ((r.topointguid = dp.guid) AND (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))
                          Filter: ((r.routeidentifier)::text ~~* '%'::text)
                          Heap Blocks: exact=1834
                          Buffers: shared hit=9002
                          ->  BitmapAnd (cost=143.57..143.57 rows=1 width=0) (actual time=191.097..191.097 rows=0 loops=1)
                                Buffers: shared hit=7168
                                ->  Bitmap Index Scan on idx_route_03 (cost=0.00..6.66 rows=298 width=0) (actual time=2.349..2.349 rows=15148 loops=1)
                                      Index Cond: (r.topointguid = dp.guid)
                                      Buffers: shared hit=63
                                ->  Bitmap Index Scan on idx_route_07 (cost=0.00..135.49 rows=4693 width=0) (actual time=186.640..186.640 rows=579054 loops=1)
                                      Index Cond: (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone)
                                      Buffers: shared hit=7105
              ->  Index Scan using cidx_point on navdata.point op (cost=0.43..12.65 rows=1 width=16) (actual time=0.033..0.033 rows=0 loops=2439)
                    Output: op.uid, op.guid, op.airportguid, op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir, op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid, op.endvalid, op.revisionuid, op.source, op.leveltype
                    Index Cond: (op.guid = r.frompointguid)
                    Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND (concat(op.identifier, '') ~~* '%LOWW%'::text) AND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time zone))
                    Rows Removed by Filter: 8
                    Buffers: shared hit=27237
Planning time: 3.381 ms
Execution time: 411.944 ms

* We are back into acceptable margin. *
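
Part of what pushes the planner toward these plans is the row estimates: the tsrange(startvalid, endvalid) @> now() condition on route is estimated at 4693 rows but actually matches 579054, and both nested loops are estimated at rows=1. A cheap thing to try, sketched here with an arbitrary target of 1000 and no guarantee that it improves the selectivity estimate for the expression index idx_route_07, is raising the statistics targets on the validity columns and re-analyzing:

-- default_statistics_target is 100 on this server; 1000 is just an example value
ALTER TABLE navdata.route ALTER COLUMN startvalid SET STATISTICS 1000;
ALTER TABLE navdata.route ALTER COLUMN endvalid   SET STATISTICS 1000;
ALTER TABLE navdata.point ALTER COLUMN startvalid SET STATISTICS 1000;
ALTER TABLE navdata.point ALTER COLUMN endvalid   SET STATISTICS 1000;
ANALYZE navdata.route;
ANALYZE navdata.point;
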
################################################################################
# Postgres version
################################################################################

PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit

################################################################################
# Schema
################################################################################

Currently, our tables are heavily indexed due to the refactoring process and the need to work with both the old and the new version of the software. Once we are finished, a lot of these indexes will be removed.

CREATE TABLE navdata.point (
  uid uuid NOT NULL,
  guid uuid NULL,
  airportguid uuid NULL,
  identifier varchar(5) NULL,
  icaocode varchar(2) NULL,
  \"name\" varchar(255) NULL,
  \"type\" varchar(2) NULL,
  coordinates geography NULL,
  fir varchar(5) NULL,
  navaidfrequency float8 NULL,
  elevation float8 NULL,
  magneticvariance float8 NULL,
  startvalid timestamp NULL,
  endvalid timestamp NULL,
  revisionuid uuid NULL,
  \"source\" varchar(4) NULL,
  leveltype varchar(1) NULL,
  CONSTRAINT point_pkey PRIMARY KEY (uid)
)
WITH (
  OIDS=FALSE
) ;
CREATE INDEX cidx_point ON navdata.point USING btree (guid) ;
CREATE INDEX idx_point_01 ON navdata.point USING btree (identifier, guid) ;
CREATE INDEX idx_point_03 ON navdata.point USING btree (identifier) ;
CREATE INDEX idx_point_04 ON navdata.point USING gist (coordinates) WHERE (airportguid IS NULL) ;
CREATE INDEX idx_point_05 ON navdata.point USING btree (identifier text_pattern_ops) ;
CREATE INDEX idx_point_06 ON navdata.point USING btree (airportguid) ;
CREATE INDEX idx_point_07 ON navdata.point USING gist (coordinates) ;
CREATE INDEX idx_point_08 ON navdata.point USING gist (identifier gist_trgm_ops) ;
CREATE INDEX idx_point_09 ON navdata.point USING btree (type) ;
CREATE INDEX idx_point_10 ON navdata.point USING gist (name gist_trgm_ops) ;
CREATE INDEX idx_point_11 ON navdata.point USING btree (type, identifier text_pattern_ops) ;
CREATE INDEX idx_point_12 ON navdata.point USING gist (upper((identifier)::text) gist_trgm_ops) ;
CREATE INDEX idx_point_13 ON navdata.point USING gist (upper((name)::text) gist_trgm_ops) ;
CREATE INDEX idx_point_tmp ON navdata.point USING btree (leveltype) ;
CREATE INDEX point_validity_idx ON navdata.point USING gist (tsrange(startvalid, endvalid)) ;

CREATE TABLE navdata.route (
  uid uuid NOT NULL,
  routeidentifier varchar(3) NULL,
  frompointguid uuid NULL,
  topointguid uuid NULL,
  sidguid uuid NULL,
  starguid uuid NULL,
  routeinformation varchar NULL,
  routetype varchar(5) NULL,
  startvalid timestamp NULL,
  endvalid timestamp NULL,
  revisionuid uuid NULL,
  \"source\" varchar(4) NULL,
  fufi uuid NULL,
  grounddistance_excl_sidstar float8 NULL,
  from_first bool NULL,
  dep_airports varchar NULL,
  dst_airports varchar NULL,
  tag varchar NULL,
  expanded_route_string varchar NULL,
  route_geometry geometry NULL,
  CONSTRAINT route_pkey PRIMARY KEY (uid)
)
WITH (
  OIDS=FALSE
) ;
CREATE INDEX idx_route_01 ON navdata.route USING btree (uid) ;
CREATE INDEX idx_route_02 ON navdata.route USING btree (frompointguid) ;
CREATE INDEX idx_route_03 ON navdata.route USING btree (topointguid) ;
CREATE INDEX idx_route_04 ON navdata.route USING btree (fufi) ;
CREATE INDEX idx_route_05 ON navdata.route USING btree (source, routeidentifier, startvalid, endvalid) ;
CREATE INDEX idx_route_06 ON navdata.route USING gist (routeinformation gist_trgm_ops) ;
CREATE INDEX idx_route_07 ON navdata.route USING gist (tsrange(startvalid, endvalid)) ;
CREATE INDEX idx_route_09 ON navdata.route USING gist (routeidentifier gist_trgm_ops) ;

################################################################################
# Table metadata
################################################################################

relname |relpages |reltuples |relallvisible |relkind |relnatts |relhassubclass |reloptions |pg_table_size |
--------|---------|----------|--------------|--------|---------|---------------|-----------|--------------|
route   |36600    |938573    |36595         |r       |22       |false          |NULL       |299941888     |
point   |95241    |2156454   |95241         |r       |17       |false          |NULL       |780460032     |
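
Since pg_trgm is already installed for the GiST indexes above, one inexpensive experiment is to build a GIN variant next to idx_point_08 and re-run the same EXPLAIN ANALYZE; GIN is often the faster choice for trigram lookups. The index name below is made up, and CONCURRENTLY is used only to avoid blocking writes:

CREATE INDEX CONCURRENTLY idx_point_08_gin
    ON navdata.point USING gin (identifier gin_trgm_ops);

-- When the planned index cleanup happens, the per-index usage counters show
-- which of the overlapping indexes are actually being used:
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE schemaname = 'navdata'
ORDER BY idx_scan;
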
################################################################################
# History
################################################################################

This is a new query, because the data layer is being refactored.

################################################################################
# Hardware
################################################################################

Postgres is running on a virtual machine.

* CPU: 8 cores assigned

processor : 7
vendor_id : AuthenticAMD
cpu family : 21
model : 2
model name : AMD Opteron(tm) Processor 6380
stepping : 0
microcode : 0xffffffff
cpu MHz : 2500.020
cache size : 2048 KB
physical id : 0
siblings : 8
core id : 7
cpu cores : 8
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm rep_good nopl extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw xop fma4 vmmcall bmi1 arat
bugs : fxsave_leak sysret_ss_attrs
bogomips : 4998.98
TLB size : 1536 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 42 bits physical, 48 bits virtual
power management:

* Memory: 32 GB

* Disk: Should be SSD, but unfortunately I don't know which model.

################################################################################
# bonnie++
################################################################################

Using uid:111, gid:118.
format_version,bonnie_version,name,concurrency,seed,file_size,io_chunk_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,max_size,min_size,num_dirs,file_chunk_size,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu,putc_latency,put_block_latency,rewrite_latency,getc_latency,get_block_latency,seeks_latency,seq_create_latency,seq_stat_latency,seq_del_latency,ran_create_latency,ran_stat_latency,ran_del_latency
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
1.97,1.97,v6565testdb01,1,1529491960,63G,,,,133872,20,96641,17,,,469654,41,+++++,+++,,,,,,,,,,,,,,,,,,,2117ms,2935ms,,270ms,4760us,,,,,,
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
1.97,1.97,v6565testdb01,1,1529491960,63G,,,,190192,26,143595,23,,,457357,37,+++++,+++,,,,,,,,,,,,,,,,,,,595ms,2201ms,,284ms,6110us,,,,,,
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
1.97,1.97,v6565testdb01,1,1529491960,63G,,,,542936,81,153952,25,,,446369,37,+++++,+++,,,,,,,,,,,,,,,,,,,347ms,3678ms,,101ms,5632us,,,,,,
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
1.97,1.97,v6565testdb01,1,1529491960,63G,,,,244155,33,157543,26,,,441115,38,16111,495,,,,,,,,,,,,,,,,,,,638ms,2667ms,,195ms,9068us,,,,,,

################################################################################
# Maintenance Setup
################################################################################

Autovacuum: yes
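
Given how far off the row estimates in Case 2 are, it is also worth confirming when these tables were last (auto)vacuumed and analyzed; a quick check against the statistics collector, assuming nothing beyond the navdata schema name:

-- shows the last autovacuum/autoanalyze timestamps and current tuple counts
SELECT relname, last_autovacuum, last_autoanalyze, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE schemaname = 'navdata';
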
################################################################################
# postgresql.conf
################################################################################

max_connections = 4096 # (change requires restart)
shared_buffers = 8GB # (change requires restart)
huge_pages = try # on, off, or try
work_mem = 4MB # min 64kB
maintenance_work_mem = 2GB # min 1MB
dynamic_shared_memory_type = posix # the default is the first option
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
wal_level = replica # minimal, replica, or logical
wal_buffers = 16MB
max_wal_size = 2GB
min_wal_size = 1GB
checkpoint_completion_target = 0.7
max_wal_senders = 4 # max number of walsender processes
random_page_cost = 2.0
effective_cache_size = 24GB
default_statistics_target = 100 # range 1-10000

################################################################################
# Statistics
################################################################################

frac_mcv |tablename |attname |n_distinct |n_mcv |n_hist |
--------------|----------|----------------------------|-------------|------|-------|
 |route |uid |-1 | |101 |
0.969699979 |route |routeidentifier |78 |2 |76 |
0.44780004 |route |frompointguid |2899 |100 |101 |
0.441700101 |route |topointguid |3154 |100 |101 |
0.0368666835 |route |sidguid |2254 |100 |101 |
0.0418333709 |route |starguid |3182 |100 |101 |
0.0515667647 |route |routeinformation |-0.335044593 |100 |101 |
0.0528000034 |route |routetype |3 |3 | |
0.755399942 |route |startvalid |810 |100 |101 |
0.962899983 |route |endvalid |22 |3 |19 |
0.00513333362 |route |revisionuid |-0.809282064 |2 |101 |
0.97906667 |route |source |52 |4 |48 |
 |route |fufi |0 | | |
0.00923334155 |route |grounddistance_excl_sidstar |-0.552667081 |100 |101 |
0.0505000018 |route |from_first |2 |2 | |
0.0376333408 |route |dep_airports |326 |52 |101 |
0.0367666557 |route |dst_airports |388 |57 |101 |
 |point |uid |-1 | |101 |
0.00185333542 |point |guid |-0.164169937 |100 |101 |
0.0573133379 |point |airportguid |23575 |100 |101 |
0.175699964 |point |identifier |209296 |1000 |1001 |
0.754063368 |point |icaocode |254 |41 |101 |
0.00352332788 |point |name |37853 |100 |101 |
0.999230027 |point |type |11 |6 |5 |
 |point |coordinates |-1 | | |
0.607223332 |point |fir |281 |62 |101 |
0.0247033276 |point |navaidfrequency |744 |100 |101 |
0.0320866667 |point |elevation |14013 |100 |101 |
0.0011433335 |point |magneticvariance |-0.587834716 |100 |101 |
0.978270054 |point |startvalid |35 |12 |23 |
0.978176594 |point |endvalid |30 |11 |19 |
0.978123426 |point |revisionuid |62 |12 |50 |
0.99999994 |point |source |3 |3 | |
0.777056634 |point |leveltype |7 |7 | |

################################################################################

I am looking forward to your suggestions.

Thanks in advance!

Sasa Vilic",
"msg_date": "Wed, 20 Jun 2018 08:29:19 -0500",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query when pg_trgm is in inner lopp"
},
{
"msg_contents": "Hi Matthew,\n\nthank you for query response.\n\nThere is no particular reason for using GIST instead of GIN. We only\nrecently discovered pg_trgm so we are new to this. What I read is that GIN\ncan be faster then GIST but it really depends on query and on amount of\ndata. Nevertheless, both index are by magnitude order slower then btree\nindex, right? I tried simple query on our production server (explain\nanalyze select * from navdata.point where identifier like 'LOWW') where I\nam 100% sure there is SSD and we have random_page_cost = 1, and query\nitself takes 43 ms. That is not much of the difference compared to test\nserver.\n\nWhat interest me, is whenever PG is aware of different costs for different\nindex types. Given that there is also index on guid which is used on\nrelationship, in our case it is always better to use that index and filter,\nthen to use both indexes and BitmapAnd.\n\nRegarding test server, I believe that it is a SSD, but I will get\nconfirmation for this. I tried changing random_page_cost on test server\nfrom 2.0 to 1.0 (that should be right value for SSD, right?) and also to\n4.0 and I get same results.\n\nRegards\nSasa Vilic\n\nOn 20 June 2018 at 15:29, Matthew Hall <[email protected]> wrote:\n\n> Is there a reason you used GIST on your pg_trgm indices and not GIN? In my\n> tests and previous posts on here, it nearly always performs worse. Also,\n> did you make sure if it's really SSD and set the random_page_cost\n> accordingly?\n>\n> Matthew Hall\n>\n> On Jun 20, 2018, at 8:21 AM, Sasa Vilic <[email protected]> wrote:\n>\n> Hi everyone,\n>\n> we have a new query that performs badly with specific input parameters. We\n> get worst performance when input data is most restrictive. I have partially\n> identified a problem: it always happens when index scan is done in inner\n> loop\n> and index type is pg_trgm. We also noticed that for simple query\n> (\n> select * from point where identifier = 'LOWW' vs\n> select * from point where identifier LIKE 'LOWW'\n> )\n> the difference between btree index and pg_trgm index can be quite high:\n> 0.009 ms vs 32.0 ms.\n>\n> What I would like to know is whenever query planner is aware that some\n> index\n> types are more expensive the the others and whenever it can take that into\n> account?\n>\n> I will describe background first, then give you query and its analysis for\n> different parameters and in the end I will write about all required\n> information\n> regarding setup (Postgres version, Schema, metadata, hardware, etc.)\n>\n> I would like to know whenever this is a bug in query planner or not and\n> what\n> could we do about it.\n>\n> ############################################################\n> ####################\n> # Background\n> ############################################################\n> ####################\n>\n> We have a database with navigational data for civil aviation.\n> Current query is working on two tables: point and route.\n> Point represents a navigational point on Earth and route describes a route\n> between two points.\n>\n> Query that we have finds all routes between two set of points. A set is a\n> dynamically/loosely defined by pattern given by the user input. So for\n> example\n> if user wants to find all routes between international airports in Austria\n> toward London Heathrow, he or she would use 'LOW%' as\n> :from_point_identifier\n> and 'EGLL' as :to_point_identifier. 
Please keep in mind that is a simple\n> case,\n> and that user is allowed to define search term any way he/she see it fit,\n> i.e. '%OW%', 'EG%'.\n>\n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE :route_identifier\n> AND tsrange(r.startvalid, r.endvalid) @> :validity :: TIMESTAMP\n> AND (NOT :use_sources :: BOOLEAN OR r.source = ANY (:sources :: VARCHAR\n> []))\n> AND CONCAT(op.identifier, '') ILIKE :from_point_identifier\n> AND op.type = ANY (:point_types :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> :validity :: TIMESTAMP\n> AND dp.identifier ILIKE :to_point_identifier :: VARCHAR\n> AND dp.type = ANY (:point_types :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> :validity :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n>\n>\n> Most of the tables we have follows this layout principle:\n> * uid - is primary key\n> * guid - is globally unique key (i.e. London Heathrow could for example\n> change it identifier EGLL, but our internal guid will stay same)\n> * startvalid, endvalid - defines for which period is entry valid. Entires\n> with\n> same guid should not have overlapping validity.\n>\n> We don't use foreign keys for two reasons:\n> * We need to do live migration without downtime. Creating a foreign key on\n> huge dataset could take quite some time\n> * Relationship between entities are defined based on guid and not on uid\n> (primary key).\n>\n> ############################################################\n> ####################\n> # Query analysis\n> ############################################################\n> ####################\n>\n> ------------------------------------------------------------\n> --------------------\n> # Case 1 : We search for all outgoing routes from Vienna International\n> Airport\n> ------------------------------------------------------------\n> --------------------\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE '%'\n> AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n> AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n> AND op.identifier ILIKE '%LOWW%'\n> AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n> AND dp.identifier ILIKE '%' :: VARCHAR\n> AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n>\n> Limit (cost=666.58..666.58 rows=1 width=349) (actual\n> time=358.466..359.688 rows=1000 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\n> r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\n> r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\n> r.from_first, r.dep_airports, r.dst_airports, r.tag,\n> r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=29786 read=1\n> -> Sort (cost=666.58..666.58 rows=1 width=349) (actual\n> time=358.464..358.942 rows=1000 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\n> r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\n> r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\n> r.from_first, 
r.dep_airports, r.dst_airports, r.tag,\n> r.expanded_route_string, r.route_geometry\n> Sort Key: r.routeidentifier\n> Sort Method: quicksort Memory: 582kB\n> Buffers: shared hit=29786 read=1\n> -> Nested Loop (cost=149.94..666.57 rows=1 width=349) (actual\n> time=291.681..356.261 rows=1540 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid,\n> r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\n> r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=29786 read=1\n> -> Nested Loop (cost=149.51..653.92 rows=1 width=349)\n> (actual time=291.652..300.076 rows=1546 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid,\n> r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\n> r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=13331 read=1\n> -> Bitmap Heap Scan on navdata.point op\n> (cost=5.75..358.28 rows=2 width=16) (actual time=95.933..96.155 rows=1\n> loops=1)\n> Output: op.uid, op.guid, op.airportguid,\n> op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir,\n> op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid,\n> op.endvalid, op.revisionuid, op.source, op.leveltype\n> Recheck Cond: ((op.identifier)::text ~~*\n> '%LOWW%'::text)\n> Filter: (((op.type)::text = ANY\n> ('{PA}'::text[])) AND (tsrange(op.startvalid, op.endvalid) @>\n> (now())::timestamp without time zone))\n> Rows Removed by Filter: 50\n> Heap Blocks: exact=51\n> Buffers: shared hit=4974 read=1\n> -> Bitmap Index Scan on idx_point_08\n> (cost=0.00..5.75 rows=178 width=0) (actual time=95.871..95.871 rows=51\n> loops=1)\n> Index Cond: ((op.identifier)::text ~~*\n> '%LOWW%'::text)\n> Buffers: shared hit=4924\n> -> Bitmap Heap Scan on navdata.route r\n> (cost=143.77..147.80 rows=2 width=349) (actual time=195.711..202.308\n> rows=1546 loops=1)\n> Output: r.uid, r.routeidentifier,\n> r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation,\n> r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Recheck Cond: ((r.frompointguid = op.guid) AND\n> (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n> Filter: ((r.routeidentifier)::text ~~* '%'::text)\n> Heap Blocks: exact=1231\n> Buffers: shared hit=8357\n> -> BitmapAnd (cost=143.77..143.77 rows=2\n> width=0) (actual time=195.501..195.501 rows=0 loops=1)\n> Buffers: shared hit=7126\n> -> Bitmap Index Scan on idx_route_02\n> (cost=0.00..6.85 rows=324 width=0) (actual time=0.707..0.707 rows=4295\n> loops=1)\n> Index Cond: (r.frompointguid =\n> op.guid)\n> Buffers: shared hit=21\n> -> Bitmap Index Scan on idx_route_07\n> (cost=0.00..135.49 rows=4693 width=0) (actual time=193.881..193.881\n> rows=579054 loops=1)\n> Index Cond: (tsrange(r.startvalid,\n> r.endvalid) @> (now())::timestamp without time zone)\n> Buffers: shared hit=7105\n> -> Index Scan using cidx_point on navdata.point dp\n> (cost=0.43..12.63 rows=1 width=16) (actual time=0.009..0.034 rows=1\n> loops=1546)\n> Output: dp.uid, dp.guid, dp.airportguid,\n> dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir,\n> 
dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid,\n> dp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n> Index Cond: (dp.guid = r.topointguid)\n> Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND\n> ((dp.identifier)::text ~~* '%'::text) AND (tsrange(dp.startvalid,\n> dp.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 7\n> Buffers: shared hit=16455\n> Planning time: 4.603 ms\n> Execution time: 360.180 ms\n>\n> * 360 ms. That is quite fine for our standards. *\n>\n> ------------------------------------------------------------\n> --------------------\n> # Case 2 : We search for all routes between Vienna International Airport\n> and\n> London Heathrow (here is where trouble begins)\n> ------------------------------------------------------------\n> --------------------\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE '%'\n> AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n> AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n> AND op.identifier ILIKE '%LOWW%'\n> AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n> AND dp.identifier ILIKE '%EGLL%' :: VARCHAR\n> AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n>\n>\n> Limit (cost=659.57..659.58 rows=1 width=349) (actual\n> time=223118.664..223118.714 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\n> r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\n> r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\n> r.from_first, r.dep_airports, r.dst_airports, r.tag,\n> r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=12033194\n> -> Sort (cost=659.57..659.58 rows=1 width=349) (actual\n> time=223118.661..223118.681 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\n> r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\n> r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\n> r.from_first, r.dep_airports, r.dst_airports, r.tag,\n> r.expanded_route_string, r.route_geometry\n> Sort Key: r.routeidentifier\n> Sort Method: quicksort Memory: 35kB\n> Buffers: shared hit=12033194\n> -> Nested Loop (cost=157.35..659.56 rows=1 width=349) (actual\n> time=4290.975..223118.490 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid,\n> r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\n> r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=12033194\n> -> Nested Loop (cost=149.32..649.49 rows=1 width=349)\n> (actual time=319.717..367.139 rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid,\n> r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\n> r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=15788\n> -> Bitmap Heap Scan on navdata.point dp\n> 
(cost=5.75..358.28 rows=2 width=16) (actual time=124.922..125.008 rows=1\n> loops=1)\n> Output: dp.uid, dp.guid, dp.airportguid,\n> dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir,\n> dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid,\n> dp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n> Recheck Cond: ((dp.identifier)::text ~~*\n> '%EGLL%'::text)\n> Filter: (((dp.type)::text = ANY\n> ('{PA}'::text[])) AND (tsrange(dp.startvalid, dp.endvalid) @>\n> (now())::timestamp without time zone))\n> Rows Removed by Filter: 6\n> Heap Blocks: exact=7\n> Buffers: shared hit=6786\n> -> Bitmap Index Scan on idx_point_08\n> (cost=0.00..5.75 rows=178 width=0) (actual time=124.882..124.882 rows=7\n> loops=1)\n> Index Cond: ((dp.identifier)::text ~~*\n> '%EGLL%'::text)\n> Buffers: shared hit=6779\n> -> Bitmap Heap Scan on navdata.route r\n> (cost=143.57..145.60 rows=1 width=349) (actual time=194.785..237.128\n> rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier,\n> r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation,\n> r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Recheck Cond: ((r.topointguid = dp.guid) AND\n> (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n> Filter: ((r.routeidentifier)::text ~~* '%'::text)\n> Heap Blocks: exact=1834\n> Buffers: shared hit=9002\n> -> BitmapAnd (cost=143.57..143.57 rows=1\n> width=0) (actual time=194.460..194.460 rows=0 loops=1)\n> Buffers: shared hit=7168\n> -> Bitmap Index Scan on idx_route_03\n> (cost=0.00..6.66 rows=298 width=0) (actual time=2.326..2.326 rows=15148\n> loops=1)\n> Index Cond: (r.topointguid =\n> dp.guid)\n> Buffers: shared hit=63\n> -> Bitmap Index Scan on idx_route_07\n> (cost=0.00..135.49 rows=4693 width=0) (actual time=190.001..190.001\n> rows=579054 loops=1)\n> Index Cond: (tsrange(r.startvalid,\n> r.endvalid) @> (now())::timestamp without time zone)\n> Buffers: shared hit=7105\n> -> Bitmap Heap Scan on navdata.point op (cost=8.03..10.06\n> rows=1 width=16) (actual time=91.321..91.321 rows=0 loops=2439)\n> Output: op.uid, op.guid, op.airportguid,\n> op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir,\n> op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid,\n> op.endvalid, op.revisionuid, op.source, op.leveltype\n> Recheck Cond: ((op.guid = r.frompointguid) AND\n> ((op.identifier)::text ~~* '%LOWW%'::text))\n> Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND\n> (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time\n> zone))\n> Rows Removed by Filter: 0\n> Heap Blocks: exact=252\n> Buffers: shared hit=12017406\n> -> BitmapAnd (cost=8.03..8.03 rows=1 width=0)\n> (actual time=91.315..91.315 rows=0 loops=2439)\n> Buffers: shared hit=12017154\n> -> Bitmap Index Scan on cidx_point\n> (cost=0.00..2.04 rows=6 width=0) (actual time=0.017..0.017 rows=8\n> loops=2439)\n> Index Cond: (op.guid = r.frompointguid)\n> Buffers: shared hit=7518\n> -> Bitmap Index Scan on idx_point_08\n> (cost=0.00..5.75 rows=178 width=0) (actual time=91.288..91.288 rows=51\n> loops=2439)\n> Index Cond: ((op.identifier)::text ~~*\n> '%LOWW%'::text)\n> Buffers: shared hit=12009636\n> Planning time: 5.162 ms\n> Execution time: 223118.858 ms\n>\n> * Please pay attention to index scan on idx_point_08. It takes on average\n> 91 ms\n> and it is executed 2439 times = 221949 ms. 
That is where we spend most of\n> the\n> time. *\n>\n> ------------------------------------------------------------\n> --------------------\n> # Case 3 : We again search for all routes between Vienna International\n> Airport\n> and London Heathrow, but this time I use CONCAT(op.identifier, '') as\n> optimization fence.\n> ------------------------------------------------------------\n> --------------------\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT\n> r.*\n> FROM navdata.route r\n> INNER JOIN navdata.point op ON r.frompointguid = op.guid\n> INNER JOIN navdata.point dp ON r.topointguid = dp.guid\n> WHERE\n> r.routeidentifier ILIKE '%'\n> AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n> AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n> AND CONCAT(op.identifier, '') ILIKE '%LOWW%'\n> AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n> AND dp.identifier ILIKE '%EGLL%' :: VARCHAR\n> AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n> AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\n> ORDER BY r.routeidentifier\n> LIMIT 1000\n>\n> Limit (cost=662.16..662.17 rows=1 width=349) (actual\n> time=411.756..411.808 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\n> r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\n> r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\n> r.from_first, r.dep_airports, r.dst_airports, r.tag,\n> r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=43025\n> -> Sort (cost=662.16..662.17 rows=1 width=349) (actual\n> time=411.755..411.776 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\n> r.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\n> r.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\n> r.from_first, r.dep_airports, r.dst_airports, r.tag,\n> r.expanded_route_string, r.route_geometry\n> Sort Key: r.routeidentifier\n> Sort Method: quicksort Memory: 35kB\n> Buffers: shared hit=43025\n> -> Nested Loop (cost=149.75..662.15 rows=1 width=349) (actual\n> time=316.518..411.656 rows=36 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid,\n> r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\n> r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=43025\n> -> Nested Loop (cost=149.32..649.49 rows=1 width=349)\n> (actual time=314.704..326.873 rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier, r.frompointguid,\n> r.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\n> r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Buffers: shared hit=15788\n> -> Bitmap Heap Scan on navdata.point dp\n> (cost=5.75..358.28 rows=2 width=16) (actual time=123.267..123.310 rows=1\n> loops=1)\n> Output: dp.uid, dp.guid, dp.airportguid,\n> dp.identifier, dp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir,\n> dp.navaidfrequency, dp.elevation, dp.magneticvariance, dp.startvalid,\n> dp.endvalid, dp.revisionuid, dp.source, dp.leveltype\n> Recheck Cond: ((dp.identifier)::text ~~*\n> '%EGLL%'::text)\n> Filter: (((dp.type)::text = ANY\n> ('{PA}'::text[])) AND 
(tsrange(dp.startvalid, dp.endvalid) @>\n> (now())::timestamp without time zone))\n> Rows Removed by Filter: 6\n> Heap Blocks: exact=7\n> Buffers: shared hit=6786\n> -> Bitmap Index Scan on idx_point_08\n> (cost=0.00..5.75 rows=178 width=0) (actual time=123.232..123.232 rows=7\n> loops=1)\n> Index Cond: ((dp.identifier)::text ~~*\n> '%EGLL%'::text)\n> Buffers: shared hit=6779\n> -> Bitmap Heap Scan on navdata.route r\n> (cost=143.57..145.60 rows=1 width=349) (actual time=191.429..201.176\n> rows=2439 loops=1)\n> Output: r.uid, r.routeidentifier,\n> r.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation,\n> r.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\n> r.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\n> r.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n> Recheck Cond: ((r.topointguid = dp.guid) AND\n> (tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n> Filter: ((r.routeidentifier)::text ~~* '%'::text)\n> Heap Blocks: exact=1834\n> Buffers: shared hit=9002\n> -> BitmapAnd (cost=143.57..143.57 rows=1\n> width=0) (actual time=191.097..191.097 rows=0 loops=1)\n> Buffers: shared hit=7168\n> -> Bitmap Index Scan on idx_route_03\n> (cost=0.00..6.66 rows=298 width=0) (actual time=2.349..2.349 rows=15148\n> loops=1)\n> Index Cond: (r.topointguid =\n> dp.guid)\n> Buffers: shared hit=63\n> -> Bitmap Index Scan on idx_route_07\n> (cost=0.00..135.49 rows=4693 width=0) (actual time=186.640..186.640\n> rows=579054 loops=1)\n> Index Cond: (tsrange(r.startvalid,\n> r.endvalid) @> (now())::timestamp without time zone)\n> Buffers: shared hit=7105\n> -> Index Scan using cidx_point on navdata.point op\n> (cost=0.43..12.65 rows=1 width=16) (actual time=0.033..0.033 rows=0\n> loops=2439)\n> Output: op.uid, op.guid, op.airportguid,\n> op.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir,\n> op.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid,\n> op.endvalid, op.revisionuid, op.source, op.leveltype\n> Index Cond: (op.guid = r.frompointguid)\n> Filter: (((op.type)::text = ANY ('{PA}'::text[])) AND\n> (concat(op.identifier, '') ~~* '%LOWW%'::text) AND (tsrange(op.startvalid,\n> op.endvalid) @> (now())::timestamp without time zone))\n> Rows Removed by Filter: 8\n> Buffers: shared hit=27237\n> Planning time: 3.381 ms\n> Execution time: 411.944 ms\n>\n> * We are back into acceptable margin. *\n>\n> ############################################################\n> ####################\n> # Postgres version\n> ############################################################\n> ####################\n>\n> PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1),\n> compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit\n>\n> ############################################################\n> ####################\n> # Schema\n> ############################################################\n> ####################\n>\n> Currently, our tables are heavily indexed due to refactoring process and\n> need to\n> work with old and new version of software. 
Once we are finished, lot of\n> indexes shell be removed.\n>\n> CREATE TABLE navdata.point (\n> uid uuid NOT NULL,\n> guid uuid NULL,\n> airportguid uuid NULL,\n> identifier varchar(5) NULL,\n> icaocode varchar(2) NULL,\n> \"name\" varchar(255) NULL,\n> \"type\" varchar(2) NULL,\n> coordinates geography NULL,\n> fir varchar(5) NULL,\n> navaidfrequency float8 NULL,\n> elevation float8 NULL,\n> magneticvariance float8 NULL,\n> startvalid timestamp NULL,\n> endvalid timestamp NULL,\n> revisionuid uuid NULL,\n> \"source\" varchar(4) NULL,\n> leveltype varchar(1) NULL,\n> CONSTRAINT point_pkey PRIMARY KEY (uid)\n> )\n> WITH (\n> OIDS=FALSE\n> ) ;\n> CREATE INDEX cidx_point ON navdata.point USING btree (guid) ;\n> CREATE INDEX idx_point_01 ON navdata.point USING btree (identifier, guid) ;\n> CREATE INDEX idx_point_03 ON navdata.point USING btree (identifier) ;\n> CREATE INDEX idx_point_04 ON navdata.point USING gist (coordinates) WHERE\n> (airportguid IS NULL) ;\n> CREATE INDEX idx_point_05 ON navdata.point USING btree (identifier\n> text_pattern_ops) ;\n> CREATE INDEX idx_point_06 ON navdata.point USING btree (airportguid) ;\n> CREATE INDEX idx_point_07 ON navdata.point USING gist (coordinates) ;\n> CREATE INDEX idx_point_08 ON navdata.point USING gist (identifier\n> gist_trgm_ops) ;\n> CREATE INDEX idx_point_09 ON navdata.point USING btree (type) ;\n> CREATE INDEX idx_point_10 ON navdata.point USING gist (name gist_trgm_ops)\n> ;\n> CREATE INDEX idx_point_11 ON navdata.point USING btree (type, identifier\n> text_pattern_ops) ;\n> CREATE INDEX idx_point_12 ON navdata.point USING gist\n> (upper((identifier)::text) gist_trgm_ops) ;\n> CREATE INDEX idx_point_13 ON navdata.point USING gist (upper((name)::text)\n> gist_trgm_ops) ;\n> CREATE INDEX idx_point_tmp ON navdata.point USING btree (leveltype) ;\n> CREATE INDEX point_validity_idx ON navdata.point USING gist\n> (tsrange(startvalid, endvalid)) ;\n>\n> CREATE TABLE navdata.route (\n> uid uuid NOT NULL,\n> routeidentifier varchar(3) NULL,\n> frompointguid uuid NULL,\n> topointguid uuid NULL,\n> sidguid uuid NULL,\n> starguid uuid NULL,\n> routeinformation varchar NULL,\n> routetype varchar(5) NULL,\n> startvalid timestamp NULL,\n> endvalid timestamp NULL,\n> revisionuid uuid NULL,\n> \"source\" varchar(4) NULL,\n> fufi uuid NULL,\n> grounddistance_excl_sidstar float8 NULL,\n> from_first bool NULL,\n> dep_airports varchar NULL,\n> dst_airports varchar NULL,\n> tag varchar NULL,\n> expanded_route_string varchar NULL,\n> route_geometry geometry NULL,\n> CONSTRAINT route_pkey PRIMARY KEY (uid)\n> )\n> WITH (\n> OIDS=FALSE\n> ) ;\n> CREATE INDEX idx_route_01 ON navdata.route USING btree (uid) ;\n> CREATE INDEX idx_route_02 ON navdata.route USING btree (frompointguid) ;\n> CREATE INDEX idx_route_03 ON navdata.route USING btree (topointguid) ;\n> CREATE INDEX idx_route_04 ON navdata.route USING btree (fufi) ;\n> CREATE INDEX idx_route_05 ON navdata.route USING btree (source,\n> routeidentifier, startvalid, endvalid) ;\n> CREATE INDEX idx_route_06 ON navdata.route USING gist (routeinformation\n> gist_trgm_ops) ;\n> CREATE INDEX idx_route_07 ON navdata.route USING gist (tsrange(startvalid,\n> endvalid)) ;\n> CREATE INDEX idx_route_09 ON navdata.route USING gist (routeidentifier\n> gist_trgm_ops) ;\n>\n> ############################################################\n> ####################\n> # Table metadata\n> ############################################################\n> ####################\n>\n> relname |relpages |reltuples |relallvisible |relkind 
|relnatts\n> |relhassubclass |reloptions |pg_table_size |\n> --------|---------|----------|--------------|--------|------\n> ---|---------------|-----------|--------------|\n> route |36600 |938573 |36595 |r |22\n> |false |NULL |299941888 |\n> point |95241 |2156454 |95241 |r |17\n> |false |NULL |780460032 |\n>\n>\n> ############################################################\n> ####################\n> # History\n> ############################################################\n> ####################\n>\n> This is a new query, because data layer is being refactored.\n>\n> ############################################################\n> ####################\n> # Hardware\n> ############################################################\n> ####################\n>\n> Postgres is running on virtual machine.\n>\n> * CPU: 8 cores assigned\n>\n> processor : 7\n> vendor_id : AuthenticAMD\n> cpu family : 21\n> model : 2\n> model name : AMD Opteron(tm) Processor 6380\n> stepping : 0\n> microcode : 0xffffffff\n> cpu MHz : 2500.020\n> cache size : 2048 KB\n> physical id : 0\n> siblings : 8\n> core id : 7\n> cpu cores : 8\n> apicid : 7\n> initial apicid : 7\n> fpu : yes\n> fpu_exception : yes\n> cpuid level : 13\n> wp : yes\n> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\n> cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm\n> rep_good nopl extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 popcnt\n> aes xsave avx f16c hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a\n> misalignsse 3dnowprefetch osvw xop fma4 vmmcall bmi1 arat\n> bugs : fxsave_leak sysret_ss_attrs\n> bogomips : 4998.98\n> TLB size : 1536 4K pages\n> clflush size : 64\n> cache_alignment : 64\n> address sizes : 42 bits physical, 48 bits virtual\n> power management:\n>\n>\n> * Memory: 32 GB\n>\n> * Disk: Should be ssd, but unfortunattely I don't know which model.\n>\n> ############################################################\n> ####################\n> # bonnie++\n> ############################################################\n> ####################\n>\n> Using uid:111, gid:118.\n> format_version,bonnie_version,name,concurrency,seed,file_\n> size,io_chunk_size,putc,putc_cpu,put_block,put_block_cpu,\n> rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,\n> seeks,seeks_cpu,num_files,max_size,min_size,num_dirs,file_\n> chunk_size,seq_create,seq_create_cpu,seq_stat,seq_stat_\n> cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_\n> stat,ran_stat_cpu,ran_del,ran_del_cpu,putc_latency,put_\n> block_latency,rewrite_latency,getc_latency,get_block_\n> latency,seeks_latency,seq_create_latency,seq_stat_\n> latency,seq_del_latency,ran_create_latency,ran_stat_\n> latency,ran_del_latency\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,133872,20,\n> 96641,17,,,469654,41,+++++,+++,,,,,,,,,,,,,,,,,,,2117ms,\n> 2935ms,,270ms,4760us,,,,,,\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,190192,26,\n> 143595,23,,,457357,37,+++++,+++,,,,,,,,,,,,,,,,,,,595ms,\n> 2201ms,,284ms,6110us,,,,,,\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,542936,81,\n> 153952,25,,,446369,37,+++++,+++,,,,,,,,,,,,,,,,,,,347ms,\n> 
3678ms,,101ms,5632us,,,,,,\n> Writing intelligently...done\n> Rewriting...done\n> Reading intelligently...done\n> start 'em...done...done...done...done...done...\n> 1.97,1.97,v6565testdb01,1,1529491960,63G,,,,244155,33,\n> 157543,26,,,441115,38,16111,495,,,,,,,,,,,,,,,,,,,638ms,\n> 2667ms,,195ms,9068us,,,,,,\n>\n>\n> ############################################################\n> ####################\n> # Maintenance Setup\n> ############################################################\n> ####################\n>\n> Autovacuum: yes\n>\n> ############################################################\n> ####################\n> # postgresql.conf\n> ############################################################\n> ####################\n>\n> max_connections = 4096 # (change requires restart)\n> shared_buffers = 8GB # (change requires restart)\n> huge_pages = try # on, off, or try\n> work_mem = 4MB # min 64kB\n> maintenance_work_mem = 2GB # min 1MB\n> dynamic_shared_memory_type = posix # the default is the first option\n> shared_preload_libraries = 'pg_stat_statements'\n> pg_stat_statements.max = 10000\n> pg_stat_statements.track = all\n> wal_level = replica # minimal, replica, or logical\n> wal_buffers = 16MB\n> max_wal_size = 2GB\n> min_wal_size = 1GB\n> checkpoint_completion_target = 0.7\n> max_wal_senders = 4 # max number of walsender processes\n> random_page_cost = 2.0\n> effective_cache_size = 24GB\n> default_statistics_target = 100 # range 1-10000\n>\n> ############################################################\n> ####################\n> # Statistics\n> ############################################################\n> ####################\n>\n> frac_mcv |tablename |attname |n_distinct |n_mcv\n> |n_hist |\n> --------------|----------|----------------------------|-----\n> --------|------|-------|\n> |route |uid |-1 |\n> |101 |\n> 0.969699979 |route |routeidentifier |78 |2\n> |76 |\n> 0.44780004 |route |frompointguid |2899 |100\n> |101 |\n> 0.441700101 |route |topointguid |3154 |100\n> |101 |\n> 0.0368666835 |route |sidguid |2254 |100\n> |101 |\n> 0.0418333709 |route |starguid |3182 |100\n> |101 |\n> 0.0515667647 |route |routeinformation |-0.335044593 |100\n> |101 |\n> 0.0528000034 |route |routetype |3 |3\n> | |\n> 0.755399942 |route |startvalid |810 |100\n> |101 |\n> 0.962899983 |route |endvalid |22 |3\n> |19 |\n> 0.00513333362 |route |revisionuid |-0.809282064 |2\n> |101 |\n> 0.97906667 |route |source |52 |4\n> |48 |\n> |route |fufi |0 |\n> | |\n> 0.00923334155 |route |grounddistance_excl_sidstar |-0.552667081 |100\n> |101 |\n> 0.0505000018 |route |from_first |2 |2\n> | |\n> 0.0376333408 |route |dep_airports |326 |52\n> |101 |\n> 0.0367666557 |route |dst_airports |388 |57\n> |101 |\n> |point |uid |-1 |\n> |101 |\n> 0.00185333542 |point |guid |-0.164169937 |100\n> |101 |\n> 0.0573133379 |point |airportguid |23575 |100\n> |101 |\n> 0.175699964 |point |identifier |209296 |1000\n> |1001 |\n> 0.754063368 |point |icaocode |254 |41\n> |101 |\n> 0.00352332788 |point |name |37853 |100\n> |101 |\n> 0.999230027 |point |type |11 |6\n> |5 |\n> |point |coordinates |-1 |\n> | |\n> 0.607223332 |point |fir |281 |62\n> |101 |\n> 0.0247033276 |point |navaidfrequency |744 |100\n> |101 |\n> 0.0320866667 |point |elevation |14013 |100\n> |101 |\n> 0.0011433335 |point |magneticvariance |-0.587834716 |100\n> |101 |\n> 0.978270054 |point |startvalid |35 |12\n> |23 |\n> 0.978176594 |point |endvalid |30 |11\n> |19 |\n> 0.978123426 |point |revisionuid |62 |12\n> |50 |\n> 0.99999994 |point |source |3 |3\n> | |\n> 
0.777056634 |point |leveltype |7 |7\n> | |\n>\n> ############################################################\n> ####################\n>\n> I am looking forward to your suggestions.\n>\n> Thanks in advance!\n>\n> Sasa Vilic\n>\n>\n\nHi Matthew,thank you for query response.There is no particular reason for using GIST instead of GIN. We only recently discovered pg_trgm so we are new to this. What I read is that GIN can be faster then GIST but it really depends on query and on amount of data. Nevertheless, both index are by magnitude order slower then btree index, right? I tried simple query on our production server (explain analyze select * from navdata.point where identifier like 'LOWW') where I am 100% sure there is SSD and we have random_page_cost = 1, and query itself takes 43 ms. That is not much of the difference compared to test server. What interest me, is whenever PG is aware of different costs for different index types. Given that there is also index on guid which is used on relationship, in our case it is always better to use that index and filter, then to use both indexes and BitmapAnd. Regarding test server, I believe that it is a SSD, but I will get confirmation for this. I tried changing random_page_cost on test server from 2.0 to 1.0 (that should be right value for SSD, right?) and also to 4.0 and I get same results. RegardsSasa VilicOn 20 June 2018 at 15:29, Matthew Hall <[email protected]> wrote:Is there a reason you used GIST on your pg_trgm indices and not GIN? In my tests and previous posts on here, it nearly always performs worse. Also, did you make sure if it's really SSD and set the random_page_cost accordingly?Matthew HallOn Jun 20, 2018, at 8:21 AM, Sasa Vilic <[email protected]> wrote:Hi everyone,we have a new query that performs badly with specific input parameters. Weget worst performance when input data is most restrictive. I have partiallyidentified a problem: it always happens when index scan is done in inner loopand index type is pg_trgm. We also noticed that for simple query( select * from point where identifier = 'LOWW' vs select * from point where identifier LIKE 'LOWW')the difference between btree index and pg_trgm index can be quite high:0.009 ms vs 32.0 ms.What I would like to know is whenever query planner is aware that some indextypes are more expensive the the others and whenever it can take that intoaccount?I will describe background first, then give you query and its analysis fordifferent parameters and in the end I will write about all required informationregarding setup (Postgres version, Schema, metadata, hardware, etc.)I would like to know whenever this is a bug in query planner or not and whatcould we do about it.################################################################################# Background################################################################################We have a database with navigational data for civil aviation.Current query is working on two tables: point and route.Point represents a navigational point on Earth and route describes a route between two points.Query that we have finds all routes between two set of points. A set is adynamically/loosely defined by pattern given by the user input. So for example if user wants to find all routes between international airports in Austria toward London Heathrow, he or she would use 'LOW%' as :from_point_identifier and 'EGLL' as :to_point_identifier. Please keep in mind that is a simple case,and that user is allowed to define search term any way he/she see it fit,i.e. 
"msg_date": "Wed, 20 Jun 2018 16:13:18 +0200",
"msg_from": "Sasa Vilic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query when pg_trgm is in inner lopp"
},
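For readers who want to reproduce the GIN-vs-GiST comparison discussed in the message above, here is a minimal sketch against the schema posted in this thread. The index name idx_point_08_gin is invented for illustration, and CONCURRENTLY is used so the build does not block writes; this is not something the thread participants actually ran.

-- pg_trgm provides both opclasses; the thread's idx_point_08 uses gist_trgm_ops.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX CONCURRENTLY idx_point_08_gin
    ON navdata.point USING gin (identifier gin_trgm_ops);

-- Compare the trigram index types on the same predicate.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM navdata.point WHERE identifier ILIKE '%LOWW%';

With both trigram indexes present the planner simply picks whichever it costs cheaper, so for a clean A/B comparison one would drop one of them or test on a copy of the table.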
{
"msg_contents": "On Wed, Jun 20, 2018 at 9:21 AM, Sasa Vilic <[email protected]> wrote:\n\n\n> Query that we have finds all routes between two set of points. A set is a\n> dynamically/loosely defined by pattern given by the user input. So for\n> example\n> if user wants to find all routes between international airports in Austria\n> toward London Heathrow, he or she would use 'LOW%' as\n> :from_point_identifier\n> and 'EGLL' as :to_point_identifier. Please keep in mind that is a simple\n> case,\n> and that user is allowed to define search term any way he/she see it fit,\n> i.e. '%OW%', 'EG%'.\n>\n\n\nLetting users do substring searches on airport codes in the middle of a\ncomplex query makes no sense. Do all airports with 'OW' in the middle of\nthem having something in common with each other? If people can't remember\nthe real airport code of the airport they are using, you should offer a\nlook-up tool which they can use to figure that out **before** hitting the\nmain query.\n\nBut taking for granted your weird use case, the most obvious improvement to\nthe PostgreSQL code that I can see is in the executor, not the planner.\nThere is no reason to recompute the bitmap on idx_point_08 each time\nthrough the nested loop, as the outcome of that scan doesn't depend on the\nouter tuple. Presumably the reason this happens is that it is being\n'BitmapAnd'ed with another bitmap index scan which does depend on the outer\ntuple, and it is just not smart enough to reuse the stable bitmap while\nrecomputing the parameterized one.\n\nCheers,\n\nJeff\n\nOn Wed, Jun 20, 2018 at 9:21 AM, Sasa Vilic <[email protected]> wrote: Query that we have finds all routes between two set of points. A set is adynamically/loosely defined by pattern given by the user input. So for example if user wants to find all routes between international airports in Austria toward London Heathrow, he or she would use 'LOW%' as :from_point_identifier and 'EGLL' as :to_point_identifier. Please keep in mind that is a simple case,and that user is allowed to define search term any way he/she see it fit,i.e. '%OW%', 'EG%'.Letting users do substring searches on airport codes in the middle of a complex query makes no sense. Do all airports with 'OW' in the middle of them having something in common with each other? If people can't remember the real airport code of the airport they are using, you should offer a look-up tool which they can use to figure that out **before** hitting the main query.But taking for granted your weird use case, the most obvious improvement to the PostgreSQL code that I can see is in the executor, not the planner. There is no reason to recompute the bitmap on idx_point_08 each time through the nested loop, as the outcome of that scan doesn't depend on the outer tuple. Presumably the reason this happens is that it is being 'BitmapAnd'ed with another bitmap index scan which does depend on the outer tuple, and it is just not smart enough to reuse the stable bitmap while recomputing the parameterized one.Cheers,Jeff",
"msg_date": "Wed, 20 Jun 2018 10:53:52 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query when pg_trgm is in inner lopp"
},
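One way to act on Jeff's observation today, without waiting for an executor change, is to resolve each airport pattern once before touching the route table. This is only a sketch using the table and column names from the thread, not something proposed by the participants; on PostgreSQL 9.6 a CTE is an optimization fence, so each pg_trgm lookup runs exactly once instead of once per outer row.

WITH op AS (
    SELECT guid FROM navdata.point
    WHERE identifier ILIKE '%LOWW%'
      AND type = 'PA'
      AND tsrange(startvalid, endvalid) @> now()::timestamp
), dp AS (
    SELECT guid FROM navdata.point
    WHERE identifier ILIKE '%EGLL%'
      AND type = 'PA'
      AND tsrange(startvalid, endvalid) @> now()::timestamp
)
SELECT r.*
FROM navdata.route r
JOIN op ON r.frompointguid = op.guid
JOIN dp ON r.topointguid = dp.guid
WHERE tsrange(r.startvalid, r.endvalid) @> now()::timestamp
ORDER BY r.routeidentifier
LIMIT 1000;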
{
"msg_contents": "Hi Jeff,\n\nthe way I see it, is it a poor man's implementation of 'full-text' search.\nI just discussed it with out navdata team and we might be redefine how do\nwe do the search. Regardless of that, I think that issue with Postgres\nstands.\n\nI tried now to see, how the query would behave if we always had\nleft-anchored pattern and that would allow us to stick to btree indexes.\n\n* Query\n------------------------------------\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nSELECT\n r.*\nFROM navdata.route r\n INNER JOIN navdata.point op ON r.frompointguid = op.guid\n INNER JOIN navdata.point dp ON r.topointguid = dp.guid\nWHERE\n r.routeidentifier LIKE '%'\n AND tsrange(r.startvalid, r.endvalid) @> now() :: TIMESTAMP\n AND (NOT false :: BOOLEAN OR r.source = ANY (ARRAY[] :: VARCHAR []))\n AND op.identifier LIKE 'LOW%'\n AND op.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(op.startvalid, op.endvalid) @> now() :: TIMESTAMP\n AND dp.identifier LIKE '%' :: VARCHAR\n AND dp.type = ANY (ARRAY['PA'] :: VARCHAR [])\n AND tsrange(dp.startvalid, dp.endvalid) @> now() :: TIMESTAMP\nORDER BY r.routeidentifier\nLIMIT 1000;\n\n* Analysis\n---------------------------------------\nLimit (cost=646.48..646.48 rows=1 width=349) (actual\ntime=1375.359..1376.447 rows=1000 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Buffers: shared hit=79276\n -> Sort (cost=646.48..646.48 rows=1 width=349) (actual\ntime=1375.356..1375.785 rows=1000 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid, r.topointguid,\nr.sidguid, r.starguid, r.routeinformation, r.routetype, r.startvalid,\nr.endvalid, r.revisionuid, r.source, r.fufi, r.grounddistance_excl_sidstar,\nr.from_first, r.dep_airports, r.dst_airports, r.tag,\nr.expanded_route_string, r.route_geometry\n Sort Key: r.routeidentifier\n Sort Method: top-N heapsort Memory: 321kB\n Buffers: shared hit=79276\n -> Nested Loop (cost=250.30..646.47 rows=1 width=349) (actual\ntime=202.826..1372.178 rows=2596 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=79276\n -> Nested Loop (cost=249.87..621.96 rows=1 width=349)\n(actual time=202.781..1301.135 rows=2602 loops=1)\n Output: r.uid, r.routeidentifier, r.frompointguid,\nr.topointguid, r.sidguid, r.starguid, r.routeinformation, r.routetype,\nr.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Buffers: shared hit=51974\n -> Index Scan using idx_point_11 on navdata.point op\n(cost=0.43..107.02 rows=2 width=16) (actual time=0.055..0.214 rows=7\nloops=1)\n Output: op.uid, op.guid, op.airportguid,\nop.identifier, op.icaocode, op.name, op.type, op.coordinates, op.fir,\nop.navaidfrequency, op.elevation, op.magneticvariance, op.startvalid,\nop.endvalid, op.revisionuid, op.source, op.leveltype\n Index Cond: (((op.type)::text = ANY\n('{PA}'::text[])) AND ((op.identifier)::text ~>=~ 'LOW'::text) 
AND\n((op.identifier)::text ~<~ 'LOX'::text))\n Filter: (((op.identifier)::text ~~ 'LOW%'::text)\nAND (tsrange(op.startvalid, op.endvalid) @> (now())::timestamp without time\nzone))\n Rows Removed by Filter: 42\n Buffers: shared hit=52\n -> Bitmap Heap Scan on navdata.route r\n(cost=249.44..257.45 rows=2 width=349) (actual time=183.255..185.491\nrows=372 loops=7)\n Output: r.uid, r.routeidentifier,\nr.frompointguid, r.topointguid, r.sidguid, r.starguid, r.routeinformation,\nr.routetype, r.startvalid, r.endvalid, r.revisionuid, r.source, r.fufi,\nr.grounddistance_excl_sidstar, r.from_first, r.dep_airports,\nr.dst_airports, r.tag, r.expanded_route_string, r.route_geometry\n Recheck Cond: ((r.frompointguid = op.guid) AND\n(tsrange(r.startvalid, r.endvalid) @> (now())::timestamp without time zone))\n Filter: ((r.routeidentifier)::text ~~ '%'::text)\n Heap Blocks: exact=2140\n Buffers: shared hit=51922\n -> BitmapAnd (cost=249.44..249.44 rows=2\nwidth=0) (actual time=183.197..183.197 rows=0 loops=7)\n Buffers: shared hit=49782\n -> Bitmap Index Scan on idx_route_02\n(cost=0.00..10.96 rows=338 width=0) (actual time=0.162..0.162 rows=884\nloops=7)\n Index Cond: (r.frompointguid =\nop.guid)\n Buffers: shared hit=47\n -> Bitmap Index Scan on idx_route_07\n(cost=0.00..237.01 rows=4896 width=0) (actual time=182.858..182.858\nrows=579062 loops=7)\n Index Cond: (tsrange(r.startvalid,\nr.endvalid) @> (now())::timestamp without time zone)\n Buffers: shared hit=49735\n -> Index Scan using cidx_point on navdata.point dp\n(cost=0.43..24.50 rows=1 width=16) (actual time=0.008..0.025 rows=1\nloops=2602)\n Output: dp.uid, dp.guid, dp.airportguid, dp.identifier,\ndp.icaocode, dp.name, dp.type, dp.coordinates, dp.fir, dp.navaidfrequency,\ndp.elevation, dp.magneticvariance, dp.startvalid, dp.endvalid,\ndp.revisionuid, dp.source, dp.leveltype\n Index Cond: (dp.guid = r.topointguid)\n Filter: (((dp.type)::text = ANY ('{PA}'::text[])) AND\n((dp.identifier)::text ~~ '%'::text) AND (tsrange(dp.startvalid,\ndp.endvalid) @> (now())::timestamp without time zone))\n Rows Removed by Filter: 6\n Buffers: shared hit=27302\nPlanning time: 12.202 ms\nExecution time: 1376.912 ms\n\nWhy think is weird here is this:\n\n -> BitmapAnd (cost=249.44..249.44 rows=2 width=0)\n(actual time=183.197..183.197 rows=0 loops=7)\n Buffers: shared hit=49782\n -> Bitmap Index Scan on idx_route_02\n(cost=0.00..10.96 rows=338 width=0) (actual time=0.162..0.162 rows=884\nloops=7)\n Index Cond: (r.frompointguid =\nop.guid)\n Buffers: shared hit=47\n -> Bitmap Index Scan on idx_route_07\n(cost=0.00..237.01 rows=4896 width=0) (actual time=182.858..182.858\nrows=579062 loops=7)\n Index Cond: (tsrange(r.startvalid,\nr.endvalid) @> (now())::timestamp without time zone)\n Buffers: shared hit=49735\n\nWhy would postgres choose to use second index idx_route_07 at all when row\nestimate is way higher then on idx_route_02? Wouldn't it be better just to\nuse one index with lower number of estimated rows and then filter?\n\nThanks\nSasa\n\nOn 20 June 2018 at 16:53, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Jun 20, 2018 at 9:21 AM, Sasa Vilic <[email protected]> wrote:\n>\n>\n>> Query that we have finds all routes between two set of points. A set is a\n>> dynamically/loosely defined by pattern given by the user input. So for\n>> example\n>> if user wants to find all routes between international airports in\n>> Austria\n>> toward London Heathrow, he or she would use 'LOW%' as\n>> :from_point_identifier\n>> and 'EGLL' as :to_point_identifier. 
Please keep in mind that is a simple\n>> case,\n>> and that user is allowed to define search term any way he/she see it fit,\n>> i.e. '%OW%', 'EG%'.\n>>\n>\n>\n> Letting users do substring searches on airport codes in the middle of a\n> complex query makes no sense. Do all airports with 'OW' in the middle of\n> them having something in common with each other? If people can't remember\n> the real airport code of the airport they are using, you should offer a\n> look-up tool which they can use to figure that out **before** hitting the\n> main query.\n>\n> But taking for granted your weird use case, the most obvious improvement\n> to the PostgreSQL code that I can see is in the executor, not the planner.\n> There is no reason to recompute the bitmap on idx_point_08 each time\n> through the nested loop, as the outcome of that scan doesn't depend on the\n> outer tuple. Presumably the reason this happens is that it is being\n> 'BitmapAnd'ed with another bitmap index scan which does depend on the outer\n> tuple, and it is just not smart enough to reuse the stable bitmap while\n> recomputing the parameterized one.\n>\n> Cheers,\n>\n> Jeff\n>\n",
"msg_date": "Wed, 20 Jun 2018 17:38:34 +0200",
"msg_from": "Sasa Vilic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query when pg_trgm is in inner lopp"
}
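A quick way to probe the question raised above (why the planner bothers ANDing in idx_route_07 when idx_route_02 alone is far more selective) is to hide that index from the planner inside a transaction. This is a sketch for a test system only, since DROP INDEX holds an ACCESS EXCLUSIVE lock on navdata.route until the ROLLBACK; the uuid below is a placeholder, and substituting one of the real frompointguid values from the plans above gives a more realistic estimate.

BEGIN;
DROP INDEX navdata.idx_route_07;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM navdata.route r
WHERE r.frompointguid = '00000000-0000-0000-0000-000000000000'::uuid  -- placeholder guid
  AND tsrange(r.startvalid, r.endvalid) @> now()::timestamp;
ROLLBACK;  -- the index is restored untouched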
] |
[
{
"msg_contents": "Hello,\n\nThe following basic inner join is taking too much time for me. (I’m using count(videos.id <http://videos.id/>) instead of count(*) because my actual query looks different, but I simplified it here to the essence).\nI’ve tried following random people's suggestions and adjusting the random_page_cost(decreasing it from 4 to 1.1) without a stable improvement. Any hints on what is wrong here? Thank you.\n\nThe query\n\nSELECT COUNT(videos.id) FROM videos JOIN accounts ON accounts.channel_id = videos.channel_id;\n\nThe accounts table has 744 rows, videos table has 2.2M rows, the join produces 135k rows.\n\nRunning on Amazon RDS, with default 10.1 parameters\n\n version\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n\nExecution plan https://explain.depesz.com/s/gf7 <https://explain.depesz.com/s/gf7>\n\nStructure and statistics of the tables involved\n\n=> \\d videos\n Table \"public.videos\"\n Column | Type | Collation | Nullable | Default\n------------------------+-----------------------------+-----------+----------+---------------------------------------------------\n id | bigint | | not null | nextval('videos_id_seq'::regclass)\n vendor_id | character varying | | not null |\n channel_id | bigint | | |\n published_at | timestamp without time zone | | |\n title | text | | |\n description | text | | |\n thumbnails | jsonb | | |\n tags | character varying[] | | |\n category_id | character varying | | |\n default_language | character varying | | |\n default_audio_language | character varying | | |\n duration | integer | | |\n stereoscopic | boolean | | |\n hd | boolean | | |\n captioned | boolean | | |\n licensed | boolean | | |\n projection | character varying | | |\n privacy_status | character varying | | |\n license | character varying | | |\n embeddable | boolean | | |\n terminated_at | timestamp without time zone | | |\n created_at | timestamp without time zone | | not null |\n updated_at | timestamp without time zone | | not null |\n featured_game_id | bigint | | |\nIndexes:\n \"videos_pkey\" PRIMARY KEY, btree (id)\n \"index_videos_on_vendor_id\" UNIQUE, btree (vendor_id)\n \"index_videos_on_channel_id\" btree (channel_id)\n \"index_videos_on_featured_game_id\" btree (featured_game_id)\nForeign-key constraints:\n \"fk_rails_257f68ae55\" FOREIGN KEY (channel_id) REFERENCES channels(id)\n \"fk_rails_ce1b3e10b0\" FOREIGN KEY (featured_game_id) REFERENCES games(id)\nReferenced by:\n TABLE \"video_fetch_statuses\" CONSTRAINT \"fk_rails_3bfdf013b8\" FOREIGN KEY (video_id) REFERENCES videos(id)\n TABLE \"video_daily_facts\" CONSTRAINT \"fk_rails_dc0eca9ebb\" FOREIGN KEY (video_id) REFERENCES videos(id)\n\n\n=> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='videos’;\n\n relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size\n-----------------------+----------+-------------+---------------+---------+----------+----------------+------------+---------------\n videos | 471495 | 2.25694e+06 | 471389 | r | 24 | f | | 4447764480\n\n\n=> SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='channel_id' AND 
tablename='videos' ORDER BY 1 DESC;\n\n frac_mcv | tablename | attname | n_distinct | n_mcv | n_hist\n----------+-----------------------+----------------+------------+-------+--------\n 0.1704 | videos | channel_id | 1915 | 100 | 101\n\n\n\n=> \\d accounts\n Table \"public.accounts\"\n Column | Type | Collation | Nullable | Default\n----------------+-----------------------------+-----------+----------+--------------------------------------------------\n id | bigint | | not null | nextval('accounts_id_seq'::regclass)\n channel_id | bigint | | not null |\n refresh_token | character varying | | not null |\n created_at | timestamp without time zone | | not null |\n updated_at | timestamp without time zone | | not null |\nIndexes:\n \"accounts_pkey\" PRIMARY KEY, btree (id)\n \"index_accounts_on_channel_id\" UNIQUE, btree (channel_id)\n \"index_accounts_on_refresh_token\" UNIQUE, btree (refresh_token)\nForeign-key constraints:\n \"fk_rails_11d6d9bea2\" FOREIGN KEY (channel_id) REFERENCES channels(id)\n\n\n=> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='accounts';\n\n relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size\n----------------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n accounts | 23 | 744 | 23 | r | 5 | f | | 229376\n\n\n=> SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='channel_id' AND tablename='accounts' ORDER BY 1 DESC;\n\n frac_mcv | tablename | attname | n_distinct | n_mcv | n_hist\n----------+----------------------+----------------+------------+-------+--------\n | accounts | channel_id | -1 | | 101\n",
"msg_date": "Mon, 25 Jun 2018 17:55:49 +0200",
"msg_from": "Roman Kushnir <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow join"
},
{
"msg_contents": "Hi,\n\nThanks for providing all this info :)\n\nOn Mon, Jun 25, 2018 at 05:55:49PM +0200, Roman Kushnir wrote:\n> Hello,\n> \n> The following basic inner join is taking too much time for me. (I’m using count(videos.id <http://videos.id/>) instead of count(*) because my actual query looks different, but I simplified it here to the essence).\n> I’ve tried following random people's suggestions and adjusting the random_page_cost(decreasing it from 4 to 1.1) without a stable improvement. Any hints on what is wrong here? Thank you.\n\n> Running on Amazon RDS, with default 10.1 parameters\n\nAll default ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nIt looks like nearly the entire time is spent reading this table:\n\n\tParallel Seq Scan on videos ... (ACTUAL TIME=0.687..55,555.774...)\n\tBuffers: shared hit=7138 read=464357\n\nPerhaps shared_buffers should be at least several times larger, and perhaps up\nto 4gb to keep the entire table in RAM. You could maybe also benefit from\nbetter device readahead (blockdev --setra or lvchange -r or\n/sys/block/sd?/queue/read_ahead_kb)\n\nAlso, it looks like there's a row count misestimate, which probably doesn't\nmatter for the query you sent, but maybe affects your larger query:\n\tHash Join (... ROWS=365,328 ... ) (... ROWS=45,307 ... )\n\nIf that matters, maybe it'd help to increase statistics on channel_id.\nActually, I see both tables have FK into channels.id:\n\n> \"fk_rails_11d6d9bea2\" FOREIGN KEY (channel_id) REFERENCES channels(id)\n> \"fk_rails_257f68ae55\" FOREIGN KEY (channel_id) REFERENCES channels(id)\n\nI don't see the definition of \"channels\" (and it looks like the query I put on\nthe wiki doesn't show null_frac), but I think that postgres since 9.6 should be\nable to infer good join statistics from the existence of the FKs. Maybe that\nonly works if you actually JOIN to the channels table (?). But if anything\nthat's only a 2ndary problem, if at all.\n\nJustin\n\n",
"msg_date": "Mon, 25 Jun 2018 11:45:22 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow join"
},
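As a rough sketch of the two server-side knobs suggested in the reply above (the statistics target of 1000 is only an illustrative value, and on Amazon RDS shared_buffers is normally changed through the instance's parameter group rather than with ALTER SYSTEM):

    -- let ANALYZE sample channel_id more finely, improving the join estimate
    ALTER TABLE videos ALTER COLUMN channel_id SET STATISTICS 1000;
    ANALYZE videos;

    -- give the buffer cache room for the hot part of the table
    -- (needs a restart; on RDS this goes through the parameter group)
    ALTER SYSTEM SET shared_buffers = '4GB';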
{
"msg_contents": "Hi Justin,\n\nThank you for your comments.\n\nAs you mentioned the size of shared buffers, my first thought was to just switch to a larger machine as this one only has 2 gigs of RAM. But then it occurred to me that the whole videos table is getting loaded into memory while only 2 small columns are actually used! So I created a covering index on videos (channel_id, id) and the query now completes in 190ms!\n\nThanks, you helped me a lot.\n\n\n> On Jun 25, 2018, at 6:45 PM, Justin Pryzby <[email protected]> wrote:\n> \n> Hi,\n> \n> Thanks for providing all this info :)\n> \n> On Mon, Jun 25, 2018 at 05:55:49PM +0200, Roman Kushnir wrote:\n>> Hello,\n>> \n>> The following basic inner join is taking too much time for me. (I’m using count(videos.id <http://videos.id/>) instead of count(*) because my actual query looks different, but I simplified it here to the essence).\n>> I’ve tried following random people's suggestions and adjusting the random_page_cost(decreasing it from 4 to 1.1) without a stable improvement. Any hints on what is wrong here? Thank you.\n> \n>> Running on Amazon RDS, with default 10.1 parameters\n> \n> All default ?\n> https://wiki.postgresql.org/wiki/Server_Configuration\n> \n> It looks like nearly the entire time is spent reading this table:\n> \n> \tParallel Seq Scan on videos ... (ACTUAL TIME=0.687..55,555.774...)\n> \tBuffers: shared hit=7138 read=464357\n> \n> Perhaps shared_buffers should be at least several times larger, and perhaps up\n> to 4gb to keep the entire table in RAM. You could maybe also benefit from\n> better device readahead (blockdev --setra or lvchange -r or\n> /sys/block/sd?/queue/read_ahead_kb)\n> \n> Also, it looks like there's a row count misestimate, which probably doesn't\n> matter for the query you sent, but maybe affects your larger query:\n> \tHash Join (... ROWS=365,328 ... ) (... ROWS=45,307 ... )\n> \n> If that matters, maybe it'd help to increase statistics on channel_id.\n> Actually, I see both tables have FK into channels.id:\n> \n>> \"fk_rails_11d6d9bea2\" FOREIGN KEY (channel_id) REFERENCES channels(id)\n>> \"fk_rails_257f68ae55\" FOREIGN KEY (channel_id) REFERENCES channels(id)\n> \n> I don't see the definition of \"channels\" (and it looks like the query I put on\n> the wiki doesn't show null_frac), but I think that postgres since 9.6 should be\n> able to infer good join statistics from the existence of the FKs. Maybe that\n> only works if you actually JOIN to the channels table (?). But if anything\n> that's only a 2ndary problem, if at all.\n> \n> Justin\n\n\nHi Justin,Thank you for your comments.As you mentioned the size of shared buffers, my first thought was to just switch to a larger machine as this one only has 2 gigs of RAM. But then it occurred to me that the whole videos table is getting loaded into memory while only 2 small columns are actually used! So I created a covering index on videos (channel_id, id) and the query now completes in 190ms!Thanks, you helped me a lot.On Jun 25, 2018, at 6:45 PM, Justin Pryzby <[email protected]> wrote:Hi,Thanks for providing all this info :)On Mon, Jun 25, 2018 at 05:55:49PM +0200, Roman Kushnir wrote:Hello,The following basic inner join is taking too much time for me. (I’m using count(videos.id <http://videos.id/>) instead of count(*) because my actual query looks different, but I simplified it here to the essence).I’ve tried following random people's suggestions and adjusting the random_page_cost(decreasing it from 4 to 1.1) without a stable improvement. 
Any hints on what is wrong here? Thank you.Running on Amazon RDS, with default 10.1 parametersAll default ?https://wiki.postgresql.org/wiki/Server_ConfigurationIt looks like nearly the entire time is spent reading this table: Parallel Seq Scan on videos ... (ACTUAL TIME=0.687..55,555.774...) Buffers: shared hit=7138 read=464357Perhaps shared_buffers should be at least several times larger, and perhaps upto 4gb to keep the entire table in RAM. You could maybe also benefit frombetter device readahead (blockdev --setra or lvchange -r or/sys/block/sd?/queue/read_ahead_kb)Also, it looks like there's a row count misestimate, which probably doesn'tmatter for the query you sent, but maybe affects your larger query: Hash Join (... ROWS=365,328 ... ) (... ROWS=45,307 ... )If that matters, maybe it'd help to increase statistics on channel_id.Actually, I see both tables have FK into channels.id: \"fk_rails_11d6d9bea2\" FOREIGN KEY (channel_id) REFERENCES channels(id) \"fk_rails_257f68ae55\" FOREIGN KEY (channel_id) REFERENCES channels(id)I don't see the definition of \"channels\" (and it looks like the query I put onthe wiki doesn't show null_frac), but I think that postgres since 9.6 should beable to infer good join statistics from the existence of the FKs. Maybe thatonly works if you actually JOIN to the channels table (?). But if anythingthat's only a 2ndary problem, if at all.Justin",
"msg_date": "Mon, 25 Jun 2018 21:39:32 +0200",
"msg_from": "Roman Kushnir <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow join"
},
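The index Roman describes is not spelled out in the thread; on PostgreSQL 10, which has no INCLUDE clause yet, it would be a plain two-column b-tree along these lines (the index name here is made up):

    -- CONCURRENTLY avoids blocking writes while the index is built
    CREATE INDEX CONCURRENTLY index_videos_on_channel_id_and_id
        ON videos (channel_id, id);

With both referenced columns in the index, the count can be answered by an index-only scan instead of reading the wide heap rows, provided the visibility map is reasonably current (the relallvisible figures posted earlier suggest it is).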
{
"msg_contents": "Roman Kushnir wrote:\n> The following basic inner join is taking too much time for me. (I’m using count(videos.id)\n> instead of count(*) because my actual query looks different, but I simplified it here to the essence).\n> I’ve tried following random people's suggestions and adjusting the random_page_cost\n> (decreasing it from 4 to 1.1) without a stable improvement. Any hints on what is wrong here? Thank you.\n> \n> The query\n> \n> SELECT COUNT(videos.id) FROM videos JOIN accounts ON accounts.channel_id = videos.channel_id;\n> \n> The accounts table has 744 rows, videos table has 2.2M rows, the join produces 135k rows.\n> \n> Running on Amazon RDS, with default 10.1 parameters\n> \n> version\n> ---------------------------------------------------------------------------------------------------------\n> PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n> \n> Execution plan https://explain.depesz.com/s/gf7\n\nYour time is spent here:\n\n> -> Parallel Seq Scan on videos (cost=0.00..480898.90 rows=940390 width=16) (actual time=0.687..55555.774 rows=764042 loops=3)\n> Buffers: shared hit=7138 read=464357\n\n55 seconds to scan 3.5 GB is not so bad.\n\nWhat I wonder is how it is that you have less than two rows per table block.\nCould it be that the table is very bloated?\n\nIf you can, you could \"VACUUM (FULL) videos\" and see if that makes a difference.\nIf you can bring the table size down, it will speed up query performance.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n",
"msg_date": "Wed, 27 Jun 2018 10:19:18 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow join"
},
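A quick way to put numbers on that suspicion, and the suggested fix, might look like this (VACUUM FULL takes an ACCESS EXCLUSIVE lock and rewrites the whole table, so it needs a maintenance window):

    -- fewer than two rows per 8 kB page means either bloat or very wide rows
    SELECT relpages, reltuples,
           round(reltuples / relpages) AS rows_per_page,
           pg_size_pretty(pg_table_size('videos')) AS table_size
    FROM pg_class
    WHERE relname = 'videos';

    VACUUM (FULL, VERBOSE) videos;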
{
"msg_contents": "Hi Laurenz,\n\nYou’re right about the table being bloated, the videos.description column is large. I thought about moving it to a separate table, but having an index only on the columns used in the query seems to have compensated for that already.\nThank you.\n\n> On Jun 27, 2018, at 10:19 AM, Laurenz Albe <[email protected]> wrote:\n> \n> Roman Kushnir wrote:\n>> The following basic inner join is taking too much time for me. (I’m using count(videos.id)\n>> instead of count(*) because my actual query looks different, but I simplified it here to the essence).\n>> I’ve tried following random people's suggestions and adjusting the random_page_cost\n>> (decreasing it from 4 to 1.1) without a stable improvement. Any hints on what is wrong here? Thank you.\n>> \n>> The query\n>> \n>> SELECT COUNT(videos.id) FROM videos JOIN accounts ON accounts.channel_id = videos.channel_id;\n>> \n>> The accounts table has 744 rows, videos table has 2.2M rows, the join produces 135k rows.\n>> \n>> Running on Amazon RDS, with default 10.1 parameters\n>> \n>> version\n>> ---------------------------------------------------------------------------------------------------------\n>> PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n>> \n>> Execution plan https://explain.depesz.com/s/gf7\n> \n> Your time is spent here:\n> \n>> -> Parallel Seq Scan on videos (cost=0.00..480898.90 rows=940390 width=16) (actual time=0.687..55555.774 rows=764042 loops=3)\n>> Buffers: shared hit=7138 read=464357\n> \n> 55 seconds to scan 3.5 GB is not so bad.\n> \n> What I wonder is how it is that you have less than two rows per table block.\n> Could it be that the table is very bloated?\n> \n> If you can, you could \"VACUUM (FULL) videos\" and see if that makes a difference.\n> If you can bring the table size down, it will speed up query performance.\n> \n> Yours,\n> Laurenz Albe\n> -- \n> Cybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 27 Jun 2018 11:02:38 +0200",
"msg_from": "Roman Kushnir <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow join"
}
] |
[
{
"msg_contents": "I have strange issue with pgbench where it fails to execute step to create\nprimary keys when I specify scaling factor / transactions to some\nreasonable high value - eg. 8k.\n\nI am doing\n\n$ pgbench -i -s 8000 sampledb\n\n$ pgbench -c 10 -j 2 -t 8000 sampledb\n\nwhen I fails I see\n\n=====\nvacuum...\nstarting vacuum...end.\n====\n\nand good case\n\nvacuum...\nset primary keys...\ndone.\nstarting vacuum...end.\n\nOnly see this with a bit reasonable values for transactions / scaling\nfactor ( eg. 4k,8k )\n\nI suspected that there could be issue with memory/disk allocated to\nmachine, but for above case it will generate cca 150 GB of data and I have\ndisk with 350 GB of free space, and allocating 30GB of memory for\npostgresql machine ( tried with even more resources )\nI am using pgbench (PostgreSQL) 9.6.5 .\nIf you have any idea, please share , thank you\nElvir\n\n",
"msg_date": "Tue, 26 Jun 2018 14:21:00 +0200",
"msg_from": "=?UTF-8?B?RWx2aXIgS3VyacSH?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"set primary keys...\" is missing when using hight values for\n transactions / scaling factor with pgbench"
},
{
"msg_contents": "On 27 June 2018 at 00:21, Elvir Kurić <[email protected]> wrote:\n> I have strange issue with pgbench where it fails to execute step to create\n> primary keys when I specify scaling factor / transactions to some reasonable\n> high value - eg. 8k.\n\nThe primary keys are only created in -i mode, which can't be used in\nconjunction with the options you've mentioned.\n\npgbench will perform a vacuum before an actual test run, so perhaps\nthat's what you're seeing. You may also have noticed it also didn't\nperform the create tables and data population too without -i.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 27 Jun 2018 00:41:31 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"set primary keys...\" is missing when using hight values for\n transactions / scaling factor with pgbench"
},
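One way to confirm that the earlier pgbench -i run did create the primary keys is to ask the catalogs directly (the table names below are the ones pgbench creates by default):

    SELECT indrelid::regclass  AS pgbench_table,
           indexrelid::regclass AS primary_key
    FROM pg_index
    WHERE indisprimary
      AND indrelid IN ('pgbench_accounts'::regclass,
                       'pgbench_branches'::regclass,
                       'pgbench_tellers'::regclass);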
{
"msg_contents": "thank you David.\n\nI first run initialize step\n\n$ pgbench -i -s 8000 sampledb\n\nand then run step\n\n$ pgbench -c 10 -j 2 -t 8000 sampledb\n\nif I change -s/-t to lower value , eg, 100 above commands will show\n\n-- \nset primary keys...\ndone.\n-- -\nI am not getting it ,why it fails when I rise -t/-s to 8000 - with same\ncommands.\n\nDo you suggest that above is not correct way?\n\n\n\n\nOn Tue, Jun 26, 2018 at 2:41 PM, David Rowley <[email protected]>\nwrote:\n\n> On 27 June 2018 at 00:21, Elvir Kurić <[email protected]> wrote:\n> > I have strange issue with pgbench where it fails to execute step to\n> create\n> > primary keys when I specify scaling factor / transactions to some\n> reasonable\n> > high value - eg. 8k.\n>\n> The primary keys are only created in -i mode, which can't be used in\n> conjunction with the options you've mentioned.\n>\n> pgbench will perform a vacuum before an actual test run, so perhaps\n> that's what you're seeing. You may also have noticed it also didn't\n> perform the create tables and data population too without -i.\n>\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\n",
"msg_date": "Tue, 26 Jun 2018 14:51:10 +0200",
"msg_from": "=?UTF-8?B?RWx2aXIgS3VyacSH?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"set primary keys...\" is missing when using hight values for\n transactions / scaling factor with pgbench"
}
] |
[
{
"msg_contents": "Hi All,\r\n\r\nI’m having performance trouble with a particular set of queries. It goes a bit like this\r\n\r\n1) queue table is initially empty, and very narrow (1 bigint column)\r\n2) we insert ~30 million rows into queue table\r\n3) we do a join with queue table to delete from another table (delete from a using queue where a.id<http://a.id> = queue.id<http://queue.id>), but postgres stats say that queue table is empty, so it uses a nested loop over all 30 million rows, taking forever\r\n\r\nIf I kill the query in 3 and let it run again after autoanalyze has done it’s thing then it is very quick\r\n\r\nThis queue table is empty 99% of the time, and the query in 3 runs immediately after step 2. Is there any likelyhood that tweaking the autoanalyze params would help in this case? I don’t want to explicitly analyze the table between steps 2 and three either as there are other patterns of use where for example 0 rows are inserted in step 2 and this is expected to run very very quickly. Do I have any other options?\r\n\r\nPostgres 9.5 ATM, but an upgrade is in planning.\r\n\r\n\r\nThanks in advance\r\n\r\nDavid Wheeler\r\nSoftware developer\r\n\r\n[cid:2C4D0888-9F8B-463F-BD54-2B60A322210C]\r\n\r\n\r\nE [email protected]<mailto:[email protected]>\r\nD +61 3 9663 3554 W http://dgitsystems.com\r\nLevel 8, 620 Bourke St, Melbourne VIC 3000.",
"msg_date": "Wed, 27 Jun 2018 03:45:26 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queue table that quickly grows causes query planner to choose poor\n plan"
},
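For reference, the per-table autoanalyze tuning the question alludes to would look roughly like this (the threshold is only an example, and as the replies below note it still cannot fire inside an uncommitted transaction):

    -- trigger autoanalyze after a fixed number of row changes rather than
    -- a fraction of the previously tiny table
    ALTER TABLE queue SET (
        autovacuum_analyze_scale_factor = 0,
        autovacuum_analyze_threshold    = 10000
    );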
{
"msg_contents": "On Wed, Jun 27, 2018 at 03:45:26AM +0000, David Wheeler wrote:\n> Hi All,\n> \n> I’m having performance trouble with a particular set of queries. It goes a bit like this\n> \n> 1) queue table is initially empty, and very narrow (1 bigint column)\n> 2) we insert ~30 million rows into queue table\n> 3) we do a join with queue table to delete from another table (delete from a using queue where a.id<http://a.id> = queue.id<http://queue.id>), but postgres stats say that queue table is empty, so it uses a nested loop over all 30 million rows, taking forever\n\nIf it's within a transaction, then autovacuum couldn't begin to help until it\ncommits. (And if it's not, then it'll be slow on its own).\n\nIt seems to me that you can't rely on autoanalyze to finish between commiting\nstep 2 and beginning step 3. So you're left with options like: SET\nenable_nestloop=off; or manual ANALZYE (or I guess VACUUM would be adequate to\nset reltuples). Maybe you can conditionalize that: if inserted>9: ANALYZE queue.\n\nJustin\n\n",
"msg_date": "Wed, 27 Jun 2018 13:05:18 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queue table that quickly grows causes query planner to choose\n poor plan"
},
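A sketch of those two workarounds in SQL, using the table names from the original post (shown together for brevity; in practice one of the two is enough):

    BEGIN;
    -- ... bulk insert into queue happens here ...

    -- option 1: refresh the statistics before the join
    -- (ANALYZE, unlike VACUUM, is allowed inside a transaction block)
    ANALYZE queue;

    -- option 2: steer the planner away from the nested loop for this transaction only
    SET LOCAL enable_nestloop = off;

    DELETE FROM a USING queue WHERE a.id = queue.id;
    COMMIT;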
{
"msg_contents": "David Wheeler <[email protected]> writes:\n> I'm having performance trouble with a particular set of queries. It goes a bit like this\n\n> 1) queue table is initially empty, and very narrow (1 bigint column)\n> 2) we insert ~30 million rows into queue table\n> 3) we do a join with queue table to delete from another table (delete from a using queue where a.id<http://a.id> = queue.id<http://queue.id>), but postgres stats say that queue table is empty, so it uses a nested loop over all 30 million rows, taking forever\n\nAlthough there's no way to have any useful pg_statistic stats if you won't\ndo an ANALYZE, the planner nonetheless can see the table's current\nphysical size, and what it normally does is to multiply the last-reported\ntuple density (reltuples/relpages) by the current size. So if you're\ngetting an \"empty table\" estimate anyway, I have to suppose that the\ntable's state involves reltuples = 0 and relpages > 0. That's not a\ngood place to be in; it constrains the planner to believe that the table\nis in fact devoid of tuples, because that's what the last ANALYZE saw.\n\nNow, the initial state for a freshly-created or freshly-truncated table\nis *not* that. It is reltuples = 0 and relpages = 0, representing an\nundefined tuple density. Given that, the planner will make some guess\nabout average tuple size --- which is likely to be a very good guess,\nfor a table with only fixed-width columns --- and then compute a rowcount\nestimate using that plus the observed physical size.\n\nSo I think your problem comes from oscillating between really-empty\nand not-at-all-empty, and not using an idiomatic way of going back\nto the empty state. Have you tried using TRUNCATE instead of DELETE?\n\n> This queue table is empty 99% of the time, and the query in 3 runs immediately after step 2. Is there any likelyhood that tweaking the autoanalyze params would help in this case? I don't want to explicitly analyze the table between steps 2 and three either as there are other patterns of use where for example 0 rows are inserted in step 2 and this is expected to run very very quickly. Do I have any other options?\n\nI am not following your aversion to sticking an ANALYZE in there,\neither. It's not like inserting 30 million rows would be free.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Jun 2018 14:27:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queue table that quickly grows causes query planner to choose\n poor plan"
},
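The state Tom describes can be seen directly in the catalog, and TRUNCATE is the idiomatic way back to the "undefined density" state (table name taken from the thread):

    -- reltuples = 0 with relpages > 0 pins the planner to an empty-table estimate;
    -- a freshly created or truncated table has both columns at 0 instead
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'queue';

    -- emptying with TRUNCATE rather than DELETE returns to the undefined state
    TRUNCATE queue;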
{
"msg_contents": "Hi Tom,\r\n\r\nThanks for your reply, that’s very helpful and informative.\r\n\r\nAlthough there's no way to have any useful pg_statistic stats if you won't\r\ndo an ANALYZE, the planner nonetheless can see the table's current\r\nphysical size, and what it normally does is to multiply the last-reported\r\ntuple density (reltuples/relpages) by the current size. So if you're\r\ngetting an \"empty table\" estimate anyway, I have to suppose that the\r\ntable's state involves reltuples = 0 and relpages > 0. That's not a\r\ngood place to be in; it constrains the planner to believe that the table\r\nis in fact devoid of tuples, because that's what the last ANALYZE saw.\r\n\r\nThat appears to be correct. I assumed that because the table was analyzed and found to be empty then the autovacuum would probably have cleared all the tuples too, but that’s not the case.\r\n\r\n relpages | reltuples\r\n----------+-------------\r\n 0 | 2.33795e+06\r\n\r\nI am not following your aversion to sticking an ANALYZE in there,\r\neither. It's not like inserting 30 million rows would be free.\r\n\r\nThere are many usage profiles for these tables. Sometimes there will be a single insert of 30 million rows, sometimes there will be several inserts of up to 100 million rows each in different threads, sometimes there will be many (~80 000) inserts of 0 rows (for which an ANALYSE is simply a waste) - I don’t want to cause undue performance penalty on the other usage profiles.\r\n\r\nBut as Justin rightly points out I can selectively ANALYSE only when > x rows are inserted, which I think is the best way forward.\r\n\r\nDavid Wheeler\r\nSoftware developer\r\n\r\n[cid:2C4D0888-9F8B-463F-BD54-2B60A322210C]\r\n\r\n\r\nE [email protected]<mailto:[email protected]>\r\nD +61 3 9663 3554 W http://dgitsystems.com\r\nLevel 8, 620 Bourke St, Melbourne VIC 3000.\r\n\r\n\r\nOn 28 Jun 2018, at 4:27 am, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nDavid Wheeler <[email protected]<mailto:[email protected]>> writes:\r\nI'm having performance trouble with a particular set of queries. It goes a bit like this\r\n\r\n1) queue table is initially empty, and very narrow (1 bigint column)\r\n2) we insert ~30 million rows into queue table\r\n3) we do a join with queue table to delete from another table (delete from a using queue where a.id<http://a.id><http://a.id> = queue.id<http://queue.id><http://queue.id>), but postgres stats say that queue table is empty, so it uses a nested loop over all 30 million rows, taking forever\r\n\r\nAlthough there's no way to have any useful pg_statistic stats if you won't\r\ndo an ANALYZE, the planner nonetheless can see the table's current\r\nphysical size, and what it normally does is to multiply the last-reported\r\ntuple density (reltuples/relpages) by the current size. So if you're\r\ngetting an \"empty table\" estimate anyway, I have to suppose that the\r\ntable's state involves reltuples = 0 and relpages > 0. That's not a\r\ngood place to be in; it constrains the planner to believe that the table\r\nis in fact devoid of tuples, because that's what the last ANALYZE saw.\r\n\r\nNow, the initial state for a freshly-created or freshly-truncated table\r\nis *not* that. It is reltuples = 0 and relpages = 0, representing an\r\nundefined tuple density. 
Given that, the planner will make some guess\r\nabout average tuple size --- which is likely to be a very good guess,\r\nfor a table with only fixed-width columns --- and then compute a rowcount\r\nestimate using that plus the observed physical size.\r\n\r\nSo I think your problem comes from oscillating between really-empty\r\nand not-at-all-empty, and not using an idiomatic way of going back\r\nto the empty state. Have you tried using TRUNCATE instead of DELETE?\r\n\r\nThis queue table is empty 99% of the time, and the query in 3 runs immediately after step 2. Is there any likelyhood that tweaking the autoanalyze params would help in this case? I don't want to explicitly analyze the table between steps 2 and three either as there are other patterns of use where for example 0 rows are inserted in step 2 and this is expected to run very very quickly. Do I have any other options?\r\n\r\nI am not following your aversion to sticking an ANALYZE in there,\r\neither. It's not like inserting 30 million rows would be free.\r\n\r\nregards, tom lane",
"msg_date": "Thu, 28 Jun 2018 00:00:57 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queue table that quickly grows causes query planner to choose\n poor plan"
}
] |
[
{
"msg_contents": "Hi Team,\n\nWhile taking pgdump we are getting error message cache lookup failed for\nfunction 7418447. While trying select * from pg_proc where oid=7418447\nreturns zero rows. Please help us on this.\n\n",
"msg_date": "Wed, 27 Jun 2018 23:30:52 +0800",
"msg_from": "Rambabu V <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in PostgreSQL"
},
{
"msg_contents": "On Wed, Jun 27, 2018 at 8:31 AM Rambabu V <[email protected]> wrote:\n\n> Hi Team,\n>\n> While taking pgdump we are getting error message cache lookup failed for\n> function 7418447. While trying select * from pg_proc where oid=7418447\n> returns zero rows. Please help us on this.\n>\n\nSearching on that error messages yields a suggestion from Tom Lane to try\nthat select with enable_indexscan and enable_bitmapscan\nturned off.\n\nThis question is probably better asked in the \"General\" mailing list. Also\nplease include the OS and PostgreSQL version as well as any other\nobservations that may shed light on the issue.\n\nCheers,\nSteve\n\n",
"msg_date": "Wed, 27 Jun 2018 08:50:37 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PostgreSQL"
},
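Spelled out, the suggested check looks like this; disabling both scan types forces a sequential scan of pg_proc, so a damaged index on the catalog is taken out of the picture:

    SET enable_indexscan = off;
    SET enable_bitmapscan = off;
    SELECT oid, proname FROM pg_proc WHERE oid = 7418447;
    RESET enable_indexscan;
    RESET enable_bitmapscan;

If the row appears only with the scans disabled, the usual suspect is a corrupted index on pg_proc rather than the row itself being missing.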
{
"msg_contents": "OID is a temp-var that is only consistent within a Query Execution.\n\nhttps://www.postgresql.org/docs/current/static/datatype-oid.html\n\nOn Thu, Jun 28, 2018 at 12:50 AM, Steve Crawford <\[email protected]> wrote:\n\n>\n>\n> On Wed, Jun 27, 2018 at 8:31 AM Rambabu V <[email protected]> wrote:\n>\n>> Hi Team,\n>>\n>> While taking pgdump we are getting error message cache lookup failed for\n>> function 7418447. While trying select * from pg_proc where oid=7418447\n>> returns zero rows. Please help us on this.\n>>\n>\n> Searching on that error messages yields a suggestion from Tom Lane to try\n> that select with enable_indexscan and enable_bitmapscan\n> turned off.\n>\n> This question is probably better asked in the \"General\" mailing list. Also\n> please include the OS and PostgreSQL version as well as any other\n> observations that may shed light on the issue.\n>\n> Cheers,\n> Steve\n>\n\n\n\n-- \n-Joseph Curtin\nhttp://www.jbcurtin.com\n<http://www.jbcurtin.com/>github <http://goo.gl/d5uPH>\n@jbcurtin\n\nOID is a temp-var that is only consistent within a Query Execution.https://www.postgresql.org/docs/current/static/datatype-oid.htmlOn Thu, Jun 28, 2018 at 12:50 AM, Steve Crawford <[email protected]> wrote:On Wed, Jun 27, 2018 at 8:31 AM Rambabu V <[email protected]> wrote:Hi Team,While taking pgdump we are getting error message cache lookup failed for function 7418447. While trying select * from pg_proc where oid=7418447 returns zero rows. Please help us on this. Searching on that error messages yields a suggestion from Tom Lane to try that select with enable_indexscan and enable_bitmapscanturned off.This question is probably better asked in the \"General\" mailing list. Also please include the OS and PostgreSQL version as well as any other observations that may shed light on the issue.Cheers,Steve\n\n-- -Joseph Curtinhttp://www.jbcurtin.comgithub@jbcurtin",
"msg_date": "Thu, 28 Jun 2018 00:53:52 +0900",
"msg_from": "Joseph Curtin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PostgreSQL"
},
{
"msg_contents": "Rambabu V wrote:\n> While taking pgdump we are getting error message cache lookup failed for function 7418447.\n> While trying select * from pg_proc where oid=7418447 returns zero rows. Please help us on this. \n\nThat means that some catalog data are corrupted.\n\nIf possible, restore from a backup.\n\nDid you experiences any crashes recently? Is your storage reliable?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n",
"msg_date": "Thu, 28 Jun 2018 08:55:21 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PostgreSQL"
},
{
"msg_contents": "Hi,\n\nOn 2018-06-28 08:55:21 +0200, Laurenz Albe wrote:\n> Rambabu V wrote:\n> > While taking pgdump we are getting error message cache lookup failed for function 7418447.\n> > While trying select * from pg_proc where oid=7418447 returns zero rows. Please help us on this. \n> \n> That means that some catalog data are corrupted.\n\nIt does *NOT* have mean that. You can get such reports e.g. because\nthere was concurrent DDL. Most things are protected against via locks,\nbut there's enough of a window between getting the list of objects and\nlocking to cause issues for tables. And for functions it's fairly easy\nto get into trouble because there's a mismatch between the snapshot\npg_dump uses (a normal transactional snapshot) and the snapshot used to\ndeparse expressions etc (a fresh catalog snapshot that takes into\naccount concurrent commits).\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 28 Jun 2018 09:13:16 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in PostgreSQL"
}
] |
[
{
"msg_contents": "Hello,\n\nApologies if this is not in the correct forum for the non-urgent question\nthat follows.\n\nI was reading the pgconf 2018 ppt slides by Jonathan Katz (from slide 110\nonwards)\nhttp://www.pgcon.org/2018/schedule/attachments/480_realtime-application.pdf\n\nWhere is mentioned trigger overhead, and provided an alternative solution\n(logical replication slot monitoring).\n\nMy 2 part question is.\n\n1) Does anybody have any benchmarks re: trigger overhead/performance or have\nany experience to give a sort of indication, at all?\n\n2) Is anybody aware of any other clever alternatives, pg extensions or\ngithub code etc as an alternative to using triggers?\n\nThanks in advance,\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sun, 1 Jul 2018 02:31:24 -0700 (MST)",
"msg_from": "AJG <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger overhead/performance and alternatives?"
},
{
"msg_contents": "On 01.07.18 11:31, AJG wrote:\n> Where is mentioned trigger overhead, and provided an alternative solution\n> (logical replication slot monitoring).\n> \n> My 2 part question is.\n> \n> 1) Does anybody have any benchmarks re: trigger overhead/performance or have\n> any experience to give a sort of indication, at all?\n\nThat really depends on a lot of things, how you write the triggers, what\nthey do, etc. You should probably measure that yourself.\n\n> 2) Is anybody aware of any other clever alternatives, pg extensions or\n> github code etc as an alternative to using triggers?\n\nMaybe wal2json will give you a starting point.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 7 Jul 2018 09:47:47 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger overhead/performance and alternatives?"
}
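As a starting point for the wal2json route, a logical replication slot can be created and drained from plain SQL; this assumes the wal2json output plugin is installed, wal_level = logical, and the slot name here is arbitrary:

    SELECT * FROM pg_create_logical_replication_slot('audit_slot', 'wal2json');

    -- fetch (and consume) the change stream accumulated since the last call
    SELECT data FROM pg_logical_slot_get_changes('audit_slot', NULL, NULL);

    -- drop the slot when finished so the server does not retain WAL for it
    SELECT pg_drop_replication_slot('audit_slot');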
] |
[
{
"msg_contents": "Folks, I am at a client that has a Linux server (I do not know which\ndistribution), but for some time now a message about too many connections has\nbeen appearing and then it stops accepting new connections.\n\nWhile watching a talk, the speaker mentioned that if I changed this\nsetting to 50, PostgreSQL would automatically start killing the connections\nafter some time.\n\nI would appreciate my colleagues' opinion on this.\n\nThank you very much\n\n\n-- \nMello Júnior\n41.3252-3555\n\n",
"msg_date": "Wed, 4 Jul 2018 10:19:41 -0300",
"msg_from": "=?UTF-8?Q?Jos=C3=A9_Mello_J=C3=BAnior?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "tcp_keepalives"
},
{
"msg_contents": "Hi,\n\nSorry for my poor Portuguese; I didn't translate back since I cannot verify its\naccuracy.\n\nOn Wed, Jul 04, 2018 at 10:19:41AM -0300, José Mello Júnior wrote:\n> Pessoal, estou em um cliente que tem um servidor Linux (não sei qual\n> distribuição) mas de um tempo para cá está aparecendo uma mensagem sobre\n> excesso de conexões e então para de fazer conexões.\n\ntranslated:\n| Personally, I'm on a client that has a Linux server (I do not know which\n| one distribution) but from time to time a message is over connections\n| and then stop connections.\n\nTCP keepalives allow closing a connection which is already broken, such as due\nto NAT router \"forgetting\" about the connection (perhaps because it's idle, or\nperhaps because the router rebooted). Keepalives may also help to AVOID the\nconnection from breaking in the first place, by continuing to send traffic on\nan otherwise idle connection.\n\nIs the connection from a client on a remote subnet ? If not, keepalives will\nhave no effect.\n\nWhat is the current value of max_connections? And when you hit\nmax_connections, causing new connections to be rejected, what are all the\nexisting connections doing?\nSELECT backend_start, pid, datname, usename, state, left(query,222) FROM pg_stat_activity ORDER BY 1;\n\nDoes the application keep connections opened forever ? Or does it keep opened\nthe \"record\" number of connections opened from a multi-process or thread pool?\n\nDo you run both the client applications and the database server ?\n\nAre there many application connections at once ?\n\nYou could check with: netstat -anpe |grep 5432 |grep PID\nI'd typically expect only one connection to postgres per client process.\n\n> Assistindo a uma palestra, o cidadão citou que se eu alterasse essa\n> configuração para 50, o postgresql começaria a matar as conexões com algum\n> tempo automaticamente.\n\ntranslated:\n| While attending a lecture, the citizen mentioned that if I changed this\n| setting to 50, postgresql would start killing the connections with some\n| time automatically.\n\nUnless I misunderstand, I wouldn't think so: TCP keepalives only close\nconnections which are *already* broken.\n\nIt's also possible that during a spike in transactions, clients are bogging\neach other down, and I recall some people have reported success using a\nconnection pooler (like pgbouncer) to reject a small fraction of the clients\nduring the peak, to allow the rest of the client requests to be quickly\nserviced, rather than all of them running slowly and many timing out.\n\nJustin\n\n",
"msg_date": "Wed, 4 Jul 2018 22:56:36 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tcp_keepalives"
}
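For reference, the keepalive settings from the subject line and the connection limit can be inspected and set like this (the values are examples only; on managed hosting they usually have to be changed through the provider's parameter mechanism rather than postgresql.conf):

    SHOW max_connections;
    SHOW tcp_keepalives_idle;

    -- per-session example; the same parameters can be set in postgresql.conf
    SET tcp_keepalives_idle = 60;       -- seconds of idle time before the first probe
    SET tcp_keepalives_interval = 10;   -- seconds between probes
    SET tcp_keepalives_count = 5;       -- failed probes before the connection is dropped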
] |
[
{
"msg_contents": "Hi,\nI installed postgresql v9.6/10 in our company. When I tried to create the\nextension plpython I got the next error :\nERROR: could not open extension control file\n\"/PostgreSQL/9.6/share/postgresql/extension/plpythonu.control\": No such\nfile or directory\n\nWhen I try to install the extension with yum it downloads the extension\nthat is suitable for postgres 9.2 and moreover it also tries to install\npostgres 9.2 as one of the extensions dependencies.\n\nWhere can I find the source files of the extension for my version or how\ncan I install it ?\n\nThanks , Mariel.\n\n",
"msg_date": "Sun, 8 Jul 2018 16:06:50 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "where can I download the binaries of plpython extension"
},
{
"msg_contents": "On Sun, Jul 08, 2018 at 04:06:50PM +0300, Mariel Cherkassky wrote:\n> Hi,\n> I installed postgresql v9.6/10 in our company.\n\nWhich version did you install and how ? Compiled or binaries from some repo ?\nUsing PGDG repo or some other one ? \n\n> When I try to install the extension with yum it downloads the extension\n> that is suitable for postgres 9.2 and moreover it also tries to install\n> postgres 9.2 as one of the extensions dependencies.\nI guess you have a version of RHEL for which the bundled, RH version of\npostgres is v9.2 (?) I see that's true for centos7:\npostgresql-plpython.x86_64 9.2.23-3.el7_4 base\n\nIf you're using the PGDG repo (and probably if you're using another one, too),\nkeep in mind that most of the PG packages (at least since around version 9)\nhave as a suffix of their package name the major version: eg\npostgresql10-contrib, postgis24_10-client, pgfincore10, pg_repack10).\n\nThat allows co-installing different major versions of postgres.\n\n> Where can I find the source files of the extension for my version or how\n> can I install it ?\nSuggest trying to yum install postgresql10-plpython\n\nJustin\n\n",
"msg_date": "Sun, 8 Jul 2018 08:20:25 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "I download the source files from the official website compiled them and\ninstalled postgresql manually.\nIn what repository does the postgresql10-plpython exist ? or even the 9\nversion ? I dont find them via yum search.\n\n2018-07-08 16:20 GMT+03:00 Justin Pryzby <[email protected]>:\n\n> On Sun, Jul 08, 2018 at 04:06:50PM +0300, Mariel Cherkassky wrote:\n> > Hi,\n> > I installed postgresql v9.6/10 in our company.\n>\n> Which version did you install and how ? Compiled or binaries from some\n> repo ?\n> Using PGDG repo or some other one ?\n>\n> > When I try to install the extension with yum it downloads the extension\n> > that is suitable for postgres 9.2 and moreover it also tries to install\n> > postgres 9.2 as one of the extensions dependencies.\n> I guess you have a version of RHEL for which the bundled, RH version of\n> postgres is v9.2 (?) I see that's true for centos7:\n> postgresql-plpython.x86_64\n> 9.2.23-3.el7_4\n> base\n>\n> If you're using the PGDG repo (and probably if you're using another one,\n> too),\n> keep in mind that most of the PG packages (at least since around version 9)\n> have as a suffix of their package name the major version: eg\n> postgresql10-contrib, postgis24_10-client, pgfincore10, pg_repack10).\n>\n> That allows co-installing different major versions of postgres.\n>\n> > Where can I find the source files of the extension for my version or how\n> > can I install it ?\n> Suggest trying to yum install postgresql10-plpython\n>\n> Justin\n>\n\n\nI download the source files from the official website compiled them and installed postgresql manually. \nIn what repository does the postgresql10-plpython exist ? or even the 9 version ? I dont find them via yum search.2018-07-08 16:20 GMT+03:00 Justin Pryzby <[email protected]>:On Sun, Jul 08, 2018 at 04:06:50PM +0300, Mariel Cherkassky wrote:\n> Hi,\n> I installed postgresql v9.6/10 in our company.\n\nWhich version did you install and how ? Compiled or binaries from some repo ?\nUsing PGDG repo or some other one ? \n\n> When I try to install the extension with yum it downloads the extension\n> that is suitable for postgres 9.2 and moreover it also tries to install\n> postgres 9.2 as one of the extensions dependencies.\nI guess you have a version of RHEL for which the bundled, RH version of\npostgres is v9.2 (?) I see that's true for centos7:\npostgresql-plpython.x86_64 9.2.23-3.el7_4 base\n\nIf you're using the PGDG repo (and probably if you're using another one, too),\nkeep in mind that most of the PG packages (at least since around version 9)\nhave as a suffix of their package name the major version: eg\npostgresql10-contrib, postgis24_10-client, pgfincore10, pg_repack10).\n\nThat allows co-installing different major versions of postgres.\n\n> Where can I find the source files of the extension for my version or how\n> can I install it ?\nSuggest trying to yum install postgresql10-plpython\n\nJustin",
"msg_date": "Sun, 8 Jul 2018 16:24:10 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "On Sun, Jul 08, 2018 at 04:24:10PM +0300, Mariel Cherkassky wrote:\n> I download the source files from the official website compiled them and\n> installed postgresql manually.\n> In what repository does the postgresql10-plpython exist ? or even the 9\n> version ? I dont find them via yum search.\n\nIf you're using yum:\n\nhttps://www.postgresql.org/download/\n=> https://www.postgresql.org/download/linux/redhat/\n=> http://yum.postgresql.org/\n\nNote I believe those are technically considered \"unofficial\" RPMs provided by\nEnterpriseDB.\n\nJustin\n\n",
"msg_date": "Sun, 8 Jul 2018 08:33:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "When installing the postgresql10-plpython one of its dependencies is\nthe postgresql10-server. However, I dont want to install the server but as\nyou can see it is a must. What can I do ?\n\n2018-07-08 16:33 GMT+03:00 Justin Pryzby <[email protected]>:\n\n> On Sun, Jul 08, 2018 at 04:24:10PM +0300, Mariel Cherkassky wrote:\n> > I download the source files from the official website compiled them and\n> > installed postgresql manually.\n> > In what repository does the postgresql10-plpython exist ? or even the 9\n> > version ? I dont find them via yum search.\n>\n> If you're using yum:\n>\n> https://www.postgresql.org/download/\n> => https://www.postgresql.org/download/linux/redhat/\n> => http://yum.postgresql.org/\n>\n> Note I believe those are technically considered \"unofficial\" RPMs provided\n> by\n> EnterpriseDB.\n>\n> Justin\n>\n\nWhen installing the postgresql10-plpython one of its dependencies is the postgresql10-server. However, I dont want to install the server but as you can see it is a must. What can I do ?2018-07-08 16:33 GMT+03:00 Justin Pryzby <[email protected]>:On Sun, Jul 08, 2018 at 04:24:10PM +0300, Mariel Cherkassky wrote:\n> I download the source files from the official website compiled them and\n> installed postgresql manually.\n> In what repository does the postgresql10-plpython exist ? or even the 9\n> version ? I dont find them via yum search.\n\nIf you're using yum:\n\nhttps://www.postgresql.org/download/\n=> https://www.postgresql.org/download/linux/redhat/\n=> http://yum.postgresql.org/\n\nNote I believe those are technically considered \"unofficial\" RPMs provided by\nEnterpriseDB.\n\nJustin",
"msg_date": "Sun, 8 Jul 2018 16:38:21 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> When installing the postgresql10-plpython one of its dependencies is\n> the postgresql10-server. However, I dont want to install the server but as\n> you can see it is a must. What can I do ?\n\nUm ... of what value do you think plpython is without a server for it\nto run in?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 08 Jul 2018 09:43:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "On Sun, Jul 08, 2018 at 04:38:21PM +0300, Mariel Cherkassky wrote:\n> When installing the postgresql10-plpython one of its dependencies is\n> the postgresql10-server. However, I dont want to install the server but as\n> you can see it is a must. What can I do ?\n\nAll it does is install files allowing loading the language into the server as\nextension; Why do you want the language without the server ?\n\n[pryzbyj@database ~]$ rpm -ql postgresql10-plpython\n/usr/pgsql-10/lib/plpython2.so\n/usr/pgsql-10/share/extension/plpython2u--1.0.sql\n/usr/pgsql-10/share/extension/plpython2u--unpackaged--1.0.sql\n/usr/pgsql-10/share/extension/plpython2u.control\n[...]\n/usr/pgsql-10/share/locale/de/LC_MESSAGES/plpython-10.mo\n[...]\n\nBut anyway, is it a problem ? You could let it install the server binaries to\n/usr/pgsql-10 and then ignore them. And actually I believe RH has the ability\nfor an admin to \"prune\" paths after package installation (The usual example is\n/usr/share/doc). You could do that if you want.\n\nOr if you just want to look at the files, you can use rpm2cpio ./rpm |cpio -i --make\n\nOr you can install it on a VM.\n\nJustin\n\n",
"msg_date": "Sun, 8 Jul 2018 08:43:56 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "As I mentioned earlier, I already have a running postgresql instance on the\nmachibe but on different pathes. I didnt want to install another one with\nthe default pathes because I didnt want people to think that the default\npathes are the correct ones. If I'll install the package to the default\nvalues then the solution is just coppying the plpythonu.control to my\ninstance`s extensions directory ?\n\n2018-07-08 16:43 GMT+03:00 Justin Pryzby <[email protected]>:\n\n> On Sun, Jul 08, 2018 at 04:38:21PM +0300, Mariel Cherkassky wrote:\n> > When installing the postgresql10-plpython one of its dependencies is\n> > the postgresql10-server. However, I dont want to install the server but\n> as\n> > you can see it is a must. What can I do ?\n>\n> All it does is install files allowing loading the language into the server\n> as\n> extension; Why do you want the language without the server ?\n>\n> [pryzbyj@database ~]$ rpm -ql postgresql10-plpython\n> /usr/pgsql-10/lib/plpython2.so\n> /usr/pgsql-10/share/extension/plpython2u--1.0.sql\n> /usr/pgsql-10/share/extension/plpython2u--unpackaged--1.0.sql\n> /usr/pgsql-10/share/extension/plpython2u.control\n> [...]\n> /usr/pgsql-10/share/locale/de/LC_MESSAGES/plpython-10.mo\n> [...]\n>\n> But anyway, is it a problem ? You could let it install the server\n> binaries to\n> /usr/pgsql-10 and then ignore them. And actually I believe RH has the\n> ability\n> for an admin to \"prune\" paths after package installation (The usual\n> example is\n> /usr/share/doc). You could do that if you want.\n>\n> Or if you just want to look at the files, you can use rpm2cpio ./rpm |cpio\n> -i --make\n>\n> Or you can install it on a VM.\n>\n> Justin\n>\n\nAs I mentioned earlier, I already have a running postgresql instance on the machibe but on different pathes. I didnt want to install another one with the default pathes because I didnt want people to think that the default pathes are the correct ones. If I'll install the package to the default values then the solution is just coppying the\n\nplpythonu.control to my instance`s extensions directory ?2018-07-08 16:43 GMT+03:00 Justin Pryzby <[email protected]>:On Sun, Jul 08, 2018 at 04:38:21PM +0300, Mariel Cherkassky wrote:\n> When installing the postgresql10-plpython one of its dependencies is\n> the postgresql10-server. However, I dont want to install the server but as\n> you can see it is a must. What can I do ?\n\nAll it does is install files allowing loading the language into the server as\nextension; Why do you want the language without the server ?\n\n[pryzbyj@database ~]$ rpm -ql postgresql10-plpython\n/usr/pgsql-10/lib/plpython2.so\n/usr/pgsql-10/share/extension/plpython2u--1.0.sql\n/usr/pgsql-10/share/extension/plpython2u--unpackaged--1.0.sql\n/usr/pgsql-10/share/extension/plpython2u.control\n[...]\n/usr/pgsql-10/share/locale/de/LC_MESSAGES/plpython-10.mo\n[...]\n\nBut anyway, is it a problem ? You could let it install the server binaries to\n/usr/pgsql-10 and then ignore them. And actually I believe RH has the ability\nfor an admin to \"prune\" paths after package installation (The usual example is\n/usr/share/doc). You could do that if you want.\n\nOr if you just want to look at the files, you can use rpm2cpio ./rpm |cpio -i --make\n\nOr you can install it on a VM.\n\nJustin",
"msg_date": "Sun, 8 Jul 2018 16:46:47 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "On Sun, Jul 08, 2018 at 04:46:47PM +0300, Mariel Cherkassky wrote:\n> As I mentioned earlier, I already have a running postgresql instance on the\n> machibe but on different pathes. I didnt want to install another one with\n> the default pathes because I didnt want people to think that the default\n> pathes are the correct ones. If I'll install the package to the default\n> values then the solution is just coppying the plpythonu.control to my\n> instance`s extensions directory ?\n\nI'm not sure about compatibilty of differently compiled binaries (different\n--configure flags, different compiler/version, different PG minor versions),\nbut I think that could work..\n\nAs I mentioned, you could also EXTRACT the PGDG postgresql10-plpython files\nwithout installing the -server.\n\nOr you could compile+install the plpython extension. I'm not sure but I think\nthat would be ./configure --with-python.\n\n..However if it were me, I'd schedule a time to stop the server, move the\ncustom-compiled binaries out of the way, and restart using the PGDG binaries\npointing at the original data dir. I think the only condition for doing this\nis keep the same major version (10) and to avoid lower minor versions (eg. once\nyou start with PGDG 10.4 binaries you should avoid going back and starting with\nlocally-compiled 10.3 binaries).\n\nJustin\n\n",
"msg_date": "Sun, 8 Jul 2018 09:18:23 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "I still got the binaries of the installation and I found that I have the\nnext directory :\npostgresql-10.4/src/pl/\ncd postgresql-10.4/src/pl/plpython\n-rw-r--r-- 1 postgres postgres 653 May 7 23:51 Makefile\ndrwxr-xr-x 3 postgres postgres 33 May 8 00:03 plpgsql\ndrwxr-xr-x 5 postgres postgres 4096 May 8 00:03 plperl\ndrwxr-xr-x 5 postgres postgres 319 May 8 00:06 tcl\ndrwxr-xr-x 5 postgres postgres 4096 May 8 00:06 plpython\n\nis there a way to install the extension from here ?\n\n\n\n2018-07-08 17:18 GMT+03:00 Justin Pryzby <[email protected]>:\n\n> On Sun, Jul 08, 2018 at 04:46:47PM +0300, Mariel Cherkassky wrote:\n> > As I mentioned earlier, I already have a running postgresql instance on\n> the\n> > machibe but on different pathes. I didnt want to install another one with\n> > the default pathes because I didnt want people to think that the default\n> > pathes are the correct ones. If I'll install the package to the default\n> > values then the solution is just coppying the plpythonu.control to my\n> > instance`s extensions directory ?\n>\n> I'm not sure about compatibilty of differently compiled binaries (different\n> --configure flags, different compiler/version, different PG minor\n> versions),\n> but I think that could work..\n>\n> As I mentioned, you could also EXTRACT the PGDG postgresql10-plpython files\n> without installing the -server.\n>\n> Or you could compile+install the plpython extension. I'm not sure but I\n> think\n> that would be ./configure --with-python.\n>\n> ..However if it were me, I'd schedule a time to stop the server, move the\n> custom-compiled binaries out of the way, and restart using the PGDG\n> binaries\n> pointing at the original data dir. I think the only condition for doing\n> this\n> is keep the same major version (10) and to avoid lower minor versions (eg.\n> once\n> you start with PGDG 10.4 binaries you should avoid going back and starting\n> with\n> locally-compiled 10.3 binaries).\n>\n> Justin\n>\n\nI still got the binaries of the installation and I found that I have the next directory : postgresql-10.4/src/pl/cd \n\npostgresql-10.4/src/pl/plpython-rw-r--r-- 1 postgres postgres 653 May 7 23:51 Makefiledrwxr-xr-x 3 postgres postgres 33 May 8 00:03 plpgsqldrwxr-xr-x 5 postgres postgres 4096 May 8 00:03 plperldrwxr-xr-x 5 postgres postgres 319 May 8 00:06 tcldrwxr-xr-x 5 postgres postgres 4096 May 8 00:06 plpythonis there a way to install the extension from here ?2018-07-08 17:18 GMT+03:00 Justin Pryzby <[email protected]>:On Sun, Jul 08, 2018 at 04:46:47PM +0300, Mariel Cherkassky wrote:\n> As I mentioned earlier, I already have a running postgresql instance on the\n> machibe but on different pathes. I didnt want to install another one with\n> the default pathes because I didnt want people to think that the default\n> pathes are the correct ones. If I'll install the package to the default\n> values then the solution is just coppying the plpythonu.control to my\n> instance`s extensions directory ?\n\nI'm not sure about compatibilty of differently compiled binaries (different\n--configure flags, different compiler/version, different PG minor versions),\nbut I think that could work..\n\nAs I mentioned, you could also EXTRACT the PGDG postgresql10-plpython files\nwithout installing the -server.\n\nOr you could compile+install the plpython extension. 
I'm not sure but I think\nthat would be ./configure --with-python.\n\n..However if it were me, I'd schedule a time to stop the server, move the\ncustom-compiled binaries out of the way, and restart using the PGDG binaries\npointing at the original data dir. I think the only condition for doing this\nis keep the same major version (10) and to avoid lower minor versions (eg. once\nyou start with PGDG 10.4 binaries you should avoid going back and starting with\nlocally-compiled 10.3 binaries).\n\nJustin",
"msg_date": "Sun, 8 Jul 2018 17:36:06 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "On Sun, Jul 08, 2018 at 05:36:06PM +0300, Mariel Cherkassky wrote:\n> I still got the binaries of the installation and I found that I have the\n> next directory : postgresql-10.4/src/pl/\n> cd postgresql-10.4/src/pl/plpython\n> -rw-r--r-- 1 postgres postgres 653 May 7 23:51 Makefile\n> drwxr-xr-x 3 postgres postgres 33 May 8 00:03 plpgsql\n> drwxr-xr-x 5 postgres postgres 4096 May 8 00:03 plperl\n> drwxr-xr-x 5 postgres postgres 319 May 8 00:06 tcl\n> drwxr-xr-x 5 postgres postgres 4096 May 8 00:06 plpython\n> \n> is there a way to install the extension from here ?\n\nI think you're asking about this option:\n\n> > Or you could compile+install the plpython extension. I'm not sure but I\n> > think that would be ./configure --with-python.\n\nJustin\n\n",
"msg_date": "Sun, 8 Jul 2018 12:25:14 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where can I download the binaries of plpython extension"
},
{
"msg_contents": "Yes, it worked. Thanks!\n\nOn Sun, Jul 8, 2018, 8:25 PM Justin Pryzby <[email protected]> wrote:\n\n> On Sun, Jul 08, 2018 at 05:36:06PM +0300, Mariel Cherkassky wrote:\n> > I still got the binaries of the installation and I found that I have the\n> > next directory : postgresql-10.4/src/pl/\n> > cd postgresql-10.4/src/pl/plpython\n> > -rw-r--r-- 1 postgres postgres 653 May 7 23:51 Makefile\n> > drwxr-xr-x 3 postgres postgres 33 May 8 00:03 plpgsql\n> > drwxr-xr-x 5 postgres postgres 4096 May 8 00:03 plperl\n> > drwxr-xr-x 5 postgres postgres 319 May 8 00:06 tcl\n> > drwxr-xr-x 5 postgres postgres 4096 May 8 00:06 plpython\n> >\n> > is there a way to install the extension from here ?\n>\n> I think you're asking about this option:\n>\n> > > Or you could compile+install the plpython extension. I'm not sure but\n> I\n> > > think that would be ./configure --with-python.\n>\n> Justin\n>\n\nYes, it worked. Thanks!On Sun, Jul 8, 2018, 8:25 PM Justin Pryzby <[email protected]> wrote:On Sun, Jul 08, 2018 at 05:36:06PM +0300, Mariel Cherkassky wrote:\n> I still got the binaries of the installation and I found that I have the\n> next directory : postgresql-10.4/src/pl/\n> cd postgresql-10.4/src/pl/plpython\n> -rw-r--r-- 1 postgres postgres 653 May 7 23:51 Makefile\n> drwxr-xr-x 3 postgres postgres 33 May 8 00:03 plpgsql\n> drwxr-xr-x 5 postgres postgres 4096 May 8 00:03 plperl\n> drwxr-xr-x 5 postgres postgres 319 May 8 00:06 tcl\n> drwxr-xr-x 5 postgres postgres 4096 May 8 00:06 plpython\n> \n> is there a way to install the extension from here ?\n\nI think you're asking about this option:\n\n> > Or you could compile+install the plpython extension. I'm not sure but I\n> > think that would be ./configure --with-python.\n\nJustin",
"msg_date": "Mon, 9 Jul 2018 09:18:28 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: where can I download the binaries of plpython extension"
}
] |
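A note to round out the thread: compiling with ./configure --with-python installs the plpython library and control files, but the language still has to be enabled in each database where it will be used. The SQL below is a sketch of the typical follow-up step on a PostgreSQL 10 server — an assumption, not something spelled out in the thread; pg_available_extensions shows which variant (plpythonu for Python 2, plpython3u for Python 3) the build actually produced.

-- List the plpython variants this server can install
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name LIKE 'plpython%';

-- Enable the language in the current database
-- (use plpython3u instead if the build was done against Python 3)
CREATE EXTENSION plpythonu;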
[
{
"msg_contents": "Hi,\nI'm trying to install the pgwatch2 tool in our company without using the docker option. I followed the instructions that are specified in the github page but I'm facing an error during STEP 4.2 when I try to add my cluster to be the /dbs page in order to monitor it. After I add it I'm getting the error :\nERROR: Could not connect to InfluxDB\n\n\nOn the same machine I have an influxdb database running :\nps -ef | grep influx\ninfluxdb 3680 1 0 17:12 ? 00:00:01 influxd -config /PostgreSQL/influxdb/config/influxdb.conf\n\nWhen I look at the log of the influxdb I see that every time I press the \"New\" button under DBS page the next row :\n[httpd] ::1 - root [08/Jul/2018:17:19:04 +0300] \"GET /query?q=SHOW+TAG+VALUES+WITH+KEY+%3D+%22dbname%22&db=pgwatch2 HTTP/1.1\" 401 33 \"-\" \"python-requests/2.19.1\" de27bc5c-82b9-11e8-8003-000000000000 141\n\nWhat else do you recommend to check ?\n\nThanks , Mariel.\n\n\n\n\n\n\n\n\n\nHi,\nI'm trying to install the pgwatch2 tool in our company without using the docker option. I followed the instructions that are specified in the github page but I'm facing an error during\n STEP 4.2 when I try to add my cluster to be the /dbs page in order to monitor it. After I add it I'm getting the error :\n\n\nERROR: Could not connect to InfluxDB\n \n \nOn the same machine I have an influxdb database running :\n\nps -ef | grep influx\ninfluxdb 3680 1 0 17:12 ? 00:00:01 influxd -config /PostgreSQL/influxdb/config/influxdb.conf\n \nWhen I look at the log of the influxdb I see that every time I press the \"New\" button under DBS page the next row :\n\n[httpd] ::1 - root [08/Jul/2018:17:19:04 +0300] \"GET /query?q=SHOW+TAG+VALUES+WITH+KEY+%3D+%22dbname%22&db=pgwatch2 HTTP/1.1\" 401 33 \"-\" \"python-requests/2.19.1\" de27bc5c-82b9-11e8-8003-000000000000\n 141\n \nWhat else do you recommend to check ?\n \nThanks , Mariel.",
"msg_date": "Sun, 8 Jul 2018 14:22:57 +0000",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems with installing pgwatch2 without docker"
},
{
"msg_contents": "\n\nOn 07/08/2018 10:22 AM, Mariel Cherkassky wrote:\n>\n> Hi,\n>\n> I'm trying to install the pgwatch2 tool in our company without using \n> the docker option. I followed the instructions that are specified in \n> the github page but I'm facing an error during STEP 4.2 when I try to \n> add my cluster to be the /dbs page in order to monitor it. After I add \n> it I'm getting the error :\n>\n> ERROR: Could not connect to InfluxDB\n>\n> On the same machine I have an influxdb database running :\n>\n> ps -ef | grep influx\n>\n> influxdb 3680���� 1� 0 17:12 ?������� 00:00:01 influxd -config \n> /PostgreSQL/influxdb/config/influxdb.conf\n>\n> When I look at the log of the influxdb I see that every time I press \n> the \"New\" button under DBS page the next row :\n>\n> [httpd] ::1 - root [08/Jul/2018:17:19:04 +0300] \"GET \n> /query?q=SHOW+TAG+VALUES+WITH+KEY+%3D+%22dbname%22&db=pgwatch2 \n> HTTP/1.1\" 401 33 \"-\" \"python-requests/2.19.1\" \n> de27bc5c-82b9-11e8-8003-000000000000 141\n>\n> What else do you recommend to check ?\n>\n> Thanks , Mariel.\n>\n\n\nPlease stop asking questions in inappropriate forums. This is not a \nperformance issue, so it definitely doesn't belong on this list.. If it \nbelongs on a postgres forum at all it belongs on pgsql-general. More \nlikely, you should be asking in the pgwatch2 forums, not Postgres forums.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 8 Jul 2018 12:35:57 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with installing pgwatch2 without docker"
}
] |
[
{
"msg_contents": "Hi,\n\nI am having a query that has an order by and a limit clause. The\ncolumn on which I am doing order by is indexed (default b tree index).\nHowever the index is not being used. On tweaking the query a bit I\nfound that when I use left join index is not used whereas when I use\ninner join the index is used.\n\nUnfortunately, the behaviour we expect is that of left join only. My\nquestion is, is there any way to modify/improve the query to improve\nthe query speed or is this the best that is possible for this case.\n\nPlease find below a simplified version of the queries. I tried the\nqueries on 9.3 and 10 versions and both gave similar results.\n\n\nTable structure\n\nperformance_test=# \\d+ child\n Table \"public.child\"\n Column | Type | Collation | Nullable | Default\n | Storage | Stats target | Description\n--------+--------+-----------+----------+-----------------------------------+----------+--------------+-------------\n id | bigint | | not null |\nnextval('child_id_seq'::regclass) | plain | |\n name | text | | not null |\n | extended | |\nIndexes:\n \"child_pkey\" PRIMARY KEY, btree (id)\n \"child_name_unique\" UNIQUE CONSTRAINT, btree (name)\nReferenced by:\n TABLE \"parent\" CONSTRAINT \"parent_child_id_fkey\" FOREIGN KEY\n(child_id) REFERENCES child(id)\n\n\nperformance_test=# \\d+ parent\n Table \"public.parent\"\n Column | Type | Collation | Nullable | Default\n | Storage | Stats target | Description\n----------+--------+-----------+----------+------------------------------------+----------+--------------+-------------\n id | bigint | | not null |\nnextval('parent_id_seq'::regclass) | plain | |\n name | text | | not null |\n | extended | |\n child_id | bigint | | |\n | plain | |\nIndexes:\n \"parent_pkey\" PRIMARY KEY, btree (id)\n \"parent_name_unique\" UNIQUE CONSTRAINT, btree (name)\n \"parent_child_id_idx\" btree (child_id)\nForeign-key constraints:\n \"parent_child_id_fkey\" FOREIGN KEY (child_id) REFERENCES child(id)\n\n\n\nQuery used to populate data\n\nperformance_test=# insert into child(name) select concat('child ',\ngen.id) as name from (select generate_series(1,100000) as id) as gen;\n\nperformance_test=# insert into parent(name, child_id) select\nconcat('parent ', gen.id) as name, (id%100000) + 1 from (select\ngenerate_series(1,1000000) as id) as gen;\n\n\nLeft join with order by using child name\n\nperformance_test=# explain analyze select * from parent left join\nchild on parent.child_id = child.id order by child.name limit 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=69318.55..69318.58 rows=10 width=59) (actual\ntime=790.708..790.709 rows=10 loops=1)\n -> Sort (cost=69318.55..71818.55 rows=1000000 width=59) (actual\ntime=790.705..790.706 rows=10 loops=1)\n Sort Key: child.name\n Sort Method: top-N heapsort Memory: 27kB\n -> Hash Left Join (cost=3473.00..47708.91 rows=1000000\nwidth=59) (actual time=51.066..401.028 rows=1000000 loops=1)\n Hash Cond: (parent.child_id = child.id)\n -> Seq Scan on parent (cost=0.00..17353.00\nrows=1000000 width=29) (actual time=0.026..67.848 rows=1000000\nloops=1)\n -> Hash (cost=1637.00..1637.00 rows=100000 width=19)\n(actual time=50.879..50.879 rows=100000 loops=1)\n Buckets: 65536 Batches: 2 Memory Usage: 3053kB\n -> Seq Scan on child (cost=0.00..1637.00\nrows=100000 width=19) (actual time=0.018..17.281 rows=100000 loops=1)\n Planning time: 1.191 ms\n Execution time: 790.797 ms\n(12 
rows)\n\n\n\nInner join with sorting according to child name\n\nperformance_test=# explain analyze select * from parent inner join\nchild on parent.child_id = child.id order by child.name limit 10;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.84..2.03 rows=10 width=59) (actual time=0.156..0.193\nrows=10 loops=1)\n -> Nested Loop (cost=0.84..119132.56 rows=1000000 width=59)\n(actual time=0.154..0.186 rows=10 loops=1)\n -> Index Scan using child_name_unique on child\n(cost=0.42..5448.56 rows=100000 width=19) (actual time=0.126..0.126\nrows=1 loops=1)\n -> Index Scan using parent_child_id_idx on parent\n(cost=0.42..1.04 rows=10 width=29) (actual time=0.019..0.045 rows=10\nloops=1)\n Index Cond: (child_id = child.id)\n Planning time: 0.941 ms\n Execution time: 0.283 ms\n(7 rows)\n\n\n\n\nVersion\n\nperformance_test=# select version();\n version\n-----------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 10.4 (Ubuntu 10.4-2.pgdg14.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit\n(1 row)\n\n\nAny help from Postgres experts would be great.\n\nThanks,\nNanda\n\n",
"msg_date": "Mon, 9 Jul 2018 17:17:23 +0530",
"msg_from": "Nandakumar M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need help with optimising simple query"
},
{
"msg_contents": "Nandakumar M <[email protected]> writes:\n> I am having a query that has an order by and a limit clause. The\n> column on which I am doing order by is indexed (default b tree index).\n> However the index is not being used. On tweaking the query a bit I\n> found that when I use left join index is not used whereas when I use\n> inner join the index is used.\n\nThe reason the index isn't being used is that the sort order the query\nrequests isn't the same as the order provided by the index. Here:\n\n> performance_test=# explain analyze select * from parent left join\n> child on parent.child_id = child.id order by child.name limit 10;\n\nyou're asking to sort by a column that will include null values for\nchild.name anywhere that there's a parent row without a match for\nchild_id. Those rows aren't even represented in the index on child.name,\nmuch less placed in the right order.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 09 Jul 2018 10:23:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help with optimising simple query"
},
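To make that point concrete against the parent/child tables from the first message: the rows the index cannot supply in sorted order are the parents with no matching child row, whose child.name is NULL in the join result and therefore appears in no entry of the index on child.name. The query below is only an illustrative sketch using the schema shown above; in the generated test data it happens to return zero, but because parent.child_id is nullable the planner still has to account for such rows.

-- Parents that would contribute a NULL child.name to the ORDER BY;
-- none of these rows has a counterpart in the index on child.name.
SELECT count(*)
FROM parent
LEFT JOIN child ON parent.child_id = child.id
WHERE child.id IS NULL;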
{
"msg_contents": "Hi Tom,\n\nIs there something that I can do to improve the performance of such\nqueries (where ordering is done based on child table column and join\nis left join)? Maybe a combined index or something like that? Or is it\npossible to modify the query to get same result but execute faster.\nOne ad-hoc optimisation (which gives somewhat better performance) that\ncame to mind is to have a sub query for child table like\n\nperformance_test=# explain analyze select * from parent left join\n(select * from child order by name limit 10) as child on\nparent.child_id = child.id order by child.name limit 10;\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=42714.84..42714.86 rows=10 width=59) (actual\ntime=311.623..311.624 rows=10 loops=1)\n -> Sort (cost=42714.84..45214.84 rows=1000000 width=59) (actual\ntime=311.622..311.622 rows=10 loops=1)\n Sort Key: child.name\n Sort Method: top-N heapsort Memory: 26kB\n -> Hash Left Join (cost=1.19..21105.20 rows=1000000\nwidth=59) (actual time=0.120..204.386 rows=1000000 loops=1)\n Hash Cond: (parent.child_id = child.id)\n -> Seq Scan on parent (cost=0.00..17353.00\nrows=1000000 width=29) (actual time=0.073..73.052 rows=1000000\nloops=1)\n -> Hash (cost=1.06..1.06 rows=10 width=19) (actual\ntime=0.035..0.035 rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Limit (cost=0.42..0.96 rows=10 width=19)\n(actual time=0.014..0.027 rows=10 loops=1)\n -> Index Scan using child_name_unique on\nchild (cost=0.42..5448.56 rows=100000 width=19) (actual\ntime=0.013..0.024 rows=10 loops=1)\n Planning time: 0.505 ms\n Execution time: 311.682 ms\n(13 rows)\n\nTime: 312.673 ms\n\nIs there something I can do that will improve the query performance\nmuch more than this?\n\nThanks.\n\nRegards,\nNanda\n\nOn Mon, 9 Jul 2018, 19:53 Tom Lane, <[email protected]> wrote:\n>\n> Nandakumar M <[email protected]> writes:\n> > I am having a query that has an order by and a limit clause. The\n> > column on which I am doing order by is indexed (default b tree index).\n> > However the index is not being used. On tweaking the query a bit I\n> > found that when I use left join index is not used whereas when I use\n> > inner join the index is used.\n>\n> The reason the index isn't being used is that the sort order the query\n> requests isn't the same as the order provided by the index. Here:\n>\n> > performance_test=# explain analyze select * from parent left join\n> > child on parent.child_id = child.id order by child.name limit 10;\n>\n> you're asking to sort by a column that will include null values for\n> child.name anywhere that there's a parent row without a match for\n> child_id. Those rows aren't even represented in the index on child.name,\n> much less placed in the right order.\n>\n> regards, tom lane\n\n",
"msg_date": "Tue, 10 Jul 2018 11:06:59 +0530",
"msg_from": "Nandakumar M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help with optimising simple query"
},
{
"msg_contents": " I didn't find any CREATE TABLE's in your description, or else I would have\ntried it with the sequences and all that, but I think this ought to work. \n\npostgres=# explain select * from ((select * from parent inner join child on\nparent.child_id = child.id limit 10) union all (select * from parent left\nouter join child on parent.child_id = child.id where child.id is null limit\n10)) as v limit 10;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Limit (cost=0.15..3.72 rows=10 width=88)\n -> Append (cost=0.15..7.29 rows=20 width=88)\n -> Limit (cost=0.15..2.46 rows=10 width=88)\n -> Nested Loop (cost=0.15..246.54 rows=1070 width=88)\n -> Seq Scan on parent (cost=0.00..20.70 rows=1070\nwidth=48)\n -> Index Scan using child_pkey on child \n(cost=0.15..0.21 rows=1 width=40)\n Index Cond: (id = parent.child_id)\n -> Limit (cost=0.15..4.63 rows=10 width=88)\n -> Nested Loop Anti Join (cost=0.15..239.71 rows=535\nwidth=88)\n -> Seq Scan on parent parent_1 (cost=0.00..20.70\nrows=1070 width=48)\n -> Index Scan using child_pkey on child child_1 \n(cost=0.15..0.21 rows=1 width=40)\n Index Cond: (parent_1.child_id = id)\n(12 rows)\n\n\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 29 Dec 2018 18:25:09 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help with optimising simple query"
},
{
"msg_contents": "...on second thought, the placement of the IS NULL predicate isn't quite\nright, but if you futz with it a bit I think you can make it work\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 29 Dec 2018 18:31:10 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help with optimising simple query"
}
] |
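Spelling out the shape Jim is suggesting, with the IS NULL test moved so that it only selects the unmatched parents: the first branch is an inner join that the index on child.name can drive, and the second branch is the anti-join for parents with no child, whose NULL sort key places them after every named row under the default ascending order. This is an untested sketch against the parent/child tables above, with a trimmed column list rather than the original SELECT *; which unmatched parents fill out the result is arbitrary here, just as it is in the original LEFT JOIN form.

SELECT parent_id, child_name
FROM (
    (SELECT parent.id AS parent_id, child.name AS child_name
     FROM parent
     JOIN child ON parent.child_id = child.id
     ORDER BY child.name
     LIMIT 10)
    UNION ALL
    (SELECT parent.id, NULL::text            -- parents with no matching child
     FROM parent
     LEFT JOIN child ON parent.child_id = child.id
     WHERE child.id IS NULL
     LIMIT 10)
) AS v
ORDER BY child_name                          -- NULLs sort last, as in the original ordering
LIMIT 10;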
[
{
"msg_contents": "Hi all,\n\n\nI'm having a good bit of trouble making production-ready a query I wouldn't\nhave thought would be too challenging.\n\nBelow is the relevant portions of my table, which is partitioned on values\nof part_key from 1-5. The number of rows is on the order of 10^7, and the\nnumber of unique \"parent_id\" values is on the order of 10^4. \"tmstmp\" is\ncontinuous (more or less). Neither parent_id nor tmstmp is nullable. The\ntable receives a fair amount of INSERTS, and also a number of UPDATES.\n\ndb=> \\d a\n Table \"public.a\"\n Column | Type | Collation | Nullable\n| Default\n-------------------------+--------------------------+-----------+----------+----------------------------------------------\n id | integer | |\n| nextval('a_id_seq'::regclass)\n parent_id | integer | | not null |\n tmstmp | timestamp with time zone | | not null |\n part_key | integer | | |\nPartition key: LIST (part_key)\nNumber of partitions: 5 (Use \\d+ to list them.)\n\ndb=> \\d a_partition1\n Table \"public.a_partition1\"\n Column | Type | Collation | Nullable\n| Default\n-------------------------+--------------------------+-----------+----------+----------------------------------------------\n id | integer | |\n| nextval('a_id_seq'::regclass)\n parent_id | integer | | not null |\n tmstmp | timestamp with time zone | | not null |\n part_key | integer | | |\nPartition of: a FOR VALUES IN (1)\nIndexes:\n \"a_pk_idx1\" UNIQUE, btree (id)\n \"a_tmstmp_idx1\" btree (tmstmp)\n \"a_parent_id_idx1\" btree (parent_id)\nCheck constraints:\n \"a_partition_check_part_key1\" CHECK (part_key = 1)\nForeign-key constraints:\n \"a_partition_parent_id_fk_b_id1\" FOREIGN KEY (parent_id) REFERENCES\nb(id) DEFERRABLE INITIALLY DEFERRED\n\ndb=> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname\nlike 'a_partition%';\n relname | relpages | reltuples | relallvisible\n| relkind | relnatts | relhassubclass | reloptions | pg_table_size\n-----------------------------------+----------+-------------+---------------+---------+----------+----------------+------------+---------------\n a_partition1 | 152146 | 873939 | 106863\n| r | 26 | f | | 287197233152\n a_partition2 | 669987 | 3.62268e+06 | 0\n| r | 26 | f | | 161877745664\n a_partition3 | 562069 | 2.94414e+06 | 213794\n| r | 26 | f | | 132375994368\n a_partition4 | 729880 | 3.95513e+06 | 69761\n| r | 26 | f | | 188689047552\n a_partition5 | 834132 | 4.9748e+06 | 52622\n| r | 26 | f | | 218596630528\n(5 rows)\n\n\n\nI'm interested in filtering by parent_id (an indexed foreign-key\nrelationship, though I think it shouldn't matter in this context), then\nsorting the result by a timestamp field (also indexed). 
I only need 20 at a\ntime.\n\n\nThe naive query works well for a very small number of ids (up to 3):\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT \"a\".\"id\"\nFROM \"a\"\nWHERE \"a\".\"parent_id\" IN (\n 37066,41174,28843\n)\nORDER BY \"a\".\"tmstmp\" DESC\nLIMIT 20;\n\n Limit (cost=9838.23..9838.28 rows=20 width=12) (actual\ntime=13.307..13.307 rows=0 loops=1)\n Buffers: shared hit=29 read=16\n -> Sort (cost=9838.23..9860.12 rows=8755 width=12) (actual\ntime=13.306..13.306 rows=0 loops=1)\n Sort Key: a_partition1.tmstmp DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=29 read=16\n -> Append (cost=0.43..9605.26 rows=8755 width=12) (actual\ntime=13.302..13.302 rows=0 loops=1)\n Buffers: shared hit=29 read=16\n -> Index Scan using a_parent_id_idx1 on a_partition1\n(cost=0.43..4985.45 rows=4455 width=12) (actual time=4.007..4.007 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{37066,41174,28843}'::integer[]))\n Buffers: shared hit=6 read=3\n -> Index Scan using a_parent_id_idx2 on a_partition2\n(cost=0.43..1072.79 rows=956 width=12) (actual time=3.521..3.521 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{37066,41174,28843}'::integer[]))\n Buffers: shared hit=5 read=4\n -> Index Scan using a_parent_id_idx3 on a_partition3\n(cost=0.43..839.30 rows=899 width=12) (actual time=2.172..2.172 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{37066,41174,28843}'::integer[]))\n Buffers: shared hit=6 read=3\n -> Index Scan using a_parent_id_idx4 on a_partition4\n(cost=0.43..1041.11 rows=959 width=12) (actual time=1.822..1.822 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{37066,41174,28843}'::integer[]))\n Buffers: shared hit=6 read=3\n -> Index Scan using a_parent_id_idx5 on a_partition5\n(cost=0.43..1666.61 rows=1486 width=12) (actual time=1.777..1.777 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{37066,41174,28843}'::integer[]))\n Buffers: shared hit=6 read=3\n Planning time: 0.559 ms\n Execution time: 13.343 ms\n(25 rows)\n\n\nBut as soon as the number included in the filter goes up slightly (at about\n4), the query plan changes the indexes it uses in a way that makes it\nimpossibly slow. 
The below is only an EXPLAIN because the query times out:\n\nEXPLAIN\nSELECT \"a\".\"id\"\nFROM \"a\"\nWHERE \"a\".\"parent_id\" IN (\n 34226,24506,40987,27162\n)\nORDER BY \"a\".\"tmstmp\" DESC\nLIMIT 20;\n\n Limit (cost=2.22..8663.99 rows=20 width=12)\n -> Merge Append (cost=2.22..5055447.67 rows=11673 width=12)\n Sort Key: a_partition1.tmstmp DESC\n -> Index Scan Backward using a_tmstmp_idx1 on a_partition1\n(cost=0.43..1665521.55 rows=5940 width=12)\n Filter: (parent_id = ANY\n('{34226,24506,40987,27162}'::integer[]))\n -> Index Scan Backward using a_tmstmp_idx2 on a_partition2\n(cost=0.43..880517.20 rows=1274 width=12)\n Filter: (parent_id = ANY\n('{34226,24506,40987,27162}'::integer[]))\n -> Index Scan Backward using a_tmstmp_idx3 on a_partition3\n(cost=0.43..639224.73 rows=1199 width=12)\n Filter: (parent_id = ANY\n('{34226,24506,40987,27162}'::integer[]))\n -> Index Scan Backward using a_tmstmp_idx4 on a_partition4\n(cost=0.43..852881.68 rows=1278 width=12)\n Filter: (parent_id = ANY\n('{34226,24506,40987,27162}'::integer[]))\n -> Index Scan Backward using a_tmstmp_idx5 on a_partition5\n(cost=0.43..1017137.75 rows=1982 width=12)\n Filter: (parent_id = ANY\n('{34226,24506,40987,27162}'::integer[]))\n(13 rows)\n\n\nSomething about the estimated row counts (this problem persisted after I\ntried ANALYZEing) forces usage of the tmstmp index and Merge Append (which\nseems wise) but also a filter condition on parent_id over an index\ncondition, which is apparently prohibitively slow.\n\nI tried creating a multicolumn index like:\n\nCREATE INDEX \"a_partition1_parent_and_tmstmp_idx\" on \"a_partition2\" USING\nbtree (\"parent_id\", \"tmstmp\" DESC);\n\nBut this didn't help (it wasn't used).\n\nI also found that removing the LIMIT makes an acceptably fast (though\ndifferent) query plan which uses the parent_id instead instead of the\ntmstmp one:\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT \"a\".\"id\"\nFROM \"a\"\nWHERE \"a\".\"parent_id\" IN (\n 17942,32006,36733,9698,27948\n)\nORDER BY \"a\".\"tmstmp\" DESC;\n\n Sort (cost=17004.82..17041.29 rows=14591 width=12) (actual\ntime=109.650..109.677 rows=127 loops=1)\n Sort Key: a_partition1.tmstmp DESC\n Sort Method: quicksort Memory: 30kB\n Buffers: shared hit=60 read=142 written=12\n -> Append (cost=0.43..15995.64 rows=14591 width=12) (actual\ntime=9.206..109.504 rows=127 loops=1)\n Buffers: shared hit=60 read=142 written=12\n -> Index Scan using a_parent_id_idx1 on a_partition1\n(cost=0.43..8301.03 rows=7425 width=12) (actual time=9.205..11.116 rows=1\nloops=1)\n Index Cond: (parent_id = ANY\n('{17942,32006,36733,9698,27948}'::integer[]))\n Buffers: shared hit=10 read=6\n -> Index Scan using a_parent_id_idx2 on a_partition2\n(cost=0.43..1786.52 rows=1593 width=12) (actual time=7.116..76.000 rows=98\nloops=1)\n Index Cond: (parent_id = ANY\n('{17942,32006,36733,9698,27948}'::integer[]))\n Buffers: shared hit=15 read=97 written=10\n -> Index Scan using a_parent_id_idx3 on a_partition3\n(cost=0.43..1397.74 rows=1498 width=12) (actual time=3.160..3.160 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{17942,32006,36733,9698,27948}'::integer[]))\n Buffers: shared hit=10 read=5\n -> Index Scan using a_parent_id_idx4 on a_partition4\n(cost=0.43..1733.77 rows=1598 width=12) (actual time=1.975..16.960 rows=28\nloops=1)\n Index Cond: (parent_id = ANY\n('{17942,32006,36733,9698,27948}'::integer[]))\n Buffers: shared hit=14 read=30 written=2\n -> Index Scan using a_parent_id_idx5 on a_partition5\n(cost=0.43..2776.58 rows=2477 width=12) (actual 
time=2.155..2.155 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{17942,32006,36733,9698,27948}'::integer[]))\n Buffers: shared hit=11 read=4\n Planning time: 0.764 ms\n Execution time: 109.748 ms\n(23 rows)\n\n\nEventually, I stumbled upon these links:\n- https://stackoverflow.com/a/27237698\n-\nhttp://datamangling.com/2014/01/17/limit-1-and-performance-in-a-postgres-query/\n\nAnd from these, was able to partially resolve my issue by attaching an\nextra ORDER BY:\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT \"a\".\"id\"\nFROM \"a\" WHERE \"a\".\"parent_id\" IN (3909,26840,32181,22998,9632)\nORDER BY \"a\".\"tmstmp\" DESC,\n \"a\".\"id\" DESC\nLIMIT 20;\n\n Limit (cost=16383.91..16383.96 rows=20 width=12) (actual\ntime=29.804..29.808 rows=6 loops=1)\n Buffers: shared hit=39 read=42\n -> Sort (cost=16383.91..16420.38 rows=14591 width=12) (actual\ntime=29.803..29.804 rows=6 loops=1)\n Sort Key: a_partition1.tmstmp DESC, a_partition1.id DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=39 read=42\n -> Append (cost=0.43..15995.64 rows=14591 width=12) (actual\ntime=12.509..29.789 rows=6 loops=1)\n Buffers: shared hit=39 read=42\n -> Index Scan using a_parent_id_idx1 on a_partition1\n(cost=0.43..8301.03 rows=7425 width=12) (actual time=4.046..4.046 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{3909,26840,32181,22998,9632}'::integer[]))\n Buffers: shared hit=7 read=8\n -> Index Scan using a_parent_id_idx2 on a_partition2\n(cost=0.43..1786.52 rows=1593 width=12) (actual time=5.447..5.447 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{3909,26840,32181,22998,9632}'::integer[]))\n Buffers: shared hit=7 read=8\n -> Index Scan using a_parent_id_idx3 on a_partition3\n(cost=0.43..1397.74 rows=1498 width=12) (actual time=3.013..3.618 rows=1\nloops=1)\n Index Cond: (parent_id = ANY\n('{3909,26840,32181,22998,9632}'::integer[]))\n Buffers: shared hit=9 read=7\n -> Index Scan using a_parent_id_idx4 on a_partition4\n(cost=0.43..1733.77 rows=1598 width=12) (actual time=7.337..7.337 rows=0\nloops=1)\n Index Cond: (parent_id = ANY\n('{3909,26840,32181,22998,9632}'::integer[]))\n Buffers: shared hit=8 read=7\n -> Index Scan using a_parent_id_idx5 on a_partition5\n(cost=0.43..2776.58 rows=2477 width=12) (actual time=3.401..9.332 rows=5\nloops=1)\n Index Cond: (parent_id = ANY\n('{3909,26840,32181,22998,9632}'::integer[]))\n Buffers: shared hit=8 read=12\n Planning time: 0.601 ms\n Execution time: 29.851 ms\n(25 rows)\n\nThis query plan (which is the same as when LIMIT is removed) has been a\ngood short term solution when the number of \"parent_id\"s I'm using is still\nrelatively small, but unfortunately queries grow untenably slow as the\nnumber of \"parent_id\"s involved increases:\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT \"a\".\"id\"\nFROM \"a\"\nWHERE \"a\".\"parent_id\" IN 
(\n\n49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236\n)\nORDER BY \"a\".\"tmstmp\" DESC,\n \"a\".\"id\" DESC\nLIMIT 20;\n\n Limit (cost=641000.51..641000.56 rows=20 width=12) (actual\ntime=80813.649..80813.661 rows=20 loops=1)\n Buffers: shared hit=11926 read=93553 dirtied=2063 written=27769\n -> Sort (cost=641000.51..642534.60 rows=613634 width=12) (actual\ntime=80813.647..80813.652 rows=20 loops=1)\n Sort Key: a_partition1.tmstmp DESC, a_partition1.id DESC\n Sort Method: top-N heapsort Memory: 25kB\n Buffers: shared hit=11926 read=93553 dirtied=2063 written=27769\n -> Append (cost=0.43..624671.93 rows=613634 width=12) (actual\ntime=2.244..80715.314 rows=104279 loops=1)\n Buffers: shared hit=11926 read=93553 dirtied=2063\nwritten=27769\n -> Index Scan using a_parent_id_idx1 on a_partition1\n(cost=0.43..304861.89 rows=300407 width=12) (actual time=2.243..34766.236\nrows=46309 loops=1)\n Index Cond: (parent_id = ANY\n('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[]))\n Buffers: shared hit=9485 read=35740 dirtied=2033\nwritten=12713\n -> Index Scan using a_parent_id_idx2 on a_partition2\n(cost=0.43..80641.75 rows=75349 width=12) (actual time=8.280..12640.675\nrows=16628 loops=1)\n Index Cond: (parent_id = 
ANY\n('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[]))\n Buffers: shared hit=558 read=16625 dirtied=8\nwritten=6334\n -> Index Scan using a_parent_id_idx3 on a_partition3\n(cost=0.43..57551.91 rows=65008 width=12) (actual time=3.721..13759.664\nrows=12973 loops=1)\n Index Cond: (parent_id = ANY\n('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[]))\n Buffers: shared hit=592 read=12943 dirtied=2\nwritten=3136\n -> Index Scan using a_parent_id_idx4 on a_partition4\n(cost=0.43..70062.42 rows=67402 width=12) (actual time=5.999..5949.033\nrows=7476 loops=1)\n Index Cond: (parent_id = 
ANY\n('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[]))\n Buffers: shared hit=449 read=7665 dirtied=10\nwritten=1242\n -> Index Scan using a_parent_id_idx5 on a_partition5\n(cost=0.43..111553.96 rows=105468 width=12) (actual time=7.806..13519.564\nrows=20893 loops=1)\n Index Cond: (parent_id = ANY\n('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[]))\n Buffers: shared hit=842 read=20580 dirtied=10\nwritten=4344\n Planning time: 8.366 ms\n Execution time: 80813.714 ms\n(25 rows)\n\n\nThough it still finishes, I'd really like to speed this up, especially\nbecause there are many situations where I will be using many more than the\n200 \"parent_id\"s above.\n\n\nI'd be very grateful for help with one or both of these questions:\n1) Why is adding an unnecessary (from the perspective of result\ncorrectness) ORDER BY valuable for forcing the parent_id index usage, and\ndoes that indicate that there is something wrong with my\ntable/indexes/statistics/etc.?\n2) Is there any way I can improve my query time when there are many\n\"parent_id\"s involved? I seem to only be able to get the query plan to use\nat most one of the parent_id index and the tmstmp index at a time. 
Perhaps\nthe correct multicolumn index would help?\n\nA couple of notes (happy to be scolded for either of these things if\nthey're problematic):\n1) I've masked these queries somewhat for privacy reasons and cleaned up\nthe table \"a\" of some extra fields and indexes.\n2) I'm using different sets of \"parent_id\"s to try to overcome caching so\nthe BUFFERS aspect can be as accurate as possible.\n\nAs for other maybe relevant information:\n\n- version: 10.3\n- hardware is AWS so probably not huge issue.\n- work_mem is quite high (order of GBs)\n\n\nThanks in advance for your help!\n\nHi all,I'm having a good bit of trouble making production-ready a query I wouldn't have thought would be too challenging.Below is the relevant portions of my table, which is partitioned on values of part_key from 1-5. The number of rows is on the order of 10^7, and the number of unique \"parent_id\" values is on the order of 10^4. \"tmstmp\" is continuous (more or less). Neither parent_id nor tmstmp is nullable. The table receives a fair amount of INSERTS, and also a number of UPDATES.db=> \\d a Table \"public.a\" Column | Type | Collation | Nullable | Default-------------------------+--------------------------+-----------+----------+---------------------------------------------- id | integer | | | nextval('a_id_seq'::regclass) parent_id | integer | | not null | tmstmp | timestamp with time zone | | not null | part_key | integer | | |Partition key: LIST (part_key)Number of partitions: 5 (Use \\d+ to list them.)db=> \\d a_partition1 Table \"public.a_partition1\" Column | Type | Collation | Nullable | Default-------------------------+--------------------------+-----------+----------+---------------------------------------------- id | integer | | | nextval('a_id_seq'::regclass) parent_id | integer | | not null | tmstmp | timestamp with time zone | | not null | part_key | integer | | |Partition of: a FOR VALUES IN (1)Indexes: \"a_pk_idx1\" UNIQUE, btree (id) \"a_tmstmp_idx1\" btree (tmstmp) \"a_parent_id_idx1\" btree (parent_id)Check constraints: \"a_partition_check_part_key1\" CHECK (part_key = 1)Foreign-key constraints: \"a_partition_parent_id_fk_b_id1\" FOREIGN KEY (parent_id) REFERENCES b(id) DEFERRABLE INITIALLY DEFERREDdb=> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname like 'a_partition%'; relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size-----------------------------------+----------+-------------+---------------+---------+----------+----------------+------------+--------------- a_partition1 | 152146 | 873939 | 106863 | r | 26 | f | | 287197233152 a_partition2 | 669987 | 3.62268e+06 | 0 | r | 26 | f | | 161877745664 a_partition3 | 562069 | 2.94414e+06 | 213794 | r | 26 | f | | 132375994368 a_partition4 | 729880 | 3.95513e+06 | 69761 | r | 26 | f | | 188689047552 a_partition5 | 834132 | 4.9748e+06 | 52622 | r | 26 | f | | 218596630528(5 rows)I'm interested in filtering by parent_id (an indexed foreign-key relationship, though I think it shouldn't matter in this context), then sorting the result by a timestamp field (also indexed). 
I only need 20 at a time.The naive query works well for a very small number of ids (up to 3):EXPLAIN (ANALYZE, BUFFERS)SELECT \"a\".\"id\"FROM \"a\"WHERE \"a\".\"parent_id\" IN ( 37066,41174,28843)ORDER BY \"a\".\"tmstmp\" DESCLIMIT 20; Limit (cost=9838.23..9838.28 rows=20 width=12) (actual time=13.307..13.307 rows=0 loops=1) Buffers: shared hit=29 read=16 -> Sort (cost=9838.23..9860.12 rows=8755 width=12) (actual time=13.306..13.306 rows=0 loops=1) Sort Key: a_partition1.tmstmp DESC Sort Method: quicksort Memory: 25kB Buffers: shared hit=29 read=16 -> Append (cost=0.43..9605.26 rows=8755 width=12) (actual time=13.302..13.302 rows=0 loops=1) Buffers: shared hit=29 read=16 -> Index Scan using a_parent_id_idx1 on a_partition1 (cost=0.43..4985.45 rows=4455 width=12) (actual time=4.007..4.007 rows=0 loops=1) Index Cond: (parent_id = ANY ('{37066,41174,28843}'::integer[])) Buffers: shared hit=6 read=3 -> Index Scan using a_parent_id_idx2 on a_partition2 (cost=0.43..1072.79 rows=956 width=12) (actual time=3.521..3.521 rows=0 loops=1) Index Cond: (parent_id = ANY ('{37066,41174,28843}'::integer[])) Buffers: shared hit=5 read=4 -> Index Scan using a_parent_id_idx3 on a_partition3 (cost=0.43..839.30 rows=899 width=12) (actual time=2.172..2.172 rows=0 loops=1) Index Cond: (parent_id = ANY ('{37066,41174,28843}'::integer[])) Buffers: shared hit=6 read=3 -> Index Scan using a_parent_id_idx4 on a_partition4 (cost=0.43..1041.11 rows=959 width=12) (actual time=1.822..1.822 rows=0 loops=1) Index Cond: (parent_id = ANY ('{37066,41174,28843}'::integer[])) Buffers: shared hit=6 read=3 -> Index Scan using a_parent_id_idx5 on a_partition5 (cost=0.43..1666.61 rows=1486 width=12) (actual time=1.777..1.777 rows=0 loops=1) Index Cond: (parent_id = ANY ('{37066,41174,28843}'::integer[])) Buffers: shared hit=6 read=3 Planning time: 0.559 ms Execution time: 13.343 ms(25 rows)But as soon as the number included in the filter goes up slightly (at about 4), the query plan changes the indexes it uses in a way that makes it impossibly slow. 
The below is only an EXPLAIN because the query times out:EXPLAINSELECT \"a\".\"id\"FROM \"a\"WHERE \"a\".\"parent_id\" IN ( 34226,24506,40987,27162)ORDER BY \"a\".\"tmstmp\" DESCLIMIT 20; Limit (cost=2.22..8663.99 rows=20 width=12) -> Merge Append (cost=2.22..5055447.67 rows=11673 width=12) Sort Key: a_partition1.tmstmp DESC -> Index Scan Backward using a_tmstmp_idx1 on a_partition1 (cost=0.43..1665521.55 rows=5940 width=12) Filter: (parent_id = ANY ('{34226,24506,40987,27162}'::integer[])) -> Index Scan Backward using a_tmstmp_idx2 on a_partition2 (cost=0.43..880517.20 rows=1274 width=12) Filter: (parent_id = ANY ('{34226,24506,40987,27162}'::integer[])) -> Index Scan Backward using a_tmstmp_idx3 on a_partition3 (cost=0.43..639224.73 rows=1199 width=12) Filter: (parent_id = ANY ('{34226,24506,40987,27162}'::integer[])) -> Index Scan Backward using a_tmstmp_idx4 on a_partition4 (cost=0.43..852881.68 rows=1278 width=12) Filter: (parent_id = ANY ('{34226,24506,40987,27162}'::integer[])) -> Index Scan Backward using a_tmstmp_idx5 on a_partition5 (cost=0.43..1017137.75 rows=1982 width=12) Filter: (parent_id = ANY ('{34226,24506,40987,27162}'::integer[]))(13 rows)Something about the estimated row counts (this problem persisted after I tried ANALYZEing) forces usage of the tmstmp index and Merge Append (which seems wise) but also a filter condition on parent_id over an index condition, which is apparently prohibitively slow.I tried creating a multicolumn index like:CREATE INDEX \"a_partition1_parent_and_tmstmp_idx\" on \"a_partition2\" USING btree (\"parent_id\", \"tmstmp\" DESC);But this didn't help (it wasn't used).I also found that removing the LIMIT makes an acceptably fast (though different) query plan which uses the parent_id instead instead of the tmstmp one:EXPLAIN (ANALYZE, BUFFERS)SELECT \"a\".\"id\"FROM \"a\"WHERE \"a\".\"parent_id\" IN ( 17942,32006,36733,9698,27948)ORDER BY \"a\".\"tmstmp\" DESC; Sort (cost=17004.82..17041.29 rows=14591 width=12) (actual time=109.650..109.677 rows=127 loops=1) Sort Key: a_partition1.tmstmp DESC Sort Method: quicksort Memory: 30kB Buffers: shared hit=60 read=142 written=12 -> Append (cost=0.43..15995.64 rows=14591 width=12) (actual time=9.206..109.504 rows=127 loops=1) Buffers: shared hit=60 read=142 written=12 -> Index Scan using a_parent_id_idx1 on a_partition1 (cost=0.43..8301.03 rows=7425 width=12) (actual time=9.205..11.116 rows=1 loops=1) Index Cond: (parent_id = ANY ('{17942,32006,36733,9698,27948}'::integer[])) Buffers: shared hit=10 read=6 -> Index Scan using a_parent_id_idx2 on a_partition2 (cost=0.43..1786.52 rows=1593 width=12) (actual time=7.116..76.000 rows=98 loops=1) Index Cond: (parent_id = ANY ('{17942,32006,36733,9698,27948}'::integer[])) Buffers: shared hit=15 read=97 written=10 -> Index Scan using a_parent_id_idx3 on a_partition3 (cost=0.43..1397.74 rows=1498 width=12) (actual time=3.160..3.160 rows=0 loops=1) Index Cond: (parent_id = ANY ('{17942,32006,36733,9698,27948}'::integer[])) Buffers: shared hit=10 read=5 -> Index Scan using a_parent_id_idx4 on a_partition4 (cost=0.43..1733.77 rows=1598 width=12) (actual time=1.975..16.960 rows=28 loops=1) Index Cond: (parent_id = ANY ('{17942,32006,36733,9698,27948}'::integer[])) Buffers: shared hit=14 read=30 written=2 -> Index Scan using a_parent_id_idx5 on a_partition5 (cost=0.43..2776.58 rows=2477 width=12) (actual time=2.155..2.155 rows=0 loops=1) Index Cond: (parent_id = ANY ('{17942,32006,36733,9698,27948}'::integer[])) Buffers: shared hit=11 read=4 Planning time: 0.764 ms 
Execution time: 109.748 ms(23 rows)Eventually, I stumbled upon these links:- https://stackoverflow.com/a/27237698- http://datamangling.com/2014/01/17/limit-1-and-performance-in-a-postgres-query/And from these, was able to partially resolve my issue by attaching an extra ORDER BY:EXPLAIN (ANALYZE, BUFFERS)SELECT \"a\".\"id\"FROM \"a\" WHERE \"a\".\"parent_id\" IN (3909,26840,32181,22998,9632)ORDER BY \"a\".\"tmstmp\" DESC, \"a\".\"id\" DESCLIMIT 20; Limit (cost=16383.91..16383.96 rows=20 width=12) (actual time=29.804..29.808 rows=6 loops=1) Buffers: shared hit=39 read=42 -> Sort (cost=16383.91..16420.38 rows=14591 width=12) (actual time=29.803..29.804 rows=6 loops=1) Sort Key: a_partition1.tmstmp DESC, a_partition1.id DESC Sort Method: quicksort Memory: 25kB Buffers: shared hit=39 read=42 -> Append (cost=0.43..15995.64 rows=14591 width=12) (actual time=12.509..29.789 rows=6 loops=1) Buffers: shared hit=39 read=42 -> Index Scan using a_parent_id_idx1 on a_partition1 (cost=0.43..8301.03 rows=7425 width=12) (actual time=4.046..4.046 rows=0 loops=1) Index Cond: (parent_id = ANY ('{3909,26840,32181,22998,9632}'::integer[])) Buffers: shared hit=7 read=8 -> Index Scan using a_parent_id_idx2 on a_partition2 (cost=0.43..1786.52 rows=1593 width=12) (actual time=5.447..5.447 rows=0 loops=1) Index Cond: (parent_id = ANY ('{3909,26840,32181,22998,9632}'::integer[])) Buffers: shared hit=7 read=8 -> Index Scan using a_parent_id_idx3 on a_partition3 (cost=0.43..1397.74 rows=1498 width=12) (actual time=3.013..3.618 rows=1 loops=1) Index Cond: (parent_id = ANY ('{3909,26840,32181,22998,9632}'::integer[])) Buffers: shared hit=9 read=7 -> Index Scan using a_parent_id_idx4 on a_partition4 (cost=0.43..1733.77 rows=1598 width=12) (actual time=7.337..7.337 rows=0 loops=1) Index Cond: (parent_id = ANY ('{3909,26840,32181,22998,9632}'::integer[])) Buffers: shared hit=8 read=7 -> Index Scan using a_parent_id_idx5 on a_partition5 (cost=0.43..2776.58 rows=2477 width=12) (actual time=3.401..9.332 rows=5 loops=1) Index Cond: (parent_id = ANY ('{3909,26840,32181,22998,9632}'::integer[])) Buffers: shared hit=8 read=12 Planning time: 0.601 ms Execution time: 29.851 ms(25 rows)This query plan (which is the same as when LIMIT is removed) has been a good short term solution when the number of \"parent_id\"s I'm using is still relatively small, but unfortunately queries grow untenably slow as the number of \"parent_id\"s involved increases:EXPLAIN (ANALYZE, BUFFERS)SELECT \"a\".\"id\"FROM \"a\"WHERE \"a\".\"parent_id\" IN ( 
49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236)ORDER BY \"a\".\"tmstmp\" DESC, \"a\".\"id\" DESCLIMIT 20; Limit (cost=641000.51..641000.56 rows=20 width=12) (actual time=80813.649..80813.661 rows=20 loops=1) Buffers: shared hit=11926 read=93553 dirtied=2063 written=27769 -> Sort (cost=641000.51..642534.60 rows=613634 width=12) (actual time=80813.647..80813.652 rows=20 loops=1) Sort Key: a_partition1.tmstmp DESC, a_partition1.id DESC Sort Method: top-N heapsort Memory: 25kB Buffers: shared hit=11926 read=93553 dirtied=2063 written=27769 -> Append (cost=0.43..624671.93 rows=613634 width=12) (actual time=2.244..80715.314 rows=104279 loops=1) Buffers: shared hit=11926 read=93553 dirtied=2063 written=27769 -> Index Scan using a_parent_id_idx1 on a_partition1 (cost=0.43..304861.89 rows=300407 width=12) (actual time=2.243..34766.236 rows=46309 loops=1) Index Cond: (parent_id = ANY ('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[])) Buffers: shared hit=9485 read=35740 dirtied=2033 written=12713 -> Index Scan using a_parent_id_idx2 on a_partition2 (cost=0.43..80641.75 rows=75349 width=12) (actual time=8.280..12640.675 rows=16628 loops=1) Index Cond: (parent_id = ANY 
('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[])) Buffers: shared hit=558 read=16625 dirtied=8 written=6334 -> Index Scan using a_parent_id_idx3 on a_partition3 (cost=0.43..57551.91 rows=65008 width=12) (actual time=3.721..13759.664 rows=12973 loops=1) Index Cond: (parent_id = ANY ('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[])) Buffers: shared hit=592 read=12943 dirtied=2 written=3136 -> Index Scan using a_parent_id_idx4 on a_partition4 (cost=0.43..70062.42 rows=67402 width=12) (actual time=5.999..5949.033 rows=7476 loops=1) Index Cond: (parent_id = ANY 
('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[])) Buffers: shared hit=449 read=7665 dirtied=10 written=1242 -> Index Scan using a_parent_id_idx5 on a_partition5 (cost=0.43..111553.96 rows=105468 width=12) (actual time=7.806..13519.564 rows=20893 loops=1) Index Cond: (parent_id = ANY ('{49694,35150,48216,45743,49529,47239,40752,46735,7692,33855,25062,47821,6947,9735,12611,33190,9523,39200,50056,14912,9624,35552,35058,35365,47363,30358,853,21719,20568,16211,49372,6087,16111,38337,38891,10792,4556,27171,37731,38587,40402,26109,1477,36932,12191,5459,49307,21132,8697,4131,47869,49246,30447,23795,14389,19743,1369,15689,1820,7826,50623,2179,28090,46430,40117,32603,6886,35318,1026,6991,21360,50370,21721,33558,39162,49753,17974,45599,23256,8483,20864,33426,990,10068,38,13186,19338,43727,12319,50658,19243,6267,12498,12214,542,43339,5933,11376,49748,9335,22248,14763,13375,48554,50595,27006,43481,2805,49012,20000,849,27214,38576,36449,39854,13708,26841,3837,39971,5090,35680,49468,24738,27145,15380,40463,1250,22007,20962,8747,1383,45856,20025,35346,17121,3387,42172,5340,13004,11554,4607,16991,45034,20212,48020,19356,41234,30633,33657,1508,30660,35022,41408,19213,32748,48274,7335,50376,36496,1217,5701,37507,23706,25798,15126,663,7699,36665,18675,32723,17437,24179,15565,26280,50598,44490,44275,24679,34843,50,11995,50661,5002,47661,14505,32048,46696,10699,13889,8921,4567,7012,832,2401,5055,33306,9385,50169,49990,42236}'::integer[])) Buffers: shared hit=842 read=20580 dirtied=10 written=4344 Planning time: 8.366 ms Execution time: 80813.714 ms(25 rows)Though it still finishes, I'd really like to speed this up, especially because there are many situations where I will be using many more than the 200 \"parent_id\"s above.I'd be very grateful for help with one or both of these questions:1) Why is adding an unnecessary (from the perspective of result correctness) ORDER BY valuable for forcing the parent_id index usage, and does that indicate that there is something wrong with my table/indexes/statistics/etc.?2) Is there any way I can improve my query time when there are many \"parent_id\"s involved? I seem to only be able to get the query plan to use at most one of the parent_id index and the tmstmp index at a time. 
Perhaps the correct multicolumn index would help?\n\nA couple of notes (happy to be scolded for either of these things if they're problematic):\n1) I've masked these queries somewhat for privacy reasons and cleaned up the table \"a\" of some extra fields and indexes.\n2) I'm using different sets of \"parent_id\"s to try to overcome caching so the BUFFERS aspect can be as accurate as possible.\n\nAs for other maybe relevant information:\n- version: 10.3\n- hardware is AWS so probably not a huge issue.\n- work_mem is quite high (order of GBs)\n\nThanks in advance for your help!",
"msg_date": "Tue, 10 Jul 2018 11:07:42 -0400",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving Performance of Query ~ Filter by A, Sort by B"
},
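A minimal sketch (not part of the thread) of the kind of per-partition multicolumn index the question above is asking about. The table and column names (a_partition1, parent_id, tmstmp, id) come from the thread; the index name is invented, one such index would be needed on each partition, and whether the planner actually chooses it depends on the row-count estimates discussed later in the thread.

-- Composite index matching both the filter column and the sort keys.
-- For a single parent_id the newest rows can be read straight off the
-- index in (tmstmp DESC, id DESC) order; for an IN list the planner
-- still has to merge or sort across the matched parent_ids before LIMIT 20.
CREATE INDEX a_partition1_parent_tmstmp_id_idx
    ON a_partition1 USING btree (parent_id, tmstmp DESC, id DESC);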
{
"msg_contents": "Hello,\n\nI have tested it with release 11 and limit 20 is pushed to each partition\nwhen using index on tmstmp.\n\nCould you tell us what is the result of your query applyed to one partition \n\nEXPLAIN ANALYZE\nSELECT \"a\".\"id\"\nFROM a_partition1 \"a\"\nWHERE \"a\".\"parent_id\" IN (\n 34226,24506,40987,27162\n)\nORDER BY \"a\".\"tmstmp\" DESC\nLIMIT 20;\n\nMay be that limit 20 is not pushed to partitions in your version ?\nRegards\nPAscal\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Wed, 11 Jul 2018 14:41:55 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving Performance of Query ~ Filter by A, Sort by B"
},
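One way (a sketch, not something done in the thread) to compare the two plan shapes on a single partition is to penalize plain index scans for one run, so the planner will typically fall back to a bitmap scan on the parent_id index instead of the backward walk of the tmstmp index. enable_indexscan is a standard planner setting, SET LOCAL keeps the change inside the transaction, and the parent_id values are the ones from the query above.

-- Session-local experiment: discourage plain index scans for one EXPLAIN run.
BEGIN;
SET LOCAL enable_indexscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT "a"."id"
FROM a_partition1 "a"
WHERE "a"."parent_id" IN (34226, 24506, 40987, 27162)
ORDER BY "a"."tmstmp" DESC
LIMIT 20;
ROLLBACK;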
{
"msg_contents": "Thanks for looking into this!\n\nHere's the result (I turned off the timeout and got it to finish):\n\nEXPLAIN ANALYZE\nSELECT \"a\".\"id\"\nFROM a_partition1 \"a\"\nWHERE \"a\".\"parent_id\" IN (\n 49188,14816,14758,8402\n)\nORDER BY \"a\".\"tmstmp\" DESC\nLIMIT 20;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..5710.03 rows=20 width=12) (actual\ntime=1141878.105..1142350.296 rows=20 loops=1)\n -> Index Scan Backward using a_tmstmp_idx1 on a_partition1 a\n(cost=0.43..1662350.21 rows=5823 width=12) (actual\ntime=1141878.103..1142350.274 rows=20 loops=1)\n Filter: (parent_id = ANY ('{49188,14816,14758,8402}'::integer[]))\n Rows Removed by Filter: 7931478\n Planning time: 0.122 ms\n Execution time: 1142350.336 ms\n(6 rows)\n(Note: I've chosen parent_ids that I know are associated with the part_key\n1, but the query plan was the same with the 4 parent_ids in your query.)\n\nLooks like it's using the filter in the same way as the query on the parent\ntable, so seems be a problem beyond the partitioning.\n\nAnd as soon as I cut it back to 3 parent_ids, jumps to a query plan\nusing a_parent_id_idx1\nagain:\n\nEXPLAIN ANALYZE\nSELECT \"a\".\"id\"\nFROM a_partition1 \"a\"\nWHERE \"a\".\"parent_id\" IN (\n 19948,21436,41220\n)\nORDER BY \"a\".\"tmstmp\" DESC\nLIMIT 20;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5004.57..5004.62 rows=20 width=12) (actual\ntime=36.329..36.341 rows=20 loops=1)\n -> Sort (cost=5004.57..5015.49 rows=4367 width=12) (actual\ntime=36.328..36.332 rows=20 loops=1)\n Sort Key: tmstmp DESC\n Sort Method: top-N heapsort Memory: 26kB\n -> Index Scan using a_parent_id_idx1 on a_partition1 a\n(cost=0.43..4888.37 rows=4367 width=12) (actual time=5.581..36.270 rows=50\nloops=1)\n Index Cond: (parent_id = ANY\n('{19948,21436,41220}'::integer[]))\n Planning time: 0.117 ms\n Execution time: 36.379 ms\n(8 rows)\n\n\nThanks again for your help!\n\n\n\n\nOn Wed, Jul 11, 2018 at 5:41 PM, legrand legrand <\[email protected]> wrote:\n\n> Hello,\n>\n> I have tested it with release 11 and limit 20 is pushed to each partition\n> when using index on tmstmp.\n>\n> Could you tell us what is the result of your query applyed to one\n> partition\n>\n> EXPLAIN ANALYZE\n> SELECT \"a\".\"id\"\n> FROM a_partition1 \"a\"\n> WHERE \"a\".\"parent_id\" IN (\n> 34226,24506,40987,27162\n> )\n> ORDER BY \"a\".\"tmstmp\" DESC\n> LIMIT 20;\n>\n> May be that limit 20 is not pushed to partitions in your version ?\n> Regards\n> PAscal\n>\n>\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-\n> f2050081.html\n>\n>\n\n\n-- \nLincoln Swaine-Moore\n\nThanks for looking into this!Here's the result (I turned off the timeout and got it to finish):EXPLAIN ANALYZESELECT \"a\".\"id\"FROM a_partition1 \"a\"WHERE \"a\".\"parent_id\" IN ( 49188,14816,14758,8402)ORDER BY \"a\".\"tmstmp\" DESCLIMIT 20; QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.43..5710.03 rows=20 width=12) (actual 
time=1141878.105..1142350.296 rows=20 loops=1) -> Index Scan Backward using a_tmstmp_idx1 on a_partition1 a (cost=0.43..1662350.21 rows=5823 width=12) (actual time=1141878.103..1142350.274 rows=20 loops=1) Filter: (parent_id = ANY ('{49188,14816,14758,8402}'::integer[])) Rows Removed by Filter: 7931478 Planning time: 0.122 ms Execution time: 1142350.336 ms(6 rows)(Note: I've chosen parent_ids that I know are associated with the part_key 1, but the query plan was the same with the 4 parent_ids in your query.)Looks like it's using the filter in the same way as the query on the parent table, so seems be a problem beyond the partitioning.And as soon as I cut it back to 3 parent_ids, jumps to a query plan using a_parent_id_idx1 again:EXPLAIN ANALYZESELECT \"a\".\"id\"FROM a_partition1 \"a\"WHERE \"a\".\"parent_id\" IN ( 19948,21436,41220)ORDER BY \"a\".\"tmstmp\" DESCLIMIT 20; QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=5004.57..5004.62 rows=20 width=12) (actual time=36.329..36.341 rows=20 loops=1) -> Sort (cost=5004.57..5015.49 rows=4367 width=12) (actual time=36.328..36.332 rows=20 loops=1) Sort Key: tmstmp DESC Sort Method: top-N heapsort Memory: 26kB -> Index Scan using a_parent_id_idx1 on a_partition1 a (cost=0.43..4888.37 rows=4367 width=12) (actual time=5.581..36.270 rows=50 loops=1) Index Cond: (parent_id = ANY ('{19948,21436,41220}'::integer[])) Planning time: 0.117 ms Execution time: 36.379 ms(8 rows)Thanks again for your help!On Wed, Jul 11, 2018 at 5:41 PM, legrand legrand <[email protected]> wrote:Hello,\n\nI have tested it with release 11 and limit 20 is pushed to each partition\nwhen using index on tmstmp.\n\nCould you tell us what is the result of your query applyed to one partition \n\nEXPLAIN ANALYZE\nSELECT \"a\".\"id\"\nFROM a_partition1 \"a\"\nWHERE \"a\".\"parent_id\" IN (\n 34226,24506,40987,27162\n)\nORDER BY \"a\".\"tmstmp\" DESC\nLIMIT 20;\n\nMay be that limit 20 is not pushed to partitions in your version ?\nRegards\nPAscal\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n-- Lincoln Swaine-Moore",
"msg_date": "Wed, 11 Jul 2018 19:31:46 -0400",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving Performance of Query ~ Filter by A, Sort by B"
},
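A small diagnostic sketch (not from the thread) that makes the estimation gap explicit: plain EXPLAIN reports only the planner's guess, while a count over the same IN list gives the real selectivity. For the three-value list in the plan above, the estimate was 4367 rows against an actual 50.

EXPLAIN
SELECT id FROM a_partition1
WHERE parent_id IN (19948, 21436, 41220);   -- planner's estimate: ~4367 rows

SELECT count(*) FROM a_partition1
WHERE parent_id IN (19948, 21436, 41220);   -- actual matches: 50, per the plan above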
{
"msg_contents": "Lincoln Swaine-Moore <[email protected]> writes:\n> Here's the result (I turned off the timeout and got it to finish):\n> ...\n\nI think the core of the problem here is bad rowcount estimation. We can't\ntell from your output how many rows really match\n\n> WHERE \"a\".\"parent_id\" IN (\n> 49188,14816,14758,8402\n> )\n\nbut the planner is guessing there are 5823 of them. In the case with\nonly three IN items, we have\n\n> -> Index Scan using a_parent_id_idx1 on a_partition1 a (cost=0.43..4888.37 rows=4367 width=12) (actual time=5.581..36.270 rows=50 loops=1)\n> Index Cond: (parent_id = ANY ('{19948,21436,41220}'::integer[]))\n\nso the planner thinks there are 4367 matching rows but there are only 50.\nAnytime you've got a factor-of-100 estimation error, you're going to be\nreally lucky if you get a decent plan.\n\nI suggest increasing the statistics target for the parent_id column\nin hopes of getting better estimates for the number of matches.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 11 Jul 2018 23:10:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving Performance of Query ~ Filter by A, Sort by B"
},
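A sketch of the statistics-target change suggested here, using the table names from the thread. The value 2000 is only illustrative, ANALYZE has to run afterwards for it to take effect, and depending on the setup the same change may need to be applied to each partition as well (shown explicitly for one partition); raising default_statistics_target globally is the alternative.

-- Per-column statistics target (illustrative value), then refresh statistics.
ALTER TABLE a ALTER COLUMN parent_id SET STATISTICS 2000;
ALTER TABLE a_partition1 ALTER COLUMN parent_id SET STATISTICS 2000;
-- ...repeat for the remaining partitions...
ANALYZE a;
ANALYZE a_partition1;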
{
"msg_contents": "On Tue, Jul 10, 2018 at 11:07 AM, Lincoln Swaine-Moore <\[email protected]> wrote:\n\n>\n>\n>\n> Something about the estimated row counts (this problem persisted after I\n> tried ANALYZEing)\n>\n\nWhat is your default_statistics_target? What can you tell us about the\ndistribution of parent_id? (exponential, power law, etc?). Can you show\nthe results for select * from pg_stats where tablename='a' and\nattname='parent_id' \\x\\g\\x ?\n\n\n> forces usage of the tmstmp index and Merge Append (which seems wise) but\n> also a filter condition on parent_id over an index condition, which is\n> apparently prohibitively slow.\n>\n> I tried creating a multicolumn index like:\n>\n> CREATE INDEX \"a_partition1_parent_and_tmstmp_idx\" on \"a_partition2\" USING\n> btree (\"parent_id\", \"tmstmp\" DESC);\n>\n> But this didn't help (it wasn't used).\n>\n\nYou could try reversing the order and adding a column to be (tmstmp,\nparent_id, id) and keeping the table well vacuumed. This would allow the\nslow plan to still walk the indexes in tmstmp order but do it as an\nindex-only scan, so it could omit the extra trip to the table. That trip to\nthe table must be awfully slow to explain the numbers you show later in the\nthread.\n\n...\n\n\n> This query plan (which is the same as when LIMIT is removed) has been a\n> good short term solution when the number of \"parent_id\"s I'm using is still\n> relatively small, but unfortunately queries grow untenably slow as the\n> number of \"parent_id\"s involved increases:\n>\n\nWhat happens when you remove that extra order by phrase that you added?\nThe original slow plan should become much faster when the number of\nparent_ids is large (it has to dig through fewer index entries before\naccumulating 20 good ones), so you should try going back to that.\n\n...\n\n\n> I'd be very grateful for help with one or both of these questions:\n> 1) Why is adding an unnecessary (from the perspective of result\n> correctness) ORDER BY valuable for forcing the parent_id index usage, and\n> does that indicate that there is something wrong with my\n> table/indexes/statistics/etc.?\n>\n\nIt finds the indexes on tmstmp to be falsely attractive, as it can walk in\ntmstmp order and so avoid the sort. (Really not the sort itself, but the\nfact that sort has to first read every row to be sorted, while walking an\nindex can abort once the LIMIT is satisfied). Adding an extra phrase to\nthe ORDER BY means the index is no longer capable of delivering rows in the\nneeded order, so it no longer looks falsely attractive. The same thing\ncould be obtained by doing a dummy operation, such as ORDER BY tmstmp + '0\nseconds' DESC. I prefer that method, as it is more obviously a tuning\ntrick. Adding in \"id\" looks more like a legitimate desire to break any\nties that might occasionally occur in tmstmp.\n\nAs Tom pointed out, there clearly is something wrong with your statistics,\nalthough we don't know what is causing it to go wrong. Fixing the\nstatistics isn't guaranteed to fix the problem, but it would be a good\nstart.\n\n\n\n\n> 2) Is there any way I can improve my query time when there are many\n> \"parent_id\"s involved? I seem to only be able to get the query plan to use\n> at most one of the parent_id index and the tmstmp index at a time. 
Perhaps\n> the correct multicolumn index would help?\n>\n\nA few things mentioned above might help. \n\nBut if they don't, is there any chance you could redesign your partitioning\nso that all parent_id queries together will always be in the same\npartition? And if not, could you just get rid of the partitioning\naltogether? 1e7 row is not all that many and doesn't generally need\npartitioning. Unless it is serving a specific purpose, it is probably\ncosting you more than you are getting.\n\nFinally, could you rewrite it as a join to a VALUES list, rather than as an\nin-list?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Sat, 14 Jul 2018 23:25:21 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving Performance of Query ~ Filter by A, Sort by B"
},
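Hedged sketches (not from the thread itself) of the three suggestions above, reusing the thread's table and column names; the index and alias names are invented and the parent_id values are the small set used earlier in the thread.

-- 1) Covering index led by the sort column, one per partition, so the
--    tmstmp-ordered walk can become an index-only scan on a well-vacuumed table:
CREATE INDEX a_partition1_tmstmp_parent_id_id_idx
    ON a_partition1 USING btree (tmstmp DESC, parent_id, id);

-- 2) Dummy expression in the ORDER BY, so the tmstmp index no longer looks
--    capable of delivering pre-sorted output:
SELECT "a"."id"
FROM "a"
WHERE "a"."parent_id" IN (34226, 24506, 40987, 27162)
ORDER BY "a"."tmstmp" + interval '0 seconds' DESC
LIMIT 20;

-- 3) The IN list rewritten as a join against a VALUES list:
SELECT "a"."id"
FROM "a"
JOIN (VALUES (34226), (24506), (40987), (27162)) AS v(parent_id)
  ON "a"."parent_id" = v.parent_id
ORDER BY "a"."tmstmp" DESC
LIMIT 20;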
{
"msg_contents": "Tom and Jeff,\n\nThanks very much for the suggestions!\n\nHere's what I've found so far after playing around for a few more days:\n\nWhat is your default_statistics_target? What can you tell us about the\n> distribution of parent_id? (exponential, power law, etc?). Can you show\n> the results for select * from pg_stats where tablename='a' and\n> attname='parent_id' \\x\\g\\x ?\n\n\nThe default_statistics_target is 500, which I agree seems quite\ninsufficient for these purposes. I bumped this up to 2000, and saw some\nimprovement in the row count estimation, but pretty much the same query\nplans. Unfortunately the distribution of counts is not intended to be\ncorrelated to parent_id, which is one reason I imagine the histograms might\nnot be particularly effective unless theres one bucket for every value.\nHere is the output you requested:\n\nselect * from pg_stats where tablename='a' and attname='parent_id';\n\nschemaname | public\ntablename | a\nattname | parent_id\ninherited | t\nnull_frac | 0\navg_width | 4\nn_distinct | 18871\nmost_common_vals | {15503,49787,49786,24595,49784,17549, ...} (2000\nvalues)\nmost_common_freqs |\n{0.0252983,0.02435,0.0241317,0.02329,0.019095,0.0103967,0.00758833,0.004245,\n...} (2000 values)\nhistogram_bounds |\n{2,12,17,24,28,36,47,59,74,80,86,98,108,121,135,141,147,160,169,177,190,204,\n...} (2001 values)\ncorrelation | -0.161576\nmost_common_elems |\nmost_common_elem_freqs |\nelem_count_histogram |\n\n\nInterestingly, the number of elements in these most_common_vals is as\nexpected (2000) for the parent table, but it's lower for the partition\ntables, despite the statistics level being the same.\n\nSELECT attstattarget\nFROM pg_attribute\nWHERE attrelid in ('a_partition1'::regclass, 'a'::regclass)\nAND attname = 'parent_id';\n-[ RECORD 1 ]-+-----\nattstattarget | 2000\n-[ RECORD 2 ]-+-----\nattstattarget | 2000\n\n\nselect * from pg_stats where tablename='a_partition1' and\nattname='parent_id';\n\nschemaname | public\ntablename | a_partition1\nattname | parent_id\ninherited | f\nnull_frac | 0\navg_width | 4\nn_distinct | 3969\nmost_common_vals |\n{15503,49787,49786,24595,49784,10451,20136,17604,9683, ...} (400-ish values)\nmost_common_freqs |\n{0.0807067,0.0769483,0.0749433,0.073565,0.0606433,0.0127917,0.011265,0.0112367,\n...} (400-ish values)\nhistogram_bounds | {5,24,27,27,33,38,41,69,74,74, ...} (1500-ish\nvalues)\ncorrelation | 0.402414\nmost_common_elems |\nmost_common_elem_freqs |\nelem_count_histogram |\n\nA few questions re: statistics:\n 1) would it be helpful to bump column statistics to, say, 20k (the number\nof distinct values of parent_id)?\n 2) is the discrepancy between the statistics on the parent and child table\nbe expected? certainly I would think that the statistics would be\ndifferent, but I would've imagined they would have histograms of the same\nsize given the settings being the same.\n 3) is there a way to manually specify the the distribution of rows to be\neven? that is, set the frequency of each value to be ~ n_rows/n_distinct.\nThis isn't quite accurate, but is a reasonable assumption about the\ndistribution, and might generate better query plans.\n\nThe same thing could be obtained by doing a dummy operation, such as ORDER\n> BY tmstmp + '0 seconds' DESC. I prefer that method, as it is more\n> obviously a tuning trick. Adding in \"id\" looks more like a legitimate\n> desire to break any ties that might occasionally occur in tmstmp.\n\n\nI 100% agree that that is more clear. 
Thanks for the suggestion!\n\nFinally, could you rewrite it as a join to a VALUES list, rather than as an\n> in-list?\n\n\nI should've mentioned this in my initial post, but I tried doing this\nwithout much success.\n\nYou could try reversing the order and adding a column to be (tmstmp,\n> parent_id, id) and keeping the table well vacuumed. This would allow the\n> slow plan to still walk the indexes in tmstmp order but do it as an\n> index-only scan, so it could omit the extra trip to the table. That trip to\n> the table must be awfully slow to explain the numbers you show later in the\n> thread.\n\n\nJust to clarify, do you mean building indexes like:\nCREATE INDEX \"a_tmstmp_parent_id_id_idx_[PART_KEY]\" on\n\"a_partition[PART_KEY]\" USING btree(\"tmstmp\", \"parent_id\", \"id\")\nThat seems promising! Is the intuition here that we want the first key of\nthe index to be the one we are ultimately ordering by? Sounds like I make\nhave had that flipped initially. My understanding of this whole situation\n(and please do correct me if this doesn't make sense) is the big bottleneck\nhere is reading pages from disk (when looking at stopped up queries, the\nwait_event is DataFileRead), and so anything that can be done to minimize\nthe pages read will be valuable. Which is why I would love to get the query\nplan to use the tmstmp index without having to filter thereafter by\nparent_id.\n\nWhat happens when you remove that extra order by phrase that you added?\n> The original slow plan should become much faster when the number of\n> parent_ids is large (it has to dig through fewer index entries before\n> accumulating 20 good ones), so you should try going back to that.\n\n\nUnfortunately, I've found that even when the number of parent_ids is large\n(2000), it's still prohibitively slow (haven't got it to finish), and\nmaintains a query plan that involves an Index Scan Backward across the\na_tmstmp_idxs (with a filter for parent_id).\n\nAnd if not, could you just get rid of the partitioning altogether? 1e7 row\n> is not all that many and doesn't generally need partitioning. Unless it is\n> serving a specific purpose, it is probably costing you more than you are\n> getting.\n\n\nUnfortunately, the partitioning is serving a specific purpose (to avoid\nsome writing overhead, it's useful to be able to drop GIN indexes on one\npartition at a time during heavy writes). But given that the queries are\nslow on a single partition anyway, I suspect the partitioning isn't the\nmain problem here.\n\nBut if they don't, is there any chance you could redesign your partitioning\n> so that all parent_id queries together will always be in the same partition?\n\n\nIn effect, the parent_ids are distributed by part_key, so I've found that\nat least a stopgap solution is manually splitting apart the parent_ids by\npartition when generating the query. 
This has given the following query\nplan, which still doesn't use the tmstmp index (which would be ideal and to\nmy understanding allow proper use of the LIMIT to minimize reads), but it\ndoes make things an order of magnitude faster by forcing the use of Bitmap.\nApologies for the giant output; this approach balloons the size of the\nquery itself.\n\nSELECT \"a\".\"id\"\nFROM \"a\" WHERE (\n (\"a\".\"parent_id\" IN (\n\n14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100\n )\n AND\n \"a\".\"part_key\" = 1\n )\n OR (\n \"a\".\"parent_id\" IN (\n\n50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348\n )\n AND\n \"a\".\"part_key\" = 2\n )\n OR (\n \"a\".\"parent_id\" IN (\n\n33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156\n )\n AND\n \"a\".\"part_key\" = 3\n )\n OR (\n \"a\".\"parent_id\" IN (\n\n42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071\n )\n AND\n \"a\".\"part_key\" = 4\n )\n OR (\n \"a\".\"parent_id\" IN (\n\n11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765\n )\n AND\n \"a\".\"part_key\" = 5\n )\n)\nORDER BY\n\"a\".\"tmstmp\" DESC, \"a\".\"id\" DESC\nLIMIT 20;\n\nLimit (cost=449977.86..449977.91 rows=20 width=1412) (actual\ntime=8967.465..8967.477 rows=20 loops=1)\n Output: a_partition1.id, a_partition1.tmstmp\n Buffers: shared hit=1641 read=125625 written=13428\n -> Sort (cost=449977.86..450397.07 rows=167683 width=1412) (actual\ntime=8967.464..8967.468 rows=20 loops=1)\n Output: a_partition1.id, a_partition1.tmstmp\n Sort Key: a_partition1.tmstmp DESC, a_partition1.id DESC\n Sort Method: top-N heapsort Memory: 85kB\n Buffers: shared hit=1641 read=125625 written=13428\n -> Append (cost=2534.33..445515.88 rows=167683 width=1412)\n(actual time=1231.579..8756.610 rows=145077 loops=1)\n Buffers: shared hit=1641 read=125625 written=13428\n -> Bitmap Heap Scan on public.a_partition1\n(cost=2534.33..246197.54 rows=126041 width=1393) (actual\ntime=1231.578..4414.027 rows=115364 loops=1)\n Output: a_partition1.id, a_partition1.tmstmp\n Recheck Cond: ((a_partition1.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nOR (a_partition1.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nOR (a_partition1.parent_id = 
ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nOR (a_partition1.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nOR (a_partition1.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[])))\n Filter: (((a_partition1.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nAND (a_partition1.part_key = 1)) OR ((a_partition1.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nAND (a_partition1.part_key = 2)) OR ((a_partition1.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nAND (a_partition1.part_key = 3)) OR ((a_partition1.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nAND (a_partition1.part_key = 4)) OR ((a_partition1.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\nAND (a_partition1.part_key = 5)))\n Heap Blocks: exact=93942\n Buffers: shared hit=397 read=94547 written=6930\n -> BitmapOr (cost=2534.33..2534.33 rows=192032\nwidth=0) (actual time=1214.569..1214.569 rows=0 loops=1)\n Buffers: shared hit=397 read=605 written=10\n -> Bitmap Index Scan on a_parent_id_idx1\n(cost=0.00..1479.43 rows=126041 width=0) (actual time=1091.952..1091.952\nrows=115364 loops=1)\n Index Cond: (a_partition1.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\n Buffers: shared hit=82 read=460 written=8\n -> Bitmap Index Scan on a_parent_id_idx1\n(cost=0.00..193.55 rows=14233 width=0) (actual time=26.911..26.911 rows=0\nloops=1)\n Index Cond: (a_partition1.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\n Buffers: 
shared hit=66 read=33\n -> Bitmap Index Scan on a_parent_id_idx1\n(cost=0.00..275.65 rows=20271 width=0) (actual time=41.874..41.874 rows=0\nloops=1)\n Index Cond: (a_partition1.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\n Buffers: shared hit=97 read=45 written=1\n -> Bitmap Index Scan on a_parent_id_idx1\n(cost=0.00..152.49 rows=11214 width=0) (actual time=23.542..23.542 rows=0\nloops=1)\n Index Cond: (a_partition1.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\n Buffers: shared hit=52 read=26 written=1\n -> Bitmap Index Scan on a_parent_id_idx1\n(cost=0.00..275.65 rows=20271 width=0) (actual time=30.271..30.271 rows=0\nloops=1)\n Index Cond: (a_partition1.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\n Buffers: shared hit=100 read=41\n -> Bitmap Heap Scan on public.a_partition2\n(cost=850.51..78634.20 rows=19852 width=1485) (actual\ntime=316.458..2166.105 rows=16908 loops=1)\n Output: a_partition2.id, a_partition1.tmstmp\n Recheck Cond: ((a_partition2.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nOR (a_partition2.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nOR (a_partition2.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nOR (a_partition2.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nOR (a_partition2.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[])))\n Filter: (((a_partition2.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nAND (a_partition2.part_key = 1)) OR ((a_partition2.parent_id = 
ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nAND (a_partition2.part_key = 2)) OR ((a_partition2.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nAND (a_partition2.part_key = 3)) OR ((a_partition2.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nAND (a_partition2.part_key = 4)) OR ((a_partition2.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\nAND (a_partition2.part_key = 5)))\n Heap Blocks: exact=17769\n Buffers: shared hit=402 read=18034 written=3804\n -> BitmapOr (cost=850.51..850.51 rows=59567 width=0)\n(actual time=313.191..313.191 rows=0 loops=1)\n Buffers: shared hit=402 read=265 written=40\n -> Bitmap Index Scan on a_parent_id_idx2\n(cost=0.00..155.81 rows=11177 width=0) (actual time=65.671..65.671 rows=0\nloops=1)\n Index Cond: (a_partition2.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\n Buffers: shared hit=84 read=57 written=11\n -> Bitmap Index Scan on a_parent_id_idx2\n(cost=0.00..272.08 rows=19852 width=0) (actual time=116.974..116.974\nrows=18267 loops=1)\n Index Cond: (a_partition2.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\n Buffers: shared hit=68 read=98 written=18\n -> Bitmap Index Scan on a_parent_id_idx2\n(cost=0.00..155.81 rows=11177 width=0) (actual time=58.915..58.915 rows=0\nloops=1)\n Index Cond: (a_partition2.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\n Buffers: shared hit=100 read=41 written=5\n -> Bitmap Index Scan on a_parent_id_idx2\n(cost=0.00..86.19 rows=6183 width=0) (actual time=25.370..25.370 rows=0\nloops=1)\n Index Cond: (a_partition2.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\n Buffers: shared hit=53 read=25 written=3\n -> Bitmap Index Scan on a_parent_id_idx2\n(cost=0.00..155.81 rows=11177 width=0) (actual time=46.254..46.254 rows=0\nloops=1)\n Index Cond: (a_partition2.parent_id = 
ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\n Buffers: shared hit=97 read=44 written=3\n -> Bitmap Heap Scan on public.a_partition3\n(cost=692.99..56079.33 rows=13517 width=1467) (actual\ntime=766.172..1555.761 rows=7594 loops=1)\n Output: a_partition3.id, a_partition1.tmstmp\n Recheck Cond: ((a_partition3.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nOR (a_partition3.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nOR (a_partition3.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nOR (a_partition3.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nOR (a_partition3.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[])))\n Filter: (((a_partition3.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nAND (a_partition3.part_key = 1)) OR ((a_partition3.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nAND (a_partition3.part_key = 2)) OR ((a_partition3.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nAND (a_partition3.part_key = 3)) OR ((a_partition3.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nAND (a_partition3.part_key = 4)) OR ((a_partition3.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\nAND (a_partition3.part_key = 5)))\n Heap 
Blocks: exact=7472\n Buffers: shared hit=432 read=7682 written=1576\n -> BitmapOr (cost=692.99..692.99 rows=42391 width=0)\n(actual time=764.238..764.238 rows=0 loops=1)\n Buffers: shared hit=432 read=210 written=51\n -> Bitmap Index Scan on a_parent_id_idx3\n(cost=0.00..138.53 rows=8870 width=0) (actual time=118.907..118.907 rows=0\nloops=1)\n Index Cond: (a_partition3.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\n Buffers: shared hit=91 read=52 written=7\n -> Bitmap Index Scan on a_parent_id_idx3\n(cost=0.00..97.27 rows=6228 width=0) (actual time=26.656..26.656 rows=0\nloops=1)\n Index Cond: (a_partition3.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\n Buffers: shared hit=71 read=28 written=3\n -> Bitmap Index Scan on a_parent_id_idx3\n(cost=0.00..225.13 rows=13517 width=0) (actual time=393.115..393.115\nrows=7594 loops=1)\n Index Cond: (a_partition3.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\n Buffers: shared hit=107 read=74 written=25\n -> Bitmap Index Scan on a_parent_id_idx3\n(cost=0.00..76.64 rows=4907 width=0) (actual time=36.979..36.979 rows=0\nloops=1)\n Index Cond: (a_partition3.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\n Buffers: shared hit=56 read=22 written=9\n -> Bitmap Index Scan on a_parent_id_idx3\n(cost=0.00..138.53 rows=8870 width=0) (actual time=188.574..188.574 rows=0\nloops=1)\n Index Cond: (a_partition3.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\n Buffers: shared hit=107 read=34 written=7\n -> Bitmap Heap Scan on public.a_partition4\n(cost=709.71..64604.81 rows=8273 width=1470) (actual time=268.111..543.570\nrows=5211 loops=1)\n Output: a_partition4.id, a_partition1.tmstmp\n Recheck Cond: ((a_partition4.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nOR (a_partition4.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nOR (a_partition4.parent_id = 
ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nOR (a_partition4.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nOR (a_partition4.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[])))\n Filter: (((a_partition4.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\nAND (a_partition4.part_key = 1)) OR ((a_partition4.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\nAND (a_partition4.part_key = 2)) OR ((a_partition4.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\nAND (a_partition4.part_key = 3)) OR ((a_partition4.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\nAND (a_partition4.part_key = 4)) OR ((a_partition4.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\nAND (a_partition4.part_key = 5)))\n Heap Blocks: exact=5153\n Buffers: shared hit=410 read=5362 written=1118\n -> BitmapOr (cost=709.71..709.71 rows=48654 width=0)\n(actual time=267.028..267.028 rows=0 loops=1)\n Buffers: shared hit=410 read=209 written=50\n -> Bitmap Index Scan on a_parent_id_idx4\n(cost=0.00..153.69 rows=10908 width=0) (actual time=60.586..60.586 rows=0\nloops=1)\n Index Cond: (a_partition4.parent_id = ANY\n('{14467,30724,3976,13323,23971,44688,1938,275,19931,26527,42529,19491,11428,39110,9260,22446,44253,6705,15033,6461,34878,26687,45248,12739,24518,12359,12745,33738,32590,11856,14802,10451,16697,18904,41273,3547,12637,20190,36575,16096,19937,39652,2278,5616,5235,8570,15100}'::integer[]))\n Buffers: shared hit=90 read=52 written=11\n -> Bitmap Index Scan on a_parent_id_idx4\n(cost=0.00..107.91 rows=7659 width=0) (actual time=47.041..47.041 rows=0\nloops=1)\n Index Cond: (a_partition4.parent_id = ANY\n('{50309,37126,36876,41878,9112,43673,10782,1696,11042,11168,34632,31157,7156,49213,29504,7200,38594,38598,5064,8783,27862,38201,473,19687,8296,49641,28394,29803,31597,19313,33395,32244,4348}'::integer[]))\n Buffers: shared hit=64 read=35 
written=7\n -> Bitmap Index Scan on a_parent_id_idx4\n(cost=0.00..153.69 rows=10908 width=0) (actual time=54.352..54.352 rows=0\nloops=1)\n Index Cond: (a_partition4.parent_id = ANY\n('{33024,9092,28678,29449,15757,36366,39963,46737,17156,39226,25628,14237,13125,17569,10914,39075,49734,40999,41756,8751,45490,29365,9143,5050,2463,35267,1220,12869,20294,24776,16329,5578,46605,30545,3544,37341,37086,28383,42527,45027,44292,35849,46314,2540,19696,21876,49156}'::integer[]))\n Buffers: shared hit=101 read=40 written=8\n -> Bitmap Index Scan on a_parent_id_idx4\n(cost=0.00..130.39 rows=8273 width=0) (actual time=54.690..54.690 rows=5220\nloops=1)\n Index Cond: (a_partition4.parent_id = ANY\n('{42242,50388,21916,13987,28708,4136,24617,31789,36533,28854,24247,15455,20805,38728,8908,20563,13908,20438,21087,4329,1131,46837,6505,16724,39675,35071}'::integer[]))\n Buffers: shared hit=56 read=40 written=13\n -> Bitmap Index Scan on a_parent_id_idx4\n(cost=0.00..153.69 rows=10908 width=0) (actual time=50.353..50.353 rows=0\nloops=1)\n Index Cond: (a_partition4.parent_id = ANY\n('{11522,8964,47879,10380,46970,11278,6543,9489,27283,30958,40214,35076,21023,19746,11044,45605,41259,35245,36911,7344,20276,35126,3257,49134,1091,24388,5447,35017,4042,7627,42061,5582,30419,7508,19278,34778,38394,34153,20714,10860,7917,21614,10223,50801,28629,6782,7765}'::integer[]))\n Buffers: shared hit=99 read=42 written=11\nPlanning time: 8.002 ms\nExecution time: 8967.750 ms\n\nWhen I remove the ID sorting hack from this query, it goes back to a nasty\nindex scan on tmstmp with a filter key on the whole WHERE clause.\n\nThanks again for your help!\n\n\nOn Sat, Jul 14, 2018 at 11:25 PM, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Jul 10, 2018 at 11:07 AM, Lincoln Swaine-Moore <\n> [email protected]> wrote:\n>\n>>\n>>\n>>\n>> Something about the estimated row counts (this problem persisted after I\n>> tried ANALYZEing)\n>>\n>\n> What is your default_statistics_target? What can you tell us about the\n> distribution of parent_id? (exponential, power law, etc?). Can you show\n> the results for select * from pg_stats where tablename='a' and\n> attname='parent_id' \\x\\g\\x ?\n>\n>\n>> forces usage of the tmstmp index and Merge Append (which seems wise) but\n>> also a filter condition on parent_id over an index condition, which is\n>> apparently prohibitively slow.\n>>\n>> I tried creating a multicolumn index like:\n>>\n>> CREATE INDEX \"a_partition1_parent_and_tmstmp_idx\" on \"a_partition2\"\n>> USING btree (\"parent_id\", \"tmstmp\" DESC);\n>>\n>> But this didn't help (it wasn't used).\n>>\n>\n> You could try reversing the order and adding a column to be (tmstmp,\n> parent_id, id) and keeping the table well vacuumed. This would allow the\n> slow plan to still walk the indexes in tmstmp order but do it as an\n> index-only scan, so it could omit the extra trip to the table. 
That trip to\n> the table must be awfully slow to explain the numbers you show later in the\n> thread.\n>\n> ...\n>\n>\n>> This query plan (which is the same as when LIMIT is removed) has been a\n>> good short term solution when the number of \"parent_id\"s I'm using is still\n>> relatively small, but unfortunately queries grow untenably slow as the\n>> number of \"parent_id\"s involved increases:\n>>\n>\n> What happens when you remove that extra order by phrase that you added?\n> The original slow plan should become much faster when the number of\n> parent_ids is large (it has to dig through fewer index entries before\n> accumulating 20 good ones), so you should try going back to that.\n>\n> ...\n>\n>\n>> I'd be very grateful for help with one or both of these questions:\n>> 1) Why is adding an unnecessary (from the perspective of result\n>> correctness) ORDER BY valuable for forcing the parent_id index usage, and\n>> does that indicate that there is something wrong with my\n>> table/indexes/statistics/etc.?\n>>\n>\n> It finds the indexes on tmstmp to be falsely attractive, as it can walk in\n> tmstmp order and so avoid the sort. (Really not the sort itself, but the\n> fact that sort has to first read every row to be sorted, while walking an\n> index can abort once the LIMIT is satisfied). Adding an extra phrase to\n> the ORDER BY means the index is no longer capable of delivering rows in the\n> needed order, so it no longer looks falsely attractive. The same thing\n> could be obtained by doing a dummy operation, such as ORDER BY tmstmp + '0\n> seconds' DESC. I prefer that method, as it is more obviously a tuning\n> trick. Adding in \"id\" looks more like a legitimate desire to break any\n> ties that might occasionally occur in tmstmp.\n>\n> As Tom pointed out, there clearly is something wrong with your statistics,\n> although we don't know what is causing it to go wrong. Fixing the\n> statistics isn't guaranteed to fix the problem, but it would be a good\n> start.\n>\n>\n>\n>\n>> 2) Is there any way I can improve my query time when there are many\n>> \"parent_id\"s involved? I seem to only be able to get the query plan to use\n>> at most one of the parent_id index and the tmstmp index at a time. Perhaps\n>> the correct multicolumn index would help?\n>>\n>\n> A few things mentioned above might help.\n>\n> But if they don't, is there any chance you could redesign your\n> partitioning so that all parent_id queries together will always be in the\n> same partition? And if not, could you just get rid of the partitioning\n> altogether? 1e7 row is not all that many and doesn't generally need\n> partitioning. Unless it is serving a specific purpose, it is probably\n> costing you more than you are getting.\n>\n> Finally, could you rewrite it as a join to a VALUES list, rather than as\n> an in-list?\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n-- \nLincoln Swaine-Moore",
"msg_date": "Mon, 16 Jul 2018 17:29:00 -0400",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving Performance of Query ~ Filter by A, Sort by B"
},
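As a sketch of the ORDER BY discussion in the message above (same hypothetical table and column names as the thread; the parent_id values here are placeholders, not taken from the data): the first form is the extra tie-breaker key used in the thread, the second is the dummy-expression variant Jeff suggests. Both keep the planner from treating the tmstmp-only index as a cheap way to satisfy the ordering.

    -- extra "id" sort key: the index on tmstmp alone can no longer supply the sort order
    SELECT id
    FROM a
    WHERE parent_id IN (34226, 24506, 40986)          -- placeholder ids
    ORDER BY tmstmp DESC, id DESC
    LIMIT 20;

    -- dummy expression: same effect on planning, more obviously a tuning trick
    SELECT id
    FROM a
    WHERE parent_id IN (34226, 24506, 40986)          -- placeholder ids
    ORDER BY tmstmp + interval '0 seconds' DESC
    LIMIT 20;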
{
"msg_contents": "On Mon, Jul 16, 2018 at 5:29 PM, Lincoln Swaine-Moore <\[email protected]> wrote:\n\n> Tom and Jeff,\n>\n> Thanks very much for the suggestions!\n>\n> Here's what I've found so far after playing around for a few more days:\n>\n> What is your default_statistics_target? What can you tell us about the\n>> distribution of parent_id? (exponential, power law, etc?). Can you show\n>> the results for select * from pg_stats where tablename='a' and\n>> attname='parent_id' \\x\\g\\x ?\n>\n>\n> The default_statistics_target is 500, which I agree seems quite\n> insufficient for these purposes. I bumped this up to 2000, and saw some\n> improvement in the row count estimation, but pretty much the same query\n> plans. Unfortunately the distribution of counts is not intended to be\n> correlated to parent_id, which is one reason I imagine the histograms might\n> not be particularly effective unless theres one bucket for every value.\n> Here is the output you requested:\n>\n> select * from pg_stats where tablename='a' and attname='parent_id';\n>\n> schemaname | public\n> tablename | a\n> attname | parent_id\n> inherited | t\n> null_frac | 0\n> avg_width | 4\n> n_distinct | 18871\n> most_common_vals | {15503,49787,49786,24595,49784,17549, ...} (2000\n> values)\n> most_common_freqs | {0.0252983,0.02435,0.0241317,\n> 0.02329,0.019095,0.0103967,0.00758833,0.004245, ...} (2000 values)\n>\n\nYou showed the 8 most common frequencies. But could you also show the last\ncouple of them? When your queried parent_id value is not on the MCV list,\nit is the frequency of the least frequent one on the list which puts an\nupper limit on how frequent the one you queried for can be.\n\n\n\n> A few questions re: statistics:\n> 1) would it be helpful to bump column statistics to, say, 20k (the number\n> of distinct values of parent_id)?\n>\n\nOnly one way to find out...\nHowever you can only go up to 10k, not 20k.\n\n\n\n> 2) is the discrepancy between the statistics on the parent and child\n> table be expected? certainly I would think that the statistics would be\n> different, but I would've imagined they would have histograms of the same\n> size given the settings being the same.\n>\n\nIs the n_distinct estimate accurate for the partition? There is an\nalgorithm (which will change in v11) to stop the MCV from filling the\nentire statistics target size if it thinks adding more won't be useful.\nBut I don't know why the histogram boundary list would be short. But, I\ndoubt that that is very important here. The histogram is only used for\ninequality/range, not for equality/set membership.\n\n\n\n> 3) is there a way to manually specify the the distribution of rows to be\n> even? that is, set the frequency of each value to be ~ n_rows/n_distinct.\n> This isn't quite accurate, but is a reasonable assumption about the\n> distribution, and might generate better query plans.\n>\n\n\nThis would be going in the wrong direction. Your queries seem to\npreferentially use rare parent_ids, not typical parent_ids. In fact, it\nseems like many of your hard-coded parent_ids don't exist in the table at\nall. That certainly isn't going to help the planner any. 
Could you\nsomehow remove those before constructing the query?\n\nYou might also take a step back, where is that list of parent_ids coming\nfrom in the first place, and why couldn't you convert the list of literals\ninto a query that returns that list naturally?\n\n\n> You could try reversing the order and adding a column to be (tmstmp,\n>> parent_id, id) and keeping the table well vacuumed. This would allow the\n>> slow plan to still walk the indexes in tmstmp order but do it as an\n>> index-only scan, so it could omit the extra trip to the table. That trip to\n>> the table must be awfully slow to explain the numbers you show later in the\n>> thread.\n>\n>\n> Just to clarify, do you mean building indexes like:\n> CREATE INDEX \"a_tmstmp_parent_id_id_idx_[PART_KEY]\" on\n> \"a_partition[PART_KEY]\" USING btree(\"tmstmp\", \"parent_id\", \"id\")\n> That seems promising! Is the intuition here that we want the first key of\n> the index to be the one we are ultimately ordering by? Sounds like I make\n> have had that flipped initially. My understanding of this whole situation\n> (and please do correct me if this doesn't make sense) is the big bottleneck\n> here is reading pages from disk (when looking at stopped up queries, the\n> wait_event is DataFileRead), and so anything that can be done to minimize\n> the pages read will be valuable. Which is why I would love to get the query\n> plan to use the tmstmp index without having to filter thereafter by\n> parent_id.\n>\n\nYes, that is the index.\n\nYou really want it to filter by parent_id in the index, rather than going\nto the table to do the filter on parent_id. The index pages with tmstmp as\nthe leading column are going to be more tightly packed with potentially\nrelevant rows, while the table pages are less likely to be densely packed.\nSo filtering in the index leads to less IO. Currently, the only way to\nmake that in-index filtering happen is with an index-only scan. So the\ntmstmp needs to be first, and other two need to be present in either\norder. Note that if you change select list of your query to select more\nthan just \"id\", you will again defeat the index-only-scan.\n\n\n>\n> What happens when you remove that extra order by phrase that you added?\n>> The original slow plan should become much faster when the number of\n>> parent_ids is large (it has to dig through fewer index entries before\n>> accumulating 20 good ones), so you should try going back to that.\n>\n>\n> Unfortunately, I've found that even when the number of parent_ids is large\n> (2000), it's still prohibitively slow (haven't got it to finish), and\n> maintains a query plan that involves an Index Scan Backward across the\n> a_tmstmp_idxs (with a filter for parent_id).\n>\n\n\nI think the reason for this is that your 2000 literal parent_id values are\npreferentially rare or entirely absent from the table. Therefore adding\nmore parent_ids doesn't cause more rows to meet the filter qualification,\nso you don't accumulate a LIMIT worth of rows very fast.\n\n\nThe original index you discussed, '(parent_id, tmpstmp desc)', would be\nideal if the parent_id were tested for equality rather than set\nmembership. 
So another approach to speeding this up would be to rewrite\nthe query to test for equality on each parent_id separately and then\ncombine the results.\n\nselect id from (\n (select * from a where parent_id= 34226 order by tmstmp desc limit 20)\n union all\n (select * from a where parent_id= 24506 order by tmstmp desc limit 20)\n union all\n (select * from a where parent_id= 40986 order by tmstmp desc limit 20)\n union all\n (select * from a where parent_id= 27162 order by tmstmp desc limit 20)\n) foo\nORDER BY tmstmp DESC LIMIT 20;\n\nI don't know what is going to happen when you go from 4 parent_ids up to\n2000, though.\n\nIt would be nice if PostgreSQL would plan your original query this way\nitself, but it lacks the machinery to do so. I think there was a patch to\nadd that machinery, but I don't remember seeing any work on it recently.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 17 Jul 2018 11:59:33 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving Performance of Query ~ Filter by A, Sort by B"
}
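The UNION ALL rewrite Jeff proposes can also be written as a LATERAL join, which avoids generating one branch per literal when the list of parent_ids grows long. This is only a sketch under the same assumptions as the thread (an index led by parent_id, 20 rows wanted overall, placeholder ids); the aliases p and sub are invented here, and whether the planner actually pushes the per-parent LIMIT down would need to be confirmed with EXPLAIN (ANALYZE, BUFFERS).

    SELECT sub.id
    FROM unnest(ARRAY[34226, 24506, 40986, 27162]) AS p(parent_id)  -- placeholder ids
    CROSS JOIN LATERAL (
        SELECT id, tmstmp
        FROM a
        WHERE a.parent_id = p.parent_id   -- equality, so a (parent_id, tmstmp DESC) index applies
        ORDER BY tmstmp DESC
        LIMIT 20
    ) AS sub
    ORDER BY sub.tmstmp DESC
    LIMIT 20;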
] |
[
{
"msg_contents": "I'm looking for a way of gathering performance stats in a more usable\nway than turning on `log_statement_stats` (or other related modules).\nThe problem I have with the log_*_stats family of modules is that they\nlog every single query, which makes them unusable in production. Aside\nfrom consuming space, there's also the problem that the log system\nwouldn't be able to keep up with the rate.\n\nThere are a couple ideas that pop into mind that would make these stats\nmore usable:\n1. Only log when the statement would otherwise already be logged. Such\nas due to the `log_statement` or `log_min_duration_statement` settings.\n2. Make stats available in `pg_stat_statements` (or alternate view that\ncould be joined on). The block stats are already available here, but\nothers like CPU usage, page faults, and context switches are not.\n\nTo answer why I want this data: I want to be able to identify queries\nwhich are consuming large amounts of CPU time so that I can either\noptimize the query or optimize the application making the query, and\nfree up CPU resources on the database. The `pg_stat_statements` view\nprovides the `total_time` metric, but many things can contribute to\nquery time other than CPU usage, and CPU usage is my primary concern at\nthe moment.\n\nDo these seem like reasonable requests? And if so, what's the procedure\nfor getting them implemented?\nAny thoughts on whether they would be hard to implement? I'm unfamiliar\nwith the PostgresQL code base, but might be willing to attempt an\nimplementation if it wouldn't be terribly difficult.\n\n-Patrick\n\n\n\n\n\n\n I'm looking for a way of gathering performance stats in a more\n usable way than turning on `log_statement_stats` (or other related\n modules). The problem I have with the log_*_stats family of modules\n is that they log every single query, which makes them unusable in\n production. Aside from consuming space, there's also the problem\n that the log system wouldn't be able to keep up with the rate.\n\n There are a couple ideas that pop into mind that would make these\n stats more usable:\n 1. Only log when the statement would otherwise already be logged.\n Such as due to the `log_statement` or `log_min_duration_statement`\n settings.\n 2. Make stats available in `pg_stat_statements` (or alternate view\n that could be joined on). The block stats are already available\n here, but others like CPU usage, page faults, and context switches\n are not.\n\n To answer why I want this data: I want to be able to identify\n queries which are consuming large amounts of CPU time so that I can\n either optimize the query or optimize the application making the\n query, and free up CPU resources on the database. The\n `pg_stat_statements` view provides the `total_time` metric, but many\n things can contribute to query time other than CPU usage, and CPU\n usage is my primary concern at the moment.\n\n Do these seem like reasonable requests? And if so, what's the\n procedure for getting them implemented?\n Any thoughts on whether they would be hard to implement? I'm\n unfamiliar with the PostgresQL code base, but might be willing to\n attempt an implementation if it wouldn't be terribly difficult.\n\n -Patrick",
"msg_date": "Tue, 10 Jul 2018 13:54:12 -0400",
"msg_from": "Patrick Hemmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance statistics monitoring without spamming logs"
},
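For the use case described in the message above (finding the most expensive statements), a rough starting point with the stock pg_stat_statements view might look like the sketch below. It ranks by total_time rather than true CPU time, and the block I/O timing columns are only populated when track_io_timing is on, so it is an approximation rather than the CPU metric being asked for.

    -- requires: shared_preload_libraries = 'pg_stat_statements'
    --           CREATE EXTENSION pg_stat_statements;
    SELECT queryid,
           calls,
           total_time,                                                   -- ms of execution time
           total_time - (blk_read_time + blk_write_time) AS non_io_time, -- needs track_io_timing = on
           left(query, 60) AS query_snippet
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 20;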
{
"msg_contents": "On Tue, Jul 10, 2018 at 01:54:12PM -0400, Patrick Hemmer wrote:\n> I'm looking for a way of gathering performance stats in a more usable\n> way than turning on `log_statement_stats` (or other related modules).\n> The problem I have with the log_*_stats family of modules is that they\n> log every single query, which makes them unusable in production. Aside\n> from consuming space, there's also the problem that the log system\n> wouldn't be able to keep up with the rate.\n> \n> There are a couple ideas that pop into mind that would make these stats\n> more usable:\n> 1. Only log when the statement would otherwise already be logged. Such\n> as due to the `log_statement` or `log_min_duration_statement` settings.\n\nDid you see: (Added Adrien to Cc);\nhttps://commitfest.postgresql.org/18/1691/\n\nI don't think the existing patch does what you want, but perhaps all that's\nneeded is this:\n\n if (save_log_statement_stats)\n+ if (log_sample_rate==1 || was_logged)\n ShowUsage(\"EXECUTE MESSAGE STATISTICS\");\n\nIn any case, I'm thinking that your request could/should be considered by\nwhatever future patch implements sampling (if not implemented/included in the\npatch itself).\n\nIf that doesn't do what's needed, that patch might still be a good crash course\nin how to start implementing what you need (perhaps on top of that patch).\n\n> 2. Make stats available in `pg_stat_statements` (or alternate view that\n> could be joined on). The block stats are already available here, but\n> others like CPU usage, page faults, and context switches are not.\n\npg_stat_statements is ./contrib/pg_stat_statements/pg_stat_statements.c which is 3k LOC.\n\ngetrusage stuff and log_*_stat stuff is in src/backend/tcop/postgres.c\n\nJustin\n\n",
"msg_date": "Tue, 10 Jul 2018 13:38:28 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance statistics monitoring without spamming logs"
},
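For reference alongside the patch idea above, the existing settings being discussed (log_min_duration_statement and the log_*_stats family) can already be combined so that only slow statements are logged while the per-query resource stats stay off; the 500 ms threshold below is just an example value, not a recommendation.

    -- log only statements slower than 500 ms; keep per-query resource stats disabled
    ALTER SYSTEM SET log_min_duration_statement = '500ms';
    ALTER SYSTEM SET log_statement_stats = off;
    SELECT pg_reload_conf();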
{
"msg_contents": "On Tue, Jul 10, 2018 at 11:38 AM, Justin Pryzby <[email protected]>\nwrote:\n>\n> > 2. Make stats available in `pg_stat_statements` (or alternate view that\n> > could be joined on). The block stats are already available here, but\n> > others like CPU usage, page faults, and context switches are not.\n>\n> pg_stat_statements is ./contrib/pg_stat_statements/pg_stat_statements.c\n> which is 3k LOC.\n>\n> getrusage stuff and log_*_stat stuff is in src/backend/tcop/postgres.c\n\n\nBefore you start implementing something here, take a look at pg_stat_kcache\n[0]\n\nWhich already aims to collect a few more system statistics than what\npg_stat_statements provides today, and might be a good basis to extend from.\n\nIt might also be worth to look at pg_stat_activity wait event sampling to\ndetermine where a system spends time, see e.g. pg_wait_sampling [1] for one\napproach to this.\n\n[0]: https://github.com/powa-team/pg_stat_kcache\n[1]: https://github.com/postgrespro/pg_wait_sampling\n\nBest,\nLukas\n\n-- \nLukas Fittl\n\nOn Tue, Jul 10, 2018 at 11:38 AM, Justin Pryzby <[email protected]> wrote:\n> 2. Make stats available in `pg_stat_statements` (or alternate view that\n> could be joined on). The block stats are already available here, but\n> others like CPU usage, page faults, and context switches are not.\n\npg_stat_statements is ./contrib/pg_stat_statements/pg_stat_statements.c which is 3k LOC.\n\ngetrusage stuff and log_*_stat stuff is in src/backend/tcop/postgres.cBefore you start implementing something here, take a look at pg_stat_kcache [0]Which already aims to collect a few more system statistics than what pg_stat_statements provides today, and might be a good basis to extend from.It might also be worth to look at pg_stat_activity wait event sampling to determine where a system spends time, see e.g. pg_wait_sampling [1] for one approach to this.[0]: https://github.com/powa-team/pg_stat_kcache[1]: https://github.com/postgrespro/pg_wait_samplingBest,Lukas -- Lukas Fittl",
"msg_date": "Thu, 12 Jul 2018 15:25:25 -0700",
"msg_from": "Lukas Fittl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance statistics monitoring without spamming logs"
},
{
"msg_contents": "On 07/13/2018 12:25 AM, Lukas Fittl wrote:\n> On Tue, Jul 10, 2018 at 11:38 AM, Justin Pryzby <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> > 2. Make stats available in `pg_stat_statements` (or alternate view that\n> > could be joined on). The block stats are already available here, but\n> > others like CPU usage, page faults, and context switches are not.\n> \n> pg_stat_statements is\n> ./contrib/pg_stat_statements/pg_stat_statements.c which is 3k LOC.\n> \n> getrusage stuff and log_*_stat stuff is in src/backend/tcop/postgres.c\n> \n> \n> Before you start implementing something here, take a look at \n> pg_stat_kcache [0]\n> \n> Which already aims to collect a few more system statistics than what \n> pg_stat_statements provides today, and might be a good basis to extend from.\n> \n> It might also be worth to look at pg_stat_activity wait event sampling \n> to determine where a system spends time, see e.g. pg_wait_sampling \n> [1] for one approach to this.\n> \n\nHi,\n\nYou should look Powa stack :\n\nhttps://github.com/powa-team/powa\n\nPowa can aggregate metrics from different extensions such as \npg_stat_statements, pg_stat_kcache and pg_wait_sampling recently : \nhttps://rjuju.github.io/postgresql/2018/07/09/wait-events-support-for-powa.html\n\nRegards,\n\n> [0]: https://github.com/powa-team/pg_stat_kcache \n> <https://github.com/powa-team/pg_stat_kcache>\n> [1]: https://github.com/postgrespro/pg_wait_sampling \n> <https://github.com/postgrespro/pg_wait_sampling>\n> \n> Best,\n> Lukas\n> \n> -- \n> Lukas Fittl\n\n\n",
"msg_date": "Fri, 13 Jul 2018 09:23:59 +0200",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance statistics monitoring without spamming logs"
},
{
"msg_contents": "On 07/10/2018 08:38 PM, Justin Pryzby wrote:\n> On Tue, Jul 10, 2018 at 01:54:12PM -0400, Patrick Hemmer wrote:\n>> I'm looking for a way of gathering performance stats in a more usable\n>> way than turning on `log_statement_stats` (or other related modules).\n>> The problem I have with the log_*_stats family of modules is that they\n>> log every single query, which makes them unusable in production. Aside\n>> from consuming space, there's also the problem that the log system\n>> wouldn't be able to keep up with the rate.\n>>\n>> There are a couple ideas that pop into mind that would make these stats\n>> more usable:\n>> 1. Only log when the statement would otherwise already be logged. Such\n>> as due to the `log_statement` or `log_min_duration_statement` settings.\n> \n> Did you see: (Added Adrien to Cc);\n> https://commitfest.postgresql.org/18/1691/\n> \n> I don't think the existing patch does what you want, but perhaps all that's\n> needed is this:\n> \n> if (save_log_statement_stats)\n> + if (log_sample_rate==1 || was_logged)\n> ShowUsage(\"EXECUTE MESSAGE STATISTICS\");\n> \n> In any case, I'm thinking that your request could/should be considered by\n> whatever future patch implements sampling (if not implemented/included in the\n> patch itself).\n\nHi,\n\nThanks for Cc, it seems a good idea. Will think about it ;)\n\n> \n> If that doesn't do what's needed, that patch might still be a good crash course\n> in how to start implementing what you need (perhaps on top of that patch).\n> \n>> 2. Make stats available in `pg_stat_statements` (or alternate view that\n>> could be joined on). The block stats are already available here, but\n>> others like CPU usage, page faults, and context switches are not.\n> \n> pg_stat_statements is ./contrib/pg_stat_statements/pg_stat_statements.c which is 3k LOC.\n> \n> getrusage stuff and log_*_stat stuff is in src/backend/tcop/postgres.c\n> \n> Justin\n> \n\n\n",
"msg_date": "Fri, 13 Jul 2018 09:24:52 +0200",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance statistics monitoring without spamming logs"
},
{
"msg_contents": "On Fri, Jul 13, 2018 at 9:23 AM, Adrien NAYRAT\n<[email protected]> wrote:\n> On 07/13/2018 12:25 AM, Lukas Fittl wrote:\n>>\n>> On Tue, Jul 10, 2018 at 11:38 AM, Justin Pryzby <[email protected]\n>> <mailto:[email protected]>> wrote:\n>>\n>> > 2. Make stats available in `pg_stat_statements` (or alternate view\n>> that\n>> > could be joined on). The block stats are already available here, but\n>> > others like CPU usage, page faults, and context switches are not.\n>>\n>> pg_stat_statements is\n>> ./contrib/pg_stat_statements/pg_stat_statements.c which is 3k LOC.\n>>\n>> getrusage stuff and log_*_stat stuff is in src/backend/tcop/postgres.c\n>>\n>>\n>> Before you start implementing something here, take a look at\n>> pg_stat_kcache [0]\n>>\n>> Which already aims to collect a few more system statistics than what\n>> pg_stat_statements provides today, and might be a good basis to extend from.\n>>\n\nAlso no one asked for it before, but we can definitely add all the\nother fields returned by get_rusage(2) in pg_stat_kcache. You can\nalso look at https://github.com/markwkm/pg_proctab.\n\n",
"msg_date": "Fri, 13 Jul 2018 09:55:14 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance statistics monitoring without spamming logs"
},
{
"msg_contents": "Hi,\n\nI'm replying to an old thread from -performance:\nhttps://www.postgresql.org/message-id/flat/7ffb9dbe-c76f-8ca3-12ee-7914ede872e6%40stormcloud9.net\n\nI was looking at:\nhttps://commitfest.postgresql.org/20/1691/\n\"New GUC to sample log queries\"\n\nOn Tue, Jul 10, 2018 at 01:54:12PM -0400, Patrick Hemmer wrote:\n> I'm looking for a way of gathering performance stats in a more usable\n> way than turning on `log_statement_stats` (or other related modules).\n> The problem I have with the log_*_stats family of modules is that they\n> log every single query, which makes them unusable in production. Aside\n> from consuming space, there's also the problem that the log system\n> wouldn't be able to keep up with the rate.\n> \n> There are a couple ideas that pop into mind that would make these stats\n> more usable:\n> 1. Only log when the statement would otherwise already be logged. Such\n> as due to the `log_statement` or `log_min_duration_statement` settings.\n\n..but instead came back to this parallel thread and concluded that I'm\ninterested in exactly that behavior: log_statement_stats only if\nlog_min_duration_statement exceeded.\n\nIf there's agreement that's desirable behavior, should I change\nlog_statement_stats to a GUC (on/off/logged) ?\n\nOr, would it be reasonable to instead change the existing behavior of\nlog_statement_stats=on to mean what I want: only log statement stats if\nstatement is otherwise logged. If log_statement_stats is considered to be a\ndeveloper option, most likely to be enabled either in a development or other\nsegregated or non-production environment, or possibly for only a single session\nfor diagnostics.\n\nMy own use case is that I'd like to know if a longrunning query was doing lots\nof analytics (aggregation/sorting), or possibly spinning away on a nested\nnested loop. But I only care about the longest queries, and then probably look\nat the ratio of user CPU/clock time.\n\nJustin\n\n",
"msg_date": "Wed, 21 Nov 2018 23:41:15 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance statistics monitoring without spamming logs"
},
{
"msg_contents": "On 11/22/18 6:41 AM, Justin Pryzby wrote:\n> and then probably look\n> at the ratio of user CPU/clock time.\n\nMaybe pg_stat_kcache could help you :\n\nhttps://github.com/powa-team/pg_stat_kcache\nhttps://rjuju.github.io/postgresql/2018/07/17/pg_stat_kcache-2-1-is-out.html\n\n",
"msg_date": "Thu, 22 Nov 2018 09:20:19 +0100",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance statistics monitoring without spamming logs"
}
] |
[
{
"msg_contents": "Hello,\n\nI have a table which has billions of rows and I want to select it by\nbitwise operation like,\n\n=# CREATE TABLE IF NOT EXISTS t_bitwise (\n id INTEGER NOT NULL\n ,status_bigint BITINT NOT NULL\n ,status_bit BIT(32) NOT NULL\n);\n\n=# INSERT INTO t_bitwise (id, status_bigint, status_bit) SELECT\n id\n ,(random() * 4294967295)::BIGINT\n ,(random() * 4294967295)::BIGINT::BIT(32)\nFROM generate_series(1, 3000000000) as t(id);\n\n=# SELECT * FROM t_bitwise WHERE\n status_bigint & 170 = 170\n OR status_bigint & 256 = 256;\n\n=# SELECT * FROM t_bitwise WHERE\n status_bit & b'00000000000000000000000010101010'::BIT(32) =\nb'00000000000000000000000010101010'::BIT(32)\n OR status_bit & b'00000000000000000000000100000000'::BIT(32) =\nb'00000000000000000000000100000000'::BIT(32);\n\nYes, these SELECT statements scan all rows. I guess possible index types are\n\n- Expression indexes ?\n- Partial Indexes ?\n- GIN ?\n- GIST ?\n- bloom index ?\n\nI googled but I feel there is no good solution and it would be good if\nI hava \"bloom index specific for bitwise operation\".\n\nIn case of normal bloom index, a value is hashed into a few bits which\nis mapped to a signature (default 80 bits).\nThis is a lossy representation of the original value, and as such is\nprone to reporting false positives which requires \"Recheck\" process at\nSELECT. The more rows or the more complex condition, the more\nexecution time.\n\nMy idea is that, in case of index for bitwise operation, each bit\nshould be mapped to exactly same bit on a signature (One to one\nmapping). No false positives. No \"Recheck\" process is required. If the\ntarget coulmn is BIT(32), just 32 bits signature lengh is enough.\n\nIs there any index module like this ? Since I am not familiar with C\nand Postgresql, I can not write my own module.\n\nAny help would be great for me.\n\nThanks,\nTakao\n\n",
"msg_date": "Wed, 11 Jul 2018 15:02:59 +0900",
"msg_from": "Takao Magoori <[email protected]>",
"msg_from_op": true,
"msg_subject": "Special bloom index of INT, BIGINT, BIT, VARBIT for bitwise operation"
},
{
"msg_contents": "\n\nOn 07/11/2018 08:02 AM, Takao Magoori wrote:\n> Hello,\n> \n> I have a table which has billions of rows and I want to select it by\n> bitwise operation like,\n> \n> =# CREATE TABLE IF NOT EXISTS t_bitwise (\n> id INTEGER NOT NULL\n> ,status_bigint BITINT NOT NULL\n> ,status_bit BIT(32) NOT NULL\n> );\n> \n> =# INSERT INTO t_bitwise (id, status_bigint, status_bit) SELECT\n> id\n> ,(random() * 4294967295)::BIGINT\n> ,(random() * 4294967295)::BIGINT::BIT(32)\n> FROM generate_series(1, 3000000000) as t(id);\n> \n> =# SELECT * FROM t_bitwise WHERE\n> status_bigint & 170 = 170\n> OR status_bigint & 256 = 256;\n> \n> =# SELECT * FROM t_bitwise WHERE\n> status_bit & b'00000000000000000000000010101010'::BIT(32) =\n> b'00000000000000000000000010101010'::BIT(32)\n> OR status_bit & b'00000000000000000000000100000000'::BIT(32) =\n> b'00000000000000000000000100000000'::BIT(32);\n> \n> Yes, these SELECT statements scan all rows. I guess possible index types are\n> \n> - Expression indexes ?\n> - Partial Indexes ?\n> - GIN ?\n> - GIST ?\n> - bloom index ?\n> \n> I googled but I feel there is no good solution and it would be good if\n> I hava \"bloom index specific for bitwise operation\".\n> \n> In case of normal bloom index, a value is hashed into a few bits which\n> is mapped to a signature (default 80 bits).\n> This is a lossy representation of the original value, and as such is\n> prone to reporting false positives which requires \"Recheck\" process at\n> SELECT. The more rows or the more complex condition, the more\n> execution time.\n> \n> My idea is that, in case of index for bitwise operation, each bit\n> should be mapped to exactly same bit on a signature (One to one\n> mapping). No false positives. No \"Recheck\" process is required. If the\n> target coulmn is BIT(32), just 32 bits signature lengh is enough.\n> \n\nSo you're not computing a hash at all, so calling this \"bloom index\" is \nrather misleading.\n\nAnother question is where do you expect the performance advantage to \ncome from? If the table is as narrow as this, such index will have \nalmost the same size, and it won't allow fast lookups.\n\n> Is there any index module like this ? Since I am not familiar with C\n> and Postgresql, I can not write my own module.\n> \n\nI doubt there's a suitable index available currently. Perhaps a simple \nbtree index could help, assuming it's (a) smaller than the table, (b) we \ncan use IOS and (c) only a small fraction of the table matches.\n\nAnother idea is to use partial indexes - one for each bit, i.e.\n\n CREATE INDEX ON t_bitwise (id)\n WHERE status_bit & b'10000000'::BIT(32) = b'10000000'::BIT(32);\n\nAnd then build the query accordingly.\n\nI wonder if it might work with BRIN indexes of some kind. If the range \nsummary is defined as OR of the values, that might help, depending on \nvariability within the page range. But that would probably require some \ndevelopment.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 17 Jul 2018 16:57:32 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Special bloom index of INT, BIGINT, BIT, VARBIT for bitwise\n operation"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nAll.\r\nCan anyone give me a hand?\r\n\r\nI meet a problem:High concurrency but simple updating causes deadlock\r\n\r\n1.System info\r\n\r\nLinux version 4.8.0\r\n\r\nUbuntu 5.4.0-6ubuntu1~16.04.4\r\n\r\n2.Pg info\r\n\r\nPostgreSQL 9.5.12 on i686-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 32-bit\r\n\r\nChanges inpostgresql.conf:\r\nmax_connections = 1000 //100 to 1000\r\n\r\n3.Database for test——2000 row same data,\r\n\r\n ipcid | surdevip | surdevport | devfactory | surchanmode | surchannum | username | password | transprotocol | mediastreamtype | streamid | bsmvalid | smdevip | smdevport | smtransprotocol\r\n\r\n------------+----------+------------+------------+-------------+------------+----------+----------+---------------+-----------------+----------+----------+---------+-----------+-----------------\r\n\r\n 320460291 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 17\r\n\r\n 168201188 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 27\r\n\r\n1360154585 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 70\r\n\r\n 820068220 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 49\r\n\r\n。。。。。。2k row totally\r\n\r\n 4.Operation:Multi-user thread update\r\nEach thread do the same cmd : Pgexc(“UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100”)\r\n\r\n \r\n\r\n5.Error info\r\n\r\nError info in my code\r\n\r\nERROR: [func:insetDB line:1284]DB_Table_Update\r\n\r\nERROR: [func:DB_Table_Update line:705]PQexec(UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100) : ERROR: deadlock detected\r\n\r\nDETAIL: Process 2366 waits for ShareLock on transaction 12316; blocked by process 2368.\r\n\r\nProcess 2368 waits for ShareLock on transaction 12289; blocked by process 2342.\r\n\r\nProcess 2342 waits for ExclusiveLock on tuple (9,1) of relation 18639 of database 18638; blocked by process 2366.\r\n\r\nHINT: See server log for query details.\r\n\r\nCONTEXT: while locking tuple (9,1) in relation \"test6_chan_list_info\"\r\n\r\nError info in pg log\r\n\r\nERROR: deadlock detected\r\n\r\nDETAIL: Process 10938 waits for ExclusiveLock on tuple (1078,61) of relation 18639 of database 18638; blocked by process 10911.\r\n\r\n Process 10911 waits for ShareLock on transaction 19494; blocked by process 10807.\r\n\r\n Process 10807 waits for ShareLock on transaction 19560; blocked by process 10938.\r\n\r\n Process 10938: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\r\n\r\n Process 10911: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\r\n\r\n Process 10807: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\r\n\r\nHINT: See server log for query details.\r\n\r\nSTATEMENT: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\r\n\r\nERROR: deadlock detected\r\n\r\nDETAIL: Process 10939 waits for ShareLock on transaction 19567; blocked by process 10945.\r\n\r\n Process 10945 waits for ShareLock on transaction 19494; blocked by process 10807.\r\n\r\n Process 10807 waits for ExclusiveLock on tuple (279,1) of relation 18639 of database 18638; blocked by process 10939.\r\n\r\n Process 10939: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' 
WHERE transProtocol=100\r\n\r\n Process 10945: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\r\n\r\n Process 10807: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\r\n\r\nHINT: See server log for query details.\r\n\r\nCONTEXT: while locking tuple (279,1) in relation \"test6_chan_list_info\"\r\n\r\nSTATEMENT: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\r\n \r\n 6.my quetion \r\n 6.1.is it possible meet dead lock with high conurrency simple update?\r\n 6.2.if yes, why,and how to avoid?\r\n \r\n thanks very much!!!\r\n \r\n Yours,\r\n \r\n Leo from China\nHi,All.Can anyone give me a hand?I meet a problem:High concurrency but simple updating causes deadlock1.System infoLinux version 4.8.0Ubuntu 5.4.0-6ubuntu1~16.04.42.Pg infoPostgreSQL 9.5.12 on i686-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 32-bitChanges inpostgresql.conf:max_connections = 1000 //100 to 10003.Database for test——2000 row same data, ipcid | surdevip | surdevport | devfactory | surchanmode | surchannum | username | password | transprotocol | mediastreamtype | streamid | bsmvalid | smdevip | smdevport | smtransprotocol------------+----------+------------+------------+-------------+------------+----------+----------+---------------+-----------------+----------+----------+---------+-----------+----------------- 320460291 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 17 168201188 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 271360154585 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 70 820068220 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 49。。。。。。2k row totally 4.Operation:Multi-user thread updateEach thread do the same cmd : Pgexc(“UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100”) 5.Error infoError info in my codeERROR: [func:insetDB line:1284]DB_Table_UpdateERROR: [func:DB_Table_Update line:705]PQexec(UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100) : ERROR: deadlock detectedDETAIL: Process 2366 waits for ShareLock on transaction 12316; blocked by process 2368.Process 2368 waits for ShareLock on transaction 12289; blocked by process 2342.Process 2342 waits for ExclusiveLock on tuple (9,1) of relation 18639 of database 18638; blocked by process 2366.HINT: See server log for query details.CONTEXT: while locking tuple (9,1) in relation \"test6_chan_list_info\"Error info in pg logERROR: deadlock detectedDETAIL: Process 10938 waits for ExclusiveLock on tuple (1078,61) of relation 18639 of database 18638; blocked by process 10911. Process 10911 waits for ShareLock on transaction 19494; blocked by process 10807. Process 10807 waits for ShareLock on transaction 19560; blocked by process 10938. 
Process 10938: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100 Process 10911: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100 Process 10807: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100HINT: See server log for query details.STATEMENT: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100ERROR: deadlock detectedDETAIL: Process 10939 waits for ShareLock on transaction 19567; blocked by process 10945. Process 10945 waits for ShareLock on transaction 19494; blocked by process 10807. Process 10807 waits for ExclusiveLock on tuple (279,1) of relation 18639 of database 18638; blocked by process 10939. Process 10939: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100 Process 10945: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100 Process 10807: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100HINT: See server log for query details.CONTEXT: while locking tuple (279,1) in relation \"test6_chan_list_info\"STATEMENT: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n \n6.my quetion \n6.1.is it possible meet dead lock with high conurrency simple update?\n6.2.if yes, why,and how to avoid?\n \nthanks very much!!!\n \nYours,\n \nLeo from China",
"msg_date": "Wed, 11 Jul 2018 22:16:09 +0800",
"msg_from": "\"=?gb18030?B?t+M=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "High concurrency but simple updating causes deadlock"
},
{
"msg_contents": "In this case this happens because the update modifies several rows and different transactions may try to modify those rows (and obtain locks for them) in different order.\nE.g. one transaction first gets row 1 and then row 2, and the second transaction first updates row 2 and then row 1.\n\nThe only way to overcome this that I know is to first to select for update with order by clause so that all transactions lock rows in the same order and do not cause deadlock conflicts.\n\nRegards,\nRoman Konoval\[email protected]\n\n\n\n> On Jul 11, 2018, at 16:16, 枫 <[email protected]> wrote:\n> \n> Hi,\n> \n> All.\n> Can anyone give me a hand?\n> \n> I meet a problem:High concurrency but simple updating causes deadlock\n> \n> 1.System info\n> \n> Linux version 4.8.0\n> \n> Ubuntu 5.4.0-6ubuntu1~16.04.4\n> \n> 2.Pg info\n> \n> PostgreSQL 9.5.12 on i686-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 32-bit\n> \n> Changes inpostgresql.conf:\n> max_connections = 1000 //100 to 1000\n> \n> 3.Database for test——2000 row same data,\n> \n> ipcid | surdevip | surdevport | devfactory | surchanmode | surchannum | username | password | transprotocol | mediastreamtype | streamid | bsmvalid | smdevip | smdevport | smtransprotocol\n> \n> ------------+----------+------------+------------+-------------+------------+----------+----------+---------------+-----------------+----------+----------+---------+-----------+-----------------\n> \n> 320460291 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 17\n> \n> 168201188 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 27\n> \n> 1360154585 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 70\n> \n> 820068220 | Name | 8000 | 100 | 100 | 100 | admin | 666666 | 100 | 100 | hello | 1 | smpIp | 666 | 49\n> \n> 。。。。。。2k row totally\n> \n> 4.Operation:Multi-user thread update\n> Each thread do the same cmd : Pgexc(“UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100”)\n> \n> \n> \n> 5.Error info\n> \n> Error info in my code\n> \n> ERROR: [func:insetDB line:1284]DB_Table_Update\n> \n> ERROR: [func:DB_Table_Update line:705]PQexec(UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100) : ERROR: deadlock detected\n> \n> DETAIL: Process 2366 waits for ShareLock on transaction 12316; blocked by process 2368.\n> \n> Process 2368 waits for ShareLock on transaction 12289; blocked by process 2342.\n> \n> Process 2342 waits for ExclusiveLock on tuple (9,1) of relation 18639 of database 18638; blocked by process 2366.\n> \n> HINT: See server log for query details.\n> \n> CONTEXT: while locking tuple (9,1) in relation \"test6_chan_list_info\"\n> \n> Error info in pg log\n> \n> ERROR: deadlock detected\n> \n> DETAIL: Process 10938 waits for ExclusiveLock on tuple (1078,61) of relation 18639 of database 18638; blocked by process 10911.\n> \n> Process 10911 waits for ShareLock on transaction 19494; blocked by process 10807.\n> \n> Process 10807 waits for ShareLock on transaction 19560; blocked by process 10938.\n> \n> Process 10938: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> Process 10911: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> Process 10807: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> 
HINT: See server log for query details.\n> \n> STATEMENT: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> ERROR: deadlock detected\n> \n> DETAIL: Process 10939 waits for ShareLock on transaction 19567; blocked by process 10945.\n> \n> Process 10945 waits for ShareLock on transaction 19494; blocked by process 10807.\n> \n> Process 10807 waits for ExclusiveLock on tuple (279,1) of relation 18639 of database 18638; blocked by process 10939.\n> \n> Process 10939: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> Process 10945: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> Process 10807: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> HINT: See server log for query details.\n> \n> CONTEXT: while locking tuple (279,1) in relation \"test6_chan_list_info\"\n> \n> STATEMENT: UPDATE TEST6_CHAN_LIST_INFO SET streamId= 'hello', smDevPort= '666' WHERE transProtocol=100\n> \n> 6.my quetion\n> 6.1.is it possible meet dead lock with high conurrency simple update?\n> 6.2.if yes, why,and how to avoid?\n> \n> thanks very much!!!\n> \n> Yours,\n> \n> Leo from China\n\n\n",
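A minimal sketch of the ordering approach Roman describes, using the table and column names from the original post:

BEGIN;

-- Every session acquires the row locks in the same (ipcid) order, so
-- concurrent sessions may block one another but can no longer deadlock
-- on these rows.
SELECT ipcid
FROM test6_chan_list_info
WHERE transprotocol = 100
ORDER BY ipcid
FOR UPDATE;

UPDATE test6_chan_list_info
SET streamid = 'hello',
    smdevport = '666'
WHERE transprotocol = 100;

COMMIT;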
"msg_date": "Thu, 12 Jul 2018 12:55:09 +0200",
"msg_from": "Roman Konoval <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High concurrency but simple updating causes deadlock"
}
] |
[
{
"msg_contents": "Dear expert,\n\nCould you please review and suggest to optimize performance of the PLSQL procedure in PostgreSQL?\nI have attached the same.\n\nThanks in advance\n\nRegards,\nDinesh Chandra\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 12 Jul 2018 03:18:49 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suggestion to optimize performance of the PLSQL procedure."
}
] |
[
{
"msg_contents": "Dear,\nSome of you can help me understand this.\n\nThis query plan is executed in the query below (query 9 of TPC-H\nBenchmark, with scale 40, database with approximately 40 gb).\n\nThe experiment consisted of running the query on a HDD (Raid zero).\nThen the same query is executed on an SSD (Raid Zero).\n\nWhy did the HDD (7200 rpm) perform better?\nHDD - TIME 9 MINUTES\nSSD - TIME 15 MINUTES\n\nAs far as I know, the SSD has a reading that is 300 times faster than SSD.\n\n--- Execution Plans---\nssd 40g\nhttps://explain.depesz.com/s/rHkh\n\nhdd 40g\nhttps://explain.depesz.com/s/l4sq\n\nQuery ------------------------------------\n\nselect\n nation,\n o_year,\n sum(amount) as sum_profit\nfrom\n (\n select\n n_name as nation,\n extract(year from o_orderdate) as o_year,\n l_extendedprice * (1 - l_discount) - ps_supplycost *\nl_quantity as amount\n from\n part,\n supplier,\n lineitem,\n partsupp,\n orders,\n nation\n where\n s_suppkey = l_suppkey\n and ps_suppkey = l_suppkey\n and ps_partkey = l_partkey\n and p_partkey = l_partkey\n and o_orderkey = l_orderkey\n and s_nationkey = n_nationkey\n and p_name like '%orchid%'\n ) as profit\ngroup by\n nation,\n o_year\norder by\n nation,\n o_year desc\n\n",
"msg_date": "Mon, 16 Jul 2018 22:00:41 -0700",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "What's the on disk cache size for each drive? The better HDD performance\nproblem won't be sustained with large amounts of data and several different\nqueries.\n\n - - Ben Scherrey\n\nOn Tue, Jul 17, 2018, 12:01 PM Neto pr <[email protected]> wrote:\n\n> Dear,\n> Some of you can help me understand this.\n>\n> This query plan is executed in the query below (query 9 of TPC-H\n> Benchmark, with scale 40, database with approximately 40 gb).\n>\n> The experiment consisted of running the query on a HDD (Raid zero).\n> Then the same query is executed on an SSD (Raid Zero).\n>\n> Why did the HDD (7200 rpm) perform better?\n> HDD - TIME 9 MINUTES\n> SSD - TIME 15 MINUTES\n>\n> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>\n> --- Execution Plans---\n> ssd 40g\n> https://explain.depesz.com/s/rHkh\n>\n> hdd 40g\n> https://explain.depesz.com/s/l4sq\n>\n> Query ------------------------------------\n>\n> select\n> nation,\n> o_year,\n> sum(amount) as sum_profit\n> from\n> (\n> select\n> n_name as nation,\n> extract(year from o_orderdate) as o_year,\n> l_extendedprice * (1 - l_discount) - ps_supplycost *\n> l_quantity as amount\n> from\n> part,\n> supplier,\n> lineitem,\n> partsupp,\n> orders,\n> nation\n> where\n> s_suppkey = l_suppkey\n> and ps_suppkey = l_suppkey\n> and ps_partkey = l_partkey\n> and p_partkey = l_partkey\n> and o_orderkey = l_orderkey\n> and s_nationkey = n_nationkey\n> and p_name like '%orchid%'\n> ) as profit\n> group by\n> nation,\n> o_year\n> order by\n> nation,\n> o_year desc\n>\n>\n\nWhat's the on disk cache size for each drive? The better HDD performance problem won't be sustained with large amounts of data and several different queries. - - Ben Scherrey On Tue, Jul 17, 2018, 12:01 PM Neto pr <[email protected]> wrote:Dear,\nSome of you can help me understand this.\n\nThis query plan is executed in the query below (query 9 of TPC-H\nBenchmark, with scale 40, database with approximately 40 gb).\n\nThe experiment consisted of running the query on a HDD (Raid zero).\nThen the same query is executed on an SSD (Raid Zero).\n\nWhy did the HDD (7200 rpm) perform better?\nHDD - TIME 9 MINUTES\nSSD - TIME 15 MINUTES\n\nAs far as I know, the SSD has a reading that is 300 times faster than SSD.\n\n--- Execution Plans---\nssd 40g\nhttps://explain.depesz.com/s/rHkh\n\nhdd 40g\nhttps://explain.depesz.com/s/l4sq\n\nQuery ------------------------------------\n\nselect\n nation,\n o_year,\n sum(amount) as sum_profit\nfrom\n (\n select\n n_name as nation,\n extract(year from o_orderdate) as o_year,\n l_extendedprice * (1 - l_discount) - ps_supplycost *\nl_quantity as amount\n from\n part,\n supplier,\n lineitem,\n partsupp,\n orders,\n nation\n where\n s_suppkey = l_suppkey\n and ps_suppkey = l_suppkey\n and ps_partkey = l_partkey\n and p_partkey = l_partkey\n and o_orderkey = l_orderkey\n and s_nationkey = n_nationkey\n and p_name like '%orchid%'\n ) as profit\ngroup by\n nation,\n o_year\norder by\n nation,\n o_year desc",
"msg_date": "Tue, 17 Jul 2018 12:17:22 +0700",
"msg_from": "Benjamin Scherrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Can you show the configuration of postgresql.conf?\nQuery configuration method:\nSelect name, setting from pg_settings where name ~ 'buffers|cpu|^enable';\n\n\nOn 2018年07月17日 13:17, Benjamin Scherrey wrote:\n> What's the on disk cache size for each drive? The better HDD \n> performance problem won't be sustained with large amounts of data and \n> several different queries.\n>\n> - - Ben Scherrey\n>\n> On Tue, Jul 17, 2018, 12:01 PM Neto pr <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Dear,\n> Some of you can help me understand this.\n>\n> This query plan is executed in the query below (query 9 of TPC-H\n> Benchmark, with scale 40, database with approximately 40 gb).\n>\n> The experiment consisted of running the query on a HDD (Raid zero).\n> Then the same query is executed on an SSD (Raid Zero).\n>\n> Why did the HDD (7200 rpm) perform better?\n> HDD - TIME 9 MINUTES\n> SSD - TIME 15 MINUTES\n>\n> As far as I know, the SSD has a reading that is 300 times faster\n> than SSD.\n>\n> --- Execution Plans---\n> ssd 40g\n> https://explain.depesz.com/s/rHkh\n>\n> hdd 40g\n> https://explain.depesz.com/s/l4sq\n>\n> Query ------------------------------------\n>\n> select\n> nation,\n> o_year,\n> sum(amount) as sum_profit\n> from\n> (\n> select\n> n_name as nation,\n> extract(year from o_orderdate) as o_year,\n> l_extendedprice * (1 - l_discount) - ps_supplycost *\n> l_quantity as amount\n> from\n> part,\n> supplier,\n> lineitem,\n> partsupp,\n> orders,\n> nation\n> where\n> s_suppkey = l_suppkey\n> and ps_suppkey = l_suppkey\n> and ps_partkey = l_partkey\n> and p_partkey = l_partkey\n> and o_orderkey = l_orderkey\n> and s_nationkey = n_nationkey\n> and p_name like '%orchid%'\n> ) as profit\n> group by\n> nation,\n> o_year\n> order by\n> nation,\n> o_year desc\n>\n\n-- \n<b>张文升</b> winston<br />\nPostgreSQL DBA<br />\n\n\n\n\n\n\n\nCan you\n show the configuration of postgresql.conf?\nQuery configuration method:\nSelect name, setting from pg_settings where name\n ~ 'buffers|cpu|^enable';\n\nOn 2018年07月17日 13:17, Benjamin Scherrey\n wrote:\n\n\n\nWhat's the on disk cache size for each drive? The better\n HDD performance problem won't be sustained with large amounts\n of data and several different queries. 
\n\n\n - - Ben Scherrey \n\n\nOn Tue, Jul 17, 2018, 12:01 PM Neto pr <[email protected]>\n wrote:\n\nDear,\n Some of you can help me understand this.\n\n This query plan is executed in the query below (query 9 of\n TPC-H\n Benchmark, with scale 40, database with approximately 40\n gb).\n\n The experiment consisted of running the query on a HDD\n (Raid zero).\n Then the same query is executed on an SSD (Raid Zero).\n\n Why did the HDD (7200 rpm) perform better?\n HDD - TIME 9 MINUTES\n SSD - TIME 15 MINUTES\n\n As far as I know, the SSD has a reading that is 300 times\n faster than SSD.\n\n --- Execution Plans---\n ssd 40g\nhttps://explain.depesz.com/s/rHkh\n\n hdd 40g\nhttps://explain.depesz.com/s/l4sq\n\n Query ------------------------------------\n\n select\n nation,\n o_year,\n sum(amount) as sum_profit\n from\n (\n select\n n_name as nation,\n extract(year from o_orderdate) as o_year,\n l_extendedprice * (1 - l_discount) -\n ps_supplycost *\n l_quantity as amount\n from\n part,\n supplier,\n lineitem,\n partsupp,\n orders,\n nation\n where\n s_suppkey = l_suppkey\n and ps_suppkey = l_suppkey\n and ps_partkey = l_partkey\n and p_partkey = l_partkey\n and o_orderkey = l_orderkey\n and s_nationkey = n_nationkey\n and p_name like '%orchid%'\n ) as profit\n group by\n nation,\n o_year\n order by\n nation,\n o_year desc\n\n\n\n\n\n\n\n-- \n<b>张文升</b> winston<br />\nPostgreSQL DBA<br />",
"msg_date": "Tue, 17 Jul 2018 13:43:52 +0800",
"msg_from": "winston cheung <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Can you post make and model of the SSD concerned? In general the cheaper \nconsumer grade ones cannot do sustained read/writes at anything like \ntheir quoted max values.\n\nregards\n\nMark\n\n\nOn 17/07/18 17:00, Neto pr wrote:\n> Dear,\n> Some of you can help me understand this.\n>\n> This query plan is executed in the query below (query 9 of TPC-H\n> Benchmark, with scale 40, database with approximately 40 gb).\n>\n> The experiment consisted of running the query on a HDD (Raid zero).\n> Then the same query is executed on an SSD (Raid Zero).\n>\n> Why did the HDD (7200 rpm) perform better?\n> HDD - TIME 9 MINUTES\n> SSD - TIME 15 MINUTES\n>\n> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>\n> --- Execution Plans---\n> ssd 40g\n> https://explain.depesz.com/s/rHkh\n>\n> hdd 40g\n> https://explain.depesz.com/s/l4sq\n>\n> Query ------------------------------------\n>\n> select\n> nation,\n> o_year,\n> sum(amount) as sum_profit\n> from\n> (\n> select\n> n_name as nation,\n> extract(year from o_orderdate) as o_year,\n> l_extendedprice * (1 - l_discount) - ps_supplycost *\n> l_quantity as amount\n> from\n> part,\n> supplier,\n> lineitem,\n> partsupp,\n> orders,\n> nation\n> where\n> s_suppkey = l_suppkey\n> and ps_suppkey = l_suppkey\n> and ps_partkey = l_partkey\n> and p_partkey = l_partkey\n> and o_orderkey = l_orderkey\n> and s_nationkey = n_nationkey\n> and p_name like '%orchid%'\n> ) as profit\n> group by\n> nation,\n> o_year\n> order by\n> nation,\n> o_year desc\n>\n\n\n",
"msg_date": "Tue, 17 Jul 2018 18:07:44 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "> Why did the HDD (7200 rpm) perform better?\r\n\r\nAre these different systems? Have you ruled out that during the HDD test the\r\ndata was available in memory?",
"msg_date": "Tue, 17 Jul 2018 07:40:51 +0000",
"msg_from": "Robert Zenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "As already mentioned by Robert, please let us know if you made sure that\nnothing was fished from RAM, over the faster test.\n\nIn other words, make sure that all caches are dropped between one test\nand another.\n\nAlso,to better picture the situation, would be good to know:\n\n- which SSD (brand/model) are you using?\n- which HDD?\n- how are the disks configured? RAID? or not?\n- on which OS?\n- what are the mount options? SSD requires tuning\n- did you make sure that no other query was running at the time of the\nbench?\n- are you making a comparison on the same machine?\n- is it HW or VM? benchs should better run on bare metal to avoid\nresults pollution (eg: other VMS on the same hypervisor using the disk,\nhost caching and so on)\n- how many times did you run the tests?\n- did you change postgres configuration over tests?\n- can you post postgres config?\n- what about vacuums or maintenance tasks running in the background?\n\nAlso, to benchmark disks i would not use a custom query but pgbench.\n\nBe aware: running benchmarks is a science, therefore needs a scientific\napproach :)\n\nregards\n\nfabio pardi\n\n\n\nOn 07/17/2018 07:00 AM, Neto pr wrote:\n> Dear,\n> Some of you can help me understand this.\n> \n> This query plan is executed in the query below (query 9 of TPC-H\n> Benchmark, with scale 40, database with approximately 40 gb).\n> \n> The experiment consisted of running the query on a HDD (Raid zero).\n> Then the same query is executed on an SSD (Raid Zero).\n> \n> Why did the HDD (7200 rpm) perform better?\n> HDD - TIME 9 MINUTES\n> SSD - TIME 15 MINUTES\n> \n> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n> \n> --- Execution Plans---\n> ssd 40g\n> https://explain.depesz.com/s/rHkh\n> \n> hdd 40g\n> https://explain.depesz.com/s/l4sq\n> \n> Query ------------------------------------\n> \n> select\n> nation,\n> o_year,\n> sum(amount) as sum_profit\n> from\n> (\n> select\n> n_name as nation,\n> extract(year from o_orderdate) as o_year,\n> l_extendedprice * (1 - l_discount) - ps_supplycost *\n> l_quantity as amount\n> from\n> part,\n> supplier,\n> lineitem,\n> partsupp,\n> orders,\n> nation\n> where\n> s_suppkey = l_suppkey\n> and ps_suppkey = l_suppkey\n> and ps_partkey = l_partkey\n> and p_partkey = l_partkey\n> and o_orderkey = l_orderkey\n> and s_nationkey = n_nationkey\n> and p_name like '%orchid%'\n> ) as profit\n> group by\n> nation,\n> o_year\n> order by\n> nation,\n> o_year desc\n> \n\n",
"msg_date": "Tue, 17 Jul 2018 10:08:03 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Sorry.. I replied in the wrong message before ...\nfollows my response.\n-------------\n\nThanks all, but I still have not figured it out.\nThis is really strange because the tests were done on the same machine\n(I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\ncores), and POSTGRESQL 10.1.\n- Only the mentioned query running at the time of the test.\n- I repeated the query 7 times and did not change the results.\n- Before running each batch of 7 executions, I discarded the Operating\nSystem cache and restarted DBMS like this:\n(echo 3> / proc / sys / vm / drop_caches;\n\ndiscs:\n- 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n- 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n\n- The Operating System and the Postgresql DBMS are installed on the SSD disk.\n\nBest Regards\n[ ]`s Neto\n\n2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n> As already mentioned by Robert, please let us know if you made sure that\n> nothing was fished from RAM, over the faster test.\n>\n> In other words, make sure that all caches are dropped between one test\n> and another.\n>\n> Also,to better picture the situation, would be good to know:\n>\n> - which SSD (brand/model) are you using?\n> - which HDD?\n> - how are the disks configured? RAID? or not?\n> - on which OS?\n> - what are the mount options? SSD requires tuning\n> - did you make sure that no other query was running at the time of the\n> bench?\n> - are you making a comparison on the same machine?\n> - is it HW or VM? benchs should better run on bare metal to avoid\n> results pollution (eg: other VMS on the same hypervisor using the disk,\n> host caching and so on)\n> - how many times did you run the tests?\n> - did you change postgres configuration over tests?\n> - can you post postgres config?\n> - what about vacuums or maintenance tasks running in the background?\n>\n> Also, to benchmark disks i would not use a custom query but pgbench.\n>\n> Be aware: running benchmarks is a science, therefore needs a scientific\n> approach :)\n>\n> regards\n>\n> fabio pardi\n>\n>\n>\n> On 07/17/2018 07:00 AM, Neto pr wrote:\n>> Dear,\n>> Some of you can help me understand this.\n>>\n>> This query plan is executed in the query below (query 9 of TPC-H\n>> Benchmark, with scale 40, database with approximately 40 gb).\n>>\n>> The experiment consisted of running the query on a HDD (Raid zero).\n>> Then the same query is executed on an SSD (Raid Zero).\n>>\n>> Why did the HDD (7200 rpm) perform better?\n>> HDD - TIME 9 MINUTES\n>> SSD - TIME 15 MINUTES\n>>\n>> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>>\n>> --- Execution Plans---\n>> ssd 40g\n>> https://explain.depesz.com/s/rHkh\n>>\n>> hdd 40g\n>> https://explain.depesz.com/s/l4sq\n>>\n>> Query ------------------------------------\n>>\n>> select\n>> nation,\n>> o_year,\n>> sum(amount) as sum_profit\n>> from\n>> (\n>> select\n>> n_name as nation,\n>> extract(year from o_orderdate) as o_year,\n>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>> l_quantity as amount\n>> from\n>> part,\n>> supplier,\n>> lineitem,\n>> partsupp,\n>> orders,\n>> nation\n>> where\n>> s_suppkey = l_suppkey\n>> and ps_suppkey = l_suppkey\n>> and ps_partkey = l_partkey\n>> and p_partkey = l_partkey\n>> and o_orderkey = l_orderkey\n>> and s_nationkey = n_nationkey\n>> and p_name like '%orchid%'\n>> ) as profit\n>> group by\n>> nation,\n>> o_year\n>> order by\n>> nation,\n>> o_year desc\n>>\n>\n\n",
"msg_date": "Tue, 17 Jul 2018 06:04:04 -0700",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "2018-07-17 10:04 GMT-03:00 Neto pr <[email protected]>:\n> Sorry.. I replied in the wrong message before ...\n> follows my response.\n> -------------\n>\n> Thanks all, but I still have not figured it out.\n> This is really strange because the tests were done on the same machine\n> (I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\n> cores), and POSTGRESQL 10.1.\n> - Only the mentioned query running at the time of the test.\n> - I repeated the query 7 times and did not change the results.\n> - Before running each batch of 7 executions, I discarded the Operating\n> System cache and restarted DBMS like this:\n> (echo 3> / proc / sys / vm / drop_caches;\n>\n> discs:\n> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>\n> - The Operating System and the Postgresql DBMS are installed on the SSD disk.\n>\n\nOne more information.\nI used default configuration to Postgresql.conf\nOnly exception is to :\nrandom_page_cost on SSD is 1.1\n\n\n> Best Regards\n> [ ]`s Neto\n>\n> 2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n>> As already mentioned by Robert, please let us know if you made sure that\n>> nothing was fished from RAM, over the faster test.\n>>\n>> In other words, make sure that all caches are dropped between one test\n>> and another.\n>>\n>> Also,to better picture the situation, would be good to know:\n>>\n>> - which SSD (brand/model) are you using?\n>> - which HDD?\n>> - how are the disks configured? RAID? or not?\n>> - on which OS?\n>> - what are the mount options? SSD requires tuning\n>> - did you make sure that no other query was running at the time of the\n>> bench?\n>> - are you making a comparison on the same machine?\n>> - is it HW or VM? 
benchs should better run on bare metal to avoid\n>> results pollution (eg: other VMS on the same hypervisor using the disk,\n>> host caching and so on)\n>> - how many times did you run the tests?\n>> - did you change postgres configuration over tests?\n>> - can you post postgres config?\n>> - what about vacuums or maintenance tasks running in the background?\n>>\n>> Also, to benchmark disks i would not use a custom query but pgbench.\n>>\n>> Be aware: running benchmarks is a science, therefore needs a scientific\n>> approach :)\n>>\n>> regards\n>>\n>> fabio pardi\n>>\n>>\n>>\n>> On 07/17/2018 07:00 AM, Neto pr wrote:\n>>> Dear,\n>>> Some of you can help me understand this.\n>>>\n>>> This query plan is executed in the query below (query 9 of TPC-H\n>>> Benchmark, with scale 40, database with approximately 40 gb).\n>>>\n>>> The experiment consisted of running the query on a HDD (Raid zero).\n>>> Then the same query is executed on an SSD (Raid Zero).\n>>>\n>>> Why did the HDD (7200 rpm) perform better?\n>>> HDD - TIME 9 MINUTES\n>>> SSD - TIME 15 MINUTES\n>>>\n>>> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>>>\n>>> --- Execution Plans---\n>>> ssd 40g\n>>> https://explain.depesz.com/s/rHkh\n>>>\n>>> hdd 40g\n>>> https://explain.depesz.com/s/l4sq\n>>>\n>>> Query ------------------------------------\n>>>\n>>> select\n>>> nation,\n>>> o_year,\n>>> sum(amount) as sum_profit\n>>> from\n>>> (\n>>> select\n>>> n_name as nation,\n>>> extract(year from o_orderdate) as o_year,\n>>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>>> l_quantity as amount\n>>> from\n>>> part,\n>>> supplier,\n>>> lineitem,\n>>> partsupp,\n>>> orders,\n>>> nation\n>>> where\n>>> s_suppkey = l_suppkey\n>>> and ps_suppkey = l_suppkey\n>>> and ps_partkey = l_partkey\n>>> and p_partkey = l_partkey\n>>> and o_orderkey = l_orderkey\n>>> and s_nationkey = n_nationkey\n>>> and p_name like '%orchid%'\n>>> ) as profit\n>>> group by\n>>> nation,\n>>> o_year\n>>> order by\n>>> nation,\n>>> o_year desc\n>>>\n>>\n\n",
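Since both disk types sit in the same server, another option besides the global random_page_cost = 1.1 Neto mentions is to keep the data on separate tablespaces and give each one its own planner cost; a sketch with made-up tablespace names and paths:

-- Hypothetical locations; the directories must exist and belong to the
-- postgres OS user.
CREATE TABLESPACE ssd_space LOCATION '/ssd/pg_tblspc';
CREATE TABLESPACE hdd_space LOCATION '/hdd/pg_tblspc';

ALTER TABLESPACE ssd_space SET (random_page_cost = 1.1);
ALTER TABLESPACE hdd_space SET (random_page_cost = 4.0);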
"msg_date": "Tue, 17 Jul 2018 10:19:34 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "On Tue, Jul 17, 2018 at 1:00 AM, Neto pr <[email protected]> wrote:\n\n> Dear,\n> Some of you can help me understand this.\n>\n> This query plan is executed in the query below (query 9 of TPC-H\n> Benchmark, with scale 40, database with approximately 40 gb).\n>\n> The experiment consisted of running the query on a HDD (Raid zero).\n> Then the same query is executed on an SSD (Raid Zero).\n>\n> Why did the HDD (7200 rpm) perform better?\n> HDD - TIME 9 MINUTES\n> SSD - TIME 15 MINUTES\n>\n> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>\n\nIs the 300 times faster comparing random to random, or sequential to\nsequential? Maybe your SSD simply fails to perform as advertised. This\nwould not surprise me at all.\n\nTo remove some confounding variables, can you turn off parallelism and\nrepeat the queries? (Yes, they will probably get slower. But is the\nrelative timings still the same?) Also, turn on track_io_timings and\nrepeat the \"EXPLAIN (ANALYZE, BUFFERS)\", perhaps with TIMINGS OFF.\n\nAlso, see how long it takes to read the entire database, or just the\nlargest table, outside of postgres.\n\nSomething like:\n\ntime tar -f - $PGDATA/base | wc -c\n\nor\n\ntime cat $PGDATA/base/<database oid>/<large table file node>* | wc -c\n\nCheers,\n\nJeff\n\nOn Tue, Jul 17, 2018 at 1:00 AM, Neto pr <[email protected]> wrote:Dear,\nSome of you can help me understand this.\n\nThis query plan is executed in the query below (query 9 of TPC-H\nBenchmark, with scale 40, database with approximately 40 gb).\n\nThe experiment consisted of running the query on a HDD (Raid zero).\nThen the same query is executed on an SSD (Raid Zero).\n\nWhy did the HDD (7200 rpm) perform better?\nHDD - TIME 9 MINUTES\nSSD - TIME 15 MINUTES\n\nAs far as I know, the SSD has a reading that is 300 times faster than SSD.Is the 300 times faster comparing random to random, or sequential to sequential? Maybe your SSD simply fails to perform as advertised. This would not surprise me at all.To remove some confounding variables, can you turn off parallelism and repeat the queries? (Yes, they will probably get slower. But is the relative timings still the same?) Also, turn on track_io_timings and repeat the \"EXPLAIN (ANALYZE, BUFFERS)\", perhaps with TIMINGS OFF.Also, see how long it takes to read the entire database, or just the largest table, outside of postgres.Something like:time tar -f - $PGDATA/base | wc -cortime cat $PGDATA/base/<database oid>/<large table file node>* | wc -c Cheers,Jeff",
"msg_date": "Tue, 17 Jul 2018 09:28:48 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Hi Neto,\n\nYou should list the SSD model also - there are pleinty of Samsung EVO \ndrives - and they are not professional grade.\n\nAmong the the possible issues, the most likely (from my point of view) are:\n\n- TRIM command doesn't go through the RAID (which is really likely) - so \nthe SSD controller think it's full, and keep pushing blocks around to \nlevel wear, causing massive perf degradation - please check this config \non you RAID driver/adapter\n\n- TRIM is not configured on the OS level for the SSD\n\n- Partitions is not correctly aligned on the SSD blocks\n\n\nWithout so little details on your system, we can only try to guess the \nreal issues\n\n\nNicolas\n\nNicolas CHARLES\nLe 17/07/2018 à 15:19, Neto pr a écrit :\n> 2018-07-17 10:04 GMT-03:00 Neto pr <[email protected]>:\n>> Sorry.. I replied in the wrong message before ...\n>> follows my response.\n>> -------------\n>>\n>> Thanks all, but I still have not figured it out.\n>> This is really strange because the tests were done on the same machine\n>> (I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\n>> cores), and POSTGRESQL 10.1.\n>> - Only the mentioned query running at the time of the test.\n>> - I repeated the query 7 times and did not change the results.\n>> - Before running each batch of 7 executions, I discarded the Operating\n>> System cache and restarted DBMS like this:\n>> (echo 3> / proc / sys / vm / drop_caches;\n>>\n>> discs:\n>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>\n>> - The Operating System and the Postgresql DBMS are installed on the SSD disk.\n>>\n> One more information.\n> I used default configuration to Postgresql.conf\n> Only exception is to :\n> random_page_cost on SSD is 1.1\n>\n>\n>> Best Regards\n>> [ ]`s Neto\n>>\n>> 2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n>>> As already mentioned by Robert, please let us know if you made sure that\n>>> nothing was fished from RAM, over the faster test.\n>>>\n>>> In other words, make sure that all caches are dropped between one test\n>>> and another.\n>>>\n>>> Also,to better picture the situation, would be good to know:\n>>>\n>>> - which SSD (brand/model) are you using?\n>>> - which HDD?\n>>> - how are the disks configured? RAID? or not?\n>>> - on which OS?\n>>> - what are the mount options? SSD requires tuning\n>>> - did you make sure that no other query was running at the time of the\n>>> bench?\n>>> - are you making a comparison on the same machine?\n>>> - is it HW or VM? 
benchs should better run on bare metal to avoid\n>>> results pollution (eg: other VMS on the same hypervisor using the disk,\n>>> host caching and so on)\n>>> - how many times did you run the tests?\n>>> - did you change postgres configuration over tests?\n>>> - can you post postgres config?\n>>> - what about vacuums or maintenance tasks running in the background?\n>>>\n>>> Also, to benchmark disks i would not use a custom query but pgbench.\n>>>\n>>> Be aware: running benchmarks is a science, therefore needs a scientific\n>>> approach :)\n>>>\n>>> regards\n>>>\n>>> fabio pardi\n>>>\n>>>\n>>>\n>>> On 07/17/2018 07:00 AM, Neto pr wrote:\n>>>> Dear,\n>>>> Some of you can help me understand this.\n>>>>\n>>>> This query plan is executed in the query below (query 9 of TPC-H\n>>>> Benchmark, with scale 40, database with approximately 40 gb).\n>>>>\n>>>> The experiment consisted of running the query on a HDD (Raid zero).\n>>>> Then the same query is executed on an SSD (Raid Zero).\n>>>>\n>>>> Why did the HDD (7200 rpm) perform better?\n>>>> HDD - TIME 9 MINUTES\n>>>> SSD - TIME 15 MINUTES\n>>>>\n>>>> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>>>>\n>>>> --- Execution Plans---\n>>>> ssd 40g\n>>>> https://explain.depesz.com/s/rHkh\n>>>>\n>>>> hdd 40g\n>>>> https://explain.depesz.com/s/l4sq\n>>>>\n>>>> Query ------------------------------------\n>>>>\n>>>> select\n>>>> nation,\n>>>> o_year,\n>>>> sum(amount) as sum_profit\n>>>> from\n>>>> (\n>>>> select\n>>>> n_name as nation,\n>>>> extract(year from o_orderdate) as o_year,\n>>>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>>>> l_quantity as amount\n>>>> from\n>>>> part,\n>>>> supplier,\n>>>> lineitem,\n>>>> partsupp,\n>>>> orders,\n>>>> nation\n>>>> where\n>>>> s_suppkey = l_suppkey\n>>>> and ps_suppkey = l_suppkey\n>>>> and ps_partkey = l_partkey\n>>>> and p_partkey = l_partkey\n>>>> and o_orderkey = l_orderkey\n>>>> and s_nationkey = n_nationkey\n>>>> and p_name like '%orchid%'\n>>>> ) as profit\n>>>> group by\n>>>> nation,\n>>>> o_year\n>>>> order by\n>>>> nation,\n>>>> o_year desc\n>>>>\n\n\n",
"msg_date": "Tue, 17 Jul 2018 15:44:28 +0200",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "On 17.07.2018 15:44, Nicolas Charles wrote:\r\n> - Partitions is not correctly aligned on the SSD blocks\r\n\r\nDoes that really make a noticeable difference? If yes, have you got some further\r\nreading material on that?",
"msg_date": "Tue, 17 Jul 2018 13:50:03 +0000",
"msg_from": "Robert Zenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "If you have a RAID cache, i would disable it, since we are only focusing\non the disks. Cache can give you inconsistent data (even it looks like\nis not the case here).\n\nAlso, we can do a step backward, and exclude postgres from the picture\nfor the moment.\n\ntry to perform a dd test in reading from disk, and let us know.\n\nlike:\n\n- create big_enough_file\n- empty OS cache\n- dd if=big_enough_file of=/dev/null\n\nand post the results for both disks.\n\nAlso i think it makes not much sense testing on RAID 0. I would start\nperforming tests on a single disk, bypassing RAID (or, as mentioned, at\nleast disabling cache).\n\nThe findings should narrow the focus\n\n\nregards,\n\nfabio pardi\n\nOn 07/17/2018 03:19 PM, Neto pr wrote:\n> 2018-07-17 10:04 GMT-03:00 Neto pr <[email protected]>:\n>> Sorry.. I replied in the wrong message before ...\n>> follows my response.\n>> -------------\n>>\n>> Thanks all, but I still have not figured it out.\n>> This is really strange because the tests were done on the same machine\n>> (I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\n>> cores), and POSTGRESQL 10.1.\n>> - Only the mentioned query running at the time of the test.\n>> - I repeated the query 7 times and did not change the results.\n>> - Before running each batch of 7 executions, I discarded the Operating\n>> System cache and restarted DBMS like this:\n>> (echo 3> / proc / sys / vm / drop_caches;\n>>\n>> discs:\n>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>\n>> - The Operating System and the Postgresql DBMS are installed on the SSD disk.\n>>\n> \n> One more information.\n> I used default configuration to Postgresql.conf\n> Only exception is to :\n> random_page_cost on SSD is 1.1\n> \n> \n>> Best Regards\n>> [ ]`s Neto\n>>\n>> 2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n>>> As already mentioned by Robert, please let us know if you made sure that\n>>> nothing was fished from RAM, over the faster test.\n>>>\n>>> In other words, make sure that all caches are dropped between one test\n>>> and another.\n>>>\n>>> Also,to better picture the situation, would be good to know:\n>>>\n>>> - which SSD (brand/model) are you using?\n>>> - which HDD?\n>>> - how are the disks configured? RAID? or not?\n>>> - on which OS?\n>>> - what are the mount options? SSD requires tuning\n>>> - did you make sure that no other query was running at the time of the\n>>> bench?\n>>> - are you making a comparison on the same machine?\n>>> - is it HW or VM? 
benchs should better run on bare metal to avoid\n>>> results pollution (eg: other VMS on the same hypervisor using the disk,\n>>> host caching and so on)\n>>> - how many times did you run the tests?\n>>> - did you change postgres configuration over tests?\n>>> - can you post postgres config?\n>>> - what about vacuums or maintenance tasks running in the background?\n>>>\n>>> Also, to benchmark disks i would not use a custom query but pgbench.\n>>>\n>>> Be aware: running benchmarks is a science, therefore needs a scientific\n>>> approach :)\n>>>\n>>> regards\n>>>\n>>> fabio pardi\n>>>\n>>>\n>>>\n>>> On 07/17/2018 07:00 AM, Neto pr wrote:\n>>>> Dear,\n>>>> Some of you can help me understand this.\n>>>>\n>>>> This query plan is executed in the query below (query 9 of TPC-H\n>>>> Benchmark, with scale 40, database with approximately 40 gb).\n>>>>\n>>>> The experiment consisted of running the query on a HDD (Raid zero).\n>>>> Then the same query is executed on an SSD (Raid Zero).\n>>>>\n>>>> Why did the HDD (7200 rpm) perform better?\n>>>> HDD - TIME 9 MINUTES\n>>>> SSD - TIME 15 MINUTES\n>>>>\n>>>> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>>>>\n>>>> --- Execution Plans---\n>>>> ssd 40g\n>>>> https://explain.depesz.com/s/rHkh\n>>>>\n>>>> hdd 40g\n>>>> https://explain.depesz.com/s/l4sq\n>>>>\n>>>> Query ------------------------------------\n>>>>\n>>>> select\n>>>> nation,\n>>>> o_year,\n>>>> sum(amount) as sum_profit\n>>>> from\n>>>> (\n>>>> select\n>>>> n_name as nation,\n>>>> extract(year from o_orderdate) as o_year,\n>>>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>>>> l_quantity as amount\n>>>> from\n>>>> part,\n>>>> supplier,\n>>>> lineitem,\n>>>> partsupp,\n>>>> orders,\n>>>> nation\n>>>> where\n>>>> s_suppkey = l_suppkey\n>>>> and ps_suppkey = l_suppkey\n>>>> and ps_partkey = l_partkey\n>>>> and p_partkey = l_partkey\n>>>> and o_orderkey = l_orderkey\n>>>> and s_nationkey = n_nationkey\n>>>> and p_name like '%orchid%'\n>>>> ) as profit\n>>>> group by\n>>>> nation,\n>>>> o_year\n>>>> order by\n>>>> nation,\n>>>> o_year desc\n>>>>\n>>>\n\n",
"msg_date": "Tue, 17 Jul 2018 15:55:31 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
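A minimal sketch of the dd read test described in the message above, assuming the HDD array is mounted at /mnt/hdd and the SSD array at /mnt/ssd (paths and sizes are illustrative, run as root):

    # create a test file larger than RAM on each array (8 GB here)
    dd if=/dev/zero of=/mnt/ssd/big_enough_file bs=1M count=8192 conv=fdatasync
    dd if=/dev/zero of=/mnt/hdd/big_enough_file bs=1M count=8192 conv=fdatasync

    # empty the OS page cache so the reads really hit the disks
    sync; echo 3 > /proc/sys/vm/drop_caches

    # sequential read test; drop the cache again before each run
    dd if=/mnt/ssd/big_enough_file of=/dev/null bs=1M
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/hdd/big_enough_file of=/dev/null bs=1M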
{
"msg_contents": "2018-07-17 10:44 GMT-03:00 Nicolas Charles <[email protected]>:\n> Hi Neto,\n>\n> You should list the SSD model also - there are pleinty of Samsung EVO drives\n> - and they are not professional grade.\n>\n> Among the the possible issues, the most likely (from my point of view) are:\n>\n> - TRIM command doesn't go through the RAID (which is really likely) - so the\n> SSD controller think it's full, and keep pushing blocks around to level\n> wear, causing massive perf degradation - please check this config on you\n> RAID driver/adapter\n>\n> - TRIM is not configured on the OS level for the SSD\n>\n> - Partitions is not correctly aligned on the SSD blocks\n>\n>\n> Without so little details on your system, we can only try to guess the real\n> issues\n>\n\nThank you Nicolas, for your tips.\nI believe your assumption is right.\n\nThis SSD really is not professional, even if Samsung's advertisement\nsays yes. If I have to buy another SSD I will prefer INTEL SSDs.\n\nI had a previous problem with it (Sansung EVO) as it lost in\nperformance to a SAS HDD, but however, the SAS HDD was a 12 Gb/s\ntransfer rate and the SSD was 6 Gb/s.\n\nBut now I tested against an HDD (7200 RPM) that has the same transfer\nrate as the SSD 6 Gb/sec. and could not lose in performance.\n\nMaybe it's the unconfigured trim.\n\nCould you give me some help on how I could check if my RAID is\nconfigured for this, I use Hardware RAID using HP software (HP Storage\nProvider on boot).\nAnd on Debian 8 Operating System, how could I check the TRIM configuration ?\n\nBest\n[]'s Neto\n>\n> Nicolas\n>\n> Nicolas CHARLES\n>\n> Le 17/07/2018 à 15:19, Neto pr a écrit :\n>>\n>> 2018-07-17 10:04 GMT-03:00 Neto pr <[email protected]>:\n>>>\n>>> Sorry.. I replied in the wrong message before ...\n>>> follows my response.\n>>> -------------\n>>>\n>>> Thanks all, but I still have not figured it out.\n>>> This is really strange because the tests were done on the same machine\n>>> (I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\n>>> cores), and POSTGRESQL 10.1.\n>>> - Only the mentioned query running at the time of the test.\n>>> - I repeated the query 7 times and did not change the results.\n>>> - Before running each batch of 7 executions, I discarded the Operating\n>>> System cache and restarted DBMS like this:\n>>> (echo 3> / proc / sys / vm / drop_caches;\n>>>\n>>> discs:\n>>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>>\n>>> - The Operating System and the Postgresql DBMS are installed on the SSD\n>>> disk.\n>>>\n>> One more information.\n>> I used default configuration to Postgresql.conf\n>> Only exception is to :\n>> random_page_cost on SSD is 1.1\n>>\n>>\n>>> Best Regards\n>>> [ ]`s Neto\n>>>\n>>> 2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n>>>>\n>>>> As already mentioned by Robert, please let us know if you made sure that\n>>>> nothing was fished from RAM, over the faster test.\n>>>>\n>>>> In other words, make sure that all caches are dropped between one test\n>>>> and another.\n>>>>\n>>>> Also,to better picture the situation, would be good to know:\n>>>>\n>>>> - which SSD (brand/model) are you using?\n>>>> - which HDD?\n>>>> - how are the disks configured? RAID? or not?\n>>>> - on which OS?\n>>>> - what are the mount options? 
SSD requires tuning\n>>>> - did you make sure that no other query was running at the time of the\n>>>> bench?\n>>>> - are you making a comparison on the same machine?\n>>>> - is it HW or VM? benchs should better run on bare metal to avoid\n>>>> results pollution (eg: other VMS on the same hypervisor using the disk,\n>>>> host caching and so on)\n>>>> - how many times did you run the tests?\n>>>> - did you change postgres configuration over tests?\n>>>> - can you post postgres config?\n>>>> - what about vacuums or maintenance tasks running in the background?\n>>>>\n>>>> Also, to benchmark disks i would not use a custom query but pgbench.\n>>>>\n>>>> Be aware: running benchmarks is a science, therefore needs a scientific\n>>>> approach :)\n>>>>\n>>>> regards\n>>>>\n>>>> fabio pardi\n>>>>\n>>>>\n>>>>\n>>>> On 07/17/2018 07:00 AM, Neto pr wrote:\n>>>>>\n>>>>> Dear,\n>>>>> Some of you can help me understand this.\n>>>>>\n>>>>> This query plan is executed in the query below (query 9 of TPC-H\n>>>>> Benchmark, with scale 40, database with approximately 40 gb).\n>>>>>\n>>>>> The experiment consisted of running the query on a HDD (Raid zero).\n>>>>> Then the same query is executed on an SSD (Raid Zero).\n>>>>>\n>>>>> Why did the HDD (7200 rpm) perform better?\n>>>>> HDD - TIME 9 MINUTES\n>>>>> SSD - TIME 15 MINUTES\n>>>>>\n>>>>> As far as I know, the SSD has a reading that is 300 times faster than\n>>>>> SSD.\n>>>>>\n>>>>> --- Execution Plans---\n>>>>> ssd 40g\n>>>>> https://explain.depesz.com/s/rHkh\n>>>>>\n>>>>> hdd 40g\n>>>>> https://explain.depesz.com/s/l4sq\n>>>>>\n>>>>> Query ------------------------------------\n>>>>>\n>>>>> select\n>>>>> nation,\n>>>>> o_year,\n>>>>> sum(amount) as sum_profit\n>>>>> from\n>>>>> (\n>>>>> select\n>>>>> n_name as nation,\n>>>>> extract(year from o_orderdate) as o_year,\n>>>>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>>>>> l_quantity as amount\n>>>>> from\n>>>>> part,\n>>>>> supplier,\n>>>>> lineitem,\n>>>>> partsupp,\n>>>>> orders,\n>>>>> nation\n>>>>> where\n>>>>> s_suppkey = l_suppkey\n>>>>> and ps_suppkey = l_suppkey\n>>>>> and ps_partkey = l_partkey\n>>>>> and p_partkey = l_partkey\n>>>>> and o_orderkey = l_orderkey\n>>>>> and s_nationkey = n_nationkey\n>>>>> and p_name like '%orchid%'\n>>>>> ) as profit\n>>>>> group by\n>>>>> nation,\n>>>>> o_year\n>>>>> order by\n>>>>> nation,\n>>>>> o_year desc\n>>>>>\n>\n\n",
"msg_date": "Tue, 17 Jul 2018 11:00:48 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
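A sketch of how the TRIM situation could be inspected from Debian, assuming the SSD is visible to the OS as /dev/sda; whether TRIM passes through a hardware RAID volume at all depends on the HP controller and cannot be confirmed from the OS side alone:

    # non-zero DISC-GRAN / DISC-MAX columns mean the kernel sees discard support
    lsblk --discard

    # for a directly attached SATA drive, check the drive capability itself
    hdparm -I /dev/sda | grep -i trim

    # try an explicit trim on the SSD mount point; it errors out if discard is unsupported
    fstrim -v /mnt/ssd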
{
"msg_contents": "2018-07-17 10:55 GMT-03:00 Fabio Pardi <[email protected]>:\n> If you have a RAID cache, i would disable it, since we are only focusing\n> on the disks. Cache can give you inconsistent data (even it looks like\n> is not the case here).\n>\n> Also, we can do a step backward, and exclude postgres from the picture\n> for the moment.\n>\n> try to perform a dd test in reading from disk, and let us know.\n>\n> like:\n>\n> - create big_enough_file\n> - empty OS cache\n> - dd if=big_enough_file of=/dev/null\n>\n> and post the results for both disks.\n>\n> Also i think it makes not much sense testing on RAID 0. I would start\n> performing tests on a single disk, bypassing RAID (or, as mentioned, at\n> least disabling cache).\n>\n\nBut in my case, both the 2 SSDs and the 2 HDDs are in RAID ZERO.\nThis way it would not be a valid test ? Because the 2 environments are\nin RAID ZERO.\n\n\n\n> The findings should narrow the focus\n>\n>\n> regards,\n>\n> fabio pardi\n>\n> On 07/17/2018 03:19 PM, Neto pr wrote:\n>> 2018-07-17 10:04 GMT-03:00 Neto pr <[email protected]>:\n>>> Sorry.. I replied in the wrong message before ...\n>>> follows my response.\n>>> -------------\n>>>\n>>> Thanks all, but I still have not figured it out.\n>>> This is really strange because the tests were done on the same machine\n>>> (I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\n>>> cores), and POSTGRESQL 10.1.\n>>> - Only the mentioned query running at the time of the test.\n>>> - I repeated the query 7 times and did not change the results.\n>>> - Before running each batch of 7 executions, I discarded the Operating\n>>> System cache and restarted DBMS like this:\n>>> (echo 3> / proc / sys / vm / drop_caches;\n>>>\n>>> discs:\n>>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>>\n>>> - The Operating System and the Postgresql DBMS are installed on the SSD disk.\n>>>\n>>\n>> One more information.\n>> I used default configuration to Postgresql.conf\n>> Only exception is to :\n>> random_page_cost on SSD is 1.1\n>>\n>>\n>>> Best Regards\n>>> [ ]`s Neto\n>>>\n>>> 2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n>>>> As already mentioned by Robert, please let us know if you made sure that\n>>>> nothing was fished from RAM, over the faster test.\n>>>>\n>>>> In other words, make sure that all caches are dropped between one test\n>>>> and another.\n>>>>\n>>>> Also,to better picture the situation, would be good to know:\n>>>>\n>>>> - which SSD (brand/model) are you using?\n>>>> - which HDD?\n>>>> - how are the disks configured? RAID? or not?\n>>>> - on which OS?\n>>>> - what are the mount options? SSD requires tuning\n>>>> - did you make sure that no other query was running at the time of the\n>>>> bench?\n>>>> - are you making a comparison on the same machine?\n>>>> - is it HW or VM? 
benchs should better run on bare metal to avoid\n>>>> results pollution (eg: other VMS on the same hypervisor using the disk,\n>>>> host caching and so on)\n>>>> - how many times did you run the tests?\n>>>> - did you change postgres configuration over tests?\n>>>> - can you post postgres config?\n>>>> - what about vacuums or maintenance tasks running in the background?\n>>>>\n>>>> Also, to benchmark disks i would not use a custom query but pgbench.\n>>>>\n>>>> Be aware: running benchmarks is a science, therefore needs a scientific\n>>>> approach :)\n>>>>\n>>>> regards\n>>>>\n>>>> fabio pardi\n>>>>\n>>>>\n>>>>\n>>>> On 07/17/2018 07:00 AM, Neto pr wrote:\n>>>>> Dear,\n>>>>> Some of you can help me understand this.\n>>>>>\n>>>>> This query plan is executed in the query below (query 9 of TPC-H\n>>>>> Benchmark, with scale 40, database with approximately 40 gb).\n>>>>>\n>>>>> The experiment consisted of running the query on a HDD (Raid zero).\n>>>>> Then the same query is executed on an SSD (Raid Zero).\n>>>>>\n>>>>> Why did the HDD (7200 rpm) perform better?\n>>>>> HDD - TIME 9 MINUTES\n>>>>> SSD - TIME 15 MINUTES\n>>>>>\n>>>>> As far as I know, the SSD has a reading that is 300 times faster than SSD.\n>>>>>\n>>>>> --- Execution Plans---\n>>>>> ssd 40g\n>>>>> https://explain.depesz.com/s/rHkh\n>>>>>\n>>>>> hdd 40g\n>>>>> https://explain.depesz.com/s/l4sq\n>>>>>\n>>>>> Query ------------------------------------\n>>>>>\n>>>>> select\n>>>>> nation,\n>>>>> o_year,\n>>>>> sum(amount) as sum_profit\n>>>>> from\n>>>>> (\n>>>>> select\n>>>>> n_name as nation,\n>>>>> extract(year from o_orderdate) as o_year,\n>>>>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>>>>> l_quantity as amount\n>>>>> from\n>>>>> part,\n>>>>> supplier,\n>>>>> lineitem,\n>>>>> partsupp,\n>>>>> orders,\n>>>>> nation\n>>>>> where\n>>>>> s_suppkey = l_suppkey\n>>>>> and ps_suppkey = l_suppkey\n>>>>> and ps_partkey = l_partkey\n>>>>> and p_partkey = l_partkey\n>>>>> and o_orderkey = l_orderkey\n>>>>> and s_nationkey = n_nationkey\n>>>>> and p_name like '%orchid%'\n>>>>> ) as profit\n>>>>> group by\n>>>>> nation,\n>>>>> o_year\n>>>>> order by\n>>>>> nation,\n>>>>> o_year desc\n>>>>>\n>>>>\n\n",
"msg_date": "Tue, 17 Jul 2018 11:05:50 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Le 17/07/2018 à 15:50, Robert Zenz a écrit :\n> On 17.07.2018 15:44, Nicolas Charles wrote:\n>> - Partitions is not correctly aligned on the SSD blocks\n> Does that really make a noticeable difference? If yes, have you got some further\n> reading material on that?\n\nI was pretty sure it was, but looking at the litterature, it's not that \nobvious. Especially \nhttps://blog.pgaddict.com/posts/postgresql-performance-on-ext4-and-xfs \nseems to mentions that it doesn't change a thing, while others stress \nout that it is mandatory ( https://www.alibabacloud.com/forum/read-415, \nhttps://www.thomas-krenn.com/en/wiki/Partition_Alignment )\nEven on mecanical drives it's been reported to cause slowdown ( \nhttps://www.ibm.com/developerworks/library/l-linux-on-4kb-sector-disks/index.html \n)\n\nMaybe the OS are more clever now, making it not so important as before ?\n\nNicolas\n\n",
"msg_date": "Tue, 17 Jul 2018 16:08:28 +0200",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
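A sketch of how partition alignment could be checked, assuming the SSD is /dev/sda; partitions whose start sector is a multiple of 2048 are 1 MiB aligned, which is safe for common SSD page and erase-block sizes:

    # show the start sectors of all partitions on the device
    fdisk -l /dev/sda

    # let parted verify partition 1 against the device topology
    parted /dev/sda align-check optimal 1

    # I/O sizes the kernel reports for the device
    cat /sys/block/sda/queue/minimum_io_size /sys/block/sda/queue/optimal_io_size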
{
"msg_contents": "Le 17/07/2018 à 16:00, Neto pr a écrit :\n> 2018-07-17 10:44 GMT-03:00 Nicolas Charles <[email protected]>:\n>> Hi Neto,\n>>\n>> You should list the SSD model also - there are pleinty of Samsung EVO drives\n>> - and they are not professional grade.\n>>\n>> Among the the possible issues, the most likely (from my point of view) are:\n>>\n>> - TRIM command doesn't go through the RAID (which is really likely) - so the\n>> SSD controller think it's full, and keep pushing blocks around to level\n>> wear, causing massive perf degradation - please check this config on you\n>> RAID driver/adapter\n>>\n>> - TRIM is not configured on the OS level for the SSD\n>>\n>> - Partitions is not correctly aligned on the SSD blocks\n>>\n>>\n>> Without so little details on your system, we can only try to guess the real\n>> issues\n>>\n> Thank you Nicolas, for your tips.\n> I believe your assumption is right.\n>\n> This SSD really is not professional, even if Samsung's advertisement\n> says yes. If I have to buy another SSD I will prefer INTEL SSDs.\n>\n> I had a previous problem with it (Sansung EVO) as it lost in\n> performance to a SAS HDD, but however, the SAS HDD was a 12 Gb/s\n> transfer rate and the SSD was 6 Gb/s.\n>\n> But now I tested against an HDD (7200 RPM) that has the same transfer\n> rate as the SSD 6 Gb/sec. and could not lose in performance.\n>\n> Maybe it's the unconfigured trim.\n>\n> Could you give me some help on how I could check if my RAID is\n> configured for this, I use Hardware RAID using HP software (HP Storage\n> Provider on boot).\n> And on Debian 8 Operating System, how could I check the TRIM configuration ?\n>\n> Best\n> []'s Neto\n\nI'm no expert in HP system, but you can have a look at this thread and \nreferenced links\nFor the trim option in Debian, you need to define the mount options of \nyour partition, in /etc/fstab, to include \"discard\" (see \nhttps://wiki.archlinux.org/index.php/Solid_State_Drive#Continuous_TRIM )\n\nRegards,\nNicolas\n\n>> Nicolas\n>>\n>> Nicolas CHARLES\n>>\n>> Le 17/07/2018 à 15:19, Neto pr a écrit :\n>>> 2018-07-17 10:04 GMT-03:00 Neto pr <[email protected]>:\n>>>> Sorry.. 
I replied in the wrong message before ...\n>>>> follows my response.\n>>>> -------------\n>>>>\n>>>> Thanks all, but I still have not figured it out.\n>>>> This is really strange because the tests were done on the same machine\n>>>> (I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\n>>>> cores), and POSTGRESQL 10.1.\n>>>> - Only the mentioned query running at the time of the test.\n>>>> - I repeated the query 7 times and did not change the results.\n>>>> - Before running each batch of 7 executions, I discarded the Operating\n>>>> System cache and restarted DBMS like this:\n>>>> (echo 3> / proc / sys / vm / drop_caches;\n>>>>\n>>>> discs:\n>>>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>>>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>>>\n>>>> - The Operating System and the Postgresql DBMS are installed on the SSD\n>>>> disk.\n>>>>\n>>> One more information.\n>>> I used default configuration to Postgresql.conf\n>>> Only exception is to :\n>>> random_page_cost on SSD is 1.1\n>>>\n>>>\n>>>> Best Regards\n>>>> [ ]`s Neto\n>>>>\n>>>> 2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n>>>>> As already mentioned by Robert, please let us know if you made sure that\n>>>>> nothing was fished from RAM, over the faster test.\n>>>>>\n>>>>> In other words, make sure that all caches are dropped between one test\n>>>>> and another.\n>>>>>\n>>>>> Also,to better picture the situation, would be good to know:\n>>>>>\n>>>>> - which SSD (brand/model) are you using?\n>>>>> - which HDD?\n>>>>> - how are the disks configured? RAID? or not?\n>>>>> - on which OS?\n>>>>> - what are the mount options? SSD requires tuning\n>>>>> - did you make sure that no other query was running at the time of the\n>>>>> bench?\n>>>>> - are you making a comparison on the same machine?\n>>>>> - is it HW or VM? 
benchs should better run on bare metal to avoid\n>>>>> results pollution (eg: other VMS on the same hypervisor using the disk,\n>>>>> host caching and so on)\n>>>>> - how many times did you run the tests?\n>>>>> - did you change postgres configuration over tests?\n>>>>> - can you post postgres config?\n>>>>> - what about vacuums or maintenance tasks running in the background?\n>>>>>\n>>>>> Also, to benchmark disks i would not use a custom query but pgbench.\n>>>>>\n>>>>> Be aware: running benchmarks is a science, therefore needs a scientific\n>>>>> approach :)\n>>>>>\n>>>>> regards\n>>>>>\n>>>>> fabio pardi\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 07/17/2018 07:00 AM, Neto pr wrote:\n>>>>>> Dear,\n>>>>>> Some of you can help me understand this.\n>>>>>>\n>>>>>> This query plan is executed in the query below (query 9 of TPC-H\n>>>>>> Benchmark, with scale 40, database with approximately 40 gb).\n>>>>>>\n>>>>>> The experiment consisted of running the query on a HDD (Raid zero).\n>>>>>> Then the same query is executed on an SSD (Raid Zero).\n>>>>>>\n>>>>>> Why did the HDD (7200 rpm) perform better?\n>>>>>> HDD - TIME 9 MINUTES\n>>>>>> SSD - TIME 15 MINUTES\n>>>>>>\n>>>>>> As far as I know, the SSD has a reading that is 300 times faster than\n>>>>>> SSD.\n>>>>>>\n>>>>>> --- Execution Plans---\n>>>>>> ssd 40g\n>>>>>> https://explain.depesz.com/s/rHkh\n>>>>>>\n>>>>>> hdd 40g\n>>>>>> https://explain.depesz.com/s/l4sq\n>>>>>>\n>>>>>> Query ------------------------------------\n>>>>>>\n>>>>>> select\n>>>>>> nation,\n>>>>>> o_year,\n>>>>>> sum(amount) as sum_profit\n>>>>>> from\n>>>>>> (\n>>>>>> select\n>>>>>> n_name as nation,\n>>>>>> extract(year from o_orderdate) as o_year,\n>>>>>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>>>>>> l_quantity as amount\n>>>>>> from\n>>>>>> part,\n>>>>>> supplier,\n>>>>>> lineitem,\n>>>>>> partsupp,\n>>>>>> orders,\n>>>>>> nation\n>>>>>> where\n>>>>>> s_suppkey = l_suppkey\n>>>>>> and ps_suppkey = l_suppkey\n>>>>>> and ps_partkey = l_partkey\n>>>>>> and p_partkey = l_partkey\n>>>>>> and o_orderkey = l_orderkey\n>>>>>> and s_nationkey = n_nationkey\n>>>>>> and p_name like '%orchid%'\n>>>>>> ) as profit\n>>>>>> group by\n>>>>>> nation,\n>>>>>> o_year\n>>>>>> order by\n>>>>>> nation,\n>>>>>> o_year desc\n>>>>>>\n\n\n",
"msg_date": "Tue, 17 Jul 2018 16:16:05 +0200",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
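A sketch of the two usual ways to enable TRIM at the filesystem level, assuming an ext4 filesystem on /dev/sda1 mounted at /mnt/ssd; continuous discard via the mount option versus a periodic fstrim is a trade-off, and the periodic variant avoids per-delete latency:

    # continuous TRIM: add the discard option to the /etc/fstab entry, e.g.
    #   /dev/sda1  /mnt/ssd  ext4  defaults,noatime,discard  0  2

    # periodic TRIM instead: run fstrim from a weekly cron job (as root)
    fstrim -v /mnt/ssd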
{
"msg_contents": "\n\nOn 07/17/2018 04:05 PM, Neto pr wrote:\n> 2018-07-17 10:55 GMT-03:00 Fabio Pardi <[email protected]>:\n\n>> Also i think it makes not much sense testing on RAID 0. I would start\n>> performing tests on a single disk, bypassing RAID (or, as mentioned, at\n>> least disabling cache).\n>>\n> \n> But in my case, both the 2 SSDs and the 2 HDDs are in RAID ZERO.\n> This way it would not be a valid test ? Because the 2 environments are\n> in RAID ZERO.\n> \n> \n\nin theory, probably yes and maybe not.\nIn RAID 0, data is (usually) striped in a round robin fashion, so you\nshould rely on the fact that, in average, data is spread 50% on each\ndisk. For the sake of knowledge, you can check what your RAID controller\nis actually using as algorithm to spread data over RAID 0.\n\nBut you might be in an unlucky case in which more data is on one disk\nthan in another.\nUnlucky or created by the events, like you deleted the records which are\non disk 0 and you only are querying those on disk 1, for instance.\n\nThe fact is, that more complexity you add to your test, the less the\nresults will be closer to your expectations.\n\nSince you are testing disks, and not RAID, i would start empirically and\nperform the test straight on 1 disk.\nA simple test, like dd i mentioned here above.\nIf dd, or other more tailored tests on disks show that SSD is way slow,\nthen you can focus on tuning your disk. or trashing it :)\n\nWhen you are satisfied with your results, you can build up complexity\nfrom the reliable/consolidated level you reached.\n\nAs side note: why to run a test on a setup you can never use on production?\n\nregards,\n\nfabio pardi\n\n\n\n",
"msg_date": "Tue, 17 Jul 2018 16:43:50 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Ok, so dropping the cache is good.\n\nHow are you ensuring that you have one test setup on the HDDs and one on \nthe SSDs? i.e do you have 2 postgres instances? or are you using one \ninstance with tablespaces to locate the relevant tables? If the 2nd case \nthen you will get pollution of shared_buffers if you don't restart \nbetween the HHD and SSD tests. If you have 2 instances then you need to \ncarefully check the parameters are set the same (and probably shut the \nHDD instance down when testing the SSD etc).\n\nI can see a couple of things in your setup that might pessimize the SDD \ncase:\n- you have OS on the SSD - if you tests make the system swap then this \nwill wreck the SSD result\n- you have RAID 0 SSD...some of the cheaper ones slow down when you do \nthis. maybe test with a single SSD\n\nregards\nMark\n\nOn 18/07/18 01:04, Neto pr wrote (note snippage):\n> (echo 3> / proc / sys / vm / drop_caches;\n>\n> discs:\n> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>\n> - The Operating System and the Postgresql DBMS are installed on the SSD disk.\n>\n>\n\n\n",
"msg_date": "Wed, 18 Jul 2018 11:04:44 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
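For reference, a tablespace-per-device layout of the kind asked about here could be created like this (run as the postgres superuser; the paths and names are illustrative, not the poster's actual ones):

    # the directories must already exist and be owned by the postgres OS user
    psql -c "CREATE TABLESPACE ssd_space LOCATION '/mnt/ssd/pgdata'"
    psql -c "CREATE TABLESPACE hdd_space LOCATION '/mnt/hdd/pgdata'"
    psql -c "CREATE DATABASE tpch_ssd TABLESPACE ssd_space"
    psql -c "CREATE DATABASE tpch_hdd TABLESPACE hdd_space"
    # with a single instance, restart it and drop the OS cache between the two runs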
{
"msg_contents": "Yeah,\n\nA +1 to telling us the model. In particular the later EVOs use TLC nand \nwith a small SLC cache... and when you exhaust the SLC cache the \nperformance can be worse than a HDD...\n\n\nOn 18/07/18 01:44, Nicolas Charles wrote:\n> Hi Neto,\n>\n> You should list the SSD model also - there are pleinty of Samsung EVO \n> drives - and they are not professional grade.\n>\n> Among the the possible issues, the most likely (from my point of view) \n> are:\n>\n> - TRIM command doesn't go through the RAID (which is really likely) - \n> so the SSD controller think it's full, and keep pushing blocks around \n> to level wear, causing massive perf degradation - please check this \n> config on you RAID driver/adapter\n>\n> - TRIM is not configured on the OS level for the SSD\n>\n> - Partitions is not correctly aligned on the SSD blocks\n>\n>\n> Without so little details on your system, we can only try to guess the \n> real issues\n>\n>\n> Nicolas\n>\n> Nicolas CHARLES\n> Le 17/07/2018 à 15:19, Neto pr a écrit :\n>> 2018-07-17 10:04 GMT-03:00 Neto pr <[email protected]>:\n>>> Sorry.. I replied in the wrong message before ...\n>>> follows my response.\n>>> -------------\n>>>\n>>> Thanks all, but I still have not figured it out.\n>>> This is really strange because the tests were done on the same machine\n>>> (I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\n>>> cores), and POSTGRESQL 10.1.\n>>> - Only the mentioned query running at the time of the test.\n>>> - I repeated the query 7 times and did not change the results.\n>>> - Before running each batch of 7 executions, I discarded the Operating\n>>> System cache and restarted DBMS like this:\n>>> (echo 3> / proc / sys / vm / drop_caches;\n>>>\n>>> discs:\n>>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>>\n>>> - The Operating System and the Postgresql DBMS are installed on the \n>>> SSD disk.\n>>>\n>> One more information.\n>> I used default configuration to Postgresql.conf\n>> Only exception is to :\n>> random_page_cost on SSD is 1.1\n>>\n>>\n>>> Best Regards\n>>> [ ]`s Neto\n>>>\n>>> 2018-07-17 1:08 GMT-07:00 Fabio Pardi <[email protected]>:\n>>>> As already mentioned by Robert, please let us know if you made sure \n>>>> that\n>>>> nothing was fished from RAM, over the faster test.\n>>>>\n>>>> In other words, make sure that all caches are dropped between one test\n>>>> and another.\n>>>>\n>>>> Also,to better picture the situation, would be good to know:\n>>>>\n>>>> - which SSD (brand/model) are you using?\n>>>> - which HDD?\n>>>> - how are the disks configured? RAID? or not?\n>>>> - on which OS?\n>>>> - what are the mount options? SSD requires tuning\n>>>> - did you make sure that no other query was running at the time of the\n>>>> bench?\n>>>> - are you making a comparison on the same machine?\n>>>> - is it HW or VM? 
benchs should better run on bare metal to avoid\n>>>> results pollution (eg: other VMS on the same hypervisor using the \n>>>> disk,\n>>>> host caching and so on)\n>>>> - how many times did you run the tests?\n>>>> - did you change postgres configuration over tests?\n>>>> - can you post postgres config?\n>>>> - what about vacuums or maintenance tasks running in the background?\n>>>>\n>>>> Also, to benchmark disks i would not use a custom query but pgbench.\n>>>>\n>>>> Be aware: running benchmarks is a science, therefore needs a \n>>>> scientific\n>>>> approach :)\n>>>>\n>>>> regards\n>>>>\n>>>> fabio pardi\n>>>>\n>>>>\n>>>>\n>>>> On 07/17/2018 07:00 AM, Neto pr wrote:\n>>>>> Dear,\n>>>>> Some of you can help me understand this.\n>>>>>\n>>>>> This query plan is executed in the query below (query 9 of TPC-H\n>>>>> Benchmark, with scale 40, database with approximately 40 gb).\n>>>>>\n>>>>> The experiment consisted of running the query on a HDD (Raid zero).\n>>>>> Then the same query is executed on an SSD (Raid Zero).\n>>>>>\n>>>>> Why did the HDD (7200 rpm) perform better?\n>>>>> HDD - TIME 9 MINUTES\n>>>>> SSD - TIME 15 MINUTES\n>>>>>\n>>>>> As far as I know, the SSD has a reading that is 300 times faster \n>>>>> than SSD.\n>>>>>\n>>>>> --- Execution Plans---\n>>>>> ssd 40g\n>>>>> https://explain.depesz.com/s/rHkh\n>>>>>\n>>>>> hdd 40g\n>>>>> https://explain.depesz.com/s/l4sq\n>>>>>\n>>>>> Query ------------------------------------\n>>>>>\n>>>>> select\n>>>>> nation,\n>>>>> o_year,\n>>>>> sum(amount) as sum_profit\n>>>>> from\n>>>>> (\n>>>>> select\n>>>>> n_name as nation,\n>>>>> extract(year from o_orderdate) as o_year,\n>>>>> l_extendedprice * (1 - l_discount) - ps_supplycost *\n>>>>> l_quantity as amount\n>>>>> from\n>>>>> part,\n>>>>> supplier,\n>>>>> lineitem,\n>>>>> partsupp,\n>>>>> orders,\n>>>>> nation\n>>>>> where\n>>>>> s_suppkey = l_suppkey\n>>>>> and ps_suppkey = l_suppkey\n>>>>> and ps_partkey = l_partkey\n>>>>> and p_partkey = l_partkey\n>>>>> and o_orderkey = l_orderkey\n>>>>> and s_nationkey = n_nationkey\n>>>>> and p_name like '%orchid%'\n>>>>> ) as profit\n>>>>> group by\n>>>>> nation,\n>>>>> o_year\n>>>>> order by\n>>>>> nation,\n>>>>> o_year desc\n>>>>>\n>\n>\n\n\n",
"msg_date": "Wed, 18 Jul 2018 11:28:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "2018-07-17 20:04 GMT-03:00 Mark Kirkwood <[email protected]>:\n> Ok, so dropping the cache is good.\n>\n> How are you ensuring that you have one test setup on the HDDs and one on the\n> SSDs? i.e do you have 2 postgres instances? or are you using one instance\n> with tablespaces to locate the relevant tables? If the 2nd case then you\n> will get pollution of shared_buffers if you don't restart between the HHD\n> and SSD tests. If you have 2 instances then you need to carefully check the\n> parameters are set the same (and probably shut the HDD instance down when\n> testing the SSD etc).\n>\nDear Mark\nTo ensure that the test is honest and has the same configuration the\nO.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\nI have an instance only of DBMS and two database.\n- a database called tpch40gnorhdd with tablespace on the HDD disk.\n- a database called tpch40gnorssd with tablespace on the SSD disk.\nSee below:\n\npostgres=# \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype |\nAccess privileges\n---------------+----------+----------+-------------+-------------+-----------------------\n postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n=c/postgres +\n | | | | |\npostgres=CTc/postgres\n template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n=c/postgres +\n | | | | |\npostgres=CTc/postgres\n tpch40gnorhdd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n tpch40gnorssd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n(5 rows)\n\npostgres=#\n\nAfter 7 query execution in a database tpch40gnorhdd I restart the DBMS\n(/etc/init.d/pg101norssd restart and drop cache of the O.S.) and go to\nexecution test with the database tpch40gnorssd.\nYou think in this case there is pollution of shared_buffers?\nWhy do you think having O.S. on SSD is bad? Do you could explain better?\n\nBest regards\n[]`s Neto\n\n> I can see a couple of things in your setup that might pessimize the SDD\n> case:\n> - you have OS on the SSD - if you tests make the system swap then this will\n> wreck the SSD result\n> - you have RAID 0 SSD...some of the cheaper ones slow down when you do this.\n> maybe test with a single SSD\n>\n> regards\n> Mark\n>\n> On 18/07/18 01:04, Neto pr wrote (note snippage):\n>\n>> (echo 3> / proc / sys / vm / drop_caches;\n>>\n>> discs:\n>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>\n>> - The Operating System and the Postgresql DBMS are installed on the SSD\n>> disk.\n>>\n>>\n>\n\n",
"msg_date": "Tue, 17 Jul 2018 22:13:08 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
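A sketch of the restart-and-drop-cache sequence described in the message above, run as root between batches (the init script name is the one from the message; stopping before the cache drop guarantees that neither shared_buffers nor the OS page cache survives into the next run):

    /etc/init.d/pg101norssd stop
    sync
    echo 3 > /proc/sys/vm/drop_caches
    /etc/init.d/pg101norssd start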
{
"msg_contents": "2018-07-17 22:13 GMT-03:00 Neto pr <[email protected]>:\n> 2018-07-17 20:04 GMT-03:00 Mark Kirkwood <[email protected]>:\n>> Ok, so dropping the cache is good.\n>>\n>> How are you ensuring that you have one test setup on the HDDs and one on the\n>> SSDs? i.e do you have 2 postgres instances? or are you using one instance\n>> with tablespaces to locate the relevant tables? If the 2nd case then you\n>> will get pollution of shared_buffers if you don't restart between the HHD\n>> and SSD tests. If you have 2 instances then you need to carefully check the\n>> parameters are set the same (and probably shut the HDD instance down when\n>> testing the SSD etc).\n>>\n> Dear Mark\n> To ensure that the test is honest and has the same configuration the\n> O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n> I have an instance only of DBMS and two database.\n> - a database called tpch40gnorhdd with tablespace on the HDD disk.\n> - a database called tpch40gnorssd with tablespace on the SSD disk.\n> See below:\n>\n> postgres=# \\l\n> List of databases\n> Name | Owner | Encoding | Collate | Ctype |\n> Access privileges\n> ---------------+----------+----------+-------------+-------------+-----------------------\n> postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> =c/postgres +\n> | | | | |\n> postgres=CTc/postgres\n> template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> =c/postgres +\n> | | | | |\n> postgres=CTc/postgres\n> tpch40gnorhdd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> tpch40gnorssd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> (5 rows)\n>\n> postgres=#\n>\n> After 7 query execution in a database tpch40gnorhdd I restart the DBMS\n> (/etc/init.d/pg101norssd restart and drop cache of the O.S.) and go to\n> execution test with the database tpch40gnorssd.\n> You think in this case there is pollution of shared_buffers?\n> Why do you think having O.S. on SSD is bad? Do you could explain better?\n>\n> Best regards\n> []`s Neto\n>\n\n+1 information about EVO SSD Samsung:\n\n Model: 850 Evo 500 GB SATA III 6Gb/s -\nhttp://www.samsung.com/semiconductor/minisite/ssd/product/consumer/850evo/\n\n\n>> I can see a couple of things in your setup that might pessimize the SDD\n>> case:\n>> - you have OS on the SSD - if you tests make the system swap then this will\n>> wreck the SSD result\n>> - you have RAID 0 SSD...some of the cheaper ones slow down when you do this.\n>> maybe test with a single SSD\n>>\n>> regards\n>> Mark\n>>\n>> On 18/07/18 01:04, Neto pr wrote (note snippage):\n>>\n>>> (echo 3> / proc / sys / vm / drop_caches;\n>>>\n>>> discs:\n>>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>>\n>>> - The Operating System and the Postgresql DBMS are installed on the SSD\n>>> disk.\n>>>\n>>>\n>>\n\n",
"msg_date": "Tue, 17 Jul 2018 22:16:45 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "2018-07-17 11:43 GMT-03:00 Fabio Pardi <[email protected]>:\n>\n>\n> On 07/17/2018 04:05 PM, Neto pr wrote:\n>> 2018-07-17 10:55 GMT-03:00 Fabio Pardi <[email protected]>:\n>\n>>> Also i think it makes not much sense testing on RAID 0. I would start\n>>> performing tests on a single disk, bypassing RAID (or, as mentioned, at\n>>> least disabling cache).\n>>>\n>>\n>> But in my case, both the 2 SSDs and the 2 HDDs are in RAID ZERO.\n>> This way it would not be a valid test ? Because the 2 environments are\n>> in RAID ZERO.\n>>\n>>\n>\n> in theory, probably yes and maybe not.\n> In RAID 0, data is (usually) striped in a round robin fashion, so you\n> should rely on the fact that, in average, data is spread 50% on each\n> disk. For the sake of knowledge, you can check what your RAID controller\n> is actually using as algorithm to spread data over RAID 0.\n>\n> But you might be in an unlucky case in which more data is on one disk\n> than in another.\n> Unlucky or created by the events, like you deleted the records which are\n> on disk 0 and you only are querying those on disk 1, for instance.\n>\n> The fact is, that more complexity you add to your test, the less the\n> results will be closer to your expectations.\n>\n> Since you are testing disks, and not RAID, i would start empirically and\n> perform the test straight on 1 disk.\n> A simple test, like dd i mentioned here above.\n> If dd, or other more tailored tests on disks show that SSD is way slow,\n> then you can focus on tuning your disk. or trashing it :)\n>\n> When you are satisfied with your results, you can build up complexity\n> from the reliable/consolidated level you reached.\n>\n> As side note: why to run a test on a setup you can never use on production?\n>\n> regards,\n>\n> fabio pardi\n>\n\nFabio, I understood and I agree with you about testing without RAID,\nthis way it would be easier to avoid problems unrelated to my test on\ndisks (SSD and HDD).\n\nCan you just explain why you said it below?\n\n\"As side note: why to run a test on a setup you can never use on production?\"\n\nYou think that a RAID ZERO configuration for a DBMS is little used?\nWhich one do you think would be good? I accept suggestions because I\nam in the middle of a work for my\nresearch of the postgraduate course and I can change the environment\nto something that is more useful and really used in real production\nenvironments.\n\nBest Regards\n[]`s Neto\n>\n\n",
"msg_date": "Tue, 17 Jul 2018 22:24:27 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Hi Neto,\n\nRAID 0 to store production data should never be used. Never a good idea, in my opinion.\n\nSimple reason is that when you lose one disk, you lose everything.\n\nIf your goal is to bench the disk, go for single disk.\n\nIf you want to be closer to a production setup, go for RAID 10, or pick a RAID setup close to what your needs and capabilities are (more reads? more writes? SSD? HDD? cache? ...? )\n\nIf you only have 2 disks, your obliged (redundant) choice is RAID 1.\n\nregards,\n\nfabio pardi\n\n\nOn 18/07/18 03:24, Neto pr wrote:\n>\n>> As side note: why to run a test on a setup you can never use on production?\n>>\n>> regards,\n>>\n>> fabio pardi\n>>\n>\n> Can you just explain why you said it below?\n>\n> \"As side note: why to run a test on a setup you can never use on production?\"\n>\n> You think that a RAID ZERO configuration for a DBMS is little used?\n> Which one do you think would be good? I accept suggestions because I\n> am in the middle of a work for my\n> research of the postgraduate course and I can change the environment\n> to something that is more useful and really used in real production\n> environments.\n>\n> Best Regards\n> []`s Neto\n\n\n",
"msg_date": "Wed, 18 Jul 2018 09:46:32 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Le 18/07/2018 à 03:16, Neto pr a écrit :\n> 2018-07-17 22:13 GMT-03:00 Neto pr <[email protected]>:\n>> 2018-07-17 20:04 GMT-03:00 Mark Kirkwood <[email protected]>:\n>>> Ok, so dropping the cache is good.\n>>>\n>>> How are you ensuring that you have one test setup on the HDDs and one on the\n>>> SSDs? i.e do you have 2 postgres instances? or are you using one instance\n>>> with tablespaces to locate the relevant tables? If the 2nd case then you\n>>> will get pollution of shared_buffers if you don't restart between the HHD\n>>> and SSD tests. If you have 2 instances then you need to carefully check the\n>>> parameters are set the same (and probably shut the HDD instance down when\n>>> testing the SSD etc).\n>>>\n>> Dear Mark\n>> To ensure that the test is honest and has the same configuration the\n>> O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n>> I have an instance only of DBMS and two database.\n>> - a database called tpch40gnorhdd with tablespace on the HDD disk.\n>> - a database called tpch40gnorssd with tablespace on the SSD disk.\n>> See below:\n>>\n>> postgres=# \\l\n>> List of databases\n>> Name | Owner | Encoding | Collate | Ctype |\n>> Access privileges\n>> ---------------+----------+----------+-------------+-------------+-----------------------\n>> postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n>> template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n>> =c/postgres +\n>> | | | | |\n>> postgres=CTc/postgres\n>> template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n>> =c/postgres +\n>> | | | | |\n>> postgres=CTc/postgres\n>> tpch40gnorhdd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n>> tpch40gnorssd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n>> (5 rows)\n>>\n>> postgres=#\n>>\n>> After 7 query execution in a database tpch40gnorhdd I restart the DBMS\n>> (/etc/init.d/pg101norssd restart and drop cache of the O.S.) and go to\n>> execution test with the database tpch40gnorssd.\n>> You think in this case there is pollution of shared_buffers?\n>> Why do you think having O.S. on SSD is bad? Do you could explain better?\n>>\n>> Best regards\n>> []`s Neto\n>>\n> +1 information about EVO SSD Samsung:\n>\n> Model: 850 Evo 500 GB SATA III 6Gb/s -\n> http://www.samsung.com/semiconductor/minisite/ssd/product/consumer/850evo/\nAs stated on his ML on january, Samsung 850 Evo is not a particularly \nfast SSD - especially it's not really consistent in term of performance \n( see https://www.anandtech.com/show/8747/samsung-ssd-850-evo-review/5 \nand https://www.anandtech.com/bench/product/1913 ). This is not a \nproduct for professional usage, and you should not expect great \nperformance from it - as reported by these benchmark, you can have a \n34ms latency in very intensive usage:\nATSB - The Destroyer (99th Percentile Write Latency)99th Percentile \nLatency in Microseconds - Lower is Better *34923\n\n*Even average write latency of the Samsung 850 Evo is 3,3 ms in \nintensive workload\n\nWhy are you using this type of SSD for your benchmark ? 
What do you plan \nto achieve ?\n\n>\n>>> I can see a couple of things in your setup that might pessimize the SDD\n>>> case:\n>>> - you have OS on the SSD - if you tests make the system swap then this will\n>>> wreck the SSD result\n>>> - you have RAID 0 SSD...some of the cheaper ones slow down when you do this.\n>>> maybe test with a single SSD\n>>>\n>>> regards\n>>> Mark\n>>>\n>>> On 18/07/18 01:04, Neto pr wrote (note snippage):\n>>>\n>>>> (echo 3> / proc / sys / vm / drop_caches;\n>>>>\n>>>> discs:\n>>>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n>>>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n>>>>\n>>>> - The Operating System and the Postgresql DBMS are installed on the SSD\n>>>> disk.\n>>>>\n>>>>\n",
"msg_date": "Wed, 18 Jul 2018 11:33:39 +0200",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "Ok, so you are using 1 instance and tablespaces. Also I see you are \nrestarting the instance between HDD and SSD tests, so all good there.\n\nThe point I made about having the OS on the SSD's means that if these \ntests make your system swap, and your swap device is on the SSDs (which \nis probably is by default), then swap activity will compete with db \naccess activity for IOPS on your SSDs and spoil the results of your test \n(i.e slow down your SSDs).\n\nYou can check this using top, sar or iostat to see *if* you are swapping \nduring the tests.\n\nIdeally you would design your setup to use 3 separate devices:\n\n- one device (or array) for os, swap, tmp etc\n\n- one device (HDD array) for you 'HDD' tablespace\n\n- one device (SDD array) for your 'SDD' tablespace\n\nregards\n\nMark\n\n\nOn 18/07/18 13:13, Neto pr wrote:\n>\n> Dear Mark\n> To ensure that the test is honest and has the same configuration the\n> O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n> I have an instance only of DBMS and two database.\n> - a database called tpch40gnorhdd with tablespace on the HDD disk.\n> - a database called tpch40gnorssd with tablespace on the SSD disk.\n> See below:\n>\n> postgres=# \\l\n> List of databases\n> Name | Owner | Encoding | Collate | Ctype |\n> Access privileges\n> ---------------+----------+----------+-------------+-------------+-----------------------\n> postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> =c/postgres +\n> | | | | |\n> postgres=CTc/postgres\n> template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> =c/postgres +\n> | | | | |\n> postgres=CTc/postgres\n> tpch40gnorhdd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> tpch40gnorssd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> (5 rows)\n>\n> postgres=#\n>\n> After 7 query execution in a database tpch40gnorhdd I restart the DBMS\n> (/etc/init.d/pg101norssd restart and drop cache of the O.S.) and go to\n> execution test with the database tpch40gnorssd.\n> You think in this case there is pollution of shared_buffers?\n> Why do you think having O.S. on SSD is bad? Do you could explain better?\n>\n>\n\n\n",
"msg_date": "Wed, 18 Jul 2018 21:35:29 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
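A sketch of watching swap and per-device I/O while a test query runs, as suggested above (the device names are assumptions):

    # si/so columns should stay at 0 during the run, otherwise the test is swapping
    vmstat 5

    # per-device utilisation; during the HDD test the SSD devices should stay idle
    iostat -x sda sdb sdc sdd 5

    # paging statistics over time (from the sysstat package)
    sar -B 5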
{
"msg_contents": ">Model: 850 Evo 500 GB SATA III 6Gb/s -\n\nplease check the SSD *\"DRIVE HEALTH STATUS\"* and the* \"S.M.A.R.T values of\nspecified disk\" * ....\nfor example - with the \"smartctl\" tool ( https://www.smartmontools.org/\n) ( -x \"Show all information for device\" )\nExpected output with \"Samsung SSD 850 EVO 500GB\"\nhttps://superuser.com/questions/1169810/smart-data-of-a-new-ssd\n\nRegards,\n Imre\n\n\n\n\n\nNeto pr <[email protected]> ezt írta (időpont: 2018. júl. 18., Sze, 3:17):\n\n> 2018-07-17 22:13 GMT-03:00 Neto pr <[email protected]>:\n> > 2018-07-17 20:04 GMT-03:00 Mark Kirkwood <[email protected]\n> >:\n> >> Ok, so dropping the cache is good.\n> >>\n> >> How are you ensuring that you have one test setup on the HDDs and one\n> on the\n> >> SSDs? i.e do you have 2 postgres instances? or are you using one\n> instance\n> >> with tablespaces to locate the relevant tables? If the 2nd case then you\n> >> will get pollution of shared_buffers if you don't restart between the\n> HHD\n> >> and SSD tests. If you have 2 instances then you need to carefully check\n> the\n> >> parameters are set the same (and probably shut the HDD instance down\n> when\n> >> testing the SSD etc).\n> >>\n> > Dear Mark\n> > To ensure that the test is honest and has the same configuration the\n> > O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n> > I have an instance only of DBMS and two database.\n> > - a database called tpch40gnorhdd with tablespace on the HDD disk.\n> > - a database called tpch40gnorssd with tablespace on the SSD disk.\n> > See below:\n> >\n> > postgres=# \\l\n> > List of databases\n> > Name | Owner | Encoding | Collate | Ctype |\n> > Access privileges\n> >\n> ---------------+----------+----------+-------------+-------------+-----------------------\n> > postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> > template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> > =c/postgres +\n> > | | | | |\n> > postgres=CTc/postgres\n> > template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> > =c/postgres +\n> > | | | | |\n> > postgres=CTc/postgres\n> > tpch40gnorhdd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> > tpch40gnorssd | user1 | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> > (5 rows)\n> >\n> > postgres=#\n> >\n> > After 7 query execution in a database tpch40gnorhdd I restart the DBMS\n> > (/etc/init.d/pg101norssd restart and drop cache of the O.S.) and go to\n> > execution test with the database tpch40gnorssd.\n> > You think in this case there is pollution of shared_buffers?\n> > Why do you think having O.S. on SSD is bad? 
Do you could explain better?\n> >\n> > Best regards\n> > []`s Neto\n> >\n>\n> +1 information about EVO SSD Samsung:\n>\n> Model: 850 Evo 500 GB SATA III 6Gb/s -\n> http://www.samsung.com/semiconductor/minisite/ssd/product/consumer/850evo/\n>\n>\n> >> I can see a couple of things in your setup that might pessimize the SDD\n> >> case:\n> >> - you have OS on the SSD - if you tests make the system swap then this\n> will\n> >> wreck the SSD result\n> >> - you have RAID 0 SSD...some of the cheaper ones slow down when you do\n> this.\n> >> maybe test with a single SSD\n> >>\n> >> regards\n> >> Mark\n> >>\n> >> On 18/07/18 01:04, Neto pr wrote (note snippage):\n> >>\n> >>> (echo 3> / proc / sys / vm / drop_caches;\n> >>>\n> >>> discs:\n> >>> - 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n> >>> - 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n> >>>\n> >>> - The Operating System and the Postgresql DBMS are installed on the SSD\n> >>> disk.\n> >>>\n> >>>\n> >>\n>\n>\n",
"msg_date": "Wed, 18 Jul 2018 19:35:16 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "One more thought on this:\n\nQuery 9 does a lot pf sorting to disk - so there will be writes for that \nand all the reads for the table scans. Thus the location of your \ninstance's pgsql_tmp directory(s) will significantly influence results.\n\nI'm wondering if in your HDD test the pgsql_tmp on the *SSD's* is being \nused. This would make the HDDs look faster (obviously - as they only \nneed to do reads now). You can check this with iostat while the HDD test \nis being run, there should be *no* activity on the SSDs...if there is \nyou have just found one reason for the results being quicker than it \nshould be.\n\nFWIW: I had a play with this: ran two version 10.4 instances, one on a \nsingle 7200 rpm HDD, one on a (ahem slow) Intel 600p NVME. Running query \n9 on the scale 40 databases I get:\n\n- SSD 30 minutes\n\n- HDD 70 minutes\n\nNo I'm running these on an a Intel i7 3.4 Ghz 16 GB RAM setup. Also both \npostgres instances have default config apart from random_page_cost.\n\nComparing my results with yours - the SSD one is consistent...if I had \ntwo SSDs in RAID0 I might halve the time (I might try this). However my \nHDD result is not at all like yours (mine makes more sense to be \nfair...would expect HDD to be slower in general).\n\nCheers (thanks for an interesting puzzle)!\n\nMark\n\n\n\nOn 18/07/18 13:13, Neto pr wrote:\n>\n> Dear Mark\n> To ensure that the test is honest and has the same configuration the\n> O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n> I have an instance only of DBMS and two database.\n> - a database called tpch40gnorhdd with tablespace on the HDD disk.\n> - a database called tpch40gnorssd with tablespace on the SSD disk.\n> See below:\n>\n>\n\n\n",
"msg_date": "Fri, 20 Jul 2018 11:30:29 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
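The iostat check suggested above can also be approximated from inside the server. A minimal SQL sketch (the database names are the ones Neto listed earlier in the thread; the pg_stat_database counters are cumulative, so note the values before and after a run of query 9):

SHOW data_directory;     -- temp spill files live under the data directory / tablespace dirs ...
SHOW temp_tablespaces;   -- ... unless this setting points them somewhere else

SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written
FROM   pg_stat_database
WHERE  datname IN ('tpch40gnorhdd', 'tpch40gnorssd');

If the HDD database reports gigabytes of temp_bytes while iostat shows the write traffic landing on the SSDs, the spill files are on the wrong device for that test.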
{
"msg_contents": "FWIW:\n\nre-running query 9 using the SSD setup as 2x crucial M550 RAID0: 10 minutes.\n\n\nOn 20/07/18 11:30, Mark Kirkwood wrote:\n> One more thought on this:\n>\n> Query 9 does a lot pf sorting to disk - so there will be writes for \n> that and all the reads for the table scans. Thus the location of your \n> instance's pgsql_tmp directory(s) will significantly influence results.\n>\n> I'm wondering if in your HDD test the pgsql_tmp on the *SSD's* is \n> being used. This would make the HDDs look faster (obviously - as they \n> only need to do reads now). You can check this with iostat while the \n> HDD test is being run, there should be *no* activity on the SSDs...if \n> there is you have just found one reason for the results being quicker \n> than it should be.\n>\n> FWIW: I had a play with this: ran two version 10.4 instances, one on a \n> single 7200 rpm HDD, one on a (ahem slow) Intel 600p NVME. Running \n> query 9 on the scale 40 databases I get:\n>\n> - SSD 30 minutes\n>\n> - HDD 70 minutes\n>\n> No I'm running these on an a Intel i7 3.4 Ghz 16 GB RAM setup. Also \n> both postgres instances have default config apart from random_page_cost.\n>\n> Comparing my results with yours - the SSD one is consistent...if I had \n> two SSDs in RAID0 I might halve the time (I might try this). However \n> my HDD result is not at all like yours (mine makes more sense to be \n> fair...would expect HDD to be slower in general).\n>\n> Cheers (thanks for an interesting puzzle)!\n>\n> Mark\n>\n>\n>\n> On 18/07/18 13:13, Neto pr wrote:\n>>\n>> Dear Mark\n>> To ensure that the test is honest and has the same configuration the\n>> O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n>> I have an instance only of DBMS and two database.\n>> - a database called tpch40gnorhdd with tablespace on the HDD disk.\n>> - a database called tpch40gnorssd with tablespace on the SSD disk.\n>> See below:\n>>\n>>\n>\n>\n\n\n",
"msg_date": "Fri, 20 Jul 2018 12:33:04 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "2018-07-19 21:33 GMT-03:00 Mark Kirkwood <[email protected]>:\n> FWIW:\n>\n> re-running query 9 using the SSD setup as 2x crucial M550 RAID0: 10 minutes.\n>\n>\n>\n> On 20/07/18 11:30, Mark Kirkwood wrote:\n>>\n>> One more thought on this:\n>>\n>> Query 9 does a lot pf sorting to disk - so there will be writes for that\n>> and all the reads for the table scans. Thus the location of your instance's\n>> pgsql_tmp directory(s) will significantly influence results.\n>>\n>> I'm wondering if in your HDD test the pgsql_tmp on the *SSD's* is being\n>> used. This would make the HDDs look faster (obviously - as they only need to\n>> do reads now). You can check this with iostat while the HDD test is being\n>> run, there should be *no* activity on the SSDs...if there is you have just\n>> found one reason for the results being quicker than it should be.\n>>\n>> FWIW: I had a play with this: ran two version 10.4 instances, one on a\n>> single 7200 rpm HDD, one on a (ahem slow) Intel 600p NVME. Running query 9\n>> on the scale 40 databases I get:\n>>\n>> - SSD 30 minutes\n>>\n>> - HDD 70 minutes\n>>\n>> No I'm running these on an a Intel i7 3.4 Ghz 16 GB RAM setup. Also both\n>> postgres instances have default config apart from random_page_cost.\n>>\n>> Comparing my results with yours - the SSD one is consistent...if I had two\n>> SSDs in RAID0 I might halve the time (I might try this). However my HDD\n>> result is not at all like yours (mine makes more sense to be fair...would\n>> expect HDD to be slower in general).\n>>\n>> Cheers (thanks for an interesting puzzle)!\n>>\n>> Mark\n>>\n\nMark,\nThis query 9 is very hard, see my results for other queries (attached\n- test with secondary index and without secondary index - only primary\nkeys), the SSD always wins in performance.\nOnly for this query that he was the loser, so I put this topic in the list.\n\nToday I will not be able to check your test information in more\ndetail, but I will return with more information soon.\n\nBest Regards\nNeto\n\n\n\n\n\n>>\n>>\n>> On 18/07/18 13:13, Neto pr wrote:\n>>>\n>>>\n>>> Dear Mark\n>>> To ensure that the test is honest and has the same configuration the\n>>> O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n>>> I have an instance only of DBMS and two database.\n>>> - a database called tpch40gnorhdd with tablespace on the HDD disk.\n>>> - a database called tpch40gnorssd with tablespace on the SSD disk.\n>>> See below:\n>>>\n>>>\n>>\n>>\n>",
"msg_date": "Thu, 19 Jul 2018 21:52:11 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
{
"msg_contents": "And perhaps more interesting:\n\nRe-running query 9 against the (single) HDD setup *but* with pgsql_tmp \nsymlinked to the 2x SSD RAID0: 15 minutes\n\nI'm thinking that you have inadvertently configured your HDD test in \nthis way (you get 9 minutes because you have 2x HDDs). Essentially most \nof the time taken for this query is in writing and reading files for \nsorting/hashing, so where pgsql_tmp is located hugely influences the \noverall time.\n\nregards\n\nMark\n\n\nOn 20/07/18 12:33, Mark Kirkwood wrote:\n> FWIW:\n>\n> re-running query 9 using the SSD setup as 2x crucial M550 RAID0: 10 \n> minutes.\n>\n>\n> On 20/07/18 11:30, Mark Kirkwood wrote:\n>> One more thought on this:\n>>\n>> Query 9 does a lot pf sorting to disk - so there will be writes for \n>> that and all the reads for the table scans. Thus the location of your \n>> instance's pgsql_tmp directory(s) will significantly influence results.\n>>\n>> I'm wondering if in your HDD test the pgsql_tmp on the *SSD's* is \n>> being used. This would make the HDDs look faster (obviously - as they \n>> only need to do reads now). You can check this with iostat while the \n>> HDD test is being run, there should be *no* activity on the SSDs...if \n>> there is you have just found one reason for the results being quicker \n>> than it should be.\n>>\n>> FWIW: I had a play with this: ran two version 10.4 instances, one on \n>> a single 7200 rpm HDD, one on a (ahem slow) Intel 600p NVME. Running \n>> query 9 on the scale 40 databases I get:\n>>\n>> - SSD 30 minutes\n>>\n>> - HDD 70 minutes\n>>\n>> No I'm running these on an a Intel i7 3.4 Ghz 16 GB RAM setup. Also \n>> both postgres instances have default config apart from random_page_cost.\n>>\n>> Comparing my results with yours - the SSD one is consistent...if I \n>> had two SSDs in RAID0 I might halve the time (I might try this). \n>> However my HDD result is not at all like yours (mine makes more sense \n>> to be fair...would expect HDD to be slower in general).\n>>\n>> Cheers (thanks for an interesting puzzle)!\n>>\n>> Mark\n>>\n>>\n>>\n>> On 18/07/18 13:13, Neto pr wrote:\n>>>\n>>> Dear Mark\n>>> To ensure that the test is honest and has the same configuration the\n>>> O.S. and also DBMS, my O.S. is installed on the SSD and DBMS as well.\n>>> I have an instance only of DBMS and two database.\n>>> - a database called tpch40gnorhdd with tablespace on the HDD disk.\n>>> - a database called tpch40gnorssd with tablespace on the SSD disk.\n>>> See below:\n>>>\n>>>\n>>\n>>\n>\n>\n\n\n",
"msg_date": "Fri, 20 Jul 2018 13:49:45 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
},
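The symlink experiment described above can also be expressed with a setting rather than a filesystem trick, which makes it harder to end up with by accident. A sketch only: the tablespace name and location below are invented for illustration, and the directory must already exist and be owned by the postgres user.

CREATE TABLESPACE ssd_scratch LOCATION '/mnt/ssd_raid0/pg_scratch';
ALTER SYSTEM SET temp_tablespaces = 'ssd_scratch';
SELECT pg_reload_conf();
-- or only for one session, which is convenient for A/B runs of query 9:
SET temp_tablespaces = 'ssd_scratch';

Either way, iostat during the HDD run remains the ground truth for which device actually receives the spill files.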
{
"msg_contents": "Hi Neto,\n\nIn a great piece of timing my new experimental SSD arrived (WD Black 500 \nG 3D NAND MVME) [1]. Doing some runs of query 9:\n\n- default config: 7 minutes\n\n- work_mem=1G, max_parallel_workers_per_gather=4: 3-4 minutes\n\n\nIncreasing either of these much more got me OOM errors (16 G of ram). \nAnyway, enjoy your benchmarking!\n\nCheers\nMark\n\n[1] Yeah - I know this is a 'do not use me in production' type of SSD. \nIt is however fine for prototyping and DSS scenario run type use.\n\nOn 20/07/18 12:52, Neto pr wrote:\n>\n> Mark,\n> This query 9 is very hard, see my results for other queries (attached\n> - test with secondary index and without secondary index - only primary\n> keys), the SSD always wins in performance.\n> Only for this query that he was the loser, so I put this topic in the list.\n>\n> Today I will not be able to check your test information in more\n> detail, but I will return with more information soon.\n>\n> Best Regards\n> Neto\n>\n>\n>\n>\n>\n\n\n",
"msg_date": "Mon, 23 Jul 2018 12:07:16 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
}
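The two knobs varied in that run can be applied per session for this kind of benchmarking; a sketch with the values quoted above (they are Mark's settings, not general recommendations, and as he notes, pushing them much higher risks OOM on a 16 GB box):

SET work_mem = '1GB';
SET max_parallel_workers_per_gather = 4;
-- then run TPC-H query 9 under EXPLAIN (ANALYZE, BUFFERS) as elsewhere in this thread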
] |
[
{
"msg_contents": "On Wed, 18 Jul 2018 09:46:32 +0200, Fabio Pardi <[email protected]>\nwrote:\n\n> RAID 0 to store production data should never be used. Never a good \n> idea, in my opinion.\n\nRAID 0� by itself� should never be used.� Combined with other RAID \nlevels, it can boost performance without sacrificing reliability.\nhttps://en.wikipedia.org/wiki/Nested_RAID_levels\n\n\nPersonally, I don't like RAID 0 + ? schemes because they use too many \ndisks (with associated reliability issues).� The required performance \nusually can be achieved in other ways.� But YMMV.\nGeorge\n\n",
"msg_date": "Wed, 18 Jul 2018 12:44:10 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why HDD performance is better than SSD in this case"
}
] |
[
{
"msg_contents": "Is there a tool which does this for PostgreSQL?\n\nTake a \"snapshot\" of what the server is doing about 10 times per second.\nWrite this to a file.\nAfter N hours you can aggregate the file.\nWhat does the server do most of the time?\nWhich tables/index gets used the most.\n\nBefore optimizing a database, I would like to know what is going\non in the production system.\n\nI know that there are internal tables like pg_stat_statements.\nBut I guess doing a snapshot every N millseconds will present a\nbetter picture of what is going in in real life.\n\nIs there already a tool which goes this way?\n\nOr is there a better way?\n\nRegards,\n Thomas Güttler\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n",
"msg_date": "Mon, 23 Jul 2018 13:18:02 +0200",
"msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Profile what the production server is doing"
},
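A crude version of the sampler described in this question can be built from psql and pg_stat_activity alone. A sketch (the table name is invented; \watch 1 samples once per second rather than ten times per second, which is usually enough to see where the time goes):

CREATE TABLE activity_sample AS
SELECT now() AS sampled_at, pid, state, wait_event_type, wait_event, query
FROM   pg_stat_activity
WITH NO DATA;

INSERT INTO activity_sample
SELECT now(), pid, state, wait_event_type, wait_event, query
FROM   pg_stat_activity
WHERE  state <> 'idle' AND pid <> pg_backend_pid();
\watch 1

-- afterwards: what was the server doing most of the time?
SELECT wait_event_type, wait_event, count(*) AS samples
FROM   activity_sample
GROUP  BY 1, 2
ORDER  BY samples DESC;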
{
"msg_contents": "Hi,\n\nOn Mon, Jul 23, 2018 at 1:18 PM, Thomas Güttler\n<[email protected]> wrote:\n> Is there a tool which does this for PostgreSQL?\n>\n> Take a \"snapshot\" of what the server is doing about 10 times per second.\n> Write this to a file.\n> After N hours you can aggregate the file.\n> What does the server do most of the time?\n> Which tables/index gets used the most.\n>\n> Before optimizing a database, I would like to know what is going\n> on in the production system.\n>\n> I know that there are internal tables like pg_stat_statements.\n> But I guess doing a snapshot every N millseconds will present a\n> better picture of what is going in in real life.\n>\n> Is there already a tool which goes this way?\n\nYou can look at powa (https://powa.readthedocs.io/) which aims to\nprovide this kind of information.\n\n",
"msg_date": "Mon, 23 Jul 2018 13:38:04 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profile what the production server is doing"
},
{
"msg_contents": "I'm biased, but I think VividCortex (my company's product) is amazing at\nthis.\n\nOn Mon, Jul 23, 2018 at 7:18 AM Thomas Güttler <[email protected]>\nwrote:\n\n> Is there a tool which does this for PostgreSQL?\n>\n> Take a \"snapshot\" of what the server is doing about 10 times per second.\n> Write this to a file.\n> After N hours you can aggregate the file.\n> What does the server do most of the time?\n> Which tables/index gets used the most.\n>\n> Before optimizing a database, I would like to know what is going\n> on in the production system.\n>\n> I know that there are internal tables like pg_stat_statements.\n> But I guess doing a snapshot every N millseconds will present a\n> better picture of what is going in in real life.\n>\n> Is there already a tool which goes this way?\n>\n> Or is there a better way?\n>\n> Regards,\n> Thomas Güttler\n>\n> --\n> Thomas Guettler http://www.thomas-guettler.de/\n> I am looking for feedback:\n> https://github.com/guettli/programming-guidelines\n>\n>\n\nI'm biased, but I think VividCortex (my company's product) is amazing at this.On Mon, Jul 23, 2018 at 7:18 AM Thomas Güttler <[email protected]> wrote:Is there a tool which does this for PostgreSQL?\n\nTake a \"snapshot\" of what the server is doing about 10 times per second.\nWrite this to a file.\nAfter N hours you can aggregate the file.\nWhat does the server do most of the time?\nWhich tables/index gets used the most.\n\nBefore optimizing a database, I would like to know what is going\non in the production system.\n\nI know that there are internal tables like pg_stat_statements.\nBut I guess doing a snapshot every N millseconds will present a\nbetter picture of what is going in in real life.\n\nIs there already a tool which goes this way?\n\nOr is there a better way?\n\nRegards,\n Thomas Güttler\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines",
"msg_date": "Mon, 23 Jul 2018 10:01:31 -0400",
"msg_from": "Baron Schwartz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profile what the production server is doing"
},
{
"msg_contents": "pgobserver might do that as well, particulary useful for functions\nperformances.\n\nhttps://github.com/zalando/PGObserver\n\nOn Mon, Jul 23, 2018 at 1:18 PM, Thomas Güttler <\[email protected]> wrote:\n\n> Is there a tool which does this for PostgreSQL?\n>\n> Take a \"snapshot\" of what the server is doing about 10 times per second.\n> Write this to a file.\n> After N hours you can aggregate the file.\n> What does the server do most of the time?\n> Which tables/index gets used the most.\n>\n> Before optimizing a database, I would like to know what is going\n> on in the production system.\n>\n> I know that there are internal tables like pg_stat_statements.\n> But I guess doing a snapshot every N millseconds will present a\n> better picture of what is going in in real life.\n>\n> Is there already a tool which goes this way?\n>\n> Or is there a better way?\n>\n> Regards,\n> Thomas Güttler\n>\n> --\n> Thomas Guettler http://www.thomas-guettler.de/\n> I am looking for feedback: https://github.com/guettli/pro\n> gramming-guidelines\n>\n>\n\npgobserver might do that as well, particulary useful for functions performances.https://github.com/zalando/PGObserverOn Mon, Jul 23, 2018 at 1:18 PM, Thomas Güttler <[email protected]> wrote:Is there a tool which does this for PostgreSQL?\n\nTake a \"snapshot\" of what the server is doing about 10 times per second.\nWrite this to a file.\nAfter N hours you can aggregate the file.\nWhat does the server do most of the time?\nWhich tables/index gets used the most.\n\nBefore optimizing a database, I would like to know what is going\non in the production system.\n\nI know that there are internal tables like pg_stat_statements.\nBut I guess doing a snapshot every N millseconds will present a\nbetter picture of what is going in in real life.\n\nIs there already a tool which goes this way?\n\nOr is there a better way?\n\nRegards,\n Thomas Güttler\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines",
"msg_date": "Mon, 23 Jul 2018 17:16:57 +0200",
"msg_from": "Flo Rance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profile what the production server is doing"
},
{
"msg_contents": "\n\nAm 23.07.2018 um 13:38 schrieb Julien Rouhaud:\n> Hi,\n> \n> On Mon, Jul 23, 2018 at 1:18 PM, Thomas Güttler\n> <[email protected]> wrote:\n>> Is there a tool which does this for PostgreSQL?\n>>\n>> Take a \"snapshot\" of what the server is doing about 10 times per second.\n>> Write this to a file.\n>> After N hours you can aggregate the file.\n>> What does the server do most of the time?\n>> Which tables/index gets used the most.\n>>\n>> Before optimizing a database, I would like to know what is going\n>> on in the production system.\n>>\n>> I know that there are internal tables like pg_stat_statements.\n>> But I guess doing a snapshot every N millseconds will present a\n>> better picture of what is going in in real life.\n>>\n>> Is there already a tool which goes this way?\n> \n> You can look at powa (https://powa.readthedocs.io/) which aims to\n> provide this kind of information.\n> \n\nAFAIK powa is based on pg_stat_statements not on statistical samples.\nBut maye I am wrong.\n\nRegards,\n Thomas Güttler\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n",
"msg_date": "Wed, 25 Jul 2018 11:14:21 +0200",
"msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Profile what the production server is doing"
},
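For comparison, the cumulated counters being discussed can be read directly; a sketch of the usual "top queries by time" view of pg_stat_statements (assumes the extension is in shared_preload_libraries and created in the database; column names as of PostgreSQL 10/11, where the column is still called total_time):

SELECT query, calls, round(total_time::numeric, 1) AS total_ms, rows
FROM   pg_stat_statements
ORDER  BY total_time DESC
LIMIT  20;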
{
"msg_contents": "\n\nAm 23.07.2018 um 16:01 schrieb Baron Schwartz:\n> I'm biased, but I think VividCortex (my company's product) is amazing at this.\n\nLooks goog, but \"Contact us for pricing options\" from https://www.vividcortex.com/product/pricing\n\nWhy do you hide your prices?\n\nRegards,\n Thomas Güttler\n\n> \n> On Mon, Jul 23, 2018 at 7:18 AM Thomas Güttler <[email protected] <mailto:[email protected]>> wrote:\n> \n> Is there a tool which does this for PostgreSQL?\n> \n> Take a \"snapshot\" of what the server is doing about 10 times per second.\n> Write this to a file.\n> After N hours you can aggregate the file.\n> What does the server do most of the time?\n> Which tables/index gets used the most.\n> \n> Before optimizing a database, I would like to know what is going\n> on in the production system.\n> \n> I know that there are internal tables like pg_stat_statements.\n> But I guess doing a snapshot every N millseconds will present a\n> better picture of what is going in in real life.\n> \n> Is there already a tool which goes this way?\n> \n> Or is there a better way?\n> \n> Regards,\n> Thomas Güttler\n> \n> -- \n> Thomas Guettler http://www.thomas-guettler.de/\n> I am looking for feedback: https://github.com/guettli/programming-guidelines\n> \n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n",
"msg_date": "Wed, 25 Jul 2018 11:17:16 +0200",
"msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Profile what the production server is doing"
},
{
"msg_contents": "\n\nAm 23.07.2018 um 17:16 schrieb Flo Rance:\n> pgobserver might do that as well, particulary useful for functions performances.\n> \n> https://github.com/zalando/PGObserver\n> \n\n\nThank you for pointing me to this.\n\nAfter googling for \"PGObserver powa\" I found nice collection of current tools:\n\n https://www.quora.com/What-are-the-best-graphical-Monitoring-tools-for-Postgresql\n\nBTW, PGObserver seems a bit dated. There are only very few updates during the last months.\nIs there an successor?\n\nRegards,\n Thomas\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n",
"msg_date": "Wed, 25 Jul 2018 11:39:14 +0200",
"msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Profile what the production server is doing"
},
{
"msg_contents": "As you already open an issue on their website regarding this point, you\nshould maybe wait for them to answer.\n\nAs far as I know, it's still used by some companies in production.\n\nFlo\n\nOn Wed, Jul 25, 2018 at 11:39 AM, Thomas Güttler <\[email protected]> wrote:\n\n>\n>\n> Am 23.07.2018 um 17:16 schrieb Flo Rance:\n>\n>> pgobserver might do that as well, particulary useful for functions\n>> performances.\n>>\n>> https://github.com/zalando/PGObserver\n>>\n>>\n>\n> Thank you for pointing me to this.\n>\n> After googling for \"PGObserver powa\" I found nice collection of current\n> tools:\n>\n> https://www.quora.com/What-are-the-best-graphical-Monitorin\n> g-tools-for-Postgresql\n>\n> BTW, PGObserver seems a bit dated. There are only very few updates during\n> the last months.\n> Is there an successor?\n>\n> Regards,\n> Thomas\n>\n> --\n> Thomas Guettler http://www.thomas-guettler.de/\n> I am looking for feedback: https://github.com/guettli/pro\n> gramming-guidelines\n>\n>\n\nAs you already open an issue on their website regarding this point, you should maybe wait for them to answer.As far as I know, it's still used by some companies in production.FloOn Wed, Jul 25, 2018 at 11:39 AM, Thomas Güttler <[email protected]> wrote:\n\nAm 23.07.2018 um 17:16 schrieb Flo Rance:\n\npgobserver might do that as well, particulary useful for functions performances.\n\nhttps://github.com/zalando/PGObserver\n\n\n\n\nThank you for pointing me to this.\n\nAfter googling for \"PGObserver powa\" I found nice collection of current tools:\n\n https://www.quora.com/What-are-the-best-graphical-Monitoring-tools-for-Postgresql\n\nBTW, PGObserver seems a bit dated. There are only very few updates during the last months.\nIs there an successor?\n\nRegards,\n Thomas\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines",
"msg_date": "Wed, 25 Jul 2018 11:49:51 +0200",
"msg_from": "Flo Rance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profile what the production server is doing"
},
{
"msg_contents": "On Wed, Jul 25, 2018 at 11:14 AM, Thomas Güttler\n<[email protected]> wrote:\n>\n> AFAIK powa is based on pg_stat_statements not on statistical samples.\n> But maye I am wrong.\n\nIndeed, it's based on pg_stat_statements, but other extensions are\nsupported too. Since pg_stat_statements already provides cumulated\ncounters, there's no need to do sampling. But if you're interested in\nwait events information for instance, it supports (in development\nversion) pg_wait_sampling extension, which does sampling to provide\nefficient and informative informations.\n\n",
"msg_date": "Wed, 25 Jul 2018 12:25:49 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profile what the production server is doing"
},
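A sketch of what the sampled side looks like once pg_wait_sampling is set up (it has to be added to shared_preload_libraries and the server restarted; the view and column names below are the ones the extension's documentation describes):

CREATE EXTENSION pg_wait_sampling;

-- most frequently sampled wait events across all backends
SELECT event_type, event, sum(count) AS samples
FROM   pg_wait_sampling_profile
GROUP  BY event_type, event
ORDER  BY samples DESC
LIMIT  20;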
{
"msg_contents": "This sound good. Looks like an automated bootleneck detection\ncould be possible with pg_wait_sampling.\n\nRegards,\n Thomas\n\nAm 25.07.2018 um 12:25 schrieb Julien Rouhaud:\n> On Wed, Jul 25, 2018 at 11:14 AM, Thomas Güttler\n> <[email protected]> wrote:\n>>\n>> AFAIK powa is based on pg_stat_statements not on statistical samples.\n>> But maye I am wrong.\n> \n> Indeed, it's based on pg_stat_statements, but other extensions are\n> supported too. Since pg_stat_statements already provides cumulated\n> counters, there's no need to do sampling. But if you're interested in\n> wait events information for instance, it supports (in development\n> version) pg_wait_sampling extension, which does sampling to provide\n> efficient and informative informations.\n> \n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n",
"msg_date": "Thu, 26 Jul 2018 13:27:38 +0200",
"msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Automated bottleneck detection"
},
{
"msg_contents": "Wow, freakin cool, can't wait to start fiddling with pg_wait_sampling. \nReckon we can get lightweight locks and spinlocks history with this cool \nnew extension instead of awkwardly and repeatedly querying the \npg_stat_activity table.\n\nRegards,\nMichael Vitale\n\n> Thomas Güttler <mailto:[email protected]>\n> Thursday, July 26, 2018 7:27 AM\n> This sound good. Looks like an automated bootleneck detection\n> could be possible with pg_wait_sampling.\n>\n> Regards,\n> Thomas\n>\n>\n>\n\n\n\n\nWow, freakin cool, can't \nwait to start fiddling with pg_wait_sampling. Reckon we can get \nlightweight locks and spinlocks history with this cool new extension \ninstead of awkwardly and repeatedly querying the pg_stat_activity table.\n\nRegards,\nMichael Vitale\n\n\n\n \nThomas Güttler Thursday,\n July 26, 2018 7:27 AM \nThis sound good. Looks like an \nautomated bootleneck detection\ncould be possible with pg_wait_sampling.\n\nRegards,\n Thomas",
"msg_date": "Thu, 26 Jul 2018 08:05:10 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automated bottleneck detection"
}
] |
[
{
"msg_contents": "Hi,\n\nI have the following table:\n\n Table \"public.totoz\"\n Column | Type | Collation | Nullable | Default\n-----------+--------------------------+-----------+----------+---------\n name | character varying(512) | | not null |\nIndexes:\n \"totoz_pkey\" PRIMARY KEY, btree (name)\n \"totoz_name_trgrm_idx\" gin (name gin_trgm_ops)\n\n\n\nWhen I run the following query, it uses the totoz_name_trgrm_idx as expected:\n\nexplain analyze select name from totoz where name ilike '%tot%';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on totoz (cost=48.02..59.69 rows=3 width=11)\n(actual time=0.205..0.446 rows=88 loops=1)\n Recheck Cond: ((name)::text ~~* '%tot%'::text)\n Heap Blocks: exact=85\n -> Bitmap Index Scan on totoz_name_trgrm_idx (cost=0.00..48.02\nrows=3 width=0) (actual time=0.177..0.177 rows=88 loops=1)\n Index Cond: ((name)::text ~~* '%tot%'::text)\n Planning time: 0.302 ms\n Execution time: 0.486 ms\n(7 rows)\n\n\n\nHowever when I run the same (as far as I understand it) query but with\nthe ALL operator, the index is not used:\n\nexplain analyze select name from totoz where name ilike all(array['%tot%']);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using totoz_pkey on totoz (cost=0.29..1843.64 rows=3\nwidth=11) (actual time=3.854..20.757 rows=88 loops=1)\n Filter: ((name)::text ~~* ALL ('{%tot%}'::text[]))\n Rows Removed by Filter: 30525\n Heap Fetches: 132\n Planning time: 0.230 ms\n Execution time: 20.778 ms\n(6 rows)\n\n\nI'd have expected the second query to use the totoz_name_trgrm_idx but\nit doesn't. Why is that?\n\nThanks for your help!\n\n",
"msg_date": "Thu, 26 Jul 2018 17:53:33 +0200",
"msg_from": "Nicolas Even <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query with \"ILIKE ALL\" does not use the index"
},
{
"msg_contents": "Nicolas Even <[email protected]> writes:\n> However when I run the same (as far as I understand it) query but with\n> the ALL operator, the index is not used:\n> explain analyze select name from totoz where name ilike all(array['%tot%']);\n\nThere's only index support for \"op ANY (array)\", not \"op ALL (array)\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 26 Jul 2018 12:44:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with \"ILIKE ALL\" does not use the index"
},
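For comparison, the ANY spelling of the same predicate keeps the index-eligible shape Tom describes; a sketch against the totoz table from the first message (whether the planner actually chooses the trigram index still depends on its estimates):

EXPLAIN ANALYZE
SELECT name FROM totoz WHERE name ILIKE ANY (ARRAY['%tot%']);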
{
"msg_contents": "On Jul 26, 2018, at 9:44 AM, Tom Lane <[email protected]> wrote:\n> \n> Nicolas Even <[email protected]> writes:\n>> However when I run the same (as far as I understand it) query but with\n>> the ALL operator, the index is not used:\n>> explain analyze select name from totoz where name ilike all(array['%tot%']);\n> \n> There's only index support for \"op ANY (array)\", not \"op ALL (array)\".\n> \n> \t\t\tregards, tom lane\n\nNicolas,\n\nCould you work around the limitation with a two-clause WHERE?\n\nFirst clause ANY, second clause ALL.\n\nI've done some similar tricks on similar sorts of queries.\n\nMatthew.\n",
"msg_date": "Thu, 26 Jul 2018 10:22:21 -0700",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with \"ILIKE ALL\" does not use the index"
},
{
"msg_contents": "Hi Matthew,\n\nI finally used \"WHERE name ILIKE arr[1] AND name ILIKE ALL(arr)\" which\nworks well enough for my use case.\n\nThank you\nNicolas\n\nOn 26 July 2018 at 19:22, Matthew Hall <[email protected]> wrote:\n> On Jul 26, 2018, at 9:44 AM, Tom Lane <[email protected]> wrote:\n>>\n>> Nicolas Even <[email protected]> writes:\n>>> However when I run the same (as far as I understand it) query but with\n>>> the ALL operator, the index is not used:\n>>> explain analyze select name from totoz where name ilike all(array['%tot%']);\n>>\n>> There's only index support for \"op ANY (array)\", not \"op ALL (array)\".\n>>\n>> regards, tom lane\n>\n> Nicolas,\n>\n> Could you work around the limitation with a two-clause WHERE?\n>\n> First clause ANY, second clause ALL.\n>\n> I've done some similar tricks on similar sorts of queries.\n>\n> Matthew.\n\n",
"msg_date": "Thu, 26 Jul 2018 21:17:50 +0200",
"msg_from": "Nicolas Even <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with \"ILIKE ALL\" does not use the index"
},
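Spelled out against the table from the first message, the workaround looks like the sketch below (the patterns are just examples; with a parameterized array it is exactly the name ILIKE arr[1] AND name ILIKE ALL (arr) form quoted above). The first clause gives the planner an index-eligible predicate; the ALL clause rechecks every pattern on the candidate rows.

SELECT name
FROM   totoz
WHERE  name ILIKE (ARRAY['%tot%', '%oto%'])[1]
AND    name ILIKE ALL (ARRAY['%tot%', '%oto%'])
ORDER  BY name;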
{
"msg_contents": "Thank you Tom\n\nOn 26 July 2018 at 18:44, Tom Lane <[email protected]> wrote:\n> Nicolas Even <[email protected]> writes:\n>> However when I run the same (as far as I understand it) query but with\n>> the ALL operator, the index is not used:\n>> explain analyze select name from totoz where name ilike all(array['%tot%']);\n>\n> There's only index support for \"op ANY (array)\", not \"op ALL (array)\".\n>\n> regards, tom lane\n\n",
"msg_date": "Thu, 26 Jul 2018 21:32:27 +0200",
"msg_from": "Nicolas Even <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with \"ILIKE ALL\" does not use the index"
}
] |
[
{
"msg_contents": "Hello All,\n\nI created a table with 200 bigint column, 200 varchar column. (Postgres\n10.4)\n\ncreate table i200c200 ( pk bigint primary key, int1 bigint, int2\nbigint,....., int200 bigint, char1 varchar(255),......, char200\nvarchar(255)) ;\n\nInserted values only in pk,int1,int200 columns with some random data ( from\ngenerate series) and remaining columns are all null. The table has 1000000\nrows.\n\nI found performance variance between accessing int1 and int200 column which\nis quite large.\n\nReports from pg_stat_statements:\n\n query | total_time | min_time |\nmax_time | mean_time | stddev_time\n-----------------------------------------+------------+----------+----------+-----------+--------------------\n select pk,int1 from i200c200 limit 200 | 0.65 | 0.102 |\n0.138 | 0.13 | 0.0140142784330839\n select pk,int199 from i200c200 limit $1 | 1.207 | 0.18 |\n0.332 | 0.2414 | 0.0500583659341773\n select pk,int200 from i200c200 limit 200| 1.67 | 0.215 |\n0.434 | 0.334 | 0.0697825193010399\n\nExplain Analyse:\n\nexplain analyse select pk,int1 from i200c200 limit 1000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..23.33 rows=1000 width=16) (actual\ntime=0.014..0.390 rows=1000 loops=1)\n -> Seq Scan on i200c200 (cost=0.00..23334.00 rows=1000000\nwidth=16) (actual time=0.013..0.268 rows=1000 loops=1)\n Planning time: 0.066 ms\n Execution time: 0.475 ms\n\n explain analyse select pk,int200 from i200c200 limit 1000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..23.33 rows=1000 width=16) (actual\ntime=0.012..1.001 rows=1000 loops=1)\n -> Seq Scan on i200c200 (cost=0.00..23334.00 rows=1000000\nwidth=16) (actual time=0.011..0.894 rows=1000 loops=1)\n Planning time: 0.049 ms\n Execution time: 1.067 ms\n\nI am curious in getting this postgres behaviour and its internals.\n\nNote: I have the tried the same query with int199 column which is null in\nall rows,it is still performance variant.Since,postgres doesn't store null\nvalues in data instead it store in null bit map,there should not be this\nvariation(because i'm having data only for pk,int1,int200).I am wondering\nthat this null bit map lookup is slowing down this , because each row in my\ntable is having a null bit map of size (408 bits).As newbie I am wondering\nwhether this null bit map lookup for non-earlier column is taking too much\ntime (for scanning the null bit map itself).Am i thinking in right way?\n\nThanks in advance,\n\nDineshkumar.P\n\nPostgres Newbie.\n\nHello All,I created a table with 200 bigint column, 200 varchar column. (Postgres 10.4)create table i200c200 ( pk bigint primary key, int1 bigint, int2 bigint,....., int200 bigint, char1 varchar(255),......, char200 varchar(255)) ;Inserted values only in pk,int1,int200 columns with some random data ( from generate series) and remaining columns are all null. 
The table has 1000000 rows.I found performance variance between accessing int1 and int200 column which is quite large.Reports from pg_stat_statements: query | total_time | min_time | max_time | mean_time | stddev_time \n-----------------------------------------+------------+----------+----------+-----------+--------------------\n select pk,int1 from i200c200 limit 200 | 0.65 | 0.102 | 0.138 | 0.13 | 0.0140142784330839\n select pk,int199 from i200c200 limit $1 | 1.207 | 0.18 | 0.332 | 0.2414 | 0.0500583659341773 \n select pk,int200 from i200c200 limit 200| 1.67 | 0.215 | 0.434 | 0.334 | 0.0697825193010399Explain Analyse:explain analyse select pk,int1 from i200c200 limit 1000;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..23.33 rows=1000 width=16) (actual time=0.014..0.390 rows=1000 loops=1)\n -> Seq Scan on i200c200 (cost=0.00..23334.00 rows=1000000 width=16) (actual time=0.013..0.268 rows=1000 loops=1)\n Planning time: 0.066 ms\n Execution time: 0.475 ms\n\n explain analyse select pk,int200 from i200c200 limit 1000;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..23.33 rows=1000 width=16) (actual time=0.012..1.001 rows=1000 loops=1)\n -> Seq Scan on i200c200 (cost=0.00..23334.00 rows=1000000 width=16) (actual time=0.011..0.894 rows=1000 loops=1)\n Planning time: 0.049 ms\n Execution time: 1.067 msI am curious in getting this postgres behaviour and its internals.Note: I have the tried the same query with int199 column which is null in all rows,it is still performance variant.Since,postgres doesn't store null values in data instead it store in null bit map,there should not be this variation(because i'm having data only for pk,int1,int200).I am wondering that this null bit map lookup is slowing down this , because each row in my table is having a null bit map of size (408 bits).As newbie I am wondering whether this null bit map lookup for non-earlier column is taking too much time (for scanning the null bit map itself).Am i thinking in right way?Thanks in advance,Dineshkumar.PPostgres Newbie.",
"msg_date": "Sun, 29 Jul 2018 11:08:31 +0530",
"msg_from": "Dinesh Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance difference in accessing differrent columns in a Postgres\n Table"
},
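The 400-column table described above is tedious to type but easy to generate; a sketch of one way to reproduce the setup (the poster's exact random data is unknown, so plain generate_series values are used for pk, int1 and int200):

DO $$
BEGIN
  EXECUTE 'CREATE TABLE i200c200 (pk bigint PRIMARY KEY, '
       || (SELECT string_agg(format('int%s bigint', i), ', ' ORDER BY i)
           FROM generate_series(1, 200) AS i)
       || ', '
       || (SELECT string_agg(format('char%s varchar(255)', i), ', ' ORDER BY i)
           FROM generate_series(1, 200) AS i)
       || ')';
END
$$;

INSERT INTO i200c200 (pk, int1, int200)
SELECT g, g, g
FROM   generate_series(1, 1000000) AS g;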
{
"msg_contents": "On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n> I found performance variance between accessing int1 and int200 column which\n> is quite large.\n\nHave a look at slot_deform_tuple and heap_deform_tuple. You'll see\nthat tuples are deformed starting at the first attribute. If you ask\nfor attribute 200 then it must deform 1-199 first.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 30 Jul 2018 09:53:55 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
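A quick way to watch the effect David describes, using the table from the first post (absolute timings will differ by machine; the point is that the time grows with the attribute number being fetched, because everything before it is walked as well):

\timing on
SELECT count(int1)   FROM i200c200;   -- attribute 2: deforming stops almost immediately
SELECT count(int100) FROM i200c200;   -- attribute 101: the first 101 attributes are walked
SELECT count(int200) FROM i200c200;   -- attribute 201: every attribute in the row is walked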
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n>> I found performance variance between accessing int1 and int200 column which\n>> is quite large.\n\n> Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n> that tuples are deformed starting at the first attribute. If you ask\n> for attribute 200 then it must deform 1-199 first.\n\nNote that that can be optimized away in some cases, though evidently\nnot the one the OP is testing. From memory, you need a tuple that\ncontains no nulls, and all the columns to the left of the target\ncolumn have to be fixed-width datatypes. Otherwise, the offset to\nthe target column is uncertain, and we have to search for it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 29 Jul 2018 19:00:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
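A variant table that satisfies the precondition Tom describes can be generated the same dynamic-SQL way: every column to the left of int200 fixed-width and no NULLs anywhere, so the attribute offsets can be cached instead of recomputed row by row. A sketch only; note that the filled-in table is far larger on disk than the mostly-NULL original, so raw seq scan timings against the two also reflect the extra I/O, not just deforming cost.

DO $$
BEGIN
  EXECUTE 'CREATE TABLE i200_nonull AS SELECT g::bigint AS pk, '
       || (SELECT string_agg(format('g::bigint AS int%s', i), ', ' ORDER BY i)
           FROM generate_series(1, 200) AS i)
       || ' FROM generate_series(1, 1000000) AS g';
END
$$;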
{
"msg_contents": "2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:\n\n> David Rowley <[email protected]> writes:\n> > On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n> >> I found performance variance between accessing int1 and int200 column\n> which\n> >> is quite large.\n>\n> > Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n> > that tuples are deformed starting at the first attribute. If you ask\n> > for attribute 200 then it must deform 1-199 first.\n>\n> Note that that can be optimized away in some cases, though evidently\n> not the one the OP is testing. From memory, you need a tuple that\n> contains no nulls, and all the columns to the left of the target\n> column have to be fixed-width datatypes. Otherwise, the offset to\n> the target column is uncertain, and we have to search for it.\n>\n\nJIT decrease a overhead of this.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n>\n\n2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:David Rowley <[email protected]> writes:\n> On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n>> I found performance variance between accessing int1 and int200 column which\n>> is quite large.\n\n> Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n> that tuples are deformed starting at the first attribute. If you ask\n> for attribute 200 then it must deform 1-199 first.\n\nNote that that can be optimized away in some cases, though evidently\nnot the one the OP is testing. From memory, you need a tuple that\ncontains no nulls, and all the columns to the left of the target\ncolumn have to be fixed-width datatypes. Otherwise, the offset to\nthe target column is uncertain, and we have to search for it.JIT decrease a overhead of this.RegardsPavel\n\n regards, tom lane",
"msg_date": "Mon, 30 Jul 2018 06:11:46 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "On Mon, Jul 30, 2018 at 12:11 AM, Pavel Stehule <[email protected]>\nwrote:\n\n> 2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:\n>\n>> David Rowley <[email protected]> writes:\n>> > On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n>> >> I found performance variance between accessing int1 and int200 column\n>> which\n>> >> is quite large.\n>>\n>> > Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n>> > that tuples are deformed starting at the first attribute. If you ask\n>> > for attribute 200 then it must deform 1-199 first.\n>>\n>> Note that that can be optimized away in some cases, though evidently\n>> not the one the OP is testing. From memory, you need a tuple that\n>> contains no nulls, and all the columns to the left of the target\n>> column have to be fixed-width datatypes. Otherwise, the offset to\n>> the target column is uncertain, and we have to search for it.\n>>\n>\n> JIT decrease a overhead of this.\n>\n\nThe bottleneck here is such a simple construct, I don't see how JIT could\nimprove it by much.\n\nAnd indeed, in my hands JIT makes it almost 3 times worse.\n\nRun against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT\nexecution and 4594.994 ms for the JIT=off.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 30 Jul 2018 07:19:07 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "2018-07-30 13:19 GMT+02:00 Jeff Janes <[email protected]>:\n\n> On Mon, Jul 30, 2018 at 12:11 AM, Pavel Stehule <[email protected]>\n> wrote:\n>\n>> 2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:\n>>\n>>> David Rowley <[email protected]> writes:\n>>> > On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n>>> >> I found performance variance between accessing int1 and int200 column\n>>> which\n>>> >> is quite large.\n>>>\n>>> > Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n>>> > that tuples are deformed starting at the first attribute. If you ask\n>>> > for attribute 200 then it must deform 1-199 first.\n>>>\n>>> Note that that can be optimized away in some cases, though evidently\n>>> not the one the OP is testing. From memory, you need a tuple that\n>>> contains no nulls, and all the columns to the left of the target\n>>> column have to be fixed-width datatypes. Otherwise, the offset to\n>>> the target column is uncertain, and we have to search for it.\n>>>\n>>\n>> JIT decrease a overhead of this.\n>>\n>\n> The bottleneck here is such a simple construct, I don't see how JIT could\n> improve it by much.\n>\n> And indeed, in my hands JIT makes it almost 3 times worse.\n>\n> Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT\n> execution and 4594.994 ms for the JIT=off.\n>\n\nlook on\nhttp://www.postgresql-archive.org/PATCH-LLVM-tuple-deforming-improvements-td6029385.html\nthread, please.\n\nRegards\n\nPavel\n\n\n> Cheers,\n>\n> Jeff\n>\n\n2018-07-30 13:19 GMT+02:00 Jeff Janes <[email protected]>:On Mon, Jul 30, 2018 at 12:11 AM, Pavel Stehule <[email protected]> wrote:2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:David Rowley <[email protected]> writes:\n> On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n>> I found performance variance between accessing int1 and int200 column which\n>> is quite large.\n\n> Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n> that tuples are deformed starting at the first attribute. If you ask\n> for attribute 200 then it must deform 1-199 first.\n\nNote that that can be optimized away in some cases, though evidently\nnot the one the OP is testing. From memory, you need a tuple that\ncontains no nulls, and all the columns to the left of the target\ncolumn have to be fixed-width datatypes. Otherwise, the offset to\nthe target column is uncertain, and we have to search for it.JIT decrease a overhead of this.The bottleneck here is such a simple construct, I don't see how JIT could improve it by much.And indeed, in my hands JIT makes it almost 3 times worse.Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT execution and 4594.994 ms for the JIT=off.look on http://www.postgresql-archive.org/PATCH-LLVM-tuple-deforming-improvements-td6029385.html thread, please.RegardsPavel Cheers,Jeff",
"msg_date": "Mon, 30 Jul 2018 18:01:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "On 2018-07-30 07:19:07 -0400, Jeff Janes wrote:\n> On Mon, Jul 30, 2018 at 12:11 AM, Pavel Stehule <[email protected]>\n> wrote:\n> \n> > 2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:\n> >\n> >> David Rowley <[email protected]> writes:\n> >> > On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n> >> >> I found performance variance between accessing int1 and int200 column\n> >> which\n> >> >> is quite large.\n> >>\n> >> > Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n> >> > that tuples are deformed starting at the first attribute. If you ask\n> >> > for attribute 200 then it must deform 1-199 first.\n> >>\n> >> Note that that can be optimized away in some cases, though evidently\n> >> not the one the OP is testing. From memory, you need a tuple that\n> >> contains no nulls, and all the columns to the left of the target\n> >> column have to be fixed-width datatypes. Otherwise, the offset to\n> >> the target column is uncertain, and we have to search for it.\n> >>\n> >\n> > JIT decrease a overhead of this.\n> >\n> \n> The bottleneck here is such a simple construct, I don't see how JIT could\n> improve it by much.\n\nThe deparsing can become quite a bit faster with JITing, because we know\nthe column types and width. If intermittent columns are NOT NULL and\nfixed width, we can even optimize processing them at runtime nearly\nentirely.\n\n\n> And indeed, in my hands JIT makes it almost 3 times worse.\n\nNot in my measurement. Your example won't use JIT at all, because it's\nbelow the cost threshold. So I think you might just be seeing cache +\nhint bit effects?\n\n> Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT\n> execution and 4594.994 ms for the JIT=off.\n\nEven with a debug LLVM build, which greatly increases compilation\noverhead, I actually see quite the benefit when I force JIT to be used:\n\n\npostgres[26832][1]=# ;SET jit_above_cost = -1; set jit_optimize_above_cost = 0; set jit_inline_above_cost = 0;\npostgres[26832][1]=# explain (analyze, buffers, timing off) select pk, int200 from i200c200;\n┌───────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16) (actual rows=10000000 loops=1) │\n│ Buffers: shared hit=133334 │\n│ Planning Time: 0.069 ms │\n│ Execution Time: 3645.069 ms │\n└───────────────────────────────────────────────────────────────────────────────────────────────────┘\n(4 rows)\n\n\n\npostgres[26832][1]=# ;SET jit_above_cost = 0; set jit_optimize_above_cost = 0; set jit_inline_above_cost = 0;\npostgres[26832][1]=# explain (analyze, buffers, timing off) select pk, int200 from i200c200;\n┌───────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16) (actual rows=10000000 loops=1) │\n│ Buffers: shared hit=133334 │\n│ Planning Time: 0.070 ms │\n│ JIT: │\n│ Functions: 2 │\n│ Inlining: true │\n│ Optimization: true │\n│ Execution Time: 3191.683 ms │\n└───────────────────────────────────────────────────────────────────────────────────────────────────┘\n(8 rows)\n\nNow that's not *huge*, but nothing either. 
And it's a win even though\nJITing takes it good own time (we need to improve on that).\n\n\nIf I force all the bigint columns to be NOT NULL DEFAULT 0 the results\nget more drastic:\n\npostgres[28528][1]=# ;SET jit_above_cost = 0; set jit_optimize_above_cost = 0; set jit_inline_above_cost = 0;\n\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├─────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Seq Scan on i200c200 (cost=0.00..2600000.00 rows=10000000 width=16) (actual rows=10000000 loops=1) │\n│ Buffers: shared hit=2500000 │\n│ Planning Time: 0.066 ms │\n│ JIT: │\n│ Functions: 2 │\n│ Inlining: true │\n│ Optimization: true │\n│ Execution Time: 4837.872 ms │\n└─────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(8 rows)\n\npostgres[28528][1]=# ;SET jit_above_cost = -1; set jit_optimize_above_cost = 0; set jit_inline_above_cost = 0;\npostgres[28528][1]=# explain (analyze, buffers, timing off) select pk, int200 from i200c200;\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├─────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Seq Scan on i200c200 (cost=0.00..2600000.00 rows=10000000 width=16) (actual rows=10000000 loops=1) │\n│ Buffers: shared hit=2500000 │\n│ Planning Time: 0.067 ms │\n│ Execution Time: 8192.236 ms │\n└─────────────────────────────────────────────────────────────────────────────────────────────────────┘\n\nthat's because the JITed version essentially now boils down to a near\noptimal loop around the intermittent bigint columns (which we deform\nbecause we use a slot - at some point we're going to have to do\nbetter). No checks for the NULL bitmap, no alignment considerations,\nall that's optimized away.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 30 Jul 2018 10:23:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "On Mon, Jul 30, 2018 at 12:01 PM, Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> 2018-07-30 13:19 GMT+02:00 Jeff Janes <[email protected]>:\n>\n>> On Mon, Jul 30, 2018 at 12:11 AM, Pavel Stehule <[email protected]>\n>> wrote:\n>>\n>>> 2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:\n>>>\n>>>> David Rowley <[email protected]> writes:\n>>>> > On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n>>>> >> I found performance variance between accessing int1 and int200\n>>>> column which\n>>>> >> is quite large.\n>>>>\n>>>> > Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n>>>> > that tuples are deformed starting at the first attribute. If you ask\n>>>> > for attribute 200 then it must deform 1-199 first.\n>>>>\n>>>> Note that that can be optimized away in some cases, though evidently\n>>>> not the one the OP is testing. From memory, you need a tuple that\n>>>> contains no nulls, and all the columns to the left of the target\n>>>> column have to be fixed-width datatypes. Otherwise, the offset to\n>>>> the target column is uncertain, and we have to search for it.\n>>>>\n>>>\n>>> JIT decrease a overhead of this.\n>>>\n>>\n>> The bottleneck here is such a simple construct, I don't see how JIT could\n>> improve it by much.\n>>\n>> And indeed, in my hands JIT makes it almost 3 times worse.\n>>\n>> Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT\n>> execution and 4594.994 ms for the JIT=off.\n>>\n>\n> look on http://www.postgresql-archive.org/PATCH-LLVM-tuple-\n> deforming-improvements-td6029385.html thread, please.\n>\n>\nThe opt1 patch did get performance back to \"at least do no harm\" territory,\nbut it didn't improve over JIT=off. Adding the other two didn't get any\nfurther improvement.\n\nI don't know where the time is going with the as-committed JIT. None of\nthe JIT-specific timings reported by EXPLAIN (ANALYZE) add up to anything\nclose to the slow-down I'm seeing. Shouldn't compiling and optimization\ntime show up there?\n\nCheers,\n\nJeff\n\nOn Mon, Jul 30, 2018 at 12:01 PM, Pavel Stehule <[email protected]> wrote:2018-07-30 13:19 GMT+02:00 Jeff Janes <[email protected]>:On Mon, Jul 30, 2018 at 12:11 AM, Pavel Stehule <[email protected]> wrote:2018-07-30 1:00 GMT+02:00 Tom Lane <[email protected]>:David Rowley <[email protected]> writes:\n> On 29 July 2018 at 17:38, Dinesh Kumar <[email protected]> wrote:\n>> I found performance variance between accessing int1 and int200 column which\n>> is quite large.\n\n> Have a look at slot_deform_tuple and heap_deform_tuple. You'll see\n> that tuples are deformed starting at the first attribute. If you ask\n> for attribute 200 then it must deform 1-199 first.\n\nNote that that can be optimized away in some cases, though evidently\nnot the one the OP is testing. From memory, you need a tuple that\ncontains no nulls, and all the columns to the left of the target\ncolumn have to be fixed-width datatypes. 
Otherwise, the offset to\nthe target column is uncertain, and we have to search for it.JIT decrease a overhead of this.The bottleneck here is such a simple construct, I don't see how JIT could improve it by much.And indeed, in my hands JIT makes it almost 3 times worse.Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT execution and 4594.994 ms for the JIT=off.look on http://www.postgresql-archive.org/PATCH-LLVM-tuple-deforming-improvements-td6029385.html thread, please.The opt1 patch did get performance back to \"at least do no harm\" territory, but it didn't improve over JIT=off. Adding the other two didn't get any further improvement.I don't know where the time is going with the as-committed JIT. None of the JIT-specific timings reported by EXPLAIN (ANALYZE) add up to anything close to the slow-down I'm seeing. Shouldn't compiling and optimization time show up there?Cheers,Jeff",
"msg_date": "Mon, 30 Jul 2018 13:31:33 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "Hi,\n\nOn 2018-07-30 18:01:34 +0200, Pavel Stehule wrote:\n> look on\n> http://www.postgresql-archive.org/PATCH-LLVM-tuple-deforming-improvements-td6029385.html\n> thread, please.\n\nGiven the results I just posted in the sibling email I don't think those\nissues apply here.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 30 Jul 2018 10:45:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "Hi,\n\nOn 2018-07-30 13:31:33 -0400, Jeff Janes wrote:\n> I don't know where the time is going with the as-committed JIT. None of\n> the JIT-specific timings reported by EXPLAIN (ANALYZE) add up to anything\n> close to the slow-down I'm seeing. Shouldn't compiling and optimization\n> time show up there?\n\nAs my timings showed, I don't see the slowdown you're reporting. Could\nyou post a few EXPLAIN ANALYZEs?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 30 Jul 2018 12:02:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "On Mon, Jul 30, 2018 at 1:23 PM, Andres Freund <[email protected]> wrote:\n\n> On 2018-07-30 07:19:07 -0400, Jeff Janes wrote:\n>\n> > And indeed, in my hands JIT makes it almost 3 times worse.\n>\n> Not in my measurement. Your example won't use JIT at all, because it's\n> below the cost threshold. So I think you might just be seeing cache +\n> hint bit effects?\n>\n\nNo, it is definitely JIT. The explain plans show it, and the cost of the\nquery is 230,000 while the default setting of jit_above_cost is 100,000.\nIt is fully reproducible by repeatedly toggling the JIT setting. It\ndoesn't seem to be the cost of compiling the code that slows it down (I'm\nassuming the code is compiled once per tuple descriptor, not once per\ntuple), but rather the efficiency of the compiled code.\n\n\n\n>\n> > Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT\n> > execution and 4594.994 ms for the JIT=off.\n>\n> Even with a debug LLVM build, which greatly increases compilation\n> overhead, I actually see quite the benefit when I force JIT to be used:\n>\n\nI don't see a change when I compile without --enable-debug,\nand jit_debugging_support is off, or in 11beta2 nonexistent. How can I\nknow if I have a debug LLVM build, and turn it off if I do?\n\n\n>\n>\n> postgres[26832][1]=# ;SET jit_above_cost = -1; set jit_optimize_above_cost\n> = 0; set jit_inline_above_cost = 0;\n> postgres[26832][1]=# explain (analyze, buffers, timing off) select pk,\n> int200 from i200c200;\n>\n\nLowering jit_optimize_above_cost does redeem this for me. It brings it\nback to being a tie with JIT=OFF. I don't see any further improvement by\nlowering jit_inline_above_cost, and overall it is just a statistical tie\nwith JIT=off, not an improvement as you get, but at least it isn't a\nsubstantial loss.\n\nUnder what conditions would I want to do jit without doing optimizations on\nit? Is there a rule of thumb that could be documented, or do we just use\nthe experimental method for each query?\n\nI don't know how sensitive JIT is to hardware. I'm using Ubuntu 16.04 on\nVirtualBox (running on Windows 10) on an i5-7200U, which might be important.\n\nI had previously done a poor-man's JIT where I created 4 versions of the\nmain 'for' loop in slot_deform_tuple. I did a branch on \"if(hasnulls)\",\nand then each branch had two loops, one for when 'slow' is false, and then\none for after 'slow' becomes true so we don't have to keep setting it true\nagain once it already is, in a tight loop. I didn't see noticeable\nimprovement there (although perhaps I would have on different hardware), so\ndidn't see how JIT could help with this almost-entirely-null case. I'm not\ntrying to address JIT in general, just as it applies to this particular\ncase.\n\nUnrelated to JIT and relevant to the 'select pk, int199' case but not the\n'select pk, int200' case, it seems we have gone to some length to make slot\ndeforming be efficient for incremental use, but then just deform in bulk\nanyway up to maximum attnum used in the query, at least in this case. Is\nthat because incremental deforming is not cache efficient?\n\nCheers,\n\nJeff\n\nOn Mon, Jul 30, 2018 at 1:23 PM, Andres Freund <[email protected]> wrote:On 2018-07-30 07:19:07 -0400, Jeff Janes wrote:\n> And indeed, in my hands JIT makes it almost 3 times worse.\n\nNot in my measurement. Your example won't use JIT at all, because it's\nbelow the cost threshold. So I think you might just be seeing cache +\nhint bit effects?No, it is definitely JIT. 
The explain plans show it, and the cost of the query is 230,000 while the default setting of jit_above_cost is 100,000. It is fully reproducible by repeatedly toggling the JIT setting. It doesn't seem to be the cost of compiling the code that slows it down (I'm assuming the code is compiled once per tuple descriptor, not once per tuple), but rather the efficiency of the compiled code. \n\n> Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT\n> execution and 4594.994 ms for the JIT=off.\n\nEven with a debug LLVM build, which greatly increases compilation\noverhead, I actually see quite the benefit when I force JIT to be used:I don't see a change when I compile without --enable-debug, and jit_debugging_support is off, or in 11beta2 nonexistent. How can I know if I have a debug LLVM build, and turn it off if I do? \n\n\npostgres[26832][1]=# ;SET jit_above_cost = -1; set jit_optimize_above_cost = 0; set jit_inline_above_cost = 0;\npostgres[26832][1]=# explain (analyze, buffers, timing off) select pk, int200 from i200c200;Lowering jit_optimize_above_cost does redeem this for me. It brings it back to being a tie with JIT=OFF. I don't see any further improvement by lowering jit_inline_above_cost, and overall it is just a statistical tie with JIT=off, not an improvement as you get, but at least it isn't a substantial loss.Under what conditions would I want to do jit without doing optimizations on it? Is there a rule of thumb that could be documented, or do we just use the experimental method for each query?I don't know how sensitive JIT is to hardware. I'm using Ubuntu 16.04 on VirtualBox (running on Windows 10) on an i5-7200U, which might be important.I had previously done a poor-man's JIT where I created 4 versions of the main 'for' loop in slot_deform_tuple. I did a branch on \"if(hasnulls)\", and then each branch had two loops, one for when 'slow' is false, and then one for after 'slow' becomes true so we don't have to keep setting it true again once it already is, in a tight loop. I didn't see noticeable improvement there (although perhaps I would have on different hardware), so didn't see how JIT could help with this almost-entirely-null case. I'm not trying to address JIT in general, just as it applies to this particular case.Unrelated to JIT and relevant to the 'select pk, int199' case but not the 'select pk, int200' case, it seems we have gone to some length to make slot deforming be efficient for incremental use, but then just deform in bulk anyway up to maximum attnum used in the query, at least in this case. Is that because incremental deforming is not cache efficient?Cheers,Jeff",
"msg_date": "Tue, 31 Jul 2018 12:56:26 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
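[Editor's note: a minimal psql sketch of the comparison Jeff describes above, assuming the thread's i200c200 test table already exists (its definition is in the attached test case, not reproduced here). With default thresholds, a plan costed at ~230,000 crosses jit_above_cost (100,000) but not jit_optimize_above_cost (500,000), so the emitted code runs unoptimized unless that second threshold is lowered.]

```sql
-- 1) Baseline: no JIT at all.
SET jit = off;
EXPLAIN (ANALYZE, TIMING OFF) SELECT pk, int200 FROM i200c200;

-- 2) JIT with default thresholds: bitcode is generated but not optimized,
--    the configuration in which the slowdown was observed.
SET jit = on;
RESET jit_above_cost;
RESET jit_optimize_above_cost;
EXPLAIN (ANALYZE, TIMING OFF) SELECT pk, int200 FROM i200c200;

-- 3) Also run the LLVM optimizer on the generated code.
SET jit_optimize_above_cost = 0;
EXPLAIN (ANALYZE, TIMING OFF) SELECT pk, int200 FROM i200c200;
```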
{
"msg_contents": "Hi,\n\nOn 2018-07-31 12:56:26 -0400, Jeff Janes wrote:\n> On Mon, Jul 30, 2018 at 1:23 PM, Andres Freund <[email protected]> wrote:\n> \n> > On 2018-07-30 07:19:07 -0400, Jeff Janes wrote:\n> >\n> > > And indeed, in my hands JIT makes it almost 3 times worse.\n> >\n> > Not in my measurement. Your example won't use JIT at all, because it's\n> > below the cost threshold. So I think you might just be seeing cache +\n> > hint bit effects?\n> >\n> \n> No, it is definitely JIT. The explain plans show it, and the cost of the\n> query is 230,000 while the default setting of jit_above_cost is 100,000.\n> It is fully reproducible by repeatedly toggling the JIT setting. It\n> doesn't seem to be the cost of compiling the code that slows it down (I'm\n> assuming the code is compiled once per tuple descriptor, not once per\n> tuple), but rather the efficiency of the compiled code.\n\nInteresting. I see a smaller benefit without opt, but still one. I guess\nthat depends on code emission.\n\n\n> > > Run against ab87b8fedce3fa77ca0d6, I get 12669.619 ms for the 2nd JIT\n> > > execution and 4594.994 ms for the JIT=off.\n> >\n> > Even with a debug LLVM build, which greatly increases compilation\n> > overhead, I actually see quite the benefit when I force JIT to be used:\n> >\n> \n> I don't see a change when I compile without --enable-debug,\n> and jit_debugging_support is off, or in 11beta2 nonexistent. How can I\n> know if I have a debug LLVM build, and turn it off if I do?\n\nllvm-config --assertion-mode should tell you.\n\n\n> > postgres[26832][1]=# ;SET jit_above_cost = -1; set jit_optimize_above_cost\n> > = 0; set jit_inline_above_cost = 0;\n> > postgres[26832][1]=# explain (analyze, buffers, timing off) select pk,\n> > int200 from i200c200;\n> >\n> \n> Lowering jit_optimize_above_cost does redeem this for me. It brings it\n> back to being a tie with JIT=OFF. I don't see any further improvement by\n> lowering jit_inline_above_cost, and overall it is just a statistical tie\n> with JIT=off, not an improvement as you get, but at least it isn't a\n> substantial loss.\n\nInteresting, as posted, I do see quite measurable improvements. What's\nyour version of LLVM?\n\n\n> Under what conditions would I want to do jit without doing optimizations on\n> it? Is there a rule of thumb that could be documented, or do we just use\n> the experimental method for each query?\n\nI don't think we quite know yet. Optimization for larger queries can\ntake a while. For expression heavy queries there's a window where JITing\ncan help, but optimization can be beneficial.\n\n\n> I had previously done a poor-man's JIT where I created 4 versions of the\n> main 'for' loop in slot_deform_tuple. I did a branch on \"if(hasnulls)\",\n> and then each branch had two loops, one for when 'slow' is false, and then\n> one for after 'slow' becomes true so we don't have to keep setting it true\n> again once it already is, in a tight loop. I didn't see noticeable\n> improvement there (although perhaps I would have on different hardware), so\n> didn't see how JIT could help with this almost-entirely-null case. I'm not\n> trying to address JIT in general, just as it applies to this particular\n> case.\n\nI don't see how it follows from that observation that JITing can't be\nbeneficial? The bitmap access alone can be optimized if you unroll the\nloop (as now the offsets into it are constant). The offset computations\ninto tts_values/isnull aren't dynamic anymore. The loop counter is\ngone. 
And nearly all tuple have hasnulls set, so specializing for that\ncase isn't going to get you that much, it's perfectly predictable.\n\n\n> Unrelated to JIT and relevant to the 'select pk, int199' case but not the\n> 'select pk, int200' case, it seems we have gone to some length to make slot\n> deforming be efficient for incremental use, but then just deform in bulk\n> anyway up to maximum attnum used in the query, at least in this case. Is\n> that because incremental deforming is not cache efficient?\n\nWell, that's not *quite* how it works: We always deform up to the point\nused in a certain \"level\" of the query. E.g. if a select's where clause\nneeds something up to attribute 3, the seqscan might deform only up to\nthere, even though an aggregate ontop of that might need up to 10. But\nyes, you're right, uselessly incrementally deforming isn't cache\nefficient.\n\nI think before long we're going to have to change the slot mechanism so\nwe don't deform columns we don't actually need. I.e we'll need something\nlike a bitmap of needed columns and skip over unneeded ones. When not\nJITed that'll allow us to skip copying such columns (removing an 8 and 1\nbyte write), when JITing we can do better, and e.g. entirely skip\nprocessing fixed width NOT MULL columns that aren't needed.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 31 Jul 2018 10:35:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
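[Editor's note: a hedged illustration of the deforming behaviour Andres describes, again assuming the thread's i200c200 table (pk plus int1 .. int200, mostly NULL, declared in that order). The per-row cost tracks the highest attribute number the scan level actually needs.]

```sql
-- Deforming stops at the highest attribute a query level needs, so the
-- first scan only touches the first couple of attributes per tuple while
-- the second steps over all the preceding (mostly NULL) columns.
EXPLAIN (ANALYZE, TIMING OFF) SELECT pk, int1   FROM i200c200;
EXPLAIN (ANALYZE, TIMING OFF) SELECT pk, int200 FROM i200c200;
```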
{
"msg_contents": "On Mon, Jul 30, 2018 at 3:02 PM, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2018-07-30 13:31:33 -0400, Jeff Janes wrote:\n> > I don't know where the time is going with the as-committed JIT. None of\n> > the JIT-specific timings reported by EXPLAIN (ANALYZE) add up to anything\n> > close to the slow-down I'm seeing. Shouldn't compiling and optimization\n> > time show up there?\n>\n> As my timings showed, I don't see the slowdown you're reporting. Could\n> you post a few EXPLAIN ANALYZEs?\n>\n\n\nI don't think you showed any timings where jit_above_cost < query cost <\njit_optimize_above_cost, which is where I saw the slow down. (That is also\nwhere things naturally land for me using default settings)\n\nI've repeated my test case on a default build (./configure --with-llvm\n--prefix=....) and default postgresql.conf, using the post-11BETA2 commit\n5a71d3e.\n\n\nI've attached the full test case, and the full output.\n\nHere are the last two executions, with jit=on and jit=off, respectively.\nDoing it with TIMING OFF doesn't meaningfully change things, nor does\nincreasing shared_buffers beyond the default.\n\n\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16) (actual\ntime=29.317..11966.291 rows=10000000 loops=1)\n Planning Time: 0.034 ms\n JIT:\n Functions: 2\n Generation Time: 1.589 ms\n Inlining: false\n Inlining Time: 0.000 ms\n Optimization: false\n Optimization Time: 9.002 ms\n Emission Time: 19.948 ms\n Execution Time: 12375.493 ms\n(11 rows)\n\nTime: 12376.281 ms (00:12.376)\nSET\nTime: 1.955 ms\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16) (actual\ntime=0.063..3897.302 rows=10000000 loops=1)\n Planning Time: 0.037 ms\n Execution Time: 4292.400 ms\n(3 rows)\n\nTime: 4293.196 ms (00:04.293)\n\nCheers,\n\nJeff",
"msg_date": "Wed, 1 Aug 2018 11:21:21 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "Hi All,\nI was wondering whether the case is solved or still continuing. As a\nPostgres newbie, I can't understand any of the terms (JIT, tuple\ndeformation) as you mentioned above. Please anyone let me know , what is\nthe current scenario.\n\nThanks,\nDineshkumar.\n\nOn Wed, Aug 1, 2018 at 8:51 PM Jeff Janes <[email protected]> wrote:\n\n> On Mon, Jul 30, 2018 at 3:02 PM, Andres Freund <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> On 2018-07-30 13:31:33 -0400, Jeff Janes wrote:\n>> > I don't know where the time is going with the as-committed JIT. None of\n>> > the JIT-specific timings reported by EXPLAIN (ANALYZE) add up to\n>> anything\n>> > close to the slow-down I'm seeing. Shouldn't compiling and optimization\n>> > time show up there?\n>>\n>> As my timings showed, I don't see the slowdown you're reporting. Could\n>> you post a few EXPLAIN ANALYZEs?\n>>\n>\n>\n> I don't think you showed any timings where jit_above_cost < query cost <\n> jit_optimize_above_cost, which is where I saw the slow down. (That is also\n> where things naturally land for me using default settings)\n>\n> I've repeated my test case on a default build (./configure --with-llvm\n> --prefix=....) and default postgresql.conf, using the post-11BETA2 commit\n> 5a71d3e.\n>\n>\n> I've attached the full test case, and the full output.\n>\n> Here are the last two executions, with jit=on and jit=off, respectively.\n> Doing it with TIMING OFF doesn't meaningfully change things, nor does\n> increasing shared_buffers beyond the default.\n>\n>\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16)\n> (actual time=29.317..11966.291 rows=10000000 loops=1)\n> Planning Time: 0.034 ms\n> JIT:\n> Functions: 2\n> Generation Time: 1.589 ms\n> Inlining: false\n> Inlining Time: 0.000 ms\n> Optimization: false\n> Optimization Time: 9.002 ms\n> Emission Time: 19.948 ms\n> Execution Time: 12375.493 ms\n> (11 rows)\n>\n> Time: 12376.281 ms (00:12.376)\n> SET\n> Time: 1.955 ms\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16)\n> (actual time=0.063..3897.302 rows=10000000 loops=1)\n> Planning Time: 0.037 ms\n> Execution Time: 4292.400 ms\n> (3 rows)\n>\n> Time: 4293.196 ms (00:04.293)\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi All,I was wondering whether the case is solved or still continuing. As a Postgres newbie, I can't understand any of the terms (JIT, tuple deformation) as you mentioned above. Please anyone let me know , what is the current scenario.Thanks,Dineshkumar.On Wed, Aug 1, 2018 at 8:51 PM Jeff Janes <[email protected]> wrote:On Mon, Jul 30, 2018 at 3:02 PM, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2018-07-30 13:31:33 -0400, Jeff Janes wrote:\n> I don't know where the time is going with the as-committed JIT. None of\n> the JIT-specific timings reported by EXPLAIN (ANALYZE) add up to anything\n> close to the slow-down I'm seeing. Shouldn't compiling and optimization\n> time show up there?\n\nAs my timings showed, I don't see the slowdown you're reporting. Could\nyou post a few EXPLAIN ANALYZEs?I don't think you showed any timings where jit_above_cost < query cost < jit_optimize_above_cost, which is where I saw the slow down. 
(That is also where things naturally land for me using default settings)I've repeated my test case on a default build (./configure --with-llvm --prefix=....) and default postgresql.conf, using the post-11BETA2 commit 5a71d3e.I've attached the full test case, and the full output.Here are the last two executions, with jit=on and jit=off, respectively. Doing it with TIMING OFF doesn't meaningfully change things, nor does increasing shared_buffers beyond the default.\n QUERY PLAN-------------------------------------------------------------------------------------------------------------------------- Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16) (actual time=29.317..11966.291 rows=10000000 loops=1) Planning Time: 0.034 ms JIT: Functions: 2 Generation Time: 1.589 ms Inlining: false Inlining Time: 0.000 ms Optimization: false Optimization Time: 9.002 ms Emission Time: 19.948 ms Execution Time: 12375.493 ms(11 rows)Time: 12376.281 ms (00:12.376)SETTime: 1.955 ms QUERY PLAN------------------------------------------------------------------------------------------------------------------------ Seq Scan on i200c200 (cost=0.00..233332.28 rows=9999828 width=16) (actual time=0.063..3897.302 rows=10000000 loops=1) Planning Time: 0.037 ms Execution Time: 4292.400 ms(3 rows)Time: 4293.196 ms (00:04.293)Cheers,Jeff",
"msg_date": "Wed, 5 Sep 2018 09:51:45 +0530",
"msg_from": "Dinesh Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "On Wed, Sep 5, 2018 at 12:21 AM Dinesh Kumar <[email protected]> wrote:\n\n> Hi All,\n> I was wondering whether the case is solved or still continuing. As a\n> Postgres newbie, I can't understand any of the terms (JIT, tuple\n> deformation) as you mentioned above. Please anyone let me know , what is\n> the current scenario.\n>\n>\nJIT is a just-in-time compilation, which will be new in v11. Tuple\ndeforming is how you get the row from the on-disk format to the in-memory\nformat.\n\nSome people see small improvements in tuple deforming using JIT in your\nsituation, some see large decreases, depending on settings and apparently\non hardware. But regardless, JIT is not going to reduce your particular\nuse case (many nullable and actually null columns, referencing a\nhigh-numbered column) down to being constant-time operation in the number\nof preceding columns. Maybe JIT will reduce the penalty for accessing a\nhigh-numbered column by 30%, but won't reduce the penalty by 30 fold. Put\nyour NOT NULL columns first and then most frequently accessed NULLable\ncolumns right after them, if you can.\n\nCheers,\n\nJeff\n\n>\n\nOn Wed, Sep 5, 2018 at 12:21 AM Dinesh Kumar <[email protected]> wrote:Hi All,I was wondering whether the case is solved or still continuing. As a Postgres newbie, I can't understand any of the terms (JIT, tuple deformation) as you mentioned above. Please anyone let me know , what is the current scenario.JIT is a just-in-time compilation, which will be new in v11. Tuple deforming is how you get the row from the on-disk format to the in-memory format.Some people see small improvements in tuple deforming using JIT in your situation, some see large decreases, depending on settings and apparently on hardware. But regardless, JIT is not going to reduce your particular use case (many nullable and actually null columns, referencing a high-numbered column) down to being constant-time operation in the number of preceding columns. Maybe JIT will reduce the penalty for accessing a high-numbered column by 30%, but won't reduce the penalty by 30 fold. Put your NOT NULL columns first and then most frequently accessed NULLable columns right after them, if you can.Cheers,Jeff",
"msg_date": "Wed, 5 Sep 2018 12:00:59 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
{
"msg_contents": "On Wed, Sep 5, 2018 at 12:00 PM Jeff Janes <[email protected]> wrote:\n\n> On Wed, Sep 5, 2018 at 12:21 AM Dinesh Kumar <[email protected]> wrote:\n>\n>> Hi All,\n>> I was wondering whether the case is solved or still continuing. As a\n>> Postgres newbie, I can't understand any of the terms (JIT, tuple\n>> deformation) as you mentioned above. Please anyone let me know , what is\n>> the current scenario.\n>>\n>>\n> JIT is a just-in-time compilation, which will be new in v11. Tuple\n> deforming is how you get the row from the on-disk format to the in-memory\n> format.\n>\n> Some people see small improvements in tuple deforming using JIT in your\n> situation, some see large decreases, depending on settings and apparently\n> on hardware. But regardless, JIT is not going to reduce your particular\n> use case (many nullable and actually null columns, referencing a\n> high-numbered column) down to being constant-time operation in the number\n> of preceding columns. Maybe JIT will reduce the penalty for accessing a\n> high-numbered column by 30%, but won't reduce the penalty by 30 fold. Put\n> your NOT NULL columns first and then most frequently accessed NULLable\n> columns right after them, if you can.\n>\n\nCorrection: NOT NULL columns with fixed width types first. Then of the\ncolumns which are either nullable or variable width types, put the most\nfrequently accessed earlier.\n\nOn Wed, Sep 5, 2018 at 12:00 PM Jeff Janes <[email protected]> wrote:On Wed, Sep 5, 2018 at 12:21 AM Dinesh Kumar <[email protected]> wrote:Hi All,I was wondering whether the case is solved or still continuing. As a Postgres newbie, I can't understand any of the terms (JIT, tuple deformation) as you mentioned above. Please anyone let me know , what is the current scenario.JIT is a just-in-time compilation, which will be new in v11. Tuple deforming is how you get the row from the on-disk format to the in-memory format.Some people see small improvements in tuple deforming using JIT in your situation, some see large decreases, depending on settings and apparently on hardware. But regardless, JIT is not going to reduce your particular use case (many nullable and actually null columns, referencing a high-numbered column) down to being constant-time operation in the number of preceding columns. Maybe JIT will reduce the penalty for accessing a high-numbered column by 30%, but won't reduce the penalty by 30 fold. Put your NOT NULL columns first and then most frequently accessed NULLable columns right after them, if you can.Correction: NOT NULL columns with fixed width types first. Then of the columns which are either nullable or variable width types, put the most frequently accessed earlier.",
"msg_date": "Wed, 5 Sep 2018 12:07:15 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
},
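[Editor's note: a hypothetical table layout (all names invented) following Jeff's advice above, to make the column-ordering rule concrete: fixed-width NOT NULL columns first, then nullable or variable-width columns in descending order of how often they are read.]

```sql
CREATE TABLE example_events (
    id          bigint                   NOT NULL,  -- fixed width, NOT NULL: offset is known
    created_at  timestamp with time zone NOT NULL,
    device_id   integer                  NOT NULL,
    reading     numeric,       -- nullable but frequently read: place early
    label       text,          -- variable width, still often read
    rarely_used text,          -- rarely read, often NULL: place last
    also_rare   integer
);
```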
{
"msg_contents": "Ok, will do that. Thanks a lot.\n\nOn Wed, Sep 5, 2018 at 9:37 PM Jeff Janes <[email protected]> wrote:\n\n>\n>\n> On Wed, Sep 5, 2018 at 12:00 PM Jeff Janes <[email protected]> wrote:\n>\n>> On Wed, Sep 5, 2018 at 12:21 AM Dinesh Kumar <[email protected]> wrote:\n>>\n>>> Hi All,\n>>> I was wondering whether the case is solved or still continuing. As a\n>>> Postgres newbie, I can't understand any of the terms (JIT, tuple\n>>> deformation) as you mentioned above. Please anyone let me know , what is\n>>> the current scenario.\n>>>\n>>>\n>> JIT is a just-in-time compilation, which will be new in v11. Tuple\n>> deforming is how you get the row from the on-disk format to the in-memory\n>> format.\n>>\n>> Some people see small improvements in tuple deforming using JIT in your\n>> situation, some see large decreases, depending on settings and apparently\n>> on hardware. But regardless, JIT is not going to reduce your particular\n>> use case (many nullable and actually null columns, referencing a\n>> high-numbered column) down to being constant-time operation in the number\n>> of preceding columns. Maybe JIT will reduce the penalty for accessing a\n>> high-numbered column by 30%, but won't reduce the penalty by 30 fold. Put\n>> your NOT NULL columns first and then most frequently accessed NULLable\n>> columns right after them, if you can.\n>>\n>\n> Correction: NOT NULL columns with fixed width types first. Then of the\n> columns which are either nullable or variable width types, put the most\n> frequently accessed earlier.\n>\n>\n\nOk, will do that. Thanks a lot.On Wed, Sep 5, 2018 at 9:37 PM Jeff Janes <[email protected]> wrote:On Wed, Sep 5, 2018 at 12:00 PM Jeff Janes <[email protected]> wrote:On Wed, Sep 5, 2018 at 12:21 AM Dinesh Kumar <[email protected]> wrote:Hi All,I was wondering whether the case is solved or still continuing. As a Postgres newbie, I can't understand any of the terms (JIT, tuple deformation) as you mentioned above. Please anyone let me know , what is the current scenario.JIT is a just-in-time compilation, which will be new in v11. Tuple deforming is how you get the row from the on-disk format to the in-memory format.Some people see small improvements in tuple deforming using JIT in your situation, some see large decreases, depending on settings and apparently on hardware. But regardless, JIT is not going to reduce your particular use case (many nullable and actually null columns, referencing a high-numbered column) down to being constant-time operation in the number of preceding columns. Maybe JIT will reduce the penalty for accessing a high-numbered column by 30%, but won't reduce the penalty by 30 fold. Put your NOT NULL columns first and then most frequently accessed NULLable columns right after them, if you can.Correction: NOT NULL columns with fixed width types first. Then of the columns which are either nullable or variable width types, put the most frequently accessed earlier.",
"msg_date": "Fri, 7 Sep 2018 08:58:42 +0530",
"msg_from": "Dinesh Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance difference in accessing differrent columns in a\n Postgres Table"
}
] |
[
{
"msg_contents": "Hi,\nI'm using postgresql v10.4. I have a local partitioned table (by range -\ndata, every day has its own table). I'm using the oracle_fdw extension to\nbring data from the oracle partitioned table into my local postgresql\n(insert into local select * from remote_oracle). Currently, I dont have any\nindexes on the postgresql`s table. It takes me 10 hours to copy 200G over\nthe network and it is very slow.\nAny recommandations what can I change or improve ?\n\n\nThanks , Mariel.\n\nHi,I'm using postgresql v10.4. I have a local partitioned table (by range - data, every day has its own table). I'm using the oracle_fdw extension to bring data from the oracle partitioned table into my local postgresql (insert into local select * from remote_oracle). Currently, I dont have any indexes on the postgresql`s table. It takes me 10 hours to copy 200G over the network and it is very slow.Any recommandations what can I change or improve ?Thanks , Mariel.",
"msg_date": "Mon, 13 Aug 2018 23:34:46 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "increase insert into local table from remote oracle table preformance"
},
{
"msg_contents": "Hi,\nI'm using postgresql v10.4. I have a local partitioned table (by range -\ndata, every day has its own table). I'm using the oracle_fdw extension to\nbring data from the oracle partitioned table into my local postgresql\n(insert into local select * from remote_oracle). Currently, I dont have any\nindexes on the postgresql`s table. It takes me 10 hours to copy 200G over\nthe network and it is very slow.\nAny recommandations what can I change or improve ?\n\n\nThanks , Mariel.\n\nHi,I'm using postgresql v10.4. I have a local partitioned table (by range - data, every day has its own table). I'm using the oracle_fdw extension to bring data from the oracle partitioned table into my local postgresql (insert into local select * from remote_oracle). Currently, I dont have any indexes on the postgresql`s table. It takes me 10 hours to copy 200G over the network and it is very slow.Any recommandations what can I change or improve ?Thanks , Mariel.",
"msg_date": "Mon, 13 Aug 2018 23:35:05 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: increase insert into local table from remote oracle table\n preformance"
},
{
"msg_contents": "Did you try \n- runing multiple inserts in parallel,\n- Stop wal archiving,\n- Tune fetch sise ?\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Mon, 13 Aug 2018 14:03:58 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: increase insert into local table from remote oracle table\n preformance"
},
{
"msg_contents": "Hi, we probably need more inforation to offer anything useful here, e.g:\n\n- The network bandwidth between the 2 hosts\n\n- the number of partitions on the Postgres end (i.e how many days in \nyour case)\n\n- single or batched INSERT\n\nThe lack of indexes is probably not going to effect INSERT performance \nthat much, but the number of partition tables has a huge impact, so we \nneed to know this stuff!\n\nCheers\n\nMark\n\n\nOn 14/08/18 08:34, Mariel Cherkassky wrote:\n> Hi,\n> I'm using postgresql v10.4. I have a local partitioned table (by range \n> - data, every day has its own table). I'm using the oracle_fdw \n> extension to bring data from the oracle partitioned table into my \n> local postgresql (insert into local select * from remote_oracle). \n> Currently, I dont have any indexes on the postgresql`s table. It takes \n> me 10 hours to copy 200G over the network and it is very slow.\n> Any recommandations what can I change or improve ?\n>\n>\n> Thanks , Mariel.\n\n\n",
"msg_date": "Tue, 14 Aug 2018 17:21:23 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increase insert into local table from remote oracle table\n preformance"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> Hi,\n> I'm using postgresql v10.4. I have a local partitioned table (by range - data, every day has its own table).\n> I'm using the oracle_fdw extension to bring data from the oracle partitioned table into my local postgresql\n> (insert into local select * from remote_oracle). Currently, I dont have any indexes on the postgresql`s table.\n> It takes me 10 hours to copy 200G over the network and it is very slow.\n> Any recommandations what can I change or improve ?\n\nHard to say anything with so little data.\n\nYou could try a bigger value for the \"prefetch\" option.\n\nOne known reason for slow performance is if there are LOBs in the Oracle table.\n\nYou could parallelize processing by running several such INSERTs in\nparallel, perhaps one per partition, and inserting directly into\nthe partitions.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n",
"msg_date": "Tue, 14 Aug 2018 09:12:40 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increase insert into local table from remote oracle table\n preformance"
},
{
"msg_contents": "Hi,\nI'll try to answer all your question so that you will have more information\nabout the situation :\n\nI have one main table that is called main_table_hist. The \"main_table _hist\"\nis partitioned by range (date column) and includes data that is considered\nas \"history data\" . I'm trying to copy the data from the oracle table to my\nlocal postgresql table (about 5T). For every day in the year I have in the\noracle table partition and therefore I will create for every day in year\n(365 in total) a partition in postgresql. Every partition of day consist of\n4 different partitions by list (text values). So In total my tables\nhierarchy should look like that :\nmain_table_hist\n 14/08/2018_main\n 14/08/2018_value1\n 14/08/2018_value2\n 14/08/2018_value3\n 14/08/2018_value1\n\nMoreover, I have another table that is called \"present_data\" that consist\nof 7 partitions (the data of the last 7 days - 300G) that I'm loading from\ncsv files (daily). Every night I need to deattach the last day partition\nand attach it to the history table.\n\nThis hierarchy works well in oracle and I'm trying to build it on\npostgresql. Right now I'm trying to copy the history data from the remote\ndatabase but as I suggested it takes 10 hours for 200G.\n\nSome details :\n-Seting the wals to minimum is possible but I cant do that as a daily work\naround because that means restarting the database.\n I must have wals generated in order to restore the \"present_data\" in case\nof disaster.\n-The network\n-My network bandwidth is 1GB.\n-The column in the table are from types : character varying,big\nint,timestamp,numeric. In other words no blobs.\n-I have many check constraints on the table.\n- Laurenz - \"You could try a bigger value for the \"prefetch\" option.\"- Do\nyou have an example how to do it ?\n-Inserting directly into the right parittion might increase the preformance\n?\n\nThanks , Mariel.\n\n\n2018-08-14 0:03 GMT+03:00 legrand legrand <[email protected]>:\n\n> Did you try\n> - runing multiple inserts in parallel,\n> - Stop wal archiving,\n> - Tune fetch sise ?\n>\n> Regards\n> PAscal\n>\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-\n> f2050081.html\n>\n>\n\nHi,I'll try to answer all your question so that you will have more information about the situation : I have one main table that is called main_table_hist. The \"main_table\n\n_hist\" is partitioned by range (date column) and includes data that is considered as \"history data\" . I'm trying to copy the data from the oracle table to my local postgresql table (about 5T). For every day in the year I have in the oracle table partition and therefore I will create for every day in year (365 in total) a partition in postgresql. Every partition of day consist of 4 different partitions by list (text values). So In total my tables hierarchy should look like that : main_table_hist 14/08/2018_main 14/08/2018_value1 14/08/2018_value2\n 14/08/2018_value3\n 14/08/2018_value1\nMoreover, I have another table that is called \"present_data\" that consist of 7 partitions (the data of the last 7 days - 300G) that I'm loading from csv files (daily). Every night I need to deattach the last day partition and attach it to the history table. This hierarchy works well in oracle and I'm trying to build it on postgresql. 
Right now I'm trying to copy the history data from the remote database but as I suggested it takes 10 hours for 200G.Some details : -Seting the wals to minimum is possible but I cant do that as a daily work around because that means restarting the database. I must have wals generated in order to restore the \"present_data\" in case of disaster.-The network -My network bandwidth is 1GB.-The column in the table are from types : character varying,big int,timestamp,numeric. In other words no blobs.-I have many check constraints on the table.- Laurenz - \"You could try a bigger value for the \"prefetch\" option.\"- Do you have an example how to do it ?\n-Inserting directly into the right parittion might increase the preformance ?Thanks , Mariel.2018-08-14 0:03 GMT+03:00 legrand legrand <[email protected]>:Did you try \n- runing multiple inserts in parallel,\n- Stop wal archiving,\n- Tune fetch sise ?\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html",
"msg_date": "Tue, 14 Aug 2018 14:33:37 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: increase insert into local table from remote oracle table\n preformance"
},
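[Editor's note: in answer to the question about the "prefetch" option, this is the oracle_fdw syntax (the same statement appears later in the thread against the foreign table hist_oracle); the values here are only examples, and the documented range is 0..10240.]

```sql
-- Add the option if the foreign table was created without it ...
ALTER FOREIGN TABLE hist_oracle OPTIONS (ADD prefetch '1000');
-- ... or change it if it is already defined.
ALTER FOREIGN TABLE hist_oracle OPTIONS (SET prefetch '5000');
```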
{
"msg_contents": "main ideas are:\n\n- inserting directly to the right partition:\n perform as many inserts as pg partitions found in main_table_hist, like\n INSERT INTO 14/08/2018_value1 select * from remote_oracle_hist where\nday=to_date('14/08/2018','DD/MM/YYYY') and value='value1'\n\nplease check execution plan (in Oracle db) using EXPLAIN ANALYZE\n\n- all those inserts should be executed in // (with 4 or 8 sql scripts)\n\n- wal archiving should be disabled during hist data recovery only (not\nduring day to day operations)\n\n- for prefetch see\n\nhttps://github.com/laurenz/oracle_fdw\n\nprefetch (optional, defaults to \"200\")\n\nSets the number of rows that will be fetched with a single round-trip\nbetween PostgreSQL and Oracle during a foreign table scan. This is\nimplemented using Oracle row prefetching. The value must be between 0 and\n10240, where a value of zero disables prefetching.\n\nHigher values can speed up performance, but will use more memory on the\nPostgreSQL server.\n\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Tue, 14 Aug 2018 15:28:34 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: increase insert into local table from remote oracle table\n preformance"
},
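[Editor's note: a cleaned-up sketch of legrand's per-partition INSERT, with the partition identifier quoted as PostgreSQL requires and placeholder column names (part_day, part_value) standing in for the real partitioning columns; several of these, one per target partition, would be run in parallel sessions.]

```sql
INSERT INTO "14/08/2018_value1"          -- local partition for that day/value
SELECT *
FROM   remote_oracle_hist                -- oracle_fdw foreign table
WHERE  part_day   = to_date('14/08/2018', 'DD/MM/YYYY')
AND    part_value = 'value1';            -- filters oracle_fdw should push down
                                         -- (verify with the Oracle plan, as suggested above)
```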
{
"msg_contents": "Inserting directly into the partition didnt help, the performance are just\nthe same. I tried to increase the prefetch value to 1000(alter foreign\ntable hist_oracle options (add prefetch '1000') but still no change - 15\nminutes for one partition(6GB).\n\nOn the oracle side the plan is full scan on the partition (I'm copying the\nentire partition into a postgresql partition..)\n\n2018-08-15 1:28 GMT+03:00 legrand legrand <[email protected]>:\n\n> main ideas are:\n>\n> - inserting directly to the right partition:\n> perform as many inserts as pg partitions found in main_table_hist, like\n> INSERT INTO 14/08/2018_value1 select * from remote_oracle_hist where\n> day=to_date('14/08/2018','DD/MM/YYYY') and value='value1'\n>\n> please check execution plan (in Oracle db) using EXPLAIN ANALYZE\n>\n> - all those inserts should be executed in // (with 4 or 8 sql scripts)\n>\n> - wal archiving should be disabled during hist data recovery only (not\n> during day to day operations)\n>\n> - for prefetch see\n>\n> https://github.com/laurenz/oracle_fdw\n>\n> prefetch (optional, defaults to \"200\")\n>\n> Sets the number of rows that will be fetched with a single round-trip\n> between PostgreSQL and Oracle during a foreign table scan. This is\n> implemented using Oracle row prefetching. The value must be between 0 and\n> 10240, where a value of zero disables prefetching.\n>\n> Higher values can speed up performance, but will use more memory on the\n> PostgreSQL server.\n>\n>\n> Regards\n> PAscal\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-\n> f2050081.html\n>\n>\n\nInserting directly into the partition didnt help, the performance are just the same. I tried to increase the prefetch value to 1000(alter foreign table hist_oracle options (add prefetch '1000') but still no change - 15 minutes for one partition(6GB).On the oracle side the plan is full scan on the partition (I'm copying the entire partition into a postgresql partition..)2018-08-15 1:28 GMT+03:00 legrand legrand <[email protected]>:main ideas are:\n\n- inserting directly to the right partition:\n perform as many inserts as pg partitions found in main_table_hist, like\n INSERT INTO 14/08/2018_value1 select * from remote_oracle_hist where\nday=to_date('14/08/2018','DD/MM/YYYY') and value='value1'\n\nplease check execution plan (in Oracle db) using EXPLAIN ANALYZE\n\n- all those inserts should be executed in // (with 4 or 8 sql scripts)\n\n- wal archiving should be disabled during hist data recovery only (not\nduring day to day operations)\n\n- for prefetch see\n\nhttps://github.com/laurenz/oracle_fdw\n\nprefetch (optional, defaults to \"200\")\n\nSets the number of rows that will be fetched with a single round-trip\nbetween PostgreSQL and Oracle during a foreign table scan. This is\nimplemented using Oracle row prefetching. The value must be between 0 and\n10240, where a value of zero disables prefetching.\n\nHigher values can speed up performance, but will use more memory on the\nPostgreSQL server.\n\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html",
"msg_date": "Wed, 15 Aug 2018 11:43:11 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: increase insert into local table from remote oracle table\n preformance"
},
{
"msg_contents": "You need to track down your limited resource. IO, CPU, or network. I would say it’s unlikely to be CPU, but you never know. Look at the activities on each server and see what resource is maxed out. My guess is IO, but you could also have your network choked. \n\nSent from my iPad\n\n> On Aug 14, 2018, at 02:12, Laurenz Albe <[email protected]> wrote:\n> \n> Mariel Cherkassky wrote:\n>> Hi,\n>> I'm using postgresql v10.4. I have a local partitioned table (by range - data, every day has its own table).\n>> I'm using the oracle_fdw extension to bring data from the oracle partitioned table into my local postgresql\n>> (insert into local select * from remote_oracle). Currently, I dont have any indexes on the postgresql`s table.\n>> It takes me 10 hours to copy 200G over the network and it is very slow.\n>> Any recommandations what can I change or improve ?\n> \n> Hard to say anything with so little data.\n> \n> You could try a bigger value for the \"prefetch\" option.\n> \n> One known reason for slow performance is if there are LOBs in the Oracle table.\n> \n> You could parallelize processing by running several such INSERTs in\n> parallel, perhaps one per partition, and inserting directly into\n> the partitions.\n> \n> Yours,\n> Laurenz Albe\n> -- \n> Cybertec | https://www.cybertec-postgresql.com\n> \n\n",
"msg_date": "Wed, 15 Aug 2018 06:42:37 -0500",
"msg_from": "Andrew Kerber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increase insert into local table from remote oracle table\n preformance"
},
{
"msg_contents": "The Postgres command of choice to load bulk data is COPY https://www.postgresql.org/docs/current/static/sql-copy.html <https://www.postgresql.org/docs/current/static/sql-copy.html> is much faster than anything else.\n\nIt’s likely that the slowest part could be Oracle exporting it’s data. Try to use sqlplus to export the data and see how long does it take, you won’t be able to make the process faster than Oracle can export it’s data.\n\nIf it’s fast enough, format the resulting file in a suitable format for Postgres ‘COPY FROM’ command.\n\nFinally you can pipe the Oracle export command and the Postgres COPY FROM command, so the process can run twice as fast. \n\nYou can make it even faster if you divide the exported data by any criteria and run those export | import scripts in parallel. \n\n\n\n\n\n> El 15 ago 2018, a las 10:43, Mariel Cherkassky <[email protected]> escribió:\n> \n> Inserting directly into the partition didnt help, the performance are just the same. I tried to increase the prefetch value to 1000(alter foreign table hist_oracle options (add prefetch '1000') but still no change - 15 minutes for one partition(6GB).\n> \n> On the oracle side the plan is full scan on the partition (I'm copying the entire partition into a postgresql partition..)\n> \n> 2018-08-15 1:28 GMT+03:00 legrand legrand <[email protected] <mailto:[email protected]>>:\n> main ideas are:\n> \n> - inserting directly to the right partition:\n> perform as many inserts as pg partitions found in main_table_hist, like\n> INSERT INTO 14/08/2018_value1 select * from remote_oracle_hist where\n> day=to_date('14/08/2018','DD/MM/YYYY') and value='value1'\n> \n> please check execution plan (in Oracle db) using EXPLAIN ANALYZE\n> \n> - all those inserts should be executed in // (with 4 or 8 sql scripts)\n> \n> - wal archiving should be disabled during hist data recovery only (not\n> during day to day operations)\n> \n> - for prefetch see\n> \n> https://github.com/laurenz/oracle_fdw <https://github.com/laurenz/oracle_fdw>\n> \n> prefetch (optional, defaults to \"200\")\n> \n> Sets the number of rows that will be fetched with a single round-trip\n> between PostgreSQL and Oracle during a foreign table scan. This is\n> implemented using Oracle row prefetching. The value must be between 0 and\n> 10240, where a value of zero disables prefetching.\n> \n> Higher values can speed up performance, but will use more memory on the\n> PostgreSQL server.\n> \n> \n> Regards\n> PAscal\n> \n> \n> \n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html <http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html>\n> \n> \n\n\nThe Postgres command of choice to load bulk data is COPY https://www.postgresql.org/docs/current/static/sql-copy.html is much faster than anything else.It’s likely that the slowest part could be Oracle exporting it’s data. Try to use sqlplus to export the data and see how long does it take, you won’t be able to make the process faster than Oracle can export it’s data.If it’s fast enough, format the resulting file in a suitable format for Postgres ‘COPY FROM’ command.Finally you can pipe the Oracle export command and the Postgres COPY FROM command, so the process can run twice as fast. You can make it even faster if you divide the exported data by any criteria and run those export | import scripts in parallel. 
El 15 ago 2018, a las 10:43, Mariel Cherkassky <[email protected]> escribió:Inserting directly into the partition didnt help, the performance are just the same. I tried to increase the prefetch value to 1000(alter foreign table hist_oracle options (add prefetch '1000') but still no change - 15 minutes for one partition(6GB).On the oracle side the plan is full scan on the partition (I'm copying the entire partition into a postgresql partition..)2018-08-15 1:28 GMT+03:00 legrand legrand <[email protected]>:main ideas are:\n\n- inserting directly to the right partition:\n perform as many inserts as pg partitions found in main_table_hist, like\n INSERT INTO 14/08/2018_value1 select * from remote_oracle_hist where\nday=to_date('14/08/2018','DD/MM/YYYY') and value='value1'\n\nplease check execution plan (in Oracle db) using EXPLAIN ANALYZE\n\n- all those inserts should be executed in // (with 4 or 8 sql scripts)\n\n- wal archiving should be disabled during hist data recovery only (not\nduring day to day operations)\n\n- for prefetch see\n\nhttps://github.com/laurenz/oracle_fdw\n\nprefetch (optional, defaults to \"200\")\n\nSets the number of rows that will be fetched with a single round-trip\nbetween PostgreSQL and Oracle during a foreign table scan. This is\nimplemented using Oracle row prefetching. The value must be between 0 and\n10240, where a value of zero disables prefetching.\n\nHigher values can speed up performance, but will use more memory on the\nPostgreSQL server.\n\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html",
"msg_date": "Wed, 15 Aug 2018 17:52:49 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increase insert into local table from remote oracle table\n preformance"
},
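[Editor's note: a sketch of the export-and-pipe approach Daniel outlines, using COPY ... FROM PROGRAM (available since 9.3, superuser only, the command runs on the PostgreSQL server). The sqlplus invocation and script name are placeholders for whatever produces CSV on the Oracle side.]

```sql
-- Stream the Oracle export straight into one local partition.
COPY "14/08/2018_value1"
FROM PROGRAM 'sqlplus -s app_user/secret@ORCL @export_partition_20180814.sql'
WITH (FORMAT csv);
```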
{
"msg_contents": "This is not so bad, you where at 10h for 200GB (20GB/h),\nAnd now at 24GB/h, it makes a 20% increase ;0)\n\nCould you tell us what are the résults with parallel exécutions\n(Before to switch to unload reload strategy)\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Wed, 15 Aug 2018 13:26:56 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: increase insert into local table from remote oracle table\n preformance"
}
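[Editor's note: the arithmetic behind legrand's comparison, worked out from the figures earlier in the thread (200GB in 10 hours before, 6GB in 15 minutes after the prefetch change).]

```sql
SELECT 200.0 / 10                            AS gb_per_hour_before,  -- 20
       6.0 / (15.0 / 60)                     AS gb_per_hour_after,   -- 24
       6.0 / (15.0 / 60) / (200.0 / 10) - 1  AS relative_gain;       -- 0.20, i.e. +20%
```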
] |
[
{
"msg_contents": "Hi,\n\nI have been puzzled by very different replication performance (meaning\n50-100x slower) between identical replicas (both in “hardware” and\nconfiguration) once the amount of data to replicate increases. I’ve gone\ndown a number of dead ends and am missing something\n(\nlikely obvious\n)\nthat I hope folks with a deeper knowledge can point out. I’ve tried to boil\ndown the data need to describe the issue to a minimum.\n Thanks for taking the time to read and for any ideas you can share.\n\n# The setup\n\nWe run\na cluster of\nlarge, SSD-backed, i3.16xl (64 cores visible to Linux, ~500GB of RAM, with\n8GB of shared_buffers, fast NVMe drives) nodes\n, each\nrunning PG 9.3\non linux\nin a vanilla streaming asynchronous replication setup: 1 primary node, 1\nreplica designated for failover (left alone) and 6 read replicas, taking\nqueries.\n\nUnder normal circumstances this is working exactly as planned but when I\ndial up the number of INSERTs on the primary to ~10k rows per second, or\nroughly 50MB of data per second (not enough to saturate the network between\nnodes)\n, read replicas falls hopelessly and consistently behind until read traffic\nis diverted away\n. The INSERTs themselves are fairly straightforward: a 20-bytea checksum is\ncomputed off-node\nand used as a unicity constraint at insert time. Each record is 4,500 bytes\nwide on average.\n\nH\nere’s the table where inserts happen.\n\n Table “T”\n Column | Type | Modifiers\n | Storage |\n----------------+-----------------------------+----------------------------------------+----------+\n key | bigint | not null default\nT.next_key()\n\n| plain |\n a | integer | not null\n | plain |\n b | integer |\n | plain |\n c | text |\n | extended |\n d | text |\n | extended |\n e | text[] |\n | extended |\n f | integer | not null\n | plain |\n created | timestamp without time zone | not null default now()\n | plain |\n cksum | bytea | not null\n | extended |\nIndexes:\n “T_pkey\" PRIMARY KEY, btree (key)\n “T_cksum” UNIQUE, btree (cksum)\n “T_created_idx\" btree (created)\n “T_full_idx\" btree (a, b, c, d, e)\n “T_a_idx\" btree (a)\n\n\n# The symptoms\n\nOnce the primary starts to process INSERTs to the tune of 10k/s (roughly\n5\n0MB/s or 150GB/h), replication throughput becomes bi-modal\n within minutes.\n\n1. We see read replicas fall behind and we can measure their replication\nthroughput to be\nconsistently\n1-2% of what the primary is sustaining, by measuring the replication delay\n(in second) every second. We quickly get\nthat metric\nto 0.98-0.99 (1 means that replication is completely stuck\nas it falls behind by one second every second\n). CPU, memory\n, I/O\n(per core iowait)\nor network\n(throughput)\nas a whole resource are not\nvisibly\nmaxed out\n.\n\n2. If we stop incoming queries from one of the replicas, we see it catch up\nat 2x insert throughput (roughly 80MB/s or 300GB/h) as it is cutting\nthrough the backlog. A perf sample shows a good chunk of time spent in\n`mdnblocks`. I/O wait remains\nat\na few %\n(2-10) of cpu cycles. If you can open the attached screenshot you can see\nthe lag going down on each replica as soon as we stop sending reads at it.\n\n\nIn both cases the recovery process maxes out 1 core\nas expected\n.\n\n# The question\n\nWhat surprised me is the bi-modal nature of throughput without gradual\ndegradation\nor a very clear indication of the contentious resource (I/O? 
Buffer access?)\n.\nThe bi-modal throughput\n would be consistent with replication being\neffectively\nscheduled to run\nat full speed\n1% or 2% of the time (the rest being allocated to queries) but I have not\nfound something in the documentation or in the code that\nsupports that view.\n\nIs this the right way to think about what’s observed?\nIf not, what could be a good next hypothesis to test?\n\n\n# References\n\nHere are some settings that may help and a perf profile of a recovery\nprocess that runs without any competing read traffic processing the INSERT\nbacklog (I don't unfortunately have the same profile on a lagging read\nreplica).\n\n name | setting\n------------------------------+-----------\n max_wal_senders | 299\n max_wal_size | 10240\n min_wal_size | 5\n wal_block_size | 8192\n wal_buffers | 2048\n wal_compression | off\n wal_keep_segments | 0\n wal_level | replica\n wal_log_hints | off\n wal_receiver_status_interval | 10\n wal_receiver_timeout | 60000\n wal_retrieve_retry_interval | 5000\n wal_segment_size | 2048\n wal_sender_timeout | 60000\n wal_sync_method | fdatasync\n wal_writer_delay | 200\n wal_writer_flush_after | 128\nshared_buffers | 1048576\nwork_mem | 32768\nmaintenance_work_mem | 2097152\n\nrecovery process sampled at 997Hz on a lagging replica without read traffic.\n\nSamples: 9K of event 'cycles', Event count (approx.): 25040027878\n Children Self Command Shared Object Symbol\n+ 97.81% 0.44% postgres postgres [.] StartupXLOG\n+ 82.41% 0.00% postgres postgres [.] StartupProcessMain\n+ 82.41% 0.00% postgres postgres [.] AuxiliaryProcessMain\n+ 82.41% 0.00% postgres postgres [.] 0xffffaa514b8004dd\n+ 82.41% 0.00% postgres postgres [.] PostmasterMain\n+ 82.41% 0.00% postgres postgres [.] main\n+ 82.41% 0.00% postgres libc-2.23.so [.] __libc_start_main\n+ 82.41% 0.00% postgres [unknown] [k] 0x3bb6258d4c544155\n+ 50.41% 0.09% postgres postgres [.]\nXLogReadBufferExtended\n+ 40.14% 0.70% postgres postgres [.] XLogReadRecord\n+ 39.92% 0.00% postgres postgres [.] 0xffffaa514b69524e\n+ 30.25% 26.78% postgres postgres [.] mdnblocks\n\n+ 27.35% 0.00% postgres postgres [.] heap_redo\n+ 26.23% 0.01% postgres postgres [.] XLogReadBuffer\n+ 25.37% 0.05% postgres postgres [.] btree_redo\n+ 22.49% 0.07% postgres postgres [.]\nReadBufferWithoutRelcache\n+ 18.72% 0.00% postgres postgres [.] 0xffffaa514b6a2e6a\n\n+ 18.64% 18.64% postgres postgres [.] 0x00000000000fde6a\n\n+ 18.10% 0.00% postgres postgres [.] 0xffffaa514b65a867\n+ 15.80% 0.06% postgres [kernel.kallsyms] [k]\nentry_SYSCALL_64_fastpath\n+ 13.16% 0.02% postgres postgres [.] RestoreBackupBlock\n+ 12.90% 0.00% postgres postgres [.] 0xffffaa514b675271\n+ 12.53% 0.00% postgres postgres [.] 0xffffaa514b69270e\n+ 10.29% 0.00% postgres postgres [.] 0xffffaa514b826672\n+ 10.00% 0.03% postgres libc-2.23.so [.] write\n+ 9.91% 0.00% postgres postgres [.] 0xffffaa514b823ffe\n+ 9.71% 0.00% postgres postgres [.] mdwrite\n+ 9.45% 0.24% postgres libc-2.23.so [.] read\n+ 9.25% 0.03% postgres [kernel.kallsyms] [k] sys_write\n+ 9.15% 0.00% postgres [kernel.kallsyms] [k] vfs_write\n+ 8.98% 0.01% postgres [kernel.kallsyms] [k] new_sync_write\n+ 8.98% 0.00% postgres [kernel.kallsyms] [k] __vfs_write\n+ 8.96% 0.03% postgres [xfs] [k] xfs_file_write_iter\n+ 8.91% 0.08% postgres [xfs] [k]\nxfs_file_buffered_aio_write\n+ 8.64% 0.00% postgres postgres [.] 0xffffaa514b65ab10\n+ 7.87% 0.00% postgres postgres [.] 0xffffaa514b6752d0\n+ 7.35% 0.04% postgres [kernel.kallsyms] [k] generic_perform_write\n+ 5.77% 0.11% postgres libc-2.23.so [.] 
lseek64\n+ 4.99% 0.00% postgres postgres [.] 0xffffaa514b6a3347\n+ 4.80% 0.15% postgres [kernel.kallsyms] [k] sys_read\n+ 4.74% 4.74% postgres [kernel.kallsyms] [k]\ncopy_user_enhanced_fast_string",
"msg_date": "Tue, 14 Aug 2018 15:18:55 +0200",
"msg_from": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bi-modal streaming replication throughput"
},
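The per-second replication-delay measurement described in the message above can be sampled directly on each read replica. A minimal sketch, using the 9.3-era function names (on PostgreSQL 10 and later the xlog functions are renamed to pg_last_wal_receive_lsn() and pg_last_wal_replay_lsn()); it assumes the primary has continuous write activity, otherwise the timestamp-based figure overstates the lag:

    -- run on a standby: approximate apply lag in seconds
    SELECT CASE
             WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
               THEN 0
             ELSE EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())
           END AS apply_lag_seconds;

Sampling this once a second gives exactly the "falls behind by one second every second" ratio quoted above.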
{
"msg_contents": "On Tue, Aug 14, 2018 at 9:18 AM, Alexis Lê-Quôc <[email protected]> wrote:\n\n>\neach\n running PG 9.3\n on linux\n\n\nThat is the oldest version which is still supported. There have been a lot\nof improvements since then, including to performance. You should see if an\nupgrade solves the problem. If not, at least you will have access to\nbetter tools (like pg_stat_activity.wait_event_type), and people will be\nmore enthusiastic about helping you figure it out knowing it is not an\nalready-solved problem.\n\n\n>\n> Here are some settings that may help and a perf profile of a recovery\n> process that runs without any competing read traffic processing the INSERT\n> backlog (I don't unfortunately have the same profile on a lagging read\n> replica).\n>\n\nUnfortunately the perf when the problem is not occuring won't be very\nhelpful. You need it from when the problem is occurring. Also, I find\nstrace and gdb to more helpful than perf in this type of situation where\nyou already know it is not CPU bound, although perhaps that is just my own\nlack of skill with perf. You need to know why it is not on the CPU, not\nwhat it is doing when it is on the CPU.\n\nWhere the settings you showed all of the non-default settings?\n\nI assume max_standby_streaming_delay is at the default value of 30s? Are\nyou getting query cancellations due conflicts with recovery, or anything\nelse suspicious in the log? What is the maximum lag you see measured in\nseconds?\n\nCheers,\n\nJeff\n\nOn Tue, Aug 14, 2018 at 9:18 AM, Alexis Lê-Quôc <[email protected]> wrote:> \n\neach running PG 9.3 on linux That is the oldest version which is still supported. There have been a lot of improvements since then, including to performance. You should see if an upgrade solves the problem. If not, at least you will have access to better tools (like pg_stat_activity.wait_event_type), and people will be more enthusiastic about helping you figure it out knowing it is not an already-solved problem. \nHere are some settings that may help and a perf profile of a recovery process that runs without any competing read traffic processing the INSERT backlog (I don't unfortunately have the same profile on a lagging read replica).Unfortunately the perf when the problem is not occuring won't be very helpful. You need it from when the problem is occurring. Also, I find strace and gdb to more helpful than perf in this type of situation where you already know it is not CPU bound, although perhaps that is just my own lack of skill with perf. You need to know why it is not on the CPU, not what it is doing when it is on the CPU.Where the settings you showed all of the non-default settings?I assume max_standby_streaming_delay is at the default value of 30s? Are you getting query cancellations due conflicts with recovery, or anything else suspicious in the log? What is the maximum lag you see measured in seconds?Cheers,Jeff",
"msg_date": "Tue, 14 Aug 2018 10:51:25 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bi-modal streaming replication throughput"
},
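Jeff's questions about max_standby_streaming_delay and recovery-conflict cancellations can be answered from the standby itself. A sketch of the relevant checks; these views and settings exist on stock 9.3, nothing here is replica-vendor specific:

    -- cumulative counts of queries cancelled because they conflicted with recovery
    SELECT datname, confl_snapshot, confl_lock, confl_bufferpin,
           confl_tablespace, confl_deadlock
    FROM pg_stat_database_conflicts;

    -- how long recovery will wait for conflicting queries before cancelling them,
    -- and whether the standby reports its xmin back to the primary
    SHOW max_standby_streaming_delay;
    SHOW hot_standby_feedback;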
{
"msg_contents": "Hi,\n\nOn 2018-08-14 15:18:55 +0200, Alexis L�-Qu�c wrote:\n> We run\n> a cluster of\n> large, SSD-backed, i3.16xl (64 cores visible to Linux, ~500GB of RAM, with\n> 8GB of shared_buffers, fast NVMe drives) nodes\n> , each\n> running PG 9.3\n> on linux\n> in a vanilla streaming asynchronous replication setup: 1 primary node, 1\n> replica designated for failover (left alone) and 6 read replicas, taking\n> queries.\n\n9.3 is extremely old, we've made numerous performance improvements in\nareas potentially related to your problem.\n\n\n> Under normal circumstances this is working exactly as planned but when I\n> dial up the number of INSERTs on the primary to ~10k rows per second, or\n> roughly 50MB of data per second (not enough to saturate the network between\n> nodes)\n> , read replicas falls hopelessly and consistently behind until read traffic\n> is diverted away\n> .\n\nDo you use hot_standby_feedback=on?\n\n\n\n> 1. We see read replicas fall behind and we can measure their replication\n> throughput to be\n> consistently\n> 1-2% of what the primary is sustaining, by measuring the replication delay\n> (in second) every second. We quickly get\n> that metric\n> to 0.98-0.99 (1 means that replication is completely stuck\n> as it falls behind by one second every second\n> ). CPU, memory\n> , I/O\n> (per core iowait)\n> or network\n> (throughput)\n> as a whole resource are not\n> visibly\n> maxed out\n\nAre individual *cores* maxed out however? IIUC you're measuring overall\nCPU util, right? Recovery (streaming replication apply) is largely\nsingle threaded.\n\n\n> Here are some settings that may help and a perf profile of a recovery\n> process that runs without any competing read traffic processing the INSERT\n> backlog (I don't unfortunately have the same profile on a lagging read\n> replica).\n\nUnfortunately that's not going to help us much identifying the\ncontention...\n\n\n> + 30.25% 26.78% postgres postgres [.] mdnblocks\n\nThis I've likely fixed ~two years back:\n\nhttp://archives.postgresql.org/message-id/72a98a639574d2e25ed94652848555900c81a799\n\n\n> + 18.64% 18.64% postgres postgres [.] 0x00000000000fde6a\n\nHm, too bad that this is without a symbol - 18% self is quite a\nbit. What perf options are you using?\n\n\n> + 4.74% 4.74% postgres [kernel.kallsyms] [k]\n> copy_user_enhanced_fast_string\n\nPossible that a slightly bigger shared buffer would help you.\n\nIt'd probably more helpful to look at a perf report --no-children for\nthis kind of analysis.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 14 Aug 2018 10:46:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bi-modal streaming replication throughput"
},
{
"msg_contents": "Hi,\n\nOn 2018-08-14 10:46:45 -0700, Andres Freund wrote:\n> On 2018-08-14 15:18:55 +0200, Alexis L�-Qu�c wrote:\n> > + 30.25% 26.78% postgres postgres [.] mdnblocks\n> \n> This I've likely fixed ~two years back:\n> \n> http://archives.postgresql.org/message-id/72a98a639574d2e25ed94652848555900c81a799\n\nErr, wrong keyboard shortcut *and* wrong commit hash:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=45e191e3aa62d47a8bc1a33f784286b2051f45cb\n\n- Andres\n\n",
"msg_date": "Tue, 14 Aug 2018 10:50:02 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bi-modal streaming replication throughput"
},
{
"msg_contents": "On Tue, Aug 14, 2018 at 7:50 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2018-08-14 10:46:45 -0700, Andres Freund wrote:\n> > On 2018-08-14 15:18:55 +0200, Alexis Lê-Quôc wrote:\n> > > + 30.25% 26.78% postgres postgres [.] mdnblocks\n> >\n> > This I've likely fixed ~two years back:\n> >\n> >\n> http://archives.postgresql.org/message-id/72a98a639574d2e25ed94652848555900c81a799\n>\n> Err, wrong keyboard shortcut *and* wrong commit hash:\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=45e191e3aa62d47a8bc1a33f784286b2051f45cb\n>\n> - Andres\n>\n\n Thanks for the commit in the first place and the reference now; that's a\nvery logical explanation. We're now off 9.3.\n\nOn Tue, Aug 14, 2018 at 7:50 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2018-08-14 10:46:45 -0700, Andres Freund wrote:\n> On 2018-08-14 15:18:55 +0200, Alexis Lê-Quôc wrote:\n> > + 30.25% 26.78% postgres postgres [.] mdnblocks\n> \n> This I've likely fixed ~two years back:\n> \n> http://archives.postgresql.org/message-id/72a98a639574d2e25ed94652848555900c81a799\n\nErr, wrong keyboard shortcut *and* wrong commit hash:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=45e191e3aa62d47a8bc1a33f784286b2051f45cb\n\n- Andres Thanks for the commit in the first place and the reference now; that's a very logical explanation. We're now off 9.3.",
"msg_date": "Fri, 17 Aug 2018 15:21:19 +0200",
"msg_from": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bi-modal streaming replication throughput"
},
{
"msg_contents": "On 2018-08-17 15:21:19 +0200, Alexis L�-Qu�c wrote:\n> On Tue, Aug 14, 2018 at 7:50 PM Andres Freund <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On 2018-08-14 10:46:45 -0700, Andres Freund wrote:\n> > > On 2018-08-14 15:18:55 +0200, Alexis L�-Qu�c wrote:\n> > > > + 30.25% 26.78% postgres postgres [.] mdnblocks\n> > >\n> > > This I've likely fixed ~two years back:\n> > >\n> > >\n> > http://archives.postgresql.org/message-id/72a98a639574d2e25ed94652848555900c81a799\n> >\n> > Err, wrong keyboard shortcut *and* wrong commit hash:\n> >\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=45e191e3aa62d47a8bc1a33f784286b2051f45cb\n> >\n> > - Andres\n> >\n> \n> Thanks for the commit in the first place and the reference now; that's a\n> very logical explanation. We're now off 9.3.\n\nGlad to help. Did the migration appear to have resolved the worst\nperformance issues?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 17 Aug 2018 06:46:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bi-modal streaming replication throughput"
}
] |
[
{
"msg_contents": "Hello PostgreSQL community,\r\nI am helping with a benchmarking exercise using PGSQL (I chair the TPC subcommittee<http://www.tpc.org/tpcx-v/default.asp> that has released a benchmark using PGSQL). A requirement of the benchmark is having enough log space allocated for 8 hours of running without needing to archive, back up, etc. I am trying to a) figure out how I can establish the exact space usage for the auditor; and b) how I can reduce the log space usage. Looking at iostat and pgstatspack, it looks like we will need to allocate something like 1.5TB of log space for a 5TB database, which is a huge ratio. (Yes, in the real world, we’d probably archive or ship the logs; but for benchmarking, that doesn’t work)\r\n\r\npgstatspack gives me something like below:\r\n\r\n\r\nbackground writer stats\r\n\r\n checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\r\n\r\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\r\n\r\n 22 | 0 | 6416768 | 2252636 | 0 | 280211 | 9786558\r\n\r\n(1 row)\r\n\r\n\r\n\r\n\r\n\r\nbackground writer relative stats\r\n\r\n checkpoints_timed | minutes_between_checkpoint | buffers_checkpoint | buffers_clean | buffers_backend | total_writes | avg_checkpoint_write\r\n\r\n-------------------+----------------------------+--------------------+---------------+-----------------+--------------+----------------------\r\n\r\n 100% | 6 | 71% | 25% | 3% | 8.659 MB/s | 2278.000 MB\r\n\r\nI can calculate how many checkpoint segments I have used from the MB/s. But is there a more direct way of seeing how/when a checkpoint segment is filled up and we move on to the next one?\r\n\r\nAlso, it looks like the full_page_writes parameter is the only thing that can help reduce the log usage size, but that I have to set it to 1 to avoid corruption after a system crash, which is a requirement. Another requirement is a very short, 6-minute checkpoint time, which means we will likely write the full page very often. 
Yes, my hands are tied!\r\n\r\nHere are the relevant non-default settings:\r\n\r\n\r\nshared_buffers = 18000MB # min 128kB\r\n\r\ntemp_buffers = 2MB # min 800kB\r\n\r\nmaintenance_work_mem = 5MB # min 1MB\r\n\r\nbgwriter_delay = 10ms # 10-10000ms between rounds\r\n\r\nbgwriter_lru_maxpages = 200 # 0-1000 max buffers written/round\r\n\r\neffective_io_concurrency = 10 # 1-1000; 0 disables prefetching\r\n\r\nwal_sync_method = open_datasync # the default is the first option\r\n\r\nwal_buffers = 16MB # min 32kB, -1 sets based on shared_buffers\r\n\r\nwal_writer_delay = 10ms # 1-10000 milliseconds\r\n\r\ncheckpoint_segments = 750 # in logfile segments, min 1, 16MB each\r\n\r\ncheckpoint_timeout = 6min # range 30s-1h\r\n\r\ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0\r\n\r\neffective_cache_size = 512MB\r\n\r\ndefault_statistics_target = 10000 # range 1-10000\r\n\r\nlog_destination = 'stderr' # Valid values are combinations of\r\n\r\nlogging_collector = on # Enable capturing of stderr and csvlog\r\n\r\nlog_directory = 'pg_log' # directory where log files are written,\r\n\r\nlog_filename = 'postgresql-%a.log' # log file name pattern,\r\n\r\nlog_truncate_on_rotation = on # If on, an existing log file with the\r\n\r\nlog_rotation_age = 1d # Automatic rotation of logfiles will\r\n\r\nlog_rotation_size = 0 # Automatic rotation of logfiles will\r\n\r\nlog_checkpoints = on\r\n\r\n\n\n\n\n\n\n\n\n\nHello PostgreSQL community,\nI am helping with a benchmarking exercise using PGSQL (I chair\r\nthe TPC subcommittee that has released a benchmark using PGSQL). A requirement of the benchmark is having enough log space allocated for 8 hours of running without needing to archive, back up, etc. I am trying\r\n to a) figure out how I can establish the exact space usage for the auditor; and b) how I can reduce the log space usage. Looking at iostat and pgstatspack, it looks like we will need to allocate something like 1.5TB of log space for a 5TB database, which is\r\n a huge ratio. (Yes, in the real world, we’d probably archive or ship the logs; but for benchmarking, that doesn’t work)\n \npgstatspack gives me something like below:\n \nbackground writer stats\n checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc \n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n 22 |\r\n 0 | \r\n6416768 | \r\n2252636 | \r\n0 | \n280211 | \n9786558\n(1 row)\n \n \nbackground writer relative stats\n checkpoints_timed | minutes_between_checkpoint | buffers_checkpoint | buffers_clean | buffers_backend | total_writes | avg_checkpoint_write \n-------------------+----------------------------+--------------------+---------------+-----------------+--------------+----------------------\n 100% \r\n| \r\n6 | 71% \r\n| 25% \r\n| 3% \r\n| 8.659 MB/s \r\n| 2278.000 MB\n \nI can calculate how many checkpoint segments I have used from the MB/s. But is there a more direct way of seeing how/when a checkpoint segment is filled up and we move on to the next one?\n \nAlso, it looks like the full_page_writes parameter is the only thing that can help reduce the log usage size, but that I have to set it to 1 to avoid corruption after a system crash, which is a requirement.\r\n Another requirement is a very short, 6-minute checkpoint time, which means we will likely write the full page very often. 
Yes, my hands are tied!\n \nHere are the relevant non-default settings:\n \nshared_buffers = 18000MB \r\n# min 128kB\ntemp_buffers = 2MB \r\n# min 800kB\nmaintenance_work_mem = 5MB \r\n# min 1MB\nbgwriter_delay = 10ms \r\n# 10-10000ms between rounds\nbgwriter_lru_maxpages = 200 \r\n# 0-1000 max buffers written/round\neffective_io_concurrency = 10 \r\n# 1-1000; 0 disables prefetching\nwal_sync_method = open_datasync \r\n# the default is the first option\nwal_buffers = 16MB \r\n# min 32kB, -1 sets based on shared_buffers\nwal_writer_delay = 10ms \r\n# 1-10000 milliseconds\ncheckpoint_segments = 750 \r\n# in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 6min \r\n# range 30s-1h\ncheckpoint_completion_target = 0.9\n# checkpoint target duration, 0.0 - 1.0\neffective_cache_size = 512MB\ndefault_statistics_target = 10000 \r\n# range 1-10000\nlog_destination = 'stderr' \r\n# Valid values are combinations of\nlogging_collector = on \r\n# Enable capturing of stderr and csvlog\nlog_directory = 'pg_log' \r\n# directory where log files are written,\nlog_filename = 'postgresql-%a.log'\n# log file name pattern,\nlog_truncate_on_rotation = on \r\n# If on, an existing log file with the\nlog_rotation_age = 1d \r\n# Automatic rotation of logfiles will\nlog_rotation_size = 0 \r\n# Automatic rotation of logfiles will\nlog_checkpoints = on",
"msg_date": "Tue, 14 Aug 2018 18:51:34 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Calculating how much redo log space has been used"
},
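One way to get an auditable number for WAL volume, rather than inferring it from iostat, is to snapshot the WAL insert position before and after the measured run and take the difference. A sketch using the pre-10 function names that match a checkpoint_segments-era release; the literal LSN below is a placeholder standing in for whatever value the first query returns at the start of the run:

    -- at the start of the run, record the current WAL insert position
    SELECT pg_current_xlog_location();    -- e.g. returns '2A4/12B3C4D0'

    -- at the end of the run, compute how many bytes of WAL were generated in between
    SELECT pg_xlog_location_diff(pg_current_xlog_location(), '2A4/12B3C4D0') AS wal_bytes;

Dividing wal_bytes by the elapsed time gives the MB/s rate directly, which can then be scaled up to the 8-hour allocation requirement.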
{
"msg_contents": "Hi,\n\nOn 2018-08-14 18:51:34 +0000, Reza Taheri wrote:\n> Also, it looks like the full_page_writes parameter is the only thing\n> that can help reduce the log usage size\n\nThere's also wal_compression.\n\n\n> Another requirement is a very short, 6-minute checkpoint time, which\n> means we will likely write the full page very often. Yes, my hands are\n> tied!\n\nWhy is that a requirement / how is specifically phrased? Is it a bounded\nrecovery time?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 14 Aug 2018 12:31:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Calculating how much redo log space has been used"
},
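For reference, wal_compression (available from 9.5 onward, so it presumes the upgrade implied by the checkpoint_segments-based configuration above) compresses the full-page images that a 6-minute checkpoint interval makes so frequent, and it can be enabled without a restart. A minimal sketch:

    ALTER SYSTEM SET wal_compression = on;  -- 9.5+: trades some CPU for less WAL
    SELECT pg_reload_conf();                -- a reload is enough for this setting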
{
"msg_contents": "> -----Original Message-----\n> From: Andres Freund [mailto:[email protected]]\n> Sent: Tuesday, August 14, 2018 12:31 PM\n> To: Reza Taheri <[email protected]>\n> Cc: [email protected]\n> Subject: Re: Calculating how much redo log space has been used\n> \n> Hi,\n> \n> On 2018-08-14 18:51:34 +0000, Reza Taheri wrote:\n> > Also, it looks like the full_page_writes parameter is the only thing\n> > that can help reduce the log usage size\n> \n> There's also wal_compression.\n> \n> \n> > Another requirement is a very short, 6-minute checkpoint time, which\n> > means we will likely write the full page very often. Yes, my hands are\n> > tied!\n> \n> Why is that a requirement / how is specifically phrased? Is it a bounded\n> recovery time?\n> \n> Greetings,\n> \n> Andres Freund\n\nHi Andres,\nGood to know about wal_compression. It gives us a good reason to upgrade to 9.5 to get that feature.\n\nThe need for a 6-minute checkpoint came from this requirement in the benchmark specification:\n\nthe database contents (excluding the transaction log) stored on Durable Media cannot be more than 12 minutes older than any Committed state of the database.\nComment: This may mean that Database Management Systems implementing traditional checkpoint algorithms may need to perform checkpoints twice as frequently (i.e. every 6 minutes) in order to guarantee that the 12-minute requirement is met.\n\nBut in any case, I now realize that I was going into the weeds, looking at the wrong thing. My original issue was figuring out how quickly we churn through checkpoint segment files, and had been looking at the checkpoint stats in pgstatspack to figure that out. But that's the wrong place to look. I don't think there is anything in the pgstatspack output that can give me that information. I can tell by looking at the timestamps of the checkpoint segment files, but I was hoping to find something that gets logged in pg_log/postgresql-*log and tells me when we switch to a new log\n\nThanks,\nReza\n\n",
"msg_date": "Wed, 15 Aug 2018 20:03:00 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Calculating how much redo log space has been used"
}
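On the question of seeing when the server moves on to a new WAL segment: with log_checkpoints = on, the "checkpoint complete" log line already reports how many transaction log files were added, removed and recycled, and the current segment can also be sampled directly. A sketch with the pre-10 function names; \watch is just a psql convenience for repeating the query:

    -- the returned file name changes every time a new 16MB segment is started
    SELECT now(), pg_xlogfile_name(pg_current_xlog_location());
    -- in psql:  \watch 10   to re-run the query every 10 seconds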
] |
[
{
"msg_contents": "One of our database API's is run concurrently by near 40 sessions. We see\nall of them waiting back and forth on this wait state.\n\nThere is one scenario described in some forum where sessions connected a\nread-only replica are affected. This does not apply to our use case.\n\nWhy is it called Subtrans Control Lock?\nWhat are the common user session scenarios causing this wait?\n - I have read some describe the use of SQL savepoints or PL/pgSQL\nexception handling.\nWhat are known resolution measures?\n\n\n----------------------------------------\nThank you\n\nOne of our database API's is run concurrently by near 40 sessions. We see all of them waiting back and forth on this wait state.There is one scenario described in some forum where sessions connected a read-only replica are affected. This does not apply to our use case.Why is it called Subtrans Control Lock?What are the common user session scenarios causing this wait? - I have read some describe the use of SQL savepoints or PL/pgSQL exception handling. What are known resolution measures? ----------------------------------------Thank you",
"msg_date": "Thu, 16 Aug 2018 14:19:11 -0400",
"msg_from": "Fred Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Guideline To Resolve LWLock:SubtransControlLock"
},
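On 9.6 the wait itself can be confirmed from pg_stat_activity, which is presumably also where the "all sessions waiting back and forth" observation above comes from. A minimal sketch:

    -- sessions currently waiting on the subtransaction lookup structure
    SELECT pid, state, wait_event_type, wait_event, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE wait_event = 'SubtransControlLock';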
{
"msg_contents": "On 2018-Aug-16, Fred Habash wrote:\n\n> One of our database API's is run concurrently by near 40 sessions. We see\n> all of them waiting back and forth on this wait state.\n\nWhat version are you running?\n\n> Why is it called Subtrans Control Lock?\n\nIt controls access to the pg_subtrans structure, which is used to record\nparent/child transaction relationships (as you say, savepoints and\nEXCEPTIONs in plpgsql are the most common uses, but not the only ones).\nNormally lookup of these is optimized away, but once you cross a\nthreshold it cannot any longer.\n\n> What are the common user session scenarios causing this wait?\n> - I have read some describe the use of SQL savepoints or PL/pgSQL\n> exception handling.\n> What are known resolution measures?\n\nAre you in a position to recompile Postgres?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 16 Aug 2018 15:36:32 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guideline To Resolve LWLock:SubtransControlLock"
},
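To make the savepoint/EXCEPTION point concrete: every entry into a plpgsql block that has an EXCEPTION clause opens a subtransaction, exactly as an explicit SAVEPOINT does, and each such subtransaction that writes gets its own XID recorded in pg_subtrans. The threshold Alvaro mentions is the per-backend cache of 64 subtransaction XIDs in a stock build; once a single transaction exceeds it, other sessions' snapshots have to consult pg_subtrans directly. A sketch with made-up table and function names:

    -- hypothetical example: the inner BEGIN/EXCEPTION block is an implicit savepoint
    CREATE OR REPLACE FUNCTION guarded_update(p_id int) RETURNS void AS $$
    BEGIN
      BEGIN                                   -- subtransaction starts here
        UPDATE accounts SET balance = balance + 1 WHERE id = p_id;
      EXCEPTION WHEN others THEN
        RAISE NOTICE 'update of % skipped: %', p_id, SQLERRM;
      END;                                    -- subtransaction ends here
    END;
    $$ LANGUAGE plpgsql;

Calling a function like this many times inside one long-running transaction is what pushes a session past the cached-subxid limit.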
{
"msg_contents": "Aurora Postgres 9.6.3\nSo, no chance to recompile (AFAIK).\n\nIs there a design anti-pattern at the schema or data access level that we\nshould look for and correct?\n\nAnd as for the recompile, are you thinking 'NUM_SUBTRANS_BUFFERS'?\n\nThanks\n\n\n\nOn Thu, Aug 16, 2018 at 2:36 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2018-Aug-16, Fred Habash wrote:\n>\n> > One of our database API's is run concurrently by near 40 sessions. We see\n> > all of them waiting back and forth on this wait state.\n>\n> What version are you running?\n>\n> > Why is it called Subtrans Control Lock?\n>\n> It controls access to the pg_subtrans structure, which is used to record\n> parent/child transaction relationships (as you say, savepoints and\n> EXCEPTIONs in plpgsql are the most common uses, but not the only ones).\n> Normally lookup of these is optimized away, but once you cross a\n> threshold it cannot any longer.\n>\n> > What are the common user session scenarios causing this wait?\n> > - I have read some describe the use of SQL savepoints or PL/pgSQL\n> > exception handling.\n> > What are known resolution measures?\n>\n> Are you in a position to recompile Postgres?\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \n\n----------------------------------------\nThank you\n\nAurora Postgres 9.6.3So, no chance to recompile (AFAIK).Is there a design anti-pattern at the schema or data access level that we should look for and correct? And as for the recompile, are you thinking 'NUM_SUBTRANS_BUFFERS'?Thanks On Thu, Aug 16, 2018 at 2:36 PM Alvaro Herrera <[email protected]> wrote:On 2018-Aug-16, Fred Habash wrote:\n\n> One of our database API's is run concurrently by near 40 sessions. We see\n> all of them waiting back and forth on this wait state.\n\nWhat version are you running?\n\n> Why is it called Subtrans Control Lock?\n\nIt controls access to the pg_subtrans structure, which is used to record\nparent/child transaction relationships (as you say, savepoints and\nEXCEPTIONs in plpgsql are the most common uses, but not the only ones).\nNormally lookup of these is optimized away, but once you cross a\nthreshold it cannot any longer.\n\n> What are the common user session scenarios causing this wait?\n> - I have read some describe the use of SQL savepoints or PL/pgSQL\n> exception handling.\n> What are known resolution measures?\n\nAre you in a position to recompile Postgres?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n-- ----------------------------------------Thank you",
"msg_date": "Fri, 17 Aug 2018 14:07:12 -0400",
"msg_from": "Fred Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guideline To Resolve LWLock:SubtransControlLock"
},
{
"msg_contents": "On 2018-Aug-17, Fred Habash wrote:\n\n> Aurora Postgres 9.6.3\n\nOh, okay, I don't know this one. Did you contact Amazon support?\n\n> So, no chance to recompile (AFAIK).\n> Is there a design anti-pattern at the schema or data access level that we\n> should look for and correct?\n\nMaybe ...\n\n> And as for the recompile, are you thinking 'NUM_SUBTRANS_BUFFERS'?\n\nYes, that's one option, but there's also TOTAL_MAX_CACHED_SUBXIDS.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 17 Aug 2018 15:26:45 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guideline To Resolve LWLock:SubtransControlLock"
},
{
"msg_contents": "Thanks. \nHow do we go about calculating appropriate values for these two parameters ...\n\n> 'NUM_SUBTRANS_BUFFERS'?\nTOTAL_MAX_CACHED_SUBXIDS\n\nAnd do both require a recompile?\n\n\n————-\nThank you. \n\nOn Aug 17, 2018, at 2:26 PM, Alvaro Herrera <[email protected]> wrote:\n\n>> And as for the recompile, are you thinking 'NUM_SUBTRANS_BUFFERS'?\n> \n> Yes, that's one option, but there's also TOTAL_MAX_CACHED_SUBXIDS.\n\nThanks. How do we go about calculating appropriate values for these two parameters ...'NUM_SUBTRANS_BUFFERS'?TOTAL_MAX_CACHED_SUBXIDSAnd do both require a recompile?————-Thank you. On Aug 17, 2018, at 2:26 PM, Alvaro Herrera <[email protected]> wrote:And as for the recompile, are you thinking 'NUM_SUBTRANS_BUFFERS'?Yes, that's one option, but there's also TOTAL_MAX_CACHED_SUBXIDS.",
"msg_date": "Mon, 20 Aug 2018 16:52:47 -0400",
"msg_from": "Fred Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guideline To Resolve LWLock:SubtransControlLock"
},
{
"msg_contents": "On 2018-Aug-20, Fred Habash wrote:\n\n> How do we go about calculating appropriate values for these two parameters ...\n\nI don't know a lot about your system, so don't have anything to go on.\nAlso, Aurora is mostly unknown to me. What did Amazon say?\n\n> > 'NUM_SUBTRANS_BUFFERS'?\n> TOTAL_MAX_CACHED_SUBXIDS\n> \n> And do both require a recompile?\n\nYes. But maybe they'll just move the contention point a little bit\nbackwards without actually fixing anything.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 20 Aug 2018 18:00:27 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guideline To Resolve LWLock:SubtransControlLock"
},
{
"msg_contents": "On 8/17/18 11:07, Fred Habash wrote:\n> Aurora Postgres 9.6.3\n\nHi Fred! The Amazon team does watch the AWS forums and that's the place\nto raise questions that are specific to PostgreSQL on RDS or questions\nspecific to Aurora. In fact we would love to see this question over\nthere since it might be something other people see as well.\n\nhttps://forums.aws.amazon.com/forum.jspa?forumID=227\n\nThat said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/child\ntransaction relationships pretty much the same way that community\nPostgreSQL 9.6.3 does. The uses you pointed out (savepoints and\nexceptions in plpgsql) are the most common causes of contention I've\nseen - similar to what Alvaro said his experience is. I have seen\napplications grind to a halt on SubtransControlLock when they make heavy\nuse of exception blocks in plpgsql code; in fact it's pretty\nstraightforward to demonstrate this behavior with pgbench on community\nPostgreSQL.\n\nOn 8/20/18 14:00, Alvaro Herrera wrote:\n>> And do both require a recompile?\n>\n> Yes. But maybe they'll just move the contention point a little bit\n> backwards without actually fixing anything.\n\nWhen it comes to resolution, I agree with Alvaro's assessment here;\nunfortunately, I don't know of a great solution on community PostgreSQL\noutside of trying to reduce the use of exception blocks in your plpgsql\ncode. Increasing the cache size can give a little more head room but\ndoesn't move the contention point significantly. That single global\ncontrol lock is hard to get around when you try to use subtransactions\nat scale.\n\n-Jeremy\n\nP.S. This applies on the Aurora PostgreSQL 9.6.3 build too but I'm\ndiscussing here in the context of community PostgreSQL code and we can\nput further Aurora-specific discussion on the AWS forums.\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n",
"msg_date": "Mon, 20 Aug 2018 15:14:49 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guideline To Resolve LWLock:SubtransControlLock"
},
{
"msg_contents": "Thanks, Jeremy …\n\n“ That said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/child transaction relationships pretty much the same way that community PostgreSQL 9.6.3 does …”\n\nThis is why I posted here first. This particular wait state did not appear to be Aurora specific and was not listed as part of https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraPostgreSQL.Reference.html#AuroraPostgreSQL.Reference.Waitevents\n\nI go back and forth posting issues between the two forums depending on the nature of it.\n\n\n----------------\nThank you\n\nFrom: Jeremy Schneider\nSent: Monday, August 20, 2018 6:19 PM\nTo: Fred Habash\nCc: [email protected]\nSubject: Re: Guideline To Resolve LWLock:SubtransControlLock\n\nOn 8/17/18 11:07, Fred Habash wrote:\n> Aurora Postgres 9.6.3\n\nHi Fred! The Amazon team does watch the AWS forums and that's the place\nto raise questions that are specific to PostgreSQL on RDS or questions\nspecific to Aurora. In fact we would love to see this question over\nthere since it might be something other people see as well.\n\nhttps://forums.aws.amazon.com/forum.jspa?forumID=227\n\nThat said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/child\ntransaction relationships pretty much the same way that community\nPostgreSQL 9.6.3 does. The uses you pointed out (savepoints and\nexceptions in plpgsql) are the most common causes of contention I've\nseen - similar to what Alvaro said his experience is. I have seen\napplications grind to a halt on SubtransControlLock when they make heavy\nuse of exception blocks in plpgsql code; in fact it's pretty\nstraightforward to demonstrate this behavior with pgbench on community\nPostgreSQL.\n\nOn 8/20/18 14:00, Alvaro Herrera wrote:\n>> And do both require a recompile?\n>\n> Yes. But maybe they'll just move the contention point a little bit\n> backwards without actually fixing anything.\n\nWhen it comes to resolution, I agree with Alvaro's assessment here;\nunfortunately, I don't know of a great solution on community PostgreSQL\noutside of trying to reduce the use of exception blocks in your plpgsql\ncode. Increasing the cache size can give a little more head room but\ndoesn't move the contention point significantly. That single global\ncontrol lock is hard to get around when you try to use subtransactions\nat scale.\n\n-Jeremy\n\nP.S. This applies on the Aurora PostgreSQL 9.6.3 build too but I'm\ndiscussing here in the context of community PostgreSQL code and we can\nput further Aurora-specific discussion on the AWS forums.\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\nThanks, Jeremy … “ That said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/child transaction relationships pretty much the same way that community PostgreSQL 9.6.3 does …” This is why I posted here first. This particular wait state did not appear to be Aurora specific and was not listed as part of https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraPostgreSQL.Reference.html#AuroraPostgreSQL.Reference.Waitevents I go back and forth posting issues between the two forums depending on the nature of it. ----------------Thank you From: Jeremy SchneiderSent: Monday, August 20, 2018 6:19 PMTo: Fred HabashCc: [email protected]: Re: Guideline To Resolve LWLock:SubtransControlLock On 8/17/18 11:07, Fred Habash wrote:> Aurora Postgres 9.6.3 Hi Fred! The Amazon team does watch the AWS forums and that's the placeto raise questions that are specific to PostgreSQL on RDS or questionsspecific to Aurora. 
In fact we would love to see this question overthere since it might be something other people see as well. https://forums.aws.amazon.com/forum.jspa?forumID=227 That said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/childtransaction relationships pretty much the same way that communityPostgreSQL 9.6.3 does. The uses you pointed out (savepoints andexceptions in plpgsql) are the most common causes of contention I'veseen - similar to what Alvaro said his experience is. I have seenapplications grind to a halt on SubtransControlLock when they make heavyuse of exception blocks in plpgsql code; in fact it's prettystraightforward to demonstrate this behavior with pgbench on communityPostgreSQL. On 8/20/18 14:00, Alvaro Herrera wrote:>> And do both require a recompile?> > Yes. But maybe they'll just move the contention point a little bit> backwards without actually fixing anything. When it comes to resolution, I agree with Alvaro's assessment here;unfortunately, I don't know of a great solution on community PostgreSQLoutside of trying to reduce the use of exception blocks in your plpgsqlcode. Increasing the cache size can give a little more head room butdoesn't move the contention point significantly. That single globalcontrol lock is hard to get around when you try to use subtransactionsat scale. -Jeremy P.S. This applies on the Aurora PostgreSQL 9.6.3 build too but I'mdiscussing here in the context of community PostgreSQL code and we canput further Aurora-specific discussion on the AWS forums. -- Jeremy SchneiderDatabase EngineerAmazon Web Services",
"msg_date": "Wed, 22 Aug 2018 11:48:01 -0400",
"msg_from": "Fd Habash <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Guideline To Resolve LWLock:SubtransControlLock"
},
{
"msg_contents": "Jeremy …\n\nIn your statement, what constitutes ‘heavy use of exception blocks’? \n\nThanks \n\n\n\nI have seen\napplications grind to a halt on SubtransControlLock when they make heavy\nuse of exception blocks in plpgsql code; in fact it's pretty\nstraightforward to demonstrate this behavior with pgbench on community\nPostgreSQL.\n\n----------------\nThank you\n\nFrom: Jeremy Schneider\nSent: Monday, August 20, 2018 6:19 PM\nTo: Fred Habash\nCc: [email protected]\nSubject: Re: Guideline To Resolve LWLock:SubtransControlLock\n\nOn 8/17/18 11:07, Fred Habash wrote:\n> Aurora Postgres 9.6.3\n\nHi Fred! The Amazon team does watch the AWS forums and that's the place\nto raise questions that are specific to PostgreSQL on RDS or questions\nspecific to Aurora. In fact we would love to see this question over\nthere since it might be something other people see as well.\n\nhttps://forums.aws.amazon.com/forum.jspa?forumID=227\n\nThat said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/child\ntransaction relationships pretty much the same way that community\nPostgreSQL 9.6.3 does. The uses you pointed out (savepoints and\nexceptions in plpgsql) are the most common causes of contention I've\nseen - similar to what Alvaro said his experience is. I have seen\napplications grind to a halt on SubtransControlLock when they make heavy\nuse of exception blocks in plpgsql code; in fact it's pretty\nstraightforward to demonstrate this behavior with pgbench on community\nPostgreSQL.\n\nOn 8/20/18 14:00, Alvaro Herrera wrote:\n>> And do both require a recompile?\n>\n> Yes. But maybe they'll just move the contention point a little bit\n> backwards without actually fixing anything.\n\nWhen it comes to resolution, I agree with Alvaro's assessment here;\nunfortunately, I don't know of a great solution on community PostgreSQL\noutside of trying to reduce the use of exception blocks in your plpgsql\ncode. Increasing the cache size can give a little more head room but\ndoesn't move the contention point significantly. That single global\ncontrol lock is hard to get around when you try to use subtransactions\nat scale.\n\n-Jeremy\n\nP.S. This applies on the Aurora PostgreSQL 9.6.3 build too but I'm\ndiscussing here in the context of community PostgreSQL code and we can\nput further Aurora-specific discussion on the AWS forums.\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\nJeremy … In your statement, what constitutes ‘heavy use of exception blocks’? Thanks I have seenapplications grind to a halt on SubtransControlLock when they make heavyuse of exception blocks in plpgsql code; in fact it's prettystraightforward to demonstrate this behavior with pgbench on communityPostgreSQL. ----------------Thank you From: Jeremy SchneiderSent: Monday, August 20, 2018 6:19 PMTo: Fred HabashCc: [email protected]: Re: Guideline To Resolve LWLock:SubtransControlLock On 8/17/18 11:07, Fred Habash wrote:> Aurora Postgres 9.6.3 Hi Fred! The Amazon team does watch the AWS forums and that's the placeto raise questions that are specific to PostgreSQL on RDS or questionsspecific to Aurora. In fact we would love to see this question overthere since it might be something other people see as well. https://forums.aws.amazon.com/forum.jspa?forumID=227 That said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/childtransaction relationships pretty much the same way that communityPostgreSQL 9.6.3 does. 
The uses you pointed out (savepoints andexceptions in plpgsql) are the most common causes of contention I'veseen - similar to what Alvaro said his experience is. I have seenapplications grind to a halt on SubtransControlLock when they make heavyuse of exception blocks in plpgsql code; in fact it's prettystraightforward to demonstrate this behavior with pgbench on communityPostgreSQL. On 8/20/18 14:00, Alvaro Herrera wrote:>> And do both require a recompile?> > Yes. But maybe they'll just move the contention point a little bit> backwards without actually fixing anything. When it comes to resolution, I agree with Alvaro's assessment here;unfortunately, I don't know of a great solution on community PostgreSQLoutside of trying to reduce the use of exception blocks in your plpgsqlcode. Increasing the cache size can give a little more head room butdoesn't move the contention point significantly. That single globalcontrol lock is hard to get around when you try to use subtransactionsat scale. -Jeremy P.S. This applies on the Aurora PostgreSQL 9.6.3 build too but I'mdiscussing here in the context of community PostgreSQL code and we canput further Aurora-specific discussion on the AWS forums. -- Jeremy SchneiderDatabase EngineerAmazon Web Services",
"msg_date": "Wed, 22 Aug 2018 16:07:53 -0400",
"msg_from": "Fd Habash <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Guideline To Resolve LWLock:SubtransControlLock"
},
{
"msg_contents": "On 8/22/18 13:07, Fd Habash wrote:\n> In your statement, what constitutes ‘heavy use of exception blocks’? \n> \n> \"I have seen\n> applications grind to a halt on SubtransControlLock when they make heavy\n> use of exception blocks in plpgsql code; in fact it's pretty\n> straightforward to demonstrate this behavior with pgbench on community\n> PostgreSQL.\"\n\nIn one of the most dramatic cases I saw, the customer was migrating from\nanother database system and had a very large workload running on the\nlargest instance class we currently offer. They were quite savvy and had\nalready gone through all of the procedural code they migrated and\nremoved all of the exception blocks. Nonetheless, when they hit their\npeak workload, we observed this wait event.\n\nIt was finally discovered that the framework/ORM they were using had a\ncapability to automatically use savepoints for partial rollback. They\nhad not explicitly configured it (afaik) - but their framework was using\nsavepoints. In some complex code paths we were seeing several hundred\nsubtransactions within one master transaction.\n\nI haven't thoroughly tested yet, but anecdotally I don't think that\nyou'll have a problem with contention on this lock until you get to a\nsufficiently large database server. The machine I described above was a\n32-core box; I suspect that a box with 2 cores is going to be waiting on\nsomething else before it gets stuck here. If you want to see a system\nchoke on this lock, just spin up a 32-core box and run two separate\npgbenchs in parallel (needs to be two)... the first as select-only and\nthe second modified to create some savepoints while updating\npgbench_accounts.\n\nTo directly answer the question \"what constitutes heavy use\": if folks\nare building high-throughput applications that they expect to scale\nnicely on PostgreSQL up to 32-core boxes and beyond, I'd suggest\navoiding savepoints in any key codepaths that are part of the primary\ntransactional workload (low-latency operations that are executed many\ntimes per second).\n\n\nOn 8/22/18 08:48, Fd Habash wrote:\n> “ That said... FWIW, Aurora PostgreSQL version 9.6.3 uses parent/child\n> transaction relationships pretty much the same way that community\n> PostgreSQL 9.6.3 does …”\n>\n> This is why I posted here first. This particular wait state did not\n> appear to be Aurora specific and was not listed as part of\n>\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraPostgreSQL.Reference.html#AuroraPostgreSQL.Reference.Waitevents//\n>\n> I go back and forth posting issues between the two forums depending on\n> the nature of it.\n\nJust added it to the aforementioned Aurora docs, hopefully heading off a\nfew future questions.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n",
"msg_date": "Thu, 6 Sep 2018 08:16:29 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guideline To Resolve LWLock:SubtransControlLock"
}
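A sketch of the kind of pgbench setup described above, for anyone who wants to reproduce the contention on community PostgreSQL: one custom script that sprinkles savepoints into the pgbench_accounts update (the file name, client counts, and number of savepoints below are arbitrary), run alongside a plain select-only pgbench against the same scale-factor database on a many-core box. The \set random() syntax assumes a 9.6-era pgbench:

    -- savepoints.sql: each SAVEPOINT opens a subtransaction inside the enclosing BEGIN
    \set aid random(1, 100000 * :scale)
    BEGIN;
    SAVEPOINT s1;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    SAVEPOINT s2;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    SAVEPOINT s3;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    END;

Run it with something like pgbench -n -c 16 -T 300 -f savepoints.sql while a second pgbench -S -c 32 -T 300 runs concurrently, and watch pg_stat_activity for the SubtransControlLock wait event.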
] |
[
{
"msg_contents": "Hi. My databases make heavy use of timestamp ranges, and they rely on GIST\nexclusion constraints to ensure that the ranges are disjoint. I've noticed\nthat queries that hit the GIST indexes are EXTREMELY slow, and the queries\nrun much faster if I make trivial changes to avoid the GIST indexes.\n\nHere's the setup for a test case. (Timings were collected on PostgreSQL\n9.5.4 on x86_64-pc-linux-gnu, Intel Xeon E5 in a VM with 3.5 GB RAM):\n\nCREATE TABLE app (\n pk SERIAL PRIMARY KEY,\n group_id TEXT NOT NULL,\n app_time TIMESTAMPTZ NOT NULL\n);\n\nCREATE TABLE group_span (\n pk SERIAL PRIMARY KEY,\n group_id TEXT NOT NULL,\n valid_period TSTZRANGE NOT NULL,\n EXCLUDE USING GIST (group_id WITH =, valid_period WITH &&)\n);\n\nCREATE TABLE member_span (\n pk SERIAL PRIMARY KEY,\n member_id TEXT NOT NULL,\n group_id TEXT NOT NULL,\n valid_period TSTZRANGE NOT NULL,\n EXCLUDE USING GIST\n (member_id WITH =, group_id WITH =, valid_period WITH &&)\n);\n\n-- Fill tables with some random data\n\nINSERT INTO app (group_id, app_time)\nSELECT\n MD5(CONCAT(GENERATE_SERIES(1, 10000), RANDOM())),\n DATE_TRUNC('month', TIMESTAMPTZ '2000-01-01' +\n INTERVAL '3 years' * RANDOM());\n\n-- Give groups a 1-year span, and give some groups a 2nd-year span:\nINSERT INTO group_span (group_id, valid_period)\n(SELECT\n group_id,\n TSTZRANGE(app_time, app_time + INTERVAL '1 year')\n FROM app)\nUNION ALL\n(SELECT\n group_id,\n TSTZRANGE(app_time + INTERVAL '1 year',\n app_time + INTERVAL '2 year')\n FROM app LIMIT 2000);\n\n-- Create members with a random span within their group_span:\nINSERT INTO member_span (member_id, group_id, valid_period)\nSELECT\n MD5(RANDOM()::TEXT),\n group_id,\n TSTZRANGE(\n LOWER(valid_period),\n UPPER(valid_period) - DATE_TRUNC(\n 'days',\n (UPPER(valid_period) - LOWER(valid_period)) * RANDOM()\n )\n )\nFROM group_span;\n\n\nGiven this setup, here's a query that hits the GIST exclusion index on the\n\"member_span\" table. It takes 38 sec on my machine:\n\nSELECT *\nFROM app\nJOIN group_span ON\n app.group_id = group_span.group_id AND\n app.app_time <@ group_span.valid_period\nJOIN member_span ON\n group_span.group_id = member_span.group_id AND\n group_span.valid_period && member_span.valid_period;\n\nHere's the query plan for that query:\n\nNested Loop (cost=319.27..776.39 rows=1 width=196) (actual\ntime=15.370..38406.466 rows=10000 loops=1)\n Join Filter: (app.group_id = member_span.group_id)\n -> Hash Join (cost=319.00..771.00 rows=12 width=104) (actual\ntime=5.790..130.613 rows=10000 loops=1)\n Hash Cond: (group_span.group_id = app.group_id)\n Join Filter: (app.app_time <@ group_span.valid_period)\n Rows Removed by Join Filter: 2000\n -> Seq Scan on group_span (cost=0.00..257.00 rows=12000 width=59)\n(actual time=0.005..16.282 rows=12000 loops=1)\n -> Hash (cost=194.00..194.00 rows=10000 width=45) (actual\ntime=5.758..5.758 rows=10000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 910kB\n -> Seq Scan on app (cost=0.00..194.00 rows=10000 width=45)\n(actual time=0.002..2.426 rows=10000 loops=1)\n -> Index Scan using member_span_member_id_group_id_valid_period_excl on\nmember_span (cost=0.28..0.44 rows=1 width=92) (actual time=1.988..3.817\nrows=1 loops=10000)\n Index Cond: ((group_id = group_span.group_id) AND\n(group_span.valid_period && valid_period))\nPlanning time: 0.784 ms\nExecution time: 38410.227 ms\n\nWe can make a small tweak to the query to make it complicated enough that\nthe execution planner avoids the GIST index. 
In this particular case, we\ncan replace \"app.app_time <@ group_span.valid_period\" with the\nequivalent \"app.app_time\n>= LOWER(group_span.valid_period) AND app.app_time <\nUPPER(group_span.valid_period)\". This equivalent query is MUCH faster:\n\nSELECT *\nFROM app\nJOIN group_span ON\n app.group_id = group_span.group_id AND\n app.app_time >= LOWER(group_span.valid_period) AND\n app.app_time < UPPER(group_span.valid_period)\nJOIN member_span ON\n group_span.group_id = member_span.group_id AND\n group_span.valid_period && member_span.valid_period;\n\nIt only takes 86 ms, even though it's doing 3 seq scans instead of using\nthe index:\n\nHash Join (cost=953.71..1186.65 rows=8 width=196) (actual\ntime=58.364..84.706 rows=10000 loops=1)\n Hash Cond: (app.group_id = group_span.group_id)\n Join Filter: ((app.app_time >= lower(group_span.valid_period)) AND\n(app.app_time < upper(group_span.valid_period)))\n Rows Removed by Join Filter: 2000\n -> Seq Scan on app (cost=0.00..194.00 rows=10000 width=45) (actual\ntime=0.007..2.391 rows=10000 loops=1)\n -> Hash (cost=952.81..952.81 rows=72 width=151) (actual\ntime=58.343..58.343 rows=12000 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory\nUsage: 2285kB\n -> Hash Join (cost=407.00..952.81 rows=72 width=151) (actual\ntime=15.048..44.103 rows=12000 loops=1)\n Hash Cond: (member_span.group_id = group_span.group_id)\n Join Filter: (group_span.valid_period &&\nmember_span.valid_period)\n Rows Removed by Join Filter: 4000\n -> Seq Scan on member_span (cost=0.00..305.00 rows=12000\nwidth=92) (actual time=0.001..3.865 rows=12000 loops=1)\n -> Hash (cost=257.00..257.00 rows=12000 width=59) (actual\ntime=15.020..15.020 rows=12000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 1195kB\n -> Seq Scan on group_span (cost=0.00..257.00\nrows=12000 width=59) (actual time=0.003..2.863 rows=12000 loops=1)\nPlanning time: 0.651 ms\nExecution time: 86.721 ms\n\nFor now, I can bypass the GIST index by avoiding range operators in my\nqueries. But why is the GIST index so slow?",
"msg_date": "Tue, 28 Aug 2018 23:31:10 -0400",
"msg_from": "David <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extremely slow when query uses GIST exclusion index"
},
{
"msg_contents": "\n\nAm 29.08.2018 um 05:31 schrieb David:\n> For now, I can bypass the GIST index by avoiding range operators in my \n> queries. But why is the GIST index so slow?\n\nyour GiST-Index contains (member_id,group_id,valid_period), but your \nquery is only on the latter 2 fields.\n\n\ntest=*# create index test_index on member_span using gist \n(group_id,valid_period);\nCREATE INDEX\ntest=*# commit;\nCOMMIT\ntest=# explain analyse SELECT *\nFROM app\nJOIN group_span ON\n app.group_id = group_span.group_id AND\n app.app_time <@ group_span.valid_period\nJOIN member_span ON\n group_span.group_id = member_span.group_id AND\n group_span.valid_period && member_span.valid_period;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=319.27..776.18 rows=1 width=196) (actual \ntime=3.156..334.963 rows=10000 loops=1)\n Join Filter: (app.group_id = member_span.group_id)\n -> Hash Join (cost=319.00..771.00 rows=12 width=104) (actual \ntime=3.100..14.040 rows=10000 loops=1)\n Hash Cond: (group_span.group_id = app.group_id)\n Join Filter: (app.app_time <@ group_span.valid_period)\n Rows Removed by Join Filter: 2000\n -> Seq Scan on group_span (cost=0.00..257.00 rows=12000 \nwidth=59) (actual time=0.013..1.865 rows=12000 loops=1)\n -> Hash (cost=194.00..194.00 rows=10000 width=45) (actual \ntime=3.037..3.037 rows=10000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 910kB\n -> Seq Scan on app (cost=0.00..194.00 rows=10000 \nwidth=45) (actual time=0.010..1.201 rows=10000 loops=1)\n -> Index Scan using test_index on member_span (cost=0.28..0.42 \nrows=1 width=92) (actual time=0.027..0.031 rows=1 loops=10000)\n Index Cond: ((group_id = group_span.group_id) AND \n(group_span.valid_period && valid_period))\n Planning time: 2.160 ms\n Execution time: 335.820 ms\n(14 rows)\n\ntest=*#\n\n\nbetter?\n\nOkay, other solution. 
The problem is the nested loop, we can disable that:\n\n\ntest=*# set enable_nestloop to false;\nSET\ntest=*# explain analyse SELECT *\nFROM app\nJOIN group_span ON\n app.group_id = group_span.group_id AND\n app.app_time <@ group_span.valid_period\nJOIN member_span ON\n group_span.group_id = member_span.group_id AND\n group_span.valid_period && member_span.valid_period;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=771.15..1121.33 rows=1 width=196) (actual \ntime=23.291..32.028 rows=10000 loops=1)\n Hash Cond: (member_span.group_id = app.group_id)\n Join Filter: (group_span.valid_period && member_span.valid_period)\n Rows Removed by Join Filter: 2000\n -> Seq Scan on member_span (cost=0.00..305.00 rows=12000 width=92) \n(actual time=0.019..1.577 rows=12000 loops=1)\n -> Hash (cost=771.00..771.00 rows=12 width=104) (actual \ntime=23.254..23.254 rows=10000 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) \nMemory Usage: 1486kB\n -> Hash Join (cost=319.00..771.00 rows=12 width=104) (actual \ntime=7.968..18.951 rows=10000 loops=1)\n Hash Cond: (group_span.group_id = app.group_id)\n Join Filter: (app.app_time <@ group_span.valid_period)\n Rows Removed by Join Filter: 2000\n -> Seq Scan on group_span (cost=0.00..257.00 \nrows=12000 width=59) (actual time=0.010..2.068 rows=12000 loops=1)\n -> Hash (cost=194.00..194.00 rows=10000 width=45) \n(actual time=7.900..7.900 rows=10000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 910kB\n -> Seq Scan on app (cost=0.00..194.00 rows=10000 \nwidth=45) (actual time=0.011..3.165 rows=10000 loops=1)\n Planning time: 1.241 ms\n Execution time: 32.676 ms\n(17 rows)\n\ntest=*#\n\n\n\n\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n",
"msg_date": "Wed, 29 Aug 2018 12:50:43 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow when query uses GIST exclusion index"
},
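A note on the workaround above: because group_id is a plain integer, a composite GiST index like this relies on the btree_gist extension for its equality operator class (very likely installed already here, since the existing exclusion constraints mix scalar and range columns). A minimal sketch using the table and column names from the thread:

CREATE EXTENSION IF NOT EXISTS btree_gist;  -- GiST operator classes for plain scalar types

-- Cover only the two columns the join actually uses:
CREATE INDEX member_span_group_valid_idx
    ON member_span USING gist (group_id, valid_period);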
{
"msg_contents": "\n\nAm 29.08.2018 um 12:50 schrieb Andreas Kretschmer:\n> Okay, other solution. The problem is the nested loop, we can disable \n> that: \n\noh, i used PG 10, this time 9.5:\n\ntest=# explain analyse SELECT *\nFROM app\nJOIN group_span ON\n app.group_id = group_span.group_id AND\n app.app_time <@ group_span.valid_period\nJOIN member_span ON\n group_span.group_id = member_span.group_id AND\n group_span.valid_period && member_span.valid_period;\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.55..4740.90 rows=180 width=212) (actual \ntime=2.915..17624.676 rows=10000 loops=1)\n Join Filter: (app.group_id = member_span.group_id)\n -> Nested Loop (cost=0.28..4472.00 rows=600 width=112) (actual \ntime=0.292..347.838 rows=10000 loops=1)\n -> Seq Scan on app (cost=0.00..194.00 rows=10000 width=44) \n(actual time=0.012..2.689 rows=10000 loops=1)\n -> Index Scan using group_span_group_id_valid_period_excl on \ngroup_span (cost=0.28..0.42 rows=1 width=68) (actual time=0.029..0.033 \nrows=1 loops=10000)\n Index Cond: ((group_id = app.group_id) AND (app.app_time \n<@ valid_period))\n -> Index Scan using \nmember_span_member_id_group_id_valid_period_excl on member_span \n(cost=0.28..0.44 rows=1 width=100) (actual time=0.912..1.726 rows=1 \nloops=10000)\n Index Cond: ((group_id = group_span.group_id) AND \n(group_span.valid_period && valid_period))\n Planning time: 1.554 ms\n Execution time: 17627.266 ms\n(10 rows)\n\ntest=*# set enable_nestloop to false;\nSET\ntest=*# explain analyse SELECT *\nFROM app\nJOIN group_span ON\n app.group_id = group_span.group_id AND\n app.app_time <@ group_span.valid_period\nJOIN member_span ON\n group_span.group_id = member_span.group_id AND\n group_span.valid_period && member_span.valid_period;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=2383.43..14284.93 rows=180 width=212) (actual \ntime=42.440..63.834 rows=10000 loops=1)\n Hash Cond: (app.group_id = member_span.group_id)\n Join Filter: (group_span.valid_period && member_span.valid_period)\n Rows Removed by Join Filter: 2000\n -> Merge Join (cost=1928.43..12478.43 rows=600 width=112) (actual \ntime=34.068..47.954 rows=10000 loops=1)\n Merge Cond: (app.group_id = group_span.group_id)\n Join Filter: (app.app_time <@ group_span.valid_period)\n Rows Removed by Join Filter: 2000\n -> Sort (cost=858.39..883.39 rows=10000 width=44) (actual \ntime=15.331..17.104 rows=10000 loops=1)\n Sort Key: app.group_id\n Sort Method: quicksort Memory: 1166kB\n -> Seq Scan on app (cost=0.00..194.00 rows=10000 \nwidth=44) (actual time=0.004..1.070 rows=10000 loops=1)\n -> Sort (cost=1070.04..1100.04 rows=12000 width=68) (actual \ntime=18.720..20.712 rows=12000 loops=1)\n Sort Key: group_span.group_id\n Sort Method: quicksort Memory: 2072kB\n -> Seq Scan on group_span (cost=0.00..257.00 \nrows=12000 width=68) (actual time=0.007..1.396 rows=12000 loops=1)\n -> Hash (cost=305.00..305.00 rows=12000 width=100) (actual \ntime=8.198..8.198 rows=12000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 1582kB\n -> Seq Scan on member_span (cost=0.00..305.00 rows=12000 \nwidth=100) (actual time=0.011..2.783 rows=12000 loops=1)\n Planning time: 0.468 ms\n Execution time: 64.694 ms\n(21 rows)\n\ntest=*#\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support 
Company.\nwww.2ndQuadrant.com\n\n\n",
"msg_date": "Wed, 29 Aug 2018 13:25:55 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow when query uses GIST exclusion index"
},
{
"msg_contents": "Thanks for your help investigating this! Follow-up below:\n\nOn Wed, Aug 29, 2018 at 7:25 AM, Andreas Kretschmer <[email protected]\n> wrote:\n>\n> Okay, other solution. The problem is the nested loop, we can disable that:\n>>\n> test=*# set enable_nestloop to false;\n\n\nIs it OK to keep this off permanently in production? I thought these\nsettings were just for debugging, and once we've identified the culprit,\nwe're supposed to take other steps (?) to avoid the suboptimal execution\nplan.\n\nyour GiST-Index contains (member_id,group_id,valid_period), but your query\n> is only on the latter 2 fields.\n\n\nYeah, I didn't really want GIST index in the first place -- PostgreSQL\ncreated it automatically as a side effect of the exclusion constraint that\nI need.\n\nYour suggestion to create *another* GIST index is an interesting\nworkaround. But we've seen that the query runs even faster if we didn't\nhave the GIST index(es) at all. So is there any way to tell the planner to\navoid the GIST index altogether?\n\n(Alternatively, could there be a bug that's causing PostgreSQL to\nunderestimate the cost of using the GIST index?)\n\n\n> Nested Loop (cost=319.27..776.18 rows=1 width=196) (actual\n> time=3.156..334.963 rows=10000 loops=1)\n> Join Filter: (app.group_id = member_span.group_id)\n> -> Hash Join (cost=319.00..771.00 rows=12 width=104) (actual\n> time=3.100..14.040 rows=10000 loops=1)\n\n\nHm, also, it looks like one of the oddities of this query is that\nPostgreSQL is severely underestimating the cardinality of the join. It\nseems to think that the join will result in only 1 row, when the join\nactually produces 10,000 rows. Maybe that's why the planner thinks that\nusing the GIST index is cheap? (I.e., the planner thought that it would\nonly need to do 1 GIST index lookup, which is cheaper than a sequential\nscan; but in reality it has to do 10,000 GIST index lookups, which is much\nmore expensive than a scan.) Is there any way to help the planner better\nestimate how big the join output going to be?\n\nThanks for your help investigating this! Follow-up below:On Wed, Aug 29, 2018 at 7:25 AM, Andreas Kretschmer <[email protected]> wrote:Okay, other solution. The problem is the nested loop, we can disable that: \n\ntest=*# set enable_nestloop to false;Is it OK to keep this off permanently in production? I thought these settings were just for debugging, and once we've identified the culprit, we're supposed to take other steps (?) to avoid the suboptimal execution plan.your GiST-Index contains (member_id,group_id,valid_period), but your query is only on the latter 2 fields.Yeah, I didn't really want GIST index in the first place -- PostgreSQL created it automatically as a side effect of the exclusion constraint that I need.Your suggestion to create *another* GIST index is an interesting workaround. But we've seen that the query runs even faster if we didn't have the GIST index(es) at all. So is there any way to tell the planner to avoid the GIST index altogether?(Alternatively, could there be a bug that's causing PostgreSQL to underestimate the cost of using the GIST index?) Nested Loop (cost=319.27..776.18 rows=1 width=196) (actual time=3.156..334.963 rows=10000 loops=1) Join Filter: (app.group_id = member_span.group_id) -> Hash Join (cost=319.00..771.00 rows=12 width=104) (actual time=3.100..14.040 rows=10000 loops=1)Hm, also, it looks like one of the oddities of this query is that PostgreSQL is severely underestimating the cardinality of the join. 
It seems to think that the join will result in only 1 row, when the join actually produces 10,000 rows. Maybe that's why the planner thinks that using the GIST index is cheap? (I.e., the planner thought that it would only need to do 1 GIST index lookup, which is cheaper than a sequential scan; but in reality it has to do 10,000 GIST index lookups, which is much more expensive than a scan.) Is there any way to help the planner better estimate how big the join output going to be?",
"msg_date": "Wed, 29 Aug 2018 14:10:43 -0400",
"msg_from": "David <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely slow when query uses GIST exclusion index"
},
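On the last question: a common first attempt is to raise the statistics targets on the join keys and re-analyze, sketched below with the thread's table names (the target of 1000 is arbitrary). The misestimate here comes largely from the overlap (&&) join condition, whose join selectivity the planner estimates poorly, so better per-column statistics may not move the row count much.

ALTER TABLE group_span  ALTER COLUMN group_id SET STATISTICS 1000;
ALTER TABLE member_span ALTER COLUMN group_id SET STATISTICS 1000;
ANALYZE group_span;
ANALYZE member_span;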
{
"msg_contents": "\n\nAm 29.08.2018 um 20:10 schrieb David:\n>\n> On Wed, Aug 29, 2018 at 7:25 AM, Andreas Kretschmer \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Okay, other solution. The problem is the nested loop, we can\n> disable that:\n>\n> test=*# set enable_nestloop to false;\n>\n>\n> Is it OK to keep this off permanently in production?\n\nno, but you can switch off/on per session, for instance. and you can it \nset to on after that query.\n\n\n>\n> Nested Loop (cost=319.27..776.18 rows=1 width=196) (actual\n> time=3.156..334.963 rows=10000 loops=1)\n> Join Filter: (app.group_id = member_span.group_id)\n> -> Hash Join (cost=319.00..771.00 rows=12 width=104) (actual\n> time=3.100..14.040 rows=10000 loops=1)\n>\n>\n> Hm, also, it looks like one of the oddities of this query is that \n> PostgreSQL is severely underestimating the cardinality of the join.\n\nack, that's the main problem here, i think. It leads to the expensive \nnested loop. Tbh, i don't have a better suggestion now besides the \nworkaround with setting nestloop to off.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n",
"msg_date": "Wed, 29 Aug 2018 20:48:15 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely slow when query uses GIST exclusion index"
}
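If the workaround stays, it can at least be scoped so it never leaks into other queries. A minimal sketch: SET LOCAL limits the change to one transaction and reverts automatically at COMMIT or ROLLBACK.

BEGIN;
SET LOCAL enable_nestloop = off;   -- affects only this transaction
SELECT *
FROM app
JOIN group_span ON
    app.group_id = group_span.group_id AND
    app.app_time <@ group_span.valid_period
JOIN member_span ON
    group_span.group_id = member_span.group_id AND
    group_span.valid_period && member_span.valid_period;
COMMIT;

-- Session-scoped alternative:
SET enable_nestloop = off;
-- run the query
RESET enable_nestloop;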
] |
[
{
"msg_contents": "Hi,\nI have a big table (with 1.6 milion records). One of the columns is called\nend_date and it`s type is timestamp. I'm trying to find the best way to\ndelete most of the table but not all of it according to a range of dates.\nThe table structure :\nafa=# \\d my_table;\n Table \"public.my_table\"\n Column | Type |\n Modifiers\n---------------------------------+--------------------------+----------------------------------------------------------\n id | bigint | not null\ndefault nextval('my_table_id_seq'::regclass)\n devid| integer | not null\n column_name| integer | not null\n column_name| integer | not null\n column_name| integer | not null\n column_name| timestamp with time zone |\n column_name| integer | not null\n column_name| integer | not null\n column_name| integer | not null\n column_name| integer | not null\n column_name| integer | not null\n column_name| integer | not null\n column_name| integer | not null\n column_name| text | not null\n column_name| integer | not null default 0\n column_name| integer | not null default 0\n column_name| integer | not null default 0\n column_name| integer | not null default 0\n column_name| integer | not null default 0\n column_name| integer | not null default 0\n column_name| integer | not null default 0\n column_name| integer | default 0\n column_name| integer | default 0\n column_name| integer | default 0\n column_name| integer | default 0\n column_name| integer | default 0\n column_name| integer | default 0\n end_date | timestamp with time zone |\n\nIndexes:\n \"my_table_pkey\" PRIMARY KEY, btree (id)\n \"my_table_date_idx\" btree (date)\n \"my_table_device_idx\" btree (devid)\n \"end_date_idx\" btree (end_date)\nForeign-key constraints:\n \"fk_aaaaa\" FOREIGN KEY (devid) REFERENCES device_data(id)\nReferenced by:\n TABLE \"table1\" CONSTRAINT \"application_change_my_table_id_fkey\" FOREIGN\nKEY (my_table_id) REFERENCES my_table(id)\n TABLE \"table2\" CONSTRAINT \"configuration_changes_my_table_id_fkey\"\nFOREIGN KEY (my_table_id) REFERENCES my_table(id)\n TABLE \"table3\" CONSTRAINT \"fk_57hmvnx423bw9h203260r8gic\" FOREIGN KEY\n(my_table) REFERENCES my_table(id)\n TABLE \"table3\" CONSTRAINT \"interface_change_my_table_fk\" FOREIGN KEY\n(my_table) REFERENCES my_table(id)\n TABLE \"table4\" CONSTRAINT \"my_table_id_fkey\" FOREIGN KEY (my_table_id)\nREFERENCES my_table(id) ON DELETE CASCADE\n TABLE \"table5\" CONSTRAINT \"my_table_report_my_table_fk\" FOREIGN KEY\n(my_table_id) REFERENCES my_table(id)\n TABLE \"table6\" CONSTRAINT\n\"my_table_to_policy_change_my_table_foreign_key\" FOREIGN KEY (my_table)\nREFERENCES my_table(id)\n TABLE \"table7\" CONSTRAINT \"network_object_change_my_table_id_fkey\"\nFOREIGN KEY (my_table_id) REFERENCES my_table(id)\n TABLE \"table8\" CONSTRAINT \"orig_nat_rule_change_my_table_id_fkey\"\nFOREIGN KEY (my_table_id) REFERENCES my_table(id)\n TABLE \"table9\" CONSTRAINT \"risk_change_my_table_id_fkey\" FOREIGN KEY\n(my_table_id) REFERENCES my_table(id)\n TABLE \"table10\" CONSTRAINT \"rule_change_my_table_id_fkey\" FOREIGN KEY\n(my_table_id) REFERENCES my_table(id)\n TABLE \"table11\" CONSTRAINT \"service_change_my_table_id_fkey\" FOREIGN\nKEY (my_table_id) REFERENCES my_table(id)\n\nAs you can see alot of other tables uses the id col as a foreign key which\nmake the delete much slower.\n\n*Solution I tried for the query : *\n\ndelete from my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY')\nand end_date > to_date('11/12/2018','DD/MM/YYYY');\n\n 
QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual\ntime=5121.344..5121.344 rows=0 loops=1)\n -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6)\n(actual time=0.012..2244.393 rows=1572864 loops=1)\n Filter: ((end_date <= to_date('12/12/2018'::text,\n'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n'DD/MM/YYYY'::text)))\n Rows Removed by Filter: 40253\n Planning time: 0.210 ms\n Trigger for constraint table1: time=14730.816 calls=1572864\n Trigger for constraint table2: time=30718.084 calls=1572864\n Trigger for constraint table3: time=28170.363 calls=1572864\n Trigger for constraint table4: time=29573.681 calls=1572864\n Trigger for constraint table5: time=29629.263 calls=1572864\n Trigger for constraint table6: time=29628.489 calls=1572864\n Trigger for constraint table7: time=29798.121 calls=1572864\n Trigger for constraint table8: time=29645.705 calls=1572864\n Trigger for constraint table9: time=29657.177 calls=1572864\n Trigger for constraint table10: time=29487.054 calls=1572864\n Trigger for constraint table11: time=30010.978 calls=1572864\n Trigger for constraint table12: time=26383.924 calls=1572864\n Execution time: 350603.047 ms\n(18 rows)\n\n-----------------------\n\nDELETE FROM my_table WHERE id IN (select id from my_table where end_date <=\nto_date('12/12/2018','DD/MM/YYYY') and end_date >\nto_date('11/12/2018','DD/MM/YYYY'));\n\n\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Delete on my_table (cost=92522.54..186785.27 rows=1572738 width=12)\n(actual time=9367.477..9367.477 rows=0 loops=1)\n -> Hash Join (cost=92522.54..186785.27 rows=1572738 width=12) (actual\ntime=2871.906..5503.732 rows=1572864 loops=1)\n Hash Cond: (my_table.id = my_table_1.id)\n -> Seq Scan on my_table (cost=0.00..49052.16 rows=1613116\nwidth=14) (actual time=0.004..669.184 rows=1613117 loops=1)\n -> Hash (cost=65183.32..65183.32 rows=1572738 width=14) (actual\ntime=2871.301..2871.301 rows=1572864 loops=1)\n Buckets: 131072 Batches: 32 Memory Usage: 3332kB\n -> Seq Scan on my_table my_table_1 (cost=0.00..65183.32\nrows=1572738 width=14) (actual time=0.009..2115.826 rows=1572864 loops=1)\n Filter: ((end_date <= to_date('12/12/2018'::text,\n'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n'DD/MM/YYYY'::text)))\n Rows Removed by Filter: 40253\n Planning time: 0.419 ms\n Trigger for constraint my_table_id_fkey: time=14291.206 calls=1572864\n Trigger for constraint table2_fk: time=29171.591 calls=1572864\n Trigger for constraint table3_fk: time=26356.711 calls=1572864\n Trigger for constraint table4_fk: time=27579.694 calls=1572864\n Trigger for constraint table5_fk: time=27537.491 calls=1572864\n Trigger for constraint table6_fk: time=27574.169 calls=1572864\n Trigger for constraint table7_fk: time=27716.636 calls=1572864\n Trigger for constraint table8_fk: time=27780.192 calls=1572864\n....\n....\n\n Execution time: 333166.233 ms ~ 5.5 minutes\n(23 rows)\n\n\nLoading into a temp table the data isnt option because I cant truncate the\ntable because of all the dependencies...\n\nAny idea what else can I check ?\n\nHi,I have a big table (with 1.6 milion records). One of the columns is called end_date and it`s type is timestamp. 
I'm trying to find the best way to delete most of the table but not all of it according to a range of dates. The table structure : afa=# \\d my_table; Table \"public.my_table\" Column | Type | Modifiers---------------------------------+--------------------------+---------------------------------------------------------- id | bigint | not null default nextval('my_table_id_seq'::regclass) devid| integer | not null column_name| integer | not null column_name| integer | not null column_name| integer | not null column_name| timestamp with time zone | column_name| integer | not null column_name| integer | not null column_name| integer | not null column_name| integer | not null column_name| integer | not null column_name| integer | not null column_name| integer | not null column_name| text | not null column_name| integer | not null default 0 column_name| integer | not null default 0 column_name| integer | not null default 0 column_name| integer | not null default 0 column_name| integer | not null default 0 column_name| integer | not null default 0 column_name| integer | not null default 0 column_name| integer | default 0 column_name| integer | default 0 column_name| integer | default 0 column_name| integer | default 0 column_name| integer | default 0 column_name| integer | default 0 end_date | timestamp with time zone |Indexes: \"my_table_pkey\" PRIMARY KEY, btree (id) \"my_table_date_idx\" btree (date) \"my_table_device_idx\" btree (devid) \"end_date_idx\" btree (end_date)Foreign-key constraints: \"fk_aaaaa\" FOREIGN KEY (devid) REFERENCES device_data(id)Referenced by: TABLE \"table1\" CONSTRAINT \"application_change_my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) TABLE \"table2\" CONSTRAINT \"configuration_changes_my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) TABLE \"table3\" CONSTRAINT \"fk_57hmvnx423bw9h203260r8gic\" FOREIGN KEY (my_table) REFERENCES my_table(id) TABLE \"table3\" CONSTRAINT \"interface_change_my_table_fk\" FOREIGN KEY (my_table) REFERENCES my_table(id) TABLE \"table4\" CONSTRAINT \"my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) ON DELETE CASCADE TABLE \"table5\" CONSTRAINT \"my_table_report_my_table_fk\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) TABLE \"table6\" CONSTRAINT \"my_table_to_policy_change_my_table_foreign_key\" FOREIGN KEY (my_table) REFERENCES my_table(id) TABLE \"table7\" CONSTRAINT \"network_object_change_my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) TABLE \"table8\" CONSTRAINT \"orig_nat_rule_change_my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) TABLE \"table9\" CONSTRAINT \"risk_change_my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) TABLE \"table10\" CONSTRAINT \"rule_change_my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id) TABLE \"table11\" CONSTRAINT \"service_change_my_table_id_fkey\" FOREIGN KEY (my_table_id) REFERENCES my_table(id)As you can see alot of other tables uses the id col as a foreign key which make the delete much slower.Solution I tried for the query : delete from my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY'); QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------- Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=5121.344..5121.344 rows=0 loops=1) -> Seq Scan on my_table (cost=0.00..65183.30 
rows=1573862 width=6) (actual time=0.012..2244.393 rows=1572864 loops=1) Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text))) Rows Removed by Filter: 40253 Planning time: 0.210 ms Trigger for constraint table1: time=14730.816 calls=1572864 Trigger for constraint table2: time=30718.084 calls=1572864 Trigger for constraint table3: time=28170.363 calls=1572864 Trigger for constraint table4: time=29573.681 calls=1572864 Trigger for constraint table5: time=29629.263 calls=1572864 Trigger for constraint table6: time=29628.489 calls=1572864 Trigger for constraint table7: time=29798.121 calls=1572864 Trigger for constraint table8: time=29645.705 calls=1572864 Trigger for constraint table9: time=29657.177 calls=1572864 Trigger for constraint table10: time=29487.054 calls=1572864 Trigger for constraint table11: time=30010.978 calls=1572864 Trigger for constraint table12: time=26383.924 calls=1572864 Execution time: 350603.047 ms(18 rows)-----------------------DELETE FROM my_table WHERE id IN (select id from my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY')); QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------------------- Delete on my_table (cost=92522.54..186785.27 rows=1572738 width=12) (actual time=9367.477..9367.477 rows=0 loops=1) -> Hash Join (cost=92522.54..186785.27 rows=1572738 width=12) (actual time=2871.906..5503.732 rows=1572864 loops=1) Hash Cond: (my_table.id = my_table_1.id) -> Seq Scan on my_table (cost=0.00..49052.16 rows=1613116 width=14) (actual time=0.004..669.184 rows=1613117 loops=1) -> Hash (cost=65183.32..65183.32 rows=1572738 width=14) (actual time=2871.301..2871.301 rows=1572864 loops=1) Buckets: 131072 Batches: 32 Memory Usage: 3332kB -> Seq Scan on my_table my_table_1 (cost=0.00..65183.32 rows=1572738 width=14) (actual time=0.009..2115.826 rows=1572864 loops=1) Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text))) Rows Removed by Filter: 40253 Planning time: 0.419 ms Trigger for constraint my_table_id_fkey: time=14291.206 calls=1572864 Trigger for constraint table2_fk: time=29171.591 calls=1572864 Trigger for constraint table3_fk: time=26356.711 calls=1572864 Trigger for constraint table4_fk: time=27579.694 calls=1572864 Trigger for constraint table5_fk: time=27537.491 calls=1572864 Trigger for constraint table6_fk: time=27574.169 calls=1572864 Trigger for constraint table7_fk: time=27716.636 calls=1572864 Trigger for constraint table8_fk: time=27780.192 calls=1572864........ Execution time: 333166.233 ms ~ 5.5 minutes(23 rows)Loading into a temp table the data isnt option because I cant truncate the table because of all the dependencies...Any idea what else can I check ?",
"msg_date": "Mon, 3 Sep 2018 09:27:52 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "trying to delete most of the table by range of date col"
},
{
"msg_contents": "On Mon, Sep 03, 2018 at 09:27:52AM +0300, Mariel Cherkassky wrote:\n> I'm trying to find the best way to delete most of the table but not all of it\n> according to a range of dates.\n\n> Indexes:\n> \"end_date_idx\" btree (end_date)\n\n> Referenced by:\n> TABLE \"table1\" CONSTRAINT \"application_change_my_table_id_fkey\" FOREIGN\n> KEY (my_table_id) REFERENCES my_table(id)\n> TABLE \"table2\" CONSTRAINT \"configuration_changes_my_table_id_fkey\"\n> FOREIGN KEY (my_table_id) REFERENCES my_table(id)\n...\n\n> As you can see alot of other tables uses the id col as a foreign key which\n> make the delete much slower.\n\n> Trigger for constraint table1: time=14730.816 calls=1572864\n> Trigger for constraint table2: time=30718.084 calls=1572864\n> Trigger for constraint table3: time=28170.363 calls=1572864\n...\n\nDo the other tables have indices on their referencING columns ?\n\nhttps://www.postgresql.org/docs/devel/static/ddl-constraints.html#DDL-CONSTRAINTS-FK\n\"Since a DELETE of a row from the referenced table [...] will require a scan of\nthe referencing table for rows matching the old value, it is often a good idea\nto index the referencing columns too.\"\n\nNote, I believe it's planned in the future for foreign keys to support\nreferenes to partitioned tables, at which point you could just DROP the monthly\npartition...but not supported right now.\n\nJustin\n\n",
"msg_date": "Mon, 3 Sep 2018 02:06:24 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
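The referencing side can be audited straight from the catalog; every referencing column that appears in the definitions below should have an index on its own table. A sketch, assuming the table name from the thread:

SELECT conrelid::regclass        AS referencing_table,
       conname                   AS constraint_name,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'my_table'::regclass
ORDER BY 1;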
{
"msg_contents": "\n\nAm 03.09.2018 um 09:06 schrieb Justin Pryzby:\n> Note, I believe it's planned in the future for foreign keys to support\n> referenes to partitioned tables, at which point you could just DROP the monthly\n> partition...but not supported right now.\n\nthe future is close, that's possible in 11 ;-)\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n",
"msg_date": "Mon, 3 Sep 2018 09:35:16 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
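For reference, a partition-based layout turns the purge into a metadata operation instead of a row-by-row DELETE. This is a rough sketch only, with invented names and monthly ranges: the partition key has to be part of the primary key, which also changes what the referencing tables' foreign keys can point at, and those foreign keys need a server version that allows references to partitioned tables.

CREATE TABLE my_table_part (
    id       bigint NOT NULL,
    devid    integer NOT NULL,
    end_date timestamp with time zone NOT NULL,
    PRIMARY KEY (id, end_date)               -- partition key must be included
) PARTITION BY RANGE (end_date);

CREATE TABLE my_table_part_2018_12 PARTITION OF my_table_part
    FOR VALUES FROM ('2018-12-01') TO ('2019-01-01');

-- Expiring a month later becomes:
ALTER TABLE my_table_part DETACH PARTITION my_table_part_2018_12;
DROP TABLE my_table_part_2018_12;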
{
"msg_contents": "Hi,\nI already checked and on all the tables that uses the id col of the main\ntable as a foreign key have index on that column.\nI tried all the next 4 solutions :\n\n1)delete from my_table where end_date <=\nto_date('12/12/2018','DD/MM/YYYY') and end_date >\nto_date('11/12/2018','DD/MM/YYYY');\n Execution time: 350603.047 ms ~ 5.8 minutes\n\n2)DELETE FROM my_table WHERE id IN (select id from my_table where end_date\n<= to_date('12/12/2018','DD/MM/YYYY') and end_date >\nto_date('11/12/2018','DD/MM/YYYY'));\n Execution time: 333166.233 ms ~ 5.5 minutes\n\n3) set temp_buffers='1GB';\nSET\n\ncreate temp table id_temp as select id from my_Table where end_date <=\nto_date('12/12/2018','DD/MM/YYYY') and end_date >\nto_date('11/12/2018','DD/MM/YYYY') ;\nSELECT 1572864\nTime: 2196.670 ms\n\n DELETE FROM my_table USING id_temp WHERE my_table.id = id_temp.id;\n Execution time: 459650.621 ms 7.6minutes\n\n4)delete in chunks :\ndo $$\ndeclare\nrec integer;\nbegin\nselect count(*) from my_table into rec where end_date <=\nto_date('12/12/2018','DD/MM/YYYY') and end_date >\nto_date('11/12/2018','DD/MM/YYYY');\nwhile rec > 0 loop\nDELETE FROM my_Table WHERE id IN (select id from my_tablewhere end_date <=\nto_date('12/12/2018','DD/MM/YYYY') and end_date >\nto_date('11/12/2018','DD/MM/YYYY') limit 5000);\nrec := rec - 5000;\nraise notice '5000 records were deleted, current rows :%',rec;\nend loop;\n\nend;\n$$\n;\n\nExecution time : 6 minutes.\n\nSo, it seems that the second solution is the fastest one. It there a reason\nwhy the delete chunks (solution 4) wasnt faster?\n\nבתאריך יום ב׳, 3 בספט׳ 2018 ב-10:35 מאת Andreas Kretschmer <\[email protected]>:\n\n>\n>\n> Am 03.09.2018 um 09:06 schrieb Justin Pryzby:\n> > Note, I believe it's planned in the future for foreign keys to support\n> > referenes to partitioned tables, at which point you could just DROP the\n> monthly\n> > partition...but not supported right now.\n>\n> the future is close, that's possible in 11 ;-)\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n>\n\nHi,I already checked and on all the tables that uses the id col of the main table as a foreign key have index on that column.I tried all the next 4 solutions : 1)delete from my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY'); Execution time: 350603.047 ms ~ 5.8 minutes2)DELETE FROM my_table WHERE id IN (select id from my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY')); Execution time: 333166.233 ms ~ 5.5 minutes3) set temp_buffers='1GB';SETcreate temp table id_temp as select id from my_Table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY') ;SELECT 1572864Time: 2196.670 ms DELETE FROM my_table USING id_temp WHERE my_table.id = id_temp.id; Execution time: 459650.621 ms 7.6minutes4)delete in chunks : do $$declare rec integer;beginselect count(*) from my_table into rec where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY');while rec > 0 loopDELETE FROM my_Table WHERE id IN (select id from my_tablewhere end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY') limit 5000);rec := rec - 5000;raise notice '5000 records were deleted, current rows :%',rec;end loop;end;$$;Execution time : 6 minutes.So, it seems that the second solution is the fastest one. 
It there a reason why the delete chunks (solution 4) wasnt faster?בתאריך יום ב׳, 3 בספט׳ 2018 ב-10:35 מאת Andreas Kretschmer <[email protected]>:\n\nAm 03.09.2018 um 09:06 schrieb Justin Pryzby:\n> Note, I believe it's planned in the future for foreign keys to support\n> referenes to partitioned tables, at which point you could just DROP the monthly\n> partition...but not supported right now.\n\nthe future is close, that's possible in 11 ;-)\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com",
"msg_date": "Mon, 3 Sep 2018 11:17:58 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
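If chunking is kept at all (it bounds transaction size, it does not reduce the total trigger work, which is where the time goes), the loop can simply stop when DELETE reports zero affected rows instead of counting up front. A sketch with the same predicate and batch size as above:

DO $$
DECLARE
    deleted bigint;
BEGIN
    LOOP
        DELETE FROM my_table
        WHERE id IN (SELECT id
                     FROM my_table
                     WHERE end_date <= to_date('12/12/2018','DD/MM/YYYY')
                       AND end_date >  to_date('11/12/2018','DD/MM/YYYY')
                     LIMIT 5000);
        GET DIAGNOSTICS deleted = ROW_COUNT;
        EXIT WHEN deleted = 0;
        RAISE NOTICE '% rows deleted in this batch', deleted;
    END LOOP;
END
$$;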
{
"msg_contents": "Hello\n\n> Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=5121.344..5121.344 rows=0 loops=1)\n> -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=0.012..2244.393 rows=1572864 loops=1)\n> Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text)))\n> Rows Removed by Filter: 40253\n> Planning time: 0.210 ms\n> Trigger for constraint table1: time=14730.816 calls=1572864\n> Trigger for constraint table2: time=30718.084 calls=1572864\n> Trigger for constraint table3: time=28170.363 calls=1572864\n> Trigger for constraint table4: time=29573.681 calls=1572864\n> Trigger for constraint table5: time=29629.263 calls=1572864\n> Trigger for constraint table6: time=29628.489 calls=1572864\n> Trigger for constraint table7: time=29798.121 calls=1572864\n> Trigger for constraint table8: time=29645.705 calls=1572864\n> Trigger for constraint table9: time=29657.177 calls=1572864\n> Trigger for constraint table10: time=29487.054 calls=1572864\n> Trigger for constraint table11: time=30010.978 calls=1572864\n> Trigger for constraint table12: time=26383.924 calls=1572864\n> Execution time: 350603.047 ms\n\nAs you can see in \"actual time\" - delete was run only 5 sec. All the other time postgresql checked foreign keys triggers. 0,02ms per row seems adequate for index lookup.\nIt may be better drop foreign keys, delete data, and create foreign keys back.\n\nregards, Sergei\n\n",
"msg_date": "Mon, 03 Sep 2018 11:35:05 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
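A sketch of that approach for one referencing table, using the constraint and column names from the \d output earlier in the thread. Re-adding the constraint as NOT VALID and validating it in a separate step keeps the long validating scan out of the initial ALTER:

ALTER TABLE table1 DROP CONSTRAINT application_change_my_table_id_fkey;

-- ... run the bulk DELETE on my_table here ...

ALTER TABLE table1
    ADD CONSTRAINT application_change_my_table_id_fkey
    FOREIGN KEY (my_table_id) REFERENCES my_table(id) NOT VALID;

ALTER TABLE table1 VALIDATE CONSTRAINT application_change_my_table_id_fkey;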
{
"msg_contents": "Cant drop foreign keys, there are too much.\n\nבתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <[email protected]\n>:\n\n> Hello\n>\n> > Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual\n> time=5121.344..5121.344 rows=0 loops=1)\n> > -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6)\n> (actual time=0.012..2244.393 rows=1572864 loops=1)\n> > Filter: ((end_date <= to_date('12/12/2018'::text,\n> 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n> 'DD/MM/YYYY'::text)))\n> > Rows Removed by Filter: 40253\n> > Planning time: 0.210 ms\n> > Trigger for constraint table1: time=14730.816 calls=1572864\n> > Trigger for constraint table2: time=30718.084 calls=1572864\n> > Trigger for constraint table3: time=28170.363 calls=1572864\n> > Trigger for constraint table4: time=29573.681 calls=1572864\n> > Trigger for constraint table5: time=29629.263 calls=1572864\n> > Trigger for constraint table6: time=29628.489 calls=1572864\n> > Trigger for constraint table7: time=29798.121 calls=1572864\n> > Trigger for constraint table8: time=29645.705 calls=1572864\n> > Trigger for constraint table9: time=29657.177 calls=1572864\n> > Trigger for constraint table10: time=29487.054 calls=1572864\n> > Trigger for constraint table11: time=30010.978 calls=1572864\n> > Trigger for constraint table12: time=26383.924 calls=1572864\n> > Execution time: 350603.047 ms\n>\n> As you can see in \"actual time\" - delete was run only 5 sec. All the other\n> time postgresql checked foreign keys triggers. 0,02ms per row seems\n> adequate for index lookup.\n> It may be better drop foreign keys, delete data, and create foreign keys\n> back.\n>\n> regards, Sergei\n>\n\nCant drop foreign keys, there are too much.בתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <[email protected]>:Hello\n\n> Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=5121.344..5121.344 rows=0 loops=1)\n> -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=0.012..2244.393 rows=1572864 loops=1)\n> Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text)))\n> Rows Removed by Filter: 40253\n> Planning time: 0.210 ms\n> Trigger for constraint table1: time=14730.816 calls=1572864\n> Trigger for constraint table2: time=30718.084 calls=1572864\n> Trigger for constraint table3: time=28170.363 calls=1572864\n> Trigger for constraint table4: time=29573.681 calls=1572864\n> Trigger for constraint table5: time=29629.263 calls=1572864\n> Trigger for constraint table6: time=29628.489 calls=1572864\n> Trigger for constraint table7: time=29798.121 calls=1572864\n> Trigger for constraint table8: time=29645.705 calls=1572864\n> Trigger for constraint table9: time=29657.177 calls=1572864\n> Trigger for constraint table10: time=29487.054 calls=1572864\n> Trigger for constraint table11: time=30010.978 calls=1572864\n> Trigger for constraint table12: time=26383.924 calls=1572864\n> Execution time: 350603.047 ms\n\nAs you can see in \"actual time\" - delete was run only 5 sec. All the other time postgresql checked foreign keys triggers. 0,02ms per row seems adequate for index lookup.\nIt may be better drop foreign keys, delete data, and create foreign keys back.\n\nregards, Sergei",
"msg_date": "Mon, 3 Sep 2018 11:50:55 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
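"Too many" does not have to mean doing it by hand: the statements can be generated from the catalog, reviewed, and then executed (for example with psql's \gexec). A sketch that emits a matching DROP, ADD ... NOT VALID and VALIDATE for every foreign key pointing at my_table:

SELECT format('ALTER TABLE %s DROP CONSTRAINT %I;',
              conrelid::regclass, conname)                            AS drop_sql,
       format('ALTER TABLE %s ADD CONSTRAINT %I %s NOT VALID;',
              conrelid::regclass, conname, pg_get_constraintdef(oid)) AS add_sql,
       format('ALTER TABLE %s VALIDATE CONSTRAINT %I;',
              conrelid::regclass, conname)                            AS validate_sql
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'my_table'::regclass;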
{
"msg_contents": "On Mon, Sep 03, 2018 at 11:17:58AM +0300, Mariel Cherkassky wrote:\n> Hi,\n> I already checked and on all the tables that uses the id col of the main\n> table as a foreign key have index on that column.\n> \n> So, it seems that the second solution is the fastest one. It there a reason\n> why the delete chunks (solution 4) wasnt faster?\n\nI suggest running:\n\nSET track_io_timing=on; -- requires superuser\nexplain(ANALYZE,BUFFERS) DELETE [...]\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nMaybe you just need larger shared_buffers ?\n\nJustin\n\n",
"msg_date": "Mon, 3 Sep 2018 04:23:01 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
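For reference, the measurement can be wrapped in a transaction and rolled back, so the plan, trigger times and buffer/IO numbers are captured without actually removing any rows (track_io_timing needs superuser):

BEGIN;
SET LOCAL track_io_timing = on;          -- superuser only
EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM my_table
WHERE end_date <= to_date('12/12/2018','DD/MM/YYYY')
  AND end_date >  to_date('11/12/2018','DD/MM/YYYY');
ROLLBACK;                                 -- keep the data, we only wanted the numbers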
{
"msg_contents": "I checked, the results :\n\n1)explain (analyze,buffers) delete from my_table where end_date <=\nto_date('12/12/2018','DD/MM/YYYY') and end_date >\nto_date('11/12/2018','DD/MM/YYYY');\n\n\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Delete on my_table (cost=0.00..97294.80 rows=1571249 width=6) (actual\ntime=4706.791..4706.791 rows=0 loops=1)\n Buffers: shared hit=3242848\n -> Seq Scan on my_table (cost=0.00..97294.80 rows=1571249 width=6)\n(actual time=0.022..2454.686 rows=1572864 loops=1)\n Filter: ((end_date <= to_date('12/12/2018'::text,\n'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n'DD/MM/YYYY'::text)))\n Rows Removed by Filter: 40253\n Buffers: shared hit=65020(*8k/1024)=507MB\n Planning time: 0.182 ms\n\n2)explain (analyze,buffers) DELETE FROM my_table WHERE id IN (select id\nfrom my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and\nend_date > to_date('11/12/2018','DD/MM/YYYY'));\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Delete on my_table (cost=108908.17..252425.01 rows=1559172 width=12)\n(actual time=11168.090..11168.090 rows=0 loops=1)\n Buffers: shared hit=3307869 dirtied=13804, temp read=13656 written=13594\n -> Hash Join (cost=108908.17..252425.01 rows=1559172 width=12) (actual\ntime=1672.222..6401.288 rows=1572864 loops=1)\n Hash Cond: (my_table_1.id = my_table.id)\n Buffers: shared hit=130040, temp read=13656 written=13594\n -> Seq Scan on my_table my_table_1 (cost=0.00..97075.26\nrows=1559172 width=14) (actual time=0.008..2474.671 rows=1572864 loops=1)\n Filter: ((end_date <= to_date('12/12/2018'::text,\n'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n'DD/MM/YYYY'::text)))\n Rows Removed by Filter: 40253\n Buffers: shared hit=65020\n -> Hash (cost=81047.63..81047.63 rows=1602763 width=14) (actual\ntime=1671.613..1671.613 rows=1613117 loops=1)\n Buckets: 131072 Batches: 32 Memory Usage: 3392kB\n Buffers: shared hit=65020, temp written=6852\n -> Seq Scan on my_table (cost=0.00..81047.63 rows=1602763\nwidth=14) (actual time=0.003..778.311 rows=1613117 loops=1)\n Buffers: shared hit=65020\n\n\n3)explain (analyze,buffers) DELETE FROM my_table my_table USING id_test\nWHERE my_table.id = id_test.id;\n\n\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Delete on my_table my_table (cost=109216.05..178743.05 rows=1572960\nwidth=12) (actual time=7307.465..7307.465 rows=0 loops=1)\n Buffers: shared hit=3210748, local hit=6960, temp read=13656\nwritten=13594\n -> Hash Join (cost=109216.05..178743.05 rows=1572960 width=12) (actual\ntime=1636.744..4489.246 rows=1572864 loops=1)\n Hash Cond: (id_test.id = my_table.id)\n Buffers: shared hit=65020, local hit=6960, temp read=13656\nwritten=13594\n -> Seq Scan on id_test(cost=0.00..22689.60 rows=1572960 width=14)\n(actual time=0.009..642.859 rows=1572864 loops=1)\n Buffers: local hit=6960\n -> Hash (cost=81160.02..81160.02 rows=1614002 width=14) (actual\ntime=1636.228..1636.228 rows=1613117 loops=1)\n Buckets: 131072 Batches: 32 Memory Usage: 3392kB\n Buffers: shared hit=65020, temp written=6852\n -> Seq Scan on my_table my_table (cost=0.00..81160.02\nrows=1614002 width=14) (actual time=0.297..815.133 rows=1613117 
loops=1)\n Buffers: shared hit=65020\n\n\nI restarted the cluster after running every query.\n\n\nבתאריך יום ב׳, 3 בספט׳ 2018 ב-12:23 מאת Justin Pryzby <\[email protected]>:\n\n> On Mon, Sep 03, 2018 at 11:17:58AM +0300, Mariel Cherkassky wrote:\n> > Hi,\n> > I already checked and on all the tables that uses the id col of the main\n> > table as a foreign key have index on that column.\n> >\n> > So, it seems that the second solution is the fastest one. It there a\n> reason\n> > why the delete chunks (solution 4) wasnt faster?\n>\n> I suggest running:\n>\n> SET track_io_timing=on; -- requires superuser\n> explain(ANALYZE,BUFFERS) DELETE [...]\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> Maybe you just need larger shared_buffers ?\n>\n> Justin\n>\n\nI checked, the results : 1)explain (analyze,buffers) delete from my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY'); QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------- Delete on my_table (cost=0.00..97294.80 rows=1571249 width=6) (actual time=4706.791..4706.791 rows=0 loops=1) Buffers: shared hit=3242848 -> Seq Scan on my_table (cost=0.00..97294.80 rows=1571249 width=6) (actual time=0.022..2454.686 rows=1572864 loops=1) Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text))) Rows Removed by Filter: 40253 Buffers: shared hit=65020(*8k/1024)=507MB Planning time: 0.182 ms2)explain (analyze,buffers) DELETE FROM my_table WHERE id IN (select id from my_table where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY')); QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------------- Delete on my_table (cost=108908.17..252425.01 rows=1559172 width=12) (actual time=11168.090..11168.090 rows=0 loops=1) Buffers: shared hit=3307869 dirtied=13804, temp read=13656 written=13594 -> Hash Join (cost=108908.17..252425.01 rows=1559172 width=12) (actual time=1672.222..6401.288 rows=1572864 loops=1) Hash Cond: (my_table_1.id = my_table.id) Buffers: shared hit=130040, temp read=13656 written=13594 -> Seq Scan on my_table my_table_1 (cost=0.00..97075.26 rows=1559172 width=14) (actual time=0.008..2474.671 rows=1572864 loops=1) Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text))) Rows Removed by Filter: 40253 Buffers: shared hit=65020 -> Hash (cost=81047.63..81047.63 rows=1602763 width=14) (actual time=1671.613..1671.613 rows=1613117 loops=1) Buckets: 131072 Batches: 32 Memory Usage: 3392kB Buffers: shared hit=65020, temp written=6852 -> Seq Scan on my_table (cost=0.00..81047.63 rows=1602763 width=14) (actual time=0.003..778.311 rows=1613117 loops=1) Buffers: shared hit=650203)explain (analyze,buffers) DELETE FROM my_table my_table USING id_test WHERE my_table.id = id_test.id; QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------- Delete on my_table my_table (cost=109216.05..178743.05 rows=1572960 width=12) (actual time=7307.465..7307.465 rows=0 loops=1) Buffers: shared hit=3210748, local hit=6960, temp read=13656 written=13594 -> Hash Join (cost=109216.05..178743.05 rows=1572960 
width=12) (actual time=1636.744..4489.246 rows=1572864 loops=1) Hash Cond: (id_test.id = my_table.id) Buffers: shared hit=65020, local hit=6960, temp read=13656 written=13594 -> Seq Scan on id_test(cost=0.00..22689.60 rows=1572960 width=14) (actual time=0.009..642.859 rows=1572864 loops=1) Buffers: local hit=6960 -> Hash (cost=81160.02..81160.02 rows=1614002 width=14) (actual time=1636.228..1636.228 rows=1613117 loops=1) Buckets: 131072 Batches: 32 Memory Usage: 3392kB Buffers: shared hit=65020, temp written=6852 -> Seq Scan on my_table my_table (cost=0.00..81160.02 rows=1614002 width=14) (actual time=0.297..815.133 rows=1613117 loops=1) Buffers: shared hit=65020I restarted the cluster after running every query.בתאריך יום ב׳, 3 בספט׳ 2018 ב-12:23 מאת Justin Pryzby <[email protected]>:On Mon, Sep 03, 2018 at 11:17:58AM +0300, Mariel Cherkassky wrote:\n> Hi,\n> I already checked and on all the tables that uses the id col of the main\n> table as a foreign key have index on that column.\n> \n> So, it seems that the second solution is the fastest one. It there a reason\n> why the delete chunks (solution 4) wasnt faster?\n\nI suggest running:\n\nSET track_io_timing=on; -- requires superuser\nexplain(ANALYZE,BUFFERS) DELETE [...]\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nMaybe you just need larger shared_buffers ?\n\nJustin",
"msg_date": "Mon, 3 Sep 2018 13:25:04 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
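Since every block read above was a shared buffer hit, the working set is already cached; Justin's shared_buffers question can be answered by putting the current setting next to the table's size, for example:

SHOW shared_buffers;

SELECT pg_size_pretty(pg_table_size('my_table'))          AS heap_only,
       pg_size_pretty(pg_total_relation_size('my_table')) AS heap_plus_indexes;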
{
"msg_contents": "This is a terribley inflexible design, why so many foreign keys? If the\ntable requires removing data, rebuild with partitions. Parent keys should\nbe in reference tables, not in fact table.\n\nOn Mon, Sep 3, 2018 at 04:51 Mariel Cherkassky <[email protected]>\nwrote:\n\n> Cant drop foreign keys, there are too much.\n>\n> בתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <[email protected]\n> >:\n>\n>> Hello\n>>\n>> > Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual\n>> time=5121.344..5121.344 rows=0 loops=1)\n>> > -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6)\n>> (actual time=0.012..2244.393 rows=1572864 loops=1)\n>> > Filter: ((end_date <= to_date('12/12/2018'::text,\n>> 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n>> 'DD/MM/YYYY'::text)))\n>> > Rows Removed by Filter: 40253\n>> > Planning time: 0.210 ms\n>> > Trigger for constraint table1: time=14730.816 calls=1572864\n>> > Trigger for constraint table2: time=30718.084 calls=1572864\n>> > Trigger for constraint table3: time=28170.363 calls=1572864\n>> > Trigger for constraint table4: time=29573.681 calls=1572864\n>> > Trigger for constraint table5: time=29629.263 calls=1572864\n>> > Trigger for constraint table6: time=29628.489 calls=1572864\n>> > Trigger for constraint table7: time=29798.121 calls=1572864\n>> > Trigger for constraint table8: time=29645.705 calls=1572864\n>> > Trigger for constraint table9: time=29657.177 calls=1572864\n>> > Trigger for constraint table10: time=29487.054 calls=1572864\n>> > Trigger for constraint table11: time=30010.978 calls=1572864\n>> > Trigger for constraint table12: time=26383.924 calls=1572864\n>> > Execution time: 350603.047 ms\n>>\n>> As you can see in \"actual time\" - delete was run only 5 sec. All the\n>> other time postgresql checked foreign keys triggers. 0,02ms per row seems\n>> adequate for index lookup.\n>> It may be better drop foreign keys, delete data, and create foreign keys\n>> back.\n>>\n>> regards, Sergei\n>>\n>\n\nThis is a terribley inflexible design, why so many foreign keys? If the table requires removing data, rebuild with partitions. Parent keys should be in reference tables, not in fact table. 
On Mon, Sep 3, 2018 at 04:51 Mariel Cherkassky <[email protected]> wrote:Cant drop foreign keys, there are too much.בתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <[email protected]>:Hello\n\n> Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=5121.344..5121.344 rows=0 loops=1)\n> -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=0.012..2244.393 rows=1572864 loops=1)\n> Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text)))\n> Rows Removed by Filter: 40253\n> Planning time: 0.210 ms\n> Trigger for constraint table1: time=14730.816 calls=1572864\n> Trigger for constraint table2: time=30718.084 calls=1572864\n> Trigger for constraint table3: time=28170.363 calls=1572864\n> Trigger for constraint table4: time=29573.681 calls=1572864\n> Trigger for constraint table5: time=29629.263 calls=1572864\n> Trigger for constraint table6: time=29628.489 calls=1572864\n> Trigger for constraint table7: time=29798.121 calls=1572864\n> Trigger for constraint table8: time=29645.705 calls=1572864\n> Trigger for constraint table9: time=29657.177 calls=1572864\n> Trigger for constraint table10: time=29487.054 calls=1572864\n> Trigger for constraint table11: time=30010.978 calls=1572864\n> Trigger for constraint table12: time=26383.924 calls=1572864\n> Execution time: 350603.047 ms\n\nAs you can see in \"actual time\" - delete was run only 5 sec. All the other time postgresql checked foreign keys triggers. 0,02ms per row seems adequate for index lookup.\nIt may be better drop foreign keys, delete data, and create foreign keys back.\n\nregards, Sergei",
"msg_date": "Mon, 3 Sep 2018 07:09:35 -0400",
"msg_from": "Carrie Berlin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
{
"msg_contents": "I'm not responsible for this design but I'm trying to improve it. Using\npartition isnt an option because partitions doesnt support foreign key.\nMoreover, most queries on all those tables uses the id col of the main\ntable.\n\nבתאריך יום ב׳, 3 בספט׳ 2018 ב-14:09 מאת Carrie Berlin <\[email protected]>:\n\n> This is a terribley inflexible design, why so many foreign keys? If the\n> table requires removing data, rebuild with partitions. Parent keys should\n> be in reference tables, not in fact table.\n>\n> On Mon, Sep 3, 2018 at 04:51 Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Cant drop foreign keys, there are too much.\n>>\n>> בתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <[email protected]\n>> >:\n>>\n>>> Hello\n>>>\n>>> > Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6)\n>>> (actual time=5121.344..5121.344 rows=0 loops=1)\n>>> > -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862\n>>> width=6) (actual time=0.012..2244.393 rows=1572864 loops=1)\n>>> > Filter: ((end_date <= to_date('12/12/2018'::text,\n>>> 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n>>> 'DD/MM/YYYY'::text)))\n>>> > Rows Removed by Filter: 40253\n>>> > Planning time: 0.210 ms\n>>> > Trigger for constraint table1: time=14730.816 calls=1572864\n>>> > Trigger for constraint table2: time=30718.084 calls=1572864\n>>> > Trigger for constraint table3: time=28170.363 calls=1572864\n>>> > Trigger for constraint table4: time=29573.681 calls=1572864\n>>> > Trigger for constraint table5: time=29629.263 calls=1572864\n>>> > Trigger for constraint table6: time=29628.489 calls=1572864\n>>> > Trigger for constraint table7: time=29798.121 calls=1572864\n>>> > Trigger for constraint table8: time=29645.705 calls=1572864\n>>> > Trigger for constraint table9: time=29657.177 calls=1572864\n>>> > Trigger for constraint table10: time=29487.054 calls=1572864\n>>> > Trigger for constraint table11: time=30010.978 calls=1572864\n>>> > Trigger for constraint table12: time=26383.924 calls=1572864\n>>> > Execution time: 350603.047 ms\n>>>\n>>> As you can see in \"actual time\" - delete was run only 5 sec. All the\n>>> other time postgresql checked foreign keys triggers. 0,02ms per row seems\n>>> adequate for index lookup.\n>>> It may be better drop foreign keys, delete data, and create foreign keys\n>>> back.\n>>>\n>>> regards, Sergei\n>>>\n>>\n\nI'm not responsible for this design but I'm trying to improve it. Using partition isnt an option because partitions doesnt support foreign key. Moreover, most queries on all those tables uses the id col of the main table. בתאריך יום ב׳, 3 בספט׳ 2018 ב-14:09 מאת Carrie Berlin <[email protected]>:This is a terribley inflexible design, why so many foreign keys? If the table requires removing data, rebuild with partitions. Parent keys should be in reference tables, not in fact table. 
On Mon, Sep 3, 2018 at 04:51 Mariel Cherkassky <[email protected]> wrote:Cant drop foreign keys, there are too much.בתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <[email protected]>:Hello\n\n> Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=5121.344..5121.344 rows=0 loops=1)\n> -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=0.012..2244.393 rows=1572864 loops=1)\n> Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text)))\n> Rows Removed by Filter: 40253\n> Planning time: 0.210 ms\n> Trigger for constraint table1: time=14730.816 calls=1572864\n> Trigger for constraint table2: time=30718.084 calls=1572864\n> Trigger for constraint table3: time=28170.363 calls=1572864\n> Trigger for constraint table4: time=29573.681 calls=1572864\n> Trigger for constraint table5: time=29629.263 calls=1572864\n> Trigger for constraint table6: time=29628.489 calls=1572864\n> Trigger for constraint table7: time=29798.121 calls=1572864\n> Trigger for constraint table8: time=29645.705 calls=1572864\n> Trigger for constraint table9: time=29657.177 calls=1572864\n> Trigger for constraint table10: time=29487.054 calls=1572864\n> Trigger for constraint table11: time=30010.978 calls=1572864\n> Trigger for constraint table12: time=26383.924 calls=1572864\n> Execution time: 350603.047 ms\n\nAs you can see in \"actual time\" - delete was run only 5 sec. All the other time postgresql checked foreign keys triggers. 0,02ms per row seems adequate for index lookup.\nIt may be better drop foreign keys, delete data, and create foreign keys back.\n\nregards, Sergei",
"msg_date": "Mon, 3 Sep 2018 14:19:02 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
{
"msg_contents": "Hi\n>\n> I understand about having to deal with a bad design. How big is the table\n> \"select pg_size_pretty(pg_table_size(table_name)).? If the table is not\n> that large relative to the IOPS on your disk system, another solution is to\n> add a binary column IS_DELETED to the table and modify the queries that hit\n> the table to exclude rows where IS_DELETED=y. Also you need an index on\n> this column. I did this with a user table that was a parent table to 120\n> data tables and users could not be dropped from the system.\n\n\nOn Mon, Sep 3, 2018 at 7:19 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> I'm not responsible for this design but I'm trying to improve it. Using\n> partition isnt an option because partitions doesnt support foreign key.\n> Moreover, most queries on all those tables uses the id col of the main\n> table.\n>\n> בתאריך יום ב׳, 3 בספט׳ 2018 ב-14:09 מאת Carrie Berlin <\n> [email protected]>:\n>\n>> This is a terribley inflexible design, why so many foreign keys? If the\n>> table requires removing data, rebuild with partitions. Parent keys should\n>> be in reference tables, not in fact table.\n>>\n>> On Mon, Sep 3, 2018 at 04:51 Mariel Cherkassky <\n>> [email protected]> wrote:\n>>\n>>> Cant drop foreign keys, there are too much.\n>>>\n>>> בתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <\n>>> [email protected]>:\n>>>\n>>>> Hello\n>>>>\n>>>> > Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6)\n>>>> (actual time=5121.344..5121.344 rows=0 loops=1)\n>>>> > -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862\n>>>> width=6) (actual time=0.012..2244.393 rows=1572864 loops=1)\n>>>> > Filter: ((end_date <= to_date('12/12/2018'::text,\n>>>> 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text,\n>>>> 'DD/MM/YYYY'::text)))\n>>>> > Rows Removed by Filter: 40253\n>>>> > Planning time: 0.210 ms\n>>>> > Trigger for constraint table1: time=14730.816 calls=1572864\n>>>> > Trigger for constraint table2: time=30718.084 calls=1572864\n>>>> > Trigger for constraint table3: time=28170.363 calls=1572864\n>>>> > Trigger for constraint table4: time=29573.681 calls=1572864\n>>>> > Trigger for constraint table5: time=29629.263 calls=1572864\n>>>> > Trigger for constraint table6: time=29628.489 calls=1572864\n>>>> > Trigger for constraint table7: time=29798.121 calls=1572864\n>>>> > Trigger for constraint table8: time=29645.705 calls=1572864\n>>>> > Trigger for constraint table9: time=29657.177 calls=1572864\n>>>> > Trigger for constraint table10: time=29487.054 calls=1572864\n>>>> > Trigger for constraint table11: time=30010.978 calls=1572864\n>>>> > Trigger for constraint table12: time=26383.924 calls=1572864\n>>>> > Execution time: 350603.047 ms\n>>>>\n>>>> As you can see in \"actual time\" - delete was run only 5 sec. All the\n>>>> other time postgresql checked foreign keys triggers. 0,02ms per row seems\n>>>> adequate for index lookup.\n>>>> It may be better drop foreign keys, delete data, and create foreign\n>>>> keys back.\n>>>>\n>>>> regards, Sergei\n>>>>\n>>>\n\nHiI understand about having to deal with a bad design. How big is the table \"select pg_size_pretty(pg_table_size(table_name)).? If the table is not that large relative to the IOPS on your disk system, another solution is to add a binary column IS_DELETED to the table and modify the queries that hit the table to exclude rows where IS_DELETED=y. Also you need an index on this column. 
I did this with a user table that was a parent table to 120 data tables and users could not be dropped from the system.On Mon, Sep 3, 2018 at 7:19 AM Mariel Cherkassky <[email protected]> wrote:I'm not responsible for this design but I'm trying to improve it. Using partition isnt an option because partitions doesnt support foreign key. Moreover, most queries on all those tables uses the id col of the main table. בתאריך יום ב׳, 3 בספט׳ 2018 ב-14:09 מאת Carrie Berlin <[email protected]>:This is a terribley inflexible design, why so many foreign keys? If the table requires removing data, rebuild with partitions. Parent keys should be in reference tables, not in fact table. On Mon, Sep 3, 2018 at 04:51 Mariel Cherkassky <[email protected]> wrote:Cant drop foreign keys, there are too much.בתאריך יום ב׳, 3 בספט׳ 2018 ב-11:35 מאת Sergei Kornilov <[email protected]>:Hello\n\n> Delete on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=5121.344..5121.344 rows=0 loops=1)\n> -> Seq Scan on my_table (cost=0.00..65183.30 rows=1573862 width=6) (actual time=0.012..2244.393 rows=1572864 loops=1)\n> Filter: ((end_date <= to_date('12/12/2018'::text, 'DD/MM/YYYY'::text)) AND (end_date > to_date('11/12/2018'::text, 'DD/MM/YYYY'::text)))\n> Rows Removed by Filter: 40253\n> Planning time: 0.210 ms\n> Trigger for constraint table1: time=14730.816 calls=1572864\n> Trigger for constraint table2: time=30718.084 calls=1572864\n> Trigger for constraint table3: time=28170.363 calls=1572864\n> Trigger for constraint table4: time=29573.681 calls=1572864\n> Trigger for constraint table5: time=29629.263 calls=1572864\n> Trigger for constraint table6: time=29628.489 calls=1572864\n> Trigger for constraint table7: time=29798.121 calls=1572864\n> Trigger for constraint table8: time=29645.705 calls=1572864\n> Trigger for constraint table9: time=29657.177 calls=1572864\n> Trigger for constraint table10: time=29487.054 calls=1572864\n> Trigger for constraint table11: time=30010.978 calls=1572864\n> Trigger for constraint table12: time=26383.924 calls=1572864\n> Execution time: 350603.047 ms\n\nAs you can see in \"actual time\" - delete was run only 5 sec. All the other time postgresql checked foreign keys triggers. 0,02ms per row seems adequate for index lookup.\nIt may be better drop foreign keys, delete data, and create foreign keys back.\n\nregards, Sergei",
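A minimal sketch of the soft-delete approach Carrie describes, reusing the my_table/end_date names from earlier in this thread; the is_deleted column name and the index name are illustrative, and on releases before PostgreSQL 11 the ALTER TABLE below rewrites the whole table because of the non-null default:

    ALTER TABLE my_table ADD COLUMN is_deleted boolean NOT NULL DEFAULT false;

    -- partial index so the common "live rows only" predicate stays cheap
    CREATE INDEX my_table_live_idx ON my_table (end_date) WHERE NOT is_deleted;

    -- "delete" by flagging rows; no foreign-key triggers fire
    UPDATE my_table
    SET    is_deleted = true
    WHERE  end_date <= to_date('12/12/2018', 'DD/MM/YYYY')
    AND    end_date >  to_date('11/12/2018', 'DD/MM/YYYY');

    -- existing queries then add "AND NOT is_deleted" to their WHERE clauses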
"msg_date": "Mon, 3 Sep 2018 10:03:00 -0400",
"msg_from": "Carrie Berlin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
{
"msg_contents": ">\n> 4)delete in chunks :\n> do $$\n> declare\n> rec integer;\n> begin\n> select count(*) from my_table into rec where end_date <=\n> to_date('12/12/2018','DD/MM/YYYY') and end_date >\n> to_date('11/12/2018','DD/MM/YYYY');\n> while rec > 0 loop\n> DELETE FROM my_Table WHERE id IN (select id from my_tablewhere end_date <=\n> to_date('12/12/2018','DD/MM/YYYY') and end_date >\n> to_date('11/12/2018','DD/MM/YYYY') limit 5000);\n> rec := rec - 5000;\n> raise notice '5000 records were deleted, current rows :%',rec;\n> end loop;\n>\n> end;\n> $$\n> ;\n>\n> Execution time : 6 minutes.\n>\n> So, it seems that the second solution is the fastest one. It there a\n> reason why the delete chunks (solution 4) wasnt faster?\n>\n\nWhy would it be faster? The same amount of work needs to get done, no\nmatter how you slice it. Unless there is a specific reason to think it\nwould be faster, I would expect it won't be.\n\nIf you aren't willing to drop the constraints, then I think you just need\nto resign yourself to paying the price of checking those constraints. Maybe\nsome future version of PostgreSQL will be able to do them in parallel.\n\nCheers,\n\nJeff\n\n4)delete in chunks : do $$declare rec integer;beginselect count(*) from my_table into rec where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY');while rec > 0 loopDELETE FROM my_Table WHERE id IN (select id from my_tablewhere end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY') limit 5000);rec := rec - 5000;raise notice '5000 records were deleted, current rows :%',rec;end loop;end;$$;Execution time : 6 minutes.So, it seems that the second solution is the fastest one. It there a reason why the delete chunks (solution 4) wasnt faster?Why would it be faster? The same amount of work needs to get done, no matter how you slice it. Unless there is a specific reason to think it would be faster, I would expect it won't be.If you aren't willing to drop the constraints, then I think you just need to resign yourself to paying the price of checking those constraints. Maybe some future version of PostgreSQL will be able to do them in parallel.Cheers,Jeff",
"msg_date": "Mon, 3 Sep 2018 11:25:31 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to delete most of the table by range of date col"
},
{
"msg_contents": "Hi jefff,\nI tried every solution that I checked on net. I cant disable foreign keys\nor indexes.\n\nTrying to have better performance by just changing the query / changing\nparameters.\n\nבתאריך יום ב׳, 3 בספט׳ 2018 ב-18:25 מאת Jeff Janes <\[email protected]>:\n\n>\n>\n>\n>>\n>> 4)delete in chunks :\n>> do $$\n>> declare\n>> rec integer;\n>> begin\n>> select count(*) from my_table into rec where end_date <=\n>> to_date('12/12/2018','DD/MM/YYYY') and end_date >\n>> to_date('11/12/2018','DD/MM/YYYY');\n>> while rec > 0 loop\n>> DELETE FROM my_Table WHERE id IN (select id from my_tablewhere end_date\n>> <= to_date('12/12/2018','DD/MM/YYYY') and end_date >\n>> to_date('11/12/2018','DD/MM/YYYY') limit 5000);\n>> rec := rec - 5000;\n>> raise notice '5000 records were deleted, current rows :%',rec;\n>> end loop;\n>>\n>> end;\n>> $$\n>> ;\n>>\n>> Execution time : 6 minutes.\n>>\n>> So, it seems that the second solution is the fastest one. It there a\n>> reason why the delete chunks (solution 4) wasnt faster?\n>>\n>\n> Why would it be faster? The same amount of work needs to get done, no\n> matter how you slice it. Unless there is a specific reason to think it\n> would be faster, I would expect it won't be.\n>\n> If you aren't willing to drop the constraints, then I think you just need\n> to resign yourself to paying the price of checking those constraints. Maybe\n> some future version of PostgreSQL will be able to do them in parallel.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi jefff,I tried every solution that I checked on net. I cant disable foreign keys or indexes.Trying to have better performance by just changing the query / changing parameters.בתאריך יום ב׳, 3 בספט׳ 2018 ב-18:25 מאת Jeff Janes <[email protected]>:4)delete in chunks : do $$declare rec integer;beginselect count(*) from my_table into rec where end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY');while rec > 0 loopDELETE FROM my_Table WHERE id IN (select id from my_tablewhere end_date <= to_date('12/12/2018','DD/MM/YYYY') and end_date > to_date('11/12/2018','DD/MM/YYYY') limit 5000);rec := rec - 5000;raise notice '5000 records were deleted, current rows :%',rec;end loop;end;$$;Execution time : 6 minutes.So, it seems that the second solution is the fastest one. It there a reason why the delete chunks (solution 4) wasnt faster?Why would it be faster? The same amount of work needs to get done, no matter how you slice it. Unless there is a specific reason to think it would be faster, I would expect it won't be.If you aren't willing to drop the constraints, then I think you just need to resign yourself to paying the price of checking those constraints. Maybe some future version of PostgreSQL will be able to do them in parallel.Cheers,Jeff",
"msg_date": "Mon, 3 Sep 2018 19:27:09 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying to delete most of the table by range of date col"
}
] |
[
{
"msg_contents": "On windows, how to put an entry in my db startup script to run this query (pg_prewarm) immediately after startng the server, and let the query warm the cache itself.\nAfter starting the server, I want to know what is the server, and it is the database I restarted or windows system?\nThank you. \n\n\n>Hi,\n>On 17 Jan 2018 12:55, \"POUSSEL, Guillaume\" <guillaume(dot)poussel(at)sogeti(dot)com>\n>wrote:\n>Are you on Windows or Linux? I’m on Windows and wondering if the issue is\n>the same on Linux?\n>I have experienced this on Mac and Linux machines.\n>You can try pg_prewarm, on pg_statistic table and its index. But I'd\n>probably just put an entry in my db startup script to run this query\n>immediately after startng the server, and let the query warm the cache\n>itself.\n\n\n\n\n>I will try this suggestion and get back on the thread. Is pg_statistic the\n>only table to be pre cached? Pls let me know if any other table/index needs\n>to be pre warmed.\n>\n>\n>Btw, I don't running a \"select * from pg_statistic\" will fill the shared\n>buffer. Only 256 kb of data will be cached during sequential scans. I will\n>try pg_prewarm\n>\n>\n>Why do you restart your database often\n>\n>\n>Postgres is bundled with our application and deployed by our client.\n>Starting / stopping the server is not under my control.\n>\n>\n>Regards,\n>Nanda\nOn windows, how to put an entry in my db startup script to run this query (pg_prewarm) immediately after startng the server, and let the query warm the cache itself.After starting the server, I want to know what is the server, and it is the database I restarted or windows system?Thank you. >Hi,>On 17 Jan 2018 12:55, \"POUSSEL, Guillaume\" <guillaume(dot)poussel(at)sogeti(dot)com>>wrote:>Are you on Windows or Linux? I’m on Windows and wondering if the issue is>the same on Linux?>I have experienced this on Mac and Linux machines.>You can try pg_prewarm, on pg_statistic table and its index. But I'd>probably just put an entry in my db startup script to run this query>immediately after startng the server, and let the query warm the cache>itself.>I will try this suggestion and get back on the thread. Is pg_statistic the>only table to be pre cached? Pls let me know if any other table/index needs>to be pre warmed.>>>Btw, I don't running a \"select * from pg_statistic\" will fill the shared>buffer. Only 256 kb of data will be cached during sequential scans. I will>try pg_prewarm>>>Why do you restart your database often>>>Postgres is bundled with our application and deployed by our client.>Starting / stopping the server is not under my control.>>>Regards,>Nanda",
"msg_date": "Tue, 4 Sep 2018 15:16:10 +0800 (CST)",
"msg_from": "jimmy <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Query is slow when run for first time; subsequent execution is\n fast"
},
{
"msg_contents": "On Tue, Sep 4, 2018 at 3:16 AM jimmy <[email protected]> wrote:\n\n> On windows, how to put an entry in my db startup script to run this query\n> (pg_prewarm) immediately after startng the server, and let the query warm\n> the cache itself.\n>\n\nStarting with PostgreSQL version 11 (to be released soon), you can use\n pg_prewarm.autoprewarm.\n\nUntil then, maybe this:\nhttps://superuser.com/questions/502160/run-a-scheduled-task-after-a-windows-service-is-started\n\nI've tested neither one.\n\nCheers,\n\nJeff\n\nOn Tue, Sep 4, 2018 at 3:16 AM jimmy <[email protected]> wrote:On windows, how to put an entry in my db startup script to run this query (pg_prewarm) immediately after startng the server, and let the query warm the cache itself.Starting with PostgreSQL version 11 (to be released soon), you can use pg_prewarm.autoprewarm.Until then, maybe this: https://superuser.com/questions/502160/run-a-scheduled-task-after-a-windows-service-is-startedI've tested neither one.Cheers,Jeff",
"msg_date": "Tue, 4 Sep 2018 20:35:19 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query is slow when run for first time;\n subsequent execution is fast"
}
] |
[
{
"msg_contents": "Hello all,\r\n\r\nWe are running postgresql 9.4 and we have a table where we do some full-text searching using a GIN index on a tsvector column:\r\n\r\nCREATE TABLE public.location_search\r\n(\r\n id bigint NOT NULL DEFAULT nextval('location_search_id_seq'::regclass),\r\n <snip some columns>…\r\n search_field_tsvector tsvector\r\n)\r\n\r\nand\r\n\r\nCREATE INDEX location_search_tsvector_idx\r\n ON public.location_search USING gin\r\n (search_field_tsvector)\r\n TABLESPACE pg_default;\r\n\r\nThe search_field_tsvector column contains the data from the location's name and address:\r\n\r\nto_tsvector('pg_catalog.english', COALESCE(NEW.name, '')) || to_tsvector(COALESCE(address, ''))\r\n\r\nThis setup has been running very well, but as our load is getting heavier, the performance seems to be getting much more inconsistent. Our searches are run on a dedicated read replica, so this server is only doing queries against this one table. IO is very low, indicating to me that the data is all in memory. However, we're getting some queries taking upwards of 15-20 seconds, while the average is closer to 1 second.\r\n\r\nA sample query that's running slowly is\r\n\r\nexplain (analyze, buffers)\r\nSELECT ls.location AS locationId FROM location_search ls\r\nWHERE ls.client = 1363\r\nAND ls.favorite = TRUE\r\nAND search_field_tsvector @@ to_tsquery('CA-94:* &E &San:*')\r\nLIMIT 4;\r\n\r\nAnd the explain analyze is:\r\n\r\nLimit (cost=39865.85..39877.29 rows=1 width=8) (actual time=4471.120..4471.120 rows=0 loops=1)\r\n Buffers: shared hit=25613\r\n -> Bitmap Heap Scan on location_search ls (cost=39865.85..39877.29 rows=1 width=8) (actual time=4471.117..4471.117 rows=0 loops=1)\r\n Recheck Cond: (search_field_tsvector @@ to_tsquery('CA-94:* &E &San:*'::text))\r\n Filter: (favorite AND (client = 1363))\r\n Rows Removed by Filter: 74\r\n Heap Blocks: exact=84\r\n Buffers: shared hit=25613\r\n -> Bitmap Index Scan on location_search_tsvector_idx (cost=0.00..39865.85 rows=6 width=0) (actual time=4470.895..4470.895 rows=84 loops=1)\r\n Index Cond: (search_field_tsvector @@ to_tsquery('CA-94:* &E &San:*'::text))\r\n Buffers: shared hit=25529\r\nPlanning time: 0.335 ms\r\nExecution time: 4487.224 ms\r\n\r\nI'm a little bit at a loss to where to start at this - any suggestions would be hugely appreciated!\r\n\r\nThanks,\r\nScott\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n",
"msg_date": "Tue, 4 Sep 2018 18:09:10 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inconsistent query times and spiky CPU with GIN tsvector search"
},
{
"msg_contents": "Scott Rankin wrote:\n> We are running postgresql 9.4 and we have a table where we do some\n> full-text searching using a GIN index on a tsvector column:\n> \n> CREATE INDEX location_search_tsvector_idx\n> ON public.location_search USING gin\n> (search_field_tsvector)\n> TABLESPACE pg_default;\n> \n> This setup has been running very well, but as our load is getting heavier,\n> the performance seems to be getting much more inconsistent.\n> Our searches are run on a dedicated read replica, so this server is only\n> doing queries against this one table. IO is very low, indicating to me\n> that the data is all in memory. However, we're getting some queries taking\n> upwards of 15-20 seconds, while the average is closer to 1 second.\n> \n> A sample query that's running slowly is\n> \n> explain (analyze, buffers)\n> SELECT ls.location AS locationId FROM location_search ls\n> WHERE ls.client = 1363\n> AND ls.favorite = TRUE\n> AND search_field_tsvector @@ to_tsquery('CA-94:* &E &San:*')\n> LIMIT 4;\n> \n> And the explain analyze is:\n> \n> Limit (cost=39865.85..39877.29 rows=1 width=8) (actual time=4471.120..4471.120 rows=0 loops=1)\n> Buffers: shared hit=25613\n> -> Bitmap Heap Scan on location_search ls (cost=39865.85..39877.29 rows=1 width=8) (actual time=4471.117..4471.117 rows=0 loops=1)\n> Recheck Cond: (search_field_tsvector @@ to_tsquery('CA-94:* &E &San:*'::text))\n> Filter: (favorite AND (client = 1363))\n> Rows Removed by Filter: 74\n> Heap Blocks: exact=84\n> Buffers: shared hit=25613\n> -> Bitmap Index Scan on location_search_tsvector_idx (cost=0.00..39865.85 rows=6 width=0) (actual time=4470.895..4470.895 rows=84 loops=1)\n> Index Cond: (search_field_tsvector @@ to_tsquery('CA-94:* &E &San:*'::text))\n> Buffers: shared hit=25529\n> Planning time: 0.335 ms\n> Execution time: 4487.224 ms\n\nNot sure, but maybe you are suffering from bad performance because of a\nlong \"GIN pending list\".\n\nIf yes, then the following can help:\n\n ALTER INDEX location_search_tsvector_idx SET (gin_pending_list_limit = 512);\n\nOr you can disable the feature altogether:\n\n ALTER INDEX location_search_tsvector_idx SET (fastupdate = off);\n\nThen clean the pending list with\n\n SELECT gin_clean_pending_list('location_search_tsvector_idx'::regclass);\n\nDisabling the pending list will slow down data modification, but should\nkeep the SELECT performance stable.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 04 Sep 2018 21:15:19 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent query times and spiky CPU with GIN tsvector search"
}
] |
[
{
"msg_contents": "I have also asked this question on Stackoverflow and DBA stack exchange with no answer. It's a fairly long post, so I will post a link to it, as on Stackoverflow it is formatted nicely \nhttps://stackoverflow.com/questions/52212878/query-gets-very-slow-when-jsonb-operator-is-used\n\nAny idea why my query slows down so much when I add account.residence_details::jsonb ?& array['city', 'state', 'streetName'] ?\nI have also asked this question on Stackoverflow and DBA stack exchange with no answer. It's a fairly long post, so I will post a link to it, as on Stackoverflow it is formatted nicely \n\n\nhttps://stackoverflow.com/questions/52212878/query-gets-very-slow-when-jsonb-operator-is-used\n\n\n\nAny idea why my query slows down so much when I add account.residence_details::jsonb ?& array['city', 'state', 'streetName'] ?",
"msg_date": "Thu, 6 Sep 2018 23:51:46 +0000 (UTC)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "query gets very slow when :jsonb ?& operator is used"
},
{
"msg_contents": "On Thu, Sep 6, 2018 at 7:52 PM <[email protected]> wrote:\n\n> I have also asked this question on Stackoverflow and DBA stack exchange\n> with no answer. It's a fairly long post, so I will post a link to it, as on\n> Stackoverflow it is formatted nicely\n>\n>\n> https://stackoverflow.com/questions/52212878/query-gets-very-slow-when-jsonb-operator-is-used\n>\n> Any idea why my query slows down so much when I add account.residence_details::jsonb\n> ?& array['city', 'state', 'streetName'] ?\n>\n\nThe planner has no insight into what fraction of rows will satisfy the ?&\ncondition, and falls back on the assumption that very few will. This is\n(apparently) a very bad assumption, and causes it choose a bad plan.\n\nRewriting the `phone_number.account_id IN (subquery)` into an exists query\nmight help.\n\nCheers,\n\nJeff\n\nOn Thu, Sep 6, 2018 at 7:52 PM <[email protected]> wrote:I have also asked this question on Stackoverflow and DBA stack exchange with no answer. It's a fairly long post, so I will post a link to it, as on Stackoverflow it is formatted nicely \n\n\nhttps://stackoverflow.com/questions/52212878/query-gets-very-slow-when-jsonb-operator-is-used\n\n\n\nAny idea why my query slows down so much when I add account.residence_details::jsonb ?& array['city', 'state', 'streetName'] ?The planner has no insight into what fraction of rows will satisfy the ?& condition, and falls back on the assumption that very few will. This is (apparently) a very bad assumption, and causes it choose a bad plan. Rewriting the `phone_number.account_id IN (subquery)` into an exists query might help.Cheers,Jeff",
"msg_date": "Thu, 6 Sep 2018 21:39:00 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query gets very slow when :jsonb ?& operator is used"
}
] |
[
{
"msg_contents": "Hi folks,\n\nI've been seeing some curious behaviour on a postgres server I administer.\n\nIntermittently (one or two times a week), all queries on that host are\nsimultaneously blocked for extended periods (10s of seconds).\n\nThe blocked queries are trivial & not related to locking - I'm seeing\nslowlogs of the form:\n\n`LOG: duration: 22627.299 ms statement: SET client_encoding='''utf-8''';`\n\nwhere this is the first statement on a fresh connection.\n\nIt happens even for connections from the same host - so it doesn't appear\nto be e.g. network slowness, if that is even counted in query duration.\n\nDoes anyone have any hints for where to look for a cause?\n\nThanks,\n\nPatrick\n\n-----\n\nSet up information:\n\nPostgres version: PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc\n(Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\nFull Table and Index Schema: not applicable, I think\n\nEXPLAIN ANALYSE: n/a\n\nHardware: AWS i3.2xlarge\n\nMaintenance Setup: autovacuum yes, but the times it runs don't correlate to\nthe incidences of slow queries\n\nWAL settings: shipped to S3 with wal-e, stored on same disk for interim\nperiod\n\nHi folks,I've been seeing some curious behaviour on a postgres server I administer.Intermittently (one or two times a week), all queries on that host are simultaneously blocked for extended periods (10s of seconds).The blocked queries are trivial & not related to locking - I'm seeing slowlogs of the form:`LOG: duration: 22627.299 ms statement: SET client_encoding='''utf-8''';` where this is the first statement on a fresh connection.It happens even for connections from the same host - so it doesn't appear to be e.g. network slowness, if that is even counted in query duration.Does anyone have any hints for where to look for a cause?Thanks,Patrick-----Set up information:Postgres version: PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bitFull Table and Index Schema: not applicable, I thinkEXPLAIN ANALYSE: n/aHardware: AWS i3.2xlargeMaintenance Setup: autovacuum yes, but the times it runs don't correlate to the incidences of slow queriesWAL settings: shipped to S3 with wal-e, stored on same disk for interim period",
"msg_date": "Fri, 7 Sep 2018 12:56:12 +0100",
"msg_from": "Patrick Molgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "On Fri, Sep 7, 2018 at 8:00 AM Patrick Molgaard <[email protected]> wrote:\n\n> Hi folks,\n>\n> I've been seeing some curious behaviour on a postgres server I administer.\n>\n> Intermittently (one or two times a week), all queries on that host are\n> simultaneously blocked for extended periods (10s of seconds).\n>\n> The blocked queries are trivial & not related to locking - I'm seeing\n> slowlogs of the form:\n>\n> `LOG: duration: 22627.299 ms statement: SET client_encoding='''utf-8''';`\n>\n>\nDo you have log_lock_waits set to on? If not, you might want to turn it on.\n\nCheers,\n\nJeff\n\nOn Fri, Sep 7, 2018 at 8:00 AM Patrick Molgaard <[email protected]> wrote:Hi folks,I've been seeing some curious behaviour on a postgres server I administer.Intermittently (one or two times a week), all queries on that host are simultaneously blocked for extended periods (10s of seconds).The blocked queries are trivial & not related to locking - I'm seeing slowlogs of the form:`LOG: duration: 22627.299 ms statement: SET client_encoding='''utf-8''';` Do you have log_lock_waits set to on? If not, you might want to turn it on.Cheers,Jeff",
"msg_date": "Fri, 7 Sep 2018 10:32:00 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "Hi Jeff,\n\nThanks for your reply. Are locks relevant in this case, though?\n\nTo be clear, the slow statements are the first thing happening on the\nconnection and don't look like they should be acquiring any kind of lock -\neg. 'select version();' also seems to be paused when it occurs.\n\nOr are there some system level locks that a trivial query, touching no\nrelations, might be contending for?\n\nBest\nPatrick\n\nOn Fri, 7 Sep 2018, 15:32 Jeff Janes, <[email protected]> wrote:\n\n> On Fri, Sep 7, 2018 at 8:00 AM Patrick Molgaard <[email protected]>\n> wrote:\n>\n>> Hi folks,\n>>\n>> I've been seeing some curious behaviour on a postgres server I administer.\n>>\n>> Intermittently (one or two times a week), all queries on that host are\n>> simultaneously blocked for extended periods (10s of seconds).\n>>\n>> The blocked queries are trivial & not related to locking - I'm seeing\n>> slowlogs of the form:\n>>\n>> `LOG: duration: 22627.299 ms statement: SET client_encoding='''utf-8''';`\n>>\n>>\n> Do you have log_lock_waits set to on? If not, you might want to turn it\n> on.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi Jeff,Thanks for your reply. Are locks relevant in this case, though? To be clear, the slow statements are the first thing happening on the connection and don't look like they should be acquiring any kind of lock - eg. 'select version();' also seems to be paused when it occurs.Or are there some system level locks that a trivial query, touching no relations, might be contending for?BestPatrickOn Fri, 7 Sep 2018, 15:32 Jeff Janes, <[email protected]> wrote:On Fri, Sep 7, 2018 at 8:00 AM Patrick Molgaard <[email protected]> wrote:Hi folks,I've been seeing some curious behaviour on a postgres server I administer.Intermittently (one or two times a week), all queries on that host are simultaneously blocked for extended periods (10s of seconds).The blocked queries are trivial & not related to locking - I'm seeing slowlogs of the form:`LOG: duration: 22627.299 ms statement: SET client_encoding='''utf-8''';` Do you have log_lock_waits set to on? If not, you might want to turn it on.Cheers,Jeff",
"msg_date": "Fri, 7 Sep 2018 19:03:17 +0100",
"msg_from": "Patrick Molgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "\n>\n>Intermittently (one or two times a week), all queries on that host are\n>simultaneously blocked for extended periods (10s of seconds).\n>\n>The blocked queries are trivial & not related to locking - I'm seeing\n>slowlogs of the form:\n>\n\n\nplease check if THP are enabled.\n\n\nRegards, Andreas\n\n\n\n-- \n2ndQuadrant - The PostgreSQL Support Company\n\n",
"msg_date": "Fri, 07 Sep 2018 20:12:18 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "On 09/07/2018 11:12 AM, Andreas Kretschmer wrote:\n>> Intermittently (one or two times a week), all queries on that host are\n>> simultaneously blocked for extended periods (10s of seconds).\n>>\n>> The blocked queries are trivial & not related to locking - I'm seeing\n>> slowlogs of the form:\n>>\n>\n> please check if THP are enabled.\n\nJust to help out those who don't know I believe that Andreas is \nreferring to Transparent Huge Pages.\n\nJD\n\n>\n>\n> Regards, Andreas\n>\n>\n>\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\n*** A fault and talent of mine is to tell it exactly how it is. ***\nPostgreSQL centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Learn: https://postgresconf.org\n***** Unless otherwise stated, opinions are my own. *****\n\n\n",
"msg_date": "Fri, 7 Sep 2018 11:22:52 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "On Fri, Sep 7, 2018 at 2:03 PM Patrick Molgaard <[email protected]> wrote:\n\n>\n> Hi Jeff,\n>\n> Thanks for your reply. Are locks relevant in this case, though?\n>\n\nI don't know, but why theorize when we can know for sure? It at least\ninvokes VirtualXactLockTableInsert. I don't see how that could block on a\nheavyweight lock, though. But again, why theorize when logging it is simple?\n\nIs it always the first statement in a connection which is blocking, or will\nestablished connections also block at the same time the new ones start to\nblock?\n\nCheers,\n\nJeff\n\n>\n\nOn Fri, Sep 7, 2018 at 2:03 PM Patrick Molgaard <[email protected]> wrote:Hi Jeff,Thanks for your reply. Are locks relevant in this case, though? I don't know, but why theorize when we can know for sure? It at least invokes VirtualXactLockTableInsert. I don't see how that could block on a heavyweight lock, though. But again, why theorize when logging it is simple?Is it always the first statement in a connection which is blocking, or will established connections also block at the same time the new ones start to block? Cheers,Jeff",
"msg_date": "Fri, 7 Sep 2018 15:20:04 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "Oh, to be clear - I'll be implementing your suggestion regardless, it seems\nvaluable whether or not it gets me closer to the root cause this time :)\n\nI was just trying to dig into why it may be relevant -- I want to really\nget a good grip on the mechanism behind this phenomenon.\n\nCheers\nPatrick\nOn Fri, 7 Sep 2018, 20:20 Jeff Janes, <[email protected]> wrote:\n\n> On Fri, Sep 7, 2018 at 2:03 PM Patrick Molgaard <[email protected]>\n> wrote:\n>\n>>\n>> Hi Jeff,\n>>\n>> Thanks for your reply. Are locks relevant in this case, though?\n>>\n>\n> I don't know, but why theorize when we can know for sure? It at least\n> invokes VirtualXactLockTableInsert. I don't see how that could block on a\n> heavyweight lock, though. But again, why theorize when logging it is simple?\n>\n> Is it always the first statement in a connection which is blocking, or\n> will established connections also block at the same time the new ones start\n> to block?\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\nOh, to be clear - I'll be implementing your suggestion regardless, it seems valuable whether or not it gets me closer to the root cause this time :)I was just trying to dig into why it may be relevant -- I want to really get a good grip on the mechanism behind this phenomenon.CheersPatrickOn Fri, 7 Sep 2018, 20:20 Jeff Janes, <[email protected]> wrote:On Fri, Sep 7, 2018 at 2:03 PM Patrick Molgaard <[email protected]> wrote:Hi Jeff,Thanks for your reply. Are locks relevant in this case, though? I don't know, but why theorize when we can know for sure? It at least invokes VirtualXactLockTableInsert. I don't see how that could block on a heavyweight lock, though. But again, why theorize when logging it is simple?Is it always the first statement in a connection which is blocking, or will established connections also block at the same time the new ones start to block? Cheers,Jeff",
"msg_date": "Sat, 8 Sep 2018 01:32:36 +0100",
"msg_from": "Patrick Molgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "This sounds extremely plausible -- thanks for the tip, Andreas.\n\nBest,\nPatrick\n\nOn Fri, 7 Sep 2018, 19:20 Andreas Kretschmer, <[email protected]>\nwrote:\n\n>\n> >\n> >Intermittently (one or two times a week), all queries on that host are\n> >simultaneously blocked for extended periods (10s of seconds).\n> >\n> >The blocked queries are trivial & not related to locking - I'm seeing\n> >slowlogs of the form:\n> >\n>\n>\n> please check if THP are enabled.\n>\n>\n> Regards, Andreas\n>\n>\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company\n>\n>\n\nThis sounds extremely plausible -- thanks for the tip, Andreas.Best,PatrickOn Fri, 7 Sep 2018, 19:20 Andreas Kretschmer, <[email protected]> wrote:\n>\n>Intermittently (one or two times a week), all queries on that host are\n>simultaneously blocked for extended periods (10s of seconds).\n>\n>The blocked queries are trivial & not related to locking - I'm seeing\n>slowlogs of the form:\n>\n\n\nplease check if THP are enabled.\n\n\nRegards, Andreas\n\n\n\n-- \n2ndQuadrant - The PostgreSQL Support Company",
"msg_date": "Sat, 8 Sep 2018 01:34:04 +0100",
"msg_from": "Patrick Molgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "Andreas -- just following up to say that this was indeed the root cause.\nThanks again.\n\nPatrick\n\nOn Sat, 8 Sep 2018, 01:34 Patrick Molgaard, <[email protected]> wrote:\n\n> This sounds extremely plausible -- thanks for the tip, Andreas.\n>\n> Best,\n> Patrick\n>\n>\n> On Fri, 7 Sep 2018, 19:20 Andreas Kretschmer, <[email protected]>\n> wrote:\n>\n>>\n>> >\n>> >Intermittently (one or two times a week), all queries on that host are\n>> >simultaneously blocked for extended periods (10s of seconds).\n>> >\n>> >The blocked queries are trivial & not related to locking - I'm seeing\n>> >slowlogs of the form:\n>> >\n>>\n>>\n>> please check if THP are enabled.\n>>\n>>\n>> Regards, Andreas\n>>\n>>\n>>\n>> --\n>> 2ndQuadrant - The PostgreSQL Support Company\n>>\n>>\n\nAndreas -- just following up to say that this was indeed the root cause. Thanks again.PatrickOn Sat, 8 Sep 2018, 01:34 Patrick Molgaard, <[email protected]> wrote:This sounds extremely plausible -- thanks for the tip, Andreas.Best,PatrickOn Fri, 7 Sep 2018, 19:20 Andreas Kretschmer, <[email protected]> wrote:\n>\n>Intermittently (one or two times a week), all queries on that host are\n>simultaneously blocked for extended periods (10s of seconds).\n>\n>The blocked queries are trivial & not related to locking - I'm seeing\n>slowlogs of the form:\n>\n\n\nplease check if THP are enabled.\n\n\nRegards, Andreas\n\n\n\n-- \n2ndQuadrant - The PostgreSQL Support Company",
"msg_date": "Fri, 21 Sep 2018 20:07:07 +0100",
"msg_from": "Patrick Molgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
},
{
"msg_contents": "\n\nAm 21.09.2018 um 21:07 schrieb Patrick Molgaard:\n> Andreas -- just following up to say that this was indeed the root \n> cause. Thanks again.\n>\n\nglad i could help you.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n",
"msg_date": "Sat, 22 Sep 2018 12:58:55 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multi-second pauses blocking even trivial activity"
}
] |
[
{
"msg_contents": "Hi everybody,\n\nI ran into an issue with using the && array operator on a GIN index of mine. Basically I have a query that looks like this:\n\nSELECT * FROM example WHERE keys && ARRAY[...];\n\nThis works fine for a small number of array elements (N), but gets really slow as N gets bigger in what appears to be O(N^2) complexity.\n\nHowever, from studying the GIN data structure as described by the docs, it seems that the performance for this could be O(N). In fact, it's possible to coerce the query planner into an O(N) plan like this:\n\nSELECT DISTINCT ON (example.id) * FROM unnest(ARRAY[...]) key JOIN example ON keys && ARRAY[key]\n\nIn order to illustrate this better, I've created a jupyter notebook that populates an example table, show the query plans for both queries, and most importantly benchmarks them and plots a time vs array size (N) graph.\n\nhttps://github.com/felixge/pg-slow-gin/blob/master/pg-slow-gin.ipynb <https://github.com/felixge/pg-slow-gin/blob/master/pg-slow-gin.ipynb>\n\nPlease help me understand what causes the O(N^2) performance for query 1 and if query 2 is the best way to work around this issue.\n\nThanks\nFelix Geisendörfer\n\nPS: I'm using Postgres 10, but also verified that this problem exists with Postgres 11.\nHi everybody,I ran into an issue with using the && array operator on a GIN index of mine. Basically I have a query that looks like this:SELECT * FROM example WHERE keys && ARRAY[...];This works fine for a small number of array elements (N), but gets really slow as N gets bigger in what appears to be O(N^2) complexity.However, from studying the GIN data structure as described by the docs, it seems that the performance for this could be O(N). In fact, it's possible to coerce the query planner into an O(N) plan like this:SELECT DISTINCT ON (example.id) * FROM unnest(ARRAY[...]) key JOIN example ON keys && ARRAY[key]In order to illustrate this better, I've created a jupyter notebook that populates an example table, show the query plans for both queries, and most importantly benchmarks them and plots a time vs array size (N) graph.https://github.com/felixge/pg-slow-gin/blob/master/pg-slow-gin.ipynbPlease help me understand what causes the O(N^2) performance for query 1 and if query 2 is the best way to work around this issue.ThanksFelix GeisendörferPS: I'm using Postgres 10, but also verified that this problem exists with Postgres 11.",
"msg_date": "Fri, 7 Sep 2018 17:56:30 +0200",
"msg_from": "=?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIN Index has O(N^2) complexity for array overlap operator?"
}
] |
[
{
"msg_contents": "I have the following tables:\n- m(pk bigserial primary key, status text): with a single row\n- s(pk bigserial primary key, status text, action_at date, m_fk bigint):\n * 80% of the data has action_at between the current date and 1 year ago\n and status of E or C\n * 20% of the data has action_at between 5 days ago and 25 days into the\n future and status of P, PD, or A\n\nI have two partial indexes:\n- s_pk_action_at on s(pk, action_at) where status in ('P', 'PD', 'A')\n- s_action_at_pk on s(action_at, pk) where status in ('P', 'PD', 'A')\n\nWith the query:\nSELECT s.pk FROM s\nINNER JOIN m ON m.pk = s.m_fk\nWHERE\n s.status IN ('A', 'PD', 'P')\n AND (action_at <= '2018-09-06')\n AND s.status IN ('A', 'P')\n AND m.status = 'A';\n\nI generally expect the index s_action_at_pk to always be preferred over\ns_pk_action_at. And on stock Postgres it does in fact use that index (with\na bitmap index scan).\n\nWe like to set random_page_cost = 2 since we use fast SSDs only. With that\nchange Postgres strongly prefers the index s_pk_action_at unless I both\ndisable the other index and turn off bitmap heap scans.\n\nI'm attaching the following plans:\n- base_plan.txt: default costs; both indexes available\n- base_plan_rpc2.txt: random_page_cost = 2; both indexes available\n- inddisabled_plan_rpc2.txt: random_page_cost = 2; only s_action_at_pk\navailable\n- inddisabled_bhsoff_plan_rpc2.txt: random_page_cost = 2; enable_bitmapscan\n= false; only s_action_at_pk available\n\nA couple of questions:\n- How is s_pk_action_at ever efficient to scan? Given that the highest\ncardinality (primary key) column is first, wouldn't an index scan\neffectively have to scan the entire index?\n- Why does index scan on s_action_at_pk reads over 2x as many blocks as the\nbitmap heap scan with the same index?\n- Would you expect Postgres to generally always prefer using the\ns_action_at_pk index over the s_pk_action_at index for this query? I\nrealize changing the random page cost is part of what's driving this, but I\nstill can't imagine reading the full s_pk_action_at index (assuming that's\nwhat it is doing) could ever be more valuable.\n\nAs a side note, the planner is very bad at understanding a query that\nhappens (I realize you wouldn't write this by hand, but ORMs) when you have\na where clause like:\n s.status IN ('A', 'PD', 'P') AND s.status IN ('A', 'P')\nthe row estimates are significantly different from a where clause with only:\n s.status IN ('A', 'P')\neven though semantically those are identical.",
"msg_date": "Fri, 7 Sep 2018 09:17:25 -0700",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partial index plan/cardinality costing"
},
{
"msg_contents": "Bump, and curious if anyone on hackers has any ideas here: of particular\ninterest is why the (pk, created_at) index can possibly be more valuable\nthan the (created_at, pk) variant since the former effectively implies\nhaving to scan the entire index.\nOn Fri, Sep 7, 2018 at 12:17 PM James Coleman <[email protected]> wrote:\n\n> I have the following tables:\n> - m(pk bigserial primary key, status text): with a single row\n> - s(pk bigserial primary key, status text, action_at date, m_fk bigint):\n> * 80% of the data has action_at between the current date and 1 year ago\n> and status of E or C\n> * 20% of the data has action_at between 5 days ago and 25 days into the\n> future and status of P, PD, or A\n>\n> I have two partial indexes:\n> - s_pk_action_at on s(pk, action_at) where status in ('P', 'PD', 'A')\n> - s_action_at_pk on s(action_at, pk) where status in ('P', 'PD', 'A')\n>\n> With the query:\n> SELECT s.pk FROM s\n> INNER JOIN m ON m.pk = s.m_fk\n> WHERE\n> s.status IN ('A', 'PD', 'P')\n> AND (action_at <= '2018-09-06')\n> AND s.status IN ('A', 'P')\n> AND m.status = 'A';\n>\n> I generally expect the index s_action_at_pk to always be preferred over\n> s_pk_action_at. And on stock Postgres it does in fact use that index (with\n> a bitmap index scan).\n>\n> We like to set random_page_cost = 2 since we use fast SSDs only. With that\n> change Postgres strongly prefers the index s_pk_action_at unless I both\n> disable the other index and turn off bitmap heap scans.\n>\n> I'm attaching the following plans:\n> - base_plan.txt: default costs; both indexes available\n> - base_plan_rpc2.txt: random_page_cost = 2; both indexes available\n> - inddisabled_plan_rpc2.txt: random_page_cost = 2; only s_action_at_pk\n> available\n> - inddisabled_bhsoff_plan_rpc2.txt: random_page_cost = 2;\n> enable_bitmapscan = false; only s_action_at_pk available\n>\n> A couple of questions:\n> - How is s_pk_action_at ever efficient to scan? Given that the highest\n> cardinality (primary key) column is first, wouldn't an index scan\n> effectively have to scan the entire index?\n> - Why does index scan on s_action_at_pk reads over 2x as many blocks as\n> the bitmap heap scan with the same index?\n> - Would you expect Postgres to generally always prefer using the\n> s_action_at_pk index over the s_pk_action_at index for this query? 
I\n> realize changing the random page cost is part of what's driving this, but I\n> still can't imagine reading the full s_pk_action_at index (assuming that's\n> what it is doing) could ever be more valuable.\n>\n> As a side note, the planner is very bad at understanding a query that\n> happens (I realize you wouldn't write this by hand, but ORMs) when you have\n> a where clause like:\n> s.status IN ('A', 'PD', 'P') AND s.status IN ('A', 'P')\n> the row estimates are significantly different from a where clause with\n> only:\n> s.status IN ('A', 'P')\n> even though semantically those are identical.\n>\n>\n>\n\nBump, and curious if anyone on hackers has any ideas here: of particular interest is why the (pk, created_at) index can possibly be more valuable than the (created_at, pk) variant since the former effectively implies having to scan the entire index.On Fri, Sep 7, 2018 at 12:17 PM James Coleman <[email protected]> wrote:I have the following tables:- m(pk bigserial primary key, status text): with a single row- s(pk bigserial primary key, status text, action_at date, m_fk bigint): * 80% of the data has action_at between the current date and 1 year ago and status of E or C * 20% of the data has action_at between 5 days ago and 25 days into the future and status of P, PD, or AI have two partial indexes:- s_pk_action_at on s(pk, action_at) where status in ('P', 'PD', 'A')- s_action_at_pk on s(action_at, pk) where status in ('P', 'PD', 'A')With the query:SELECT s.pk FROM sINNER JOIN m ON m.pk = s.m_fkWHERE s.status IN ('A', 'PD', 'P') AND (action_at <= '2018-09-06') AND s.status IN ('A', 'P') AND m.status = 'A';I generally expect the index s_action_at_pk to always be preferred over s_pk_action_at. And on stock Postgres it does in fact use that index (with a bitmap index scan).We like to set random_page_cost = 2 since we use fast SSDs only. With that change Postgres strongly prefers the index s_pk_action_at unless I both disable the other index and turn off bitmap heap scans.I'm attaching the following plans:- base_plan.txt: default costs; both indexes available- base_plan_rpc2.txt: random_page_cost = 2; both indexes available- inddisabled_plan_rpc2.txt: random_page_cost = 2; only s_action_at_pk available- inddisabled_bhsoff_plan_rpc2.txt: random_page_cost = 2; enable_bitmapscan = false; only s_action_at_pk availableA couple of questions:- How is s_pk_action_at ever efficient to scan? Given that the highest cardinality (primary key) column is first, wouldn't an index scan effectively have to scan the entire index?- Why does index scan on s_action_at_pk reads over 2x as many blocks as the bitmap heap scan with the same index?- Would you expect Postgres to generally always prefer using the s_action_at_pk index over the s_pk_action_at index for this query? I realize changing the random page cost is part of what's driving this, but I still can't imagine reading the full s_pk_action_at index (assuming that's what it is doing) could ever be more valuable.As a side note, the planner is very bad at understanding a query that happens (I realize you wouldn't write this by hand, but ORMs) when you have a where clause like: s.status IN ('A', 'PD', 'P') AND s.status IN ('A', 'P')the row estimates are significantly different from a where clause with only: s.status IN ('A', 'P')even though semantically those are identical.",
"msg_date": "Mon, 8 Oct 2018 18:05:19 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partial index plan/cardinality costing"
},
{
"msg_contents": "Please don't cross-post to lists.\n\n>insert into s(status, action_at, m_fk)\n>select\n> ( CASE WHEN series.n % 100 < 80 THEN\n> (ARRAY['E', 'C'])[(series.n % 2) + 1]\n> ELSE\n> (ARRAY['P', 'PD', 'A'])[((random() * 3)::integer % 3) + 1]\n> END\n> ),\n> (\n> CASE WHEN series.n % 100 < 80 THEN\n> '2018-09-07'::date + ((series.n % 365 - 365)::text || ' day')::interval\n> ELSE\n> '2018-09-07'::date + (((random() * 30)::integer % 30 - 4)::text || ' day')::interval\n> END\n> ),\n> (select m.pk from m limit 1)\n>from generate_series(1, 500000) series(n);\n\n> I have two partial indexes:\n> - s_pk_action_at on s(pk, action_at) where status in ('P', 'PD', 'A')\n> - s_action_at_pk on s(action_at, pk) where status in ('P', 'PD', 'A')\n\n> - How is s_pk_action_at ever efficient to scan? Given that the highest\n> cardinality (primary key) column is first, wouldn't an index scan\n> effectively have to scan the entire index?\n\nThe index probably IS inefficient to scan (you could see that if you force an\nbitmap index scan on s_pk_action_at)...but because of leading pkey column, the\nHEAP is read sequentially, and the planner knows that the heap will be read in\norder of its leading column. Reading the entire index is less expensive than\nreading most of the table (maybe nonsequentially). This is the 2nd effect Jeff\nJanes likes to point out: high correlation means 1) sequential reads; *and*, 2)\na smaller fraction of the table needs to be accessed to read a given number of\ntuples.\n\n> - Why does index scan on s_action_at_pk reads over 2x as many blocks as the\n> bitmap heap scan with the same index?\n\nMaybe because of heap pages accessed multiple times (not sequentially), since\ncorrelation is small on this table loaded with \"modulus\"-style insertions.\n\npryzbyj=# SELECT attname, correlation FROM pg_stats WHERE tablename='s' ;\n attname | correlation \n-----------+-------------\n pk | 1\n status | 0.340651\n action_at | 0.00224239\n m_fk | 1\n\n..so each index tuple is accessing a separate heap page.\n\nIf you create non-partial index and CLUSTER on action_at_idx, then:\n\npryzbyj=# SELECT attname, correlation FROM pg_stats WHERE tablename='s' ;\n attname | correlation\n-----------+-------------\n pk | 0.00354867\n status | 0.420806 action_at | 1\n m_fk | 1\n\n Nested Loop (cost=1907.03..6780.65 rows=11038 width=8) (actual time=2.241..17.839 rows=8922 loops=1)\n Join Filter: (s.m_fk = m.pk)\n Buffers: shared hit=115 read=53\n -> Seq Scan on m (cost=0.00..1.01 rows=1 width=8) (actual time=0.009..0.011 rows=1 loops=1)\n Filter: (status = 'A'::text)\n Buffers: shared hit=1\n -> Bitmap Heap Scan on s (cost=1907.03..6641.66 rows=11038 width=16) (actual time=2.222..9.032 rows=8922 loops=1)\n Recheck Cond: ((action_at <= '2018-09-06'::date) AND (status = ANY ('{P,PD,A}'::text[])))\n Filter: (status = ANY ('{A,P}'::text[]))\n Rows Removed by Filter: 4313\n Heap Blocks: exact=114\n Buffers: shared hit=114 read=53\n -> Bitmap Index Scan on s_action_at_pk (cost=0.00..1904.27 rows=82647 width=0) (actual time=2.185..2.186 rows=13235 loops=1)\n Index Cond: (action_at <= '2018-09-06'::date)\n Buffers: shared read=53\n\nAlso, I don't think it matters here, but action_at and status are correlated.\nPlanner would think that they're independent.\n\nI don't think it's related to other issues, but also note the rowcount estimate is off:\n -> Bitmap Index Scan on s_action_at_pk (cost=0.00..1258.02 rows=82347 width=0) (actual time=1.026..1.026 rows=13402 loops=1) \n Index Cond: (action_at <= 
'2018-09-06'::date) \n\nJustin\n\n",
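A sketch of the clustering step Justin describes (a plain, non-partial index on action_at followed by CLUSTER), which is what raises the action_at correlation to 1 in his second pg_stats output; the index name is illustrative:

    CREATE INDEX s_action_at_idx ON s (action_at);
    CLUSTER s USING s_action_at_idx;  -- rewrites the table in index order; takes an exclusive lock
    ANALYZE s;                        -- refresh the correlation statistics afterwards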
"msg_date": "Mon, 8 Oct 2018 19:49:44 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partial index plan/cardinality costing"
}
] |
[
{
"msg_contents": "I am working on adding support for PostgreSQL database for our application.\nIn a lot of our use-cases, data is inserted into temporary tables using\nINSERT INTO statements with bind parameters, and subsequently queries are\nrun by joining to these temp tables. Following is some of the data for these\nINSERT statements:\n\nTable definition: CREATE TEMPORARY TABLE Table1( auid varchar(15) ) ON\nCOMMIT DELETE ROWS;\n\nSQL statement: INSERT INTO Table1 (uidcol) VALUES (:1);\n\nTime taken to insert 24428 rows: 10.077 sec\nTime taken to insert 32512 rows: 16.026 sec\nTime taken to insert 32512 rows: 15.821 sec\nTime taken to insert 6107 rows: 1.514 sec\n\nI am looking for suggestions to improve the performance of these INSERT\nstatements into temporary tables. Database is located on a Linux VM and the\nversion is \"PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC)\n4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit\". The application is running on a\nwindows platform and connecting to the database using psqlODBC driver\nversion 10.03.\n\nPlease let me know if any additional information is needed.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Fri, 7 Sep 2018 10:04:02 -0700 (MST)",
"msg_from": "padusuma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of INSERT into temporary tables using psqlODBC driver"
},
{
"msg_contents": "\npadusuma <[email protected]> writes:\n\n> I am working on adding support for PostgreSQL database for our application.\n> In a lot of our use-cases, data is inserted into temporary tables using\n> INSERT INTO statements with bind parameters, and subsequently queries are\n> run by joining to these temp tables. Following is some of the data for these\n> INSERT statements:\n>\n> Table definition: CREATE TEMPORARY TABLE Table1( auid varchar(15) ) ON\n> COMMIT DELETE ROWS;\n>\n> SQL statement: INSERT INTO Table1 (uidcol) VALUES (:1);\n>\n> Time taken to insert 24428 rows: 10.077 sec\n> Time taken to insert 32512 rows: 16.026 sec\n> Time taken to insert 32512 rows: 15.821 sec\n> Time taken to insert 6107 rows: 1.514 sec\n>\n> I am looking for suggestions to improve the performance of these INSERT\n> statements into temporary tables. Database is located on a Linux VM and the\n> version is \"PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC)\n> 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit\". The application is running on a\n> windows platform and connecting to the database using psqlODBC driver\n> version 10.03.\n>\n\nWe are inserting large numbers (millions) of rows into a postgres\ndatabase from a Javascript application and found using the COPY command\nwas much, much faster than doing regular inserts (even with multi-insert\ncommit). If you can do this using the driver you are using, that will\ngive you the largest performance boost. \n\n\n-- \nTim Cross\n\n",
"msg_date": "Sat, 08 Sep 2018 10:05:10 +1000",
"msg_from": "Tim Cross <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC driver"
},
{
"msg_contents": ">We are inserting large numbers (millions) of rows into a postgres\n>database from a Javascript application and found using the COPY command\n>was much, much faster than doing regular inserts (even with multi-insert\n>commit). If you can do this using the driver you are using, that will\n>give you the largest performance boost. \n\nThe data to be inserted into temporary tables is obtained from one or more\nqueries run earlier and the data is available as a vector of strings. If I\nneed to use COPY FROM command, then the application would need to create a\nfile with the data to be inserted and the file needs to be readable by the\nuser running database server process, which may not be always possible\nunless the application is running on the same host. I think this approach\nmay not be feasible for our application.\n\nI have increased the value for /temp_buffers/ server parameter from the\ndefault 8 MB to 128 MB. However, this change did not affect the INSERT time\nfor temporary tables.\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Fri, 7 Sep 2018 22:41:01 -0700 (MST)",
"msg_from": "padusuma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC\n driver"
},
{
"msg_contents": "Hello\n\n> The data to be inserted into temporary tables is obtained from one or more\n> queries run earlier and the data is available as a vector of strings.\nYou can not use \"insert into temp_table select /*anything you wish*/\" statement?\nOr even insert .. select ... returning if you need receive data to application?\n\n> If I need to use COPY FROM command, then the application would need to create a\n> file with the data to be inserted\nYou can not using \"copy from stdin\" statement?\n\nregards, Sergei\n\n",
"msg_date": "Sun, 09 Sep 2018 10:45:27 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC driver"
},
{
"msg_contents": "Hello Sergei,\n>> The data to be inserted into temporary tables is obtained from one or\n>> more \n>> queries run earlier and the data is available as a vector of strings. \n>You can not use \"insert into temp_table select /*anything you wish*/\"\nstatement? \n>Or even insert .. select ... returning if you need receive data to\napplication? \nUnfortunately, the existing functionality in our application is in such a\nmanner that the data returned from one or more SELECT queries is processed\nby server business logic and filtered, and the filtered data is then\ninserted into the temporary tables. This is the reason I could not use\ninsert into ... select ... or insert ... select ... returning statements.\n\n>> If I need to use COPY FROM command, then the application would need to\n>> create a \n>> file with the data to be inserted \n>You can not using \"copy from stdin\" statement?\nThank you for suggesting the usage of \"copy from stdin\". I am not sure how\nto pass the values to be inserted as input for \"COPY FROM STDIN\" statement\nfrom my application based on psqlODBC driver. Can someone point me to an\nexample or suggest how to pass data from a client application to \"COPY FROM\nSTDIN\" statement?\nThanks.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sun, 9 Sep 2018 02:39:23 -0700 (MST)",
"msg_from": "padusuma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC\n driver"
},
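If the driver turns out not to expose COPY, a fallback that works through plain SQL is batching many rows into each INSERT with a multi-row VALUES list, which removes most of the per-row round trips. A sketch, again with the hypothetical table and made-up values:

BEGIN;
-- tmp_filtered and the literal values are placeholders; batch a few hundred
-- to a few thousand rows per statement
INSERT INTO tmp_filtered (id, val) VALUES
    (1, 'first filtered row'),
    (2, 'second filtered row'),
    (3, 'third filtered row');
COMMIT;

This is not as fast as COPY, but it usually lands much closer to COPY than single-row inserts do.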
{
"msg_contents": "\npadusuma <[email protected]> writes:\n\n>>We are inserting large numbers (millions) of rows into a postgres\n>>database from a Javascript application and found using the COPY command\n>>was much, much faster than doing regular inserts (even with multi-insert\n>>commit). If you can do this using the driver you are using, that will\n>>give you the largest performance boost.\n>\n> The data to be inserted into temporary tables is obtained from one or more\n> queries run earlier and the data is available as a vector of strings. If I\n> need to use COPY FROM command, then the application would need to create a\n> file with the data to be inserted and the file needs to be readable by the\n> user running database server process, which may not be always possible\n> unless the application is running on the same host. I think this approach\n> may not be feasible for our application.\n>\n\nOK, that does make a difference. If your data is already in the\ndatabase, COPY is not going to help you much.\n\n> I have increased the value for /temp_buffers/ server parameter from the\n> default 8 MB to 128 MB. However, this change did not affect the INSERT time\n> for temporary tables.\n\nIt isn't clear why you create vectors of strings rather than just select\ninto or something similar.\n\nThere are no 'quick fixes' which can be applied without real analysis of\nthe system. However, based on the limited information available, you may\nwant to consider -\n\n- Increase work_mem to reduce use of temp files. Need it to be 2 to 3\n times largest temp file (but use common sense)\n\n- Tweak wal checkpoint parameters to prevent wal checkpoints occurring\n too frequently. Note that there is a play off here between frequency\n of checkpoints and boot time after a crash. Fewer wal checkpoints will\n usually improve performance, but recovery time is longer.\n\n- Verify your inserts into temporary tables is the bottleneck and not\n the select from existing data (explain plan etc and adjust indexes\n accordingly).\n\nHow effectively you can increase insert times will depend on what the\nmemory and cpu profile of the system is. More memory, less use of temp\nfiles, faster system, so spend a bit of time to make sure your system is\nconfigured to squeeze as much out of that RAM as you can!\n\n--\nTim Cross\n\n",
"msg_date": "Mon, 10 Sep 2018 08:26:06 +1000",
"msg_from": "Tim Cross <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC driver"
},
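Two of the checks Tim lists can be done from SQL; a sketch (the log_temp_files value is an example setting, not a recommendation):

-- are checkpoints driven by the timer or being forced by WAL volume?
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

-- per-database temp file spill counters since the last stats reset
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS spilled
FROM pg_stat_database
ORDER BY pg_stat_database.temp_bytes DESC;

-- log every temporary file so the offending statements show up in the server log
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();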
{
"msg_contents": "Hello Tim,\n\n>> I have increased the value for /temp_buffers/ server parameter from the \n>> default 8 MB to 128 MB. However, this change did not affect the INSERT\n>> time \n>> for temporary tables. \n\n>It isn't clear why you create vectors of strings rather than just select \n>into or something similar. \n\n>There are no 'quick fixes' which can be applied without real analysis of \n>the system. However, based on the limited information available, you may \n>want to consider - \n\n>- Increase work_mem to reduce use of temp files. Need it to be 2 to 3 \n> times largest temp file (but use common sense) \n\nI have already increased the work_mem and maintenance_work_mem to 256MB. I\nwill check on the temp file sizes and adjust the work_mem parameter as you\nsuggested.\n\n>- Tweak wal checkpoint parameters to prevent wal checkpoints occurring \n> too frequently. Note that there is a play off here between frequency \n> of checkpoints and boot time after a crash. Fewer wal checkpoints will \n> usually improve performance, but recovery time is longer. \n\n>- Verify your inserts into temporary tables is the bottleneck and not \n> the select from existing data (explain plan etc and adjust indexes \n> accordingly). \n\nIn few use-cases, I see that multiple inserts took 150 seconds out of total\ndatabase processing time of 175 seconds, and hence, the focus is on these\ninsert statements. I have run ANALYZE statement followed by INSERT INTO\ntemporary tables, before the temporary tables are used in joins in\nsubsequent queries. This reduced the subsequent query processing times due\nto the updated statistics. I will look into adding indexes for these\ntemporary tables as well.\n\n>How effectively you can increase insert times will depend on what the \n>memory and cpu profile of the system is. More memory, less use of temp \n>files, faster system, so spend a bit of time to make sure your system is \n>configured to squeeze as much out of that RAM as you can! \n\nThank you for the suggestions. I will try these out.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sun, 9 Sep 2018 21:47:56 -0700 (MST)",
"msg_from": "padusuma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC\n driver"
},
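On the indexing idea: with temporary tables it is usually cheapest to load all rows first and only then build any index and run ANALYZE, so the index is built once instead of being maintained row by row. A sketch with hypothetical names:

-- tmp_filtered, its columns and the index name are placeholders
CREATE TEMPORARY TABLE tmp_filtered (id bigint, val text);

-- ... application inserts the filtered rows here ...

CREATE INDEX tmp_filtered_id_idx ON tmp_filtered (id);
ANALYZE tmp_filtered;

An index on a temporary table only pays off if the later joins actually use it, which EXPLAIN on those joins will show.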
{
"msg_contents": "Hello Tim,\n\nI have tried the suggestions provided to the best of my knowledge, but I did\nnot see any improvement in the INSERT performance for temporary tables. The\nLinux host on which PostgreSQL database is installed has 32 GB RAM.\nFollowing are current settings I have in postgresql.conf file:\nshared_buffers = 8GB\ntemp_buffers = 256MB\nwork_mem = 256MB\nmaintenance_work_mem = 256MB\nwal_buffers = 256MB\n\ncheckpoint_timeout = 30min\ncheckpoint_completion_target = 0.75\nmax_wal_size = 1GB\n\neffective_cache_size = 16GB\n\n>>- Increase work_mem to reduce use of temp files. Need it to be 2 to 3 \n>> times largest temp file (but use common sense) \n\n>I have already increased the work_mem and maintenance_work_mem to 256MB. I \n>will check on the temp file sizes and adjust the work_mem parameter as you \n>suggested. \n\n>- Tweak wal checkpoint parameters to prevent wal checkpoints occurring \n> too frequently. Note that there is a play off here between frequency \n> of checkpoints and boot time after a crash. Fewer wal checkpoints will \n> usually improve performance, but recovery time is longer. \n\n>How effectively you can increase insert times will depend on what the \n>memory and cpu profile of the system is. More memory, less use of temp \n>files, faster system, so spend a bit of time to make sure your system is \n>configured to squeeze as much out of that RAM as you can! \n\nPlease let me know if there are any other suggestions that I can try.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Thu, 13 Sep 2018 05:57:39 -0700 (MST)",
"msg_from": "padusuma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC\n driver"
},
{
"msg_contents": "\npadusuma <[email protected]> writes:\n\n> Hello Tim,\n>\n> I have tried the suggestions provided to the best of my knowledge, but I did\n> not see any improvement in the INSERT performance for temporary tables. The\n> Linux host on which PostgreSQL database is installed has 32 GB RAM.\n> Following are current settings I have in postgresql.conf file:\n> shared_buffers = 8GB\n> temp_buffers = 256MB\n> work_mem = 256MB\n> maintenance_work_mem = 256MB\n> wal_buffers = 256MB\n>\n> checkpoint_timeout = 30min\n> checkpoint_completion_target = 0.75\n> max_wal_size = 1GB\n>\n> effective_cache_size = 16GB\n>\n>>>- Increase work_mem to reduce use of temp files. Need it to be 2 to 3\n>>> times largest temp file (but use common sense)\n>\n>>I have already increased the work_mem and maintenance_work_mem to 256MB. I\n>>will check on the temp file sizes and adjust the work_mem parameter as you\n>>suggested.\n>\n>>- Tweak wal checkpoint parameters to prevent wal checkpoints occurring\n>> too frequently. Note that there is a play off here between frequency\n>> of checkpoints and boot time after a crash. Fewer wal checkpoints will\n>> usually improve performance, but recovery time is longer.\n>\n>>How effectively you can increase insert times will depend on what the\n>>memory and cpu profile of the system is. More memory, less use of temp\n>>files, faster system, so spend a bit of time to make sure your system is\n>>configured to squeeze as much out of that RAM as you can!\n>\n> Please let me know if there are any other suggestions that I can try.\n\nHow are you gathering metrics to determine if performance has improved\nor not?\n\nHave you seen any change in your explain (analyze, buffers) plans?\n\nMake sure your table statistics are all up-to-date before performing\neach benchmark test. I often turn off autovacuum when doing this sort of\ntesting so that I know exactly when tables get vacuumed and statistics\nget updated (just ensure you remember to turn it back on when your\nfinished!).\n\nAre the wal checkpoints being triggered every 30 mins or more\nfrequently?\n\nAre you still seeing the system use lots of temp files?\n\nDo you have any indexes on the tables your inserting into?\n\nAs mentioned previously, there are no simple/quick fixes here - you\ncannot just change a setting and see performance improve. It will be\nnecessary to do a lot of experimentation, gathering statistics and\ninvestigate how postgres is using buffers, disk IO etc. All of these\nparameters interact with each other, so it is critical you have good\nmetrics to see exactly what your changes do. It is complex and time\nconsuming. Highly recommend PostgreSQL: High Performance (Ahmed & SMith)\nand Mastering Postgres (Shonig) for valuable background/tips - there\nreally is just far too much to communicate effectively via email.\n\nTim\n\n\n--\nTim Cross\n\n",
"msg_date": "Fri, 14 Sep 2018 08:41:18 +1000",
"msg_from": "Tim Cross <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC driver"
},
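For the "explain (analyze, buffers)" point: the same form works on an INSERT, which helps separate time spent in the insert itself from time spent producing the values. A sketch against the hypothetical table used above (note that with ANALYZE the statement really executes):

EXPLAIN (ANALYZE, BUFFERS)
INSERT INTO tmp_filtered (id, val)
VALUES (1, 'first filtered row'), (2, 'second filtered row');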
{
"msg_contents": "Hello Tim,\n\n>How are you gathering metrics to determine if performance has improved \n>or not? \nI am measuring the response times through timer for the execution of SQL\nstatements through psqlODBC driver. The response times for INSERT INTO\ntemp-table statements have not changed with the parameters I modified.\n\n>Have you seen any change in your explain (analyze, buffers) plans? \n\nThere was no change in the EXPLAIN for INSERT INTO statement, but the\nperformance of the queries improved by about 5%.\n\n>Make sure your table statistics are all up-to-date before performing \n>each benchmark test. I often turn off autovacuum when doing this sort of \n>testing so that I know exactly when tables get vacuumed and statistics \n>get updated (just ensure you remember to turn it back on when your \n>finished!). \nI ran the VACUUM ANALYZE statement manually before starting the tests. Even\nthough autovacuum was turned on, it did not get invoked due to the\nthresholds and as bulk of the inserts are in temporary tables.\n\n>Are the wal checkpoints being triggered every 30 mins or more \n>frequently? \nThe wal checkpoints are triggered every 30 mins.\n\n>Are you still seeing the system use lots of temp files? \nI do not see any files in pgsql_tmp folders in the tablespaces where the\ntables are created. Also, I do not see pgsql_tmp folder in base and global\nfolders. Am I checking for these files in the correct location? Also, I ran\nthe following query (taken from another forum) to check the temporary files\ngenerated for all the databases:\nSELECT temp_files AS \"Temporary files\", temp_bytes AS \"Size of temporary\nfiles\" FROM pg_stat_database db;\n\nThe result is 0 for both columns.\n\n>Do you have any indexes on the tables your inserting into? \nI have not created indexes on these temporary tables, but programatically\nexecuted /ANALYZE <temp-table>/ statement after the data is inserted into\nthese temp tables, to generate/update statistics for these tables. Indexes\ndo exist for all regular tables.\n\n>As mentioned previously, there are no simple/quick fixes here - you \n>cannot just change a setting and see performance improve. It will be \n>necessary to do a lot of experimentation, gathering statistics and \n>investigate how postgres is using buffers, disk IO etc. All of these \n>parameters interact with each other, so it is critical you have good \n>metrics to see exactly what your changes do. It is complex and time \n>consuming. Highly recommend PostgreSQL: High Performance (Ahmed & SMith) \n>and Mastering Postgres (Shonig) for valuable background/tips - there \n>really is just far too much to communicate effectively via email.\n\nThank you for the suggestions on the books. I will go through these.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 15 Sep 2018 10:00:03 -0700 (MST)",
"msg_from": "padusuma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC\n driver"
},
{
"msg_contents": "\npadusuma <[email protected]> writes:\n\n> Hello Tim,\n>\n>>How are you gathering metrics to determine if performance has improved\n>>or not?\n> I am measuring the response times through timer for the execution of SQL\n> statements through psqlODBC driver. The response times for INSERT INTO\n> temp-table statements have not changed with the parameters I modified.\n>\n>>Have you seen any change in your explain (analyze, buffers) plans?\n>\n> There was no change in the EXPLAIN for INSERT INTO statement, but the\n> performance of the queries improved by about 5%.\n>\n>>Make sure your table statistics are all up-to-date before performing\n>>each benchmark test. I often turn off autovacuum when doing this sort of\n>>testing so that I know exactly when tables get vacuumed and statistics\n>>get updated (just ensure you remember to turn it back on when your\n>>finished!).\n> I ran the VACUUM ANALYZE statement manually before starting the tests. Even\n> though autovacuum was turned on, it did not get invoked due to the\n> thresholds and as bulk of the inserts are in temporary tables.\n>\n>>Are the wal checkpoints being triggered every 30 mins or more\n>>frequently?\n> The wal checkpoints are triggered every 30 mins.\n>\n>>Are you still seeing the system use lots of temp files?\n> I do not see any files in pgsql_tmp folders in the tablespaces where the\n> tables are created. Also, I do not see pgsql_tmp folder in base and global\n> folders. Am I checking for these files in the correct location? Also, I ran\n> the following query (taken from another forum) to check the temporary files\n> generated for all the databases:\n> SELECT temp_files AS \"Temporary files\", temp_bytes AS \"Size of temporary\n> files\" FROM pg_stat_database db;\n>\n> The result is 0 for both columns.\n>\n>>Do you have any indexes on the tables your inserting into?\n> I have not created indexes on these temporary tables, but programatically\n> executed /ANALYZE <temp-table>/ statement after the data is inserted into\n> these temp tables, to generate/update statistics for these tables. Indexes\n> do exist for all regular tables.\n>\n>>As mentioned previously, there are no simple/quick fixes here - you\n>>cannot just change a setting and see performance improve. It will be\n>>necessary to do a lot of experimentation, gathering statistics and\n>>investigate how postgres is using buffers, disk IO etc. All of these\n>>parameters interact with each other, so it is critical you have good\n>>metrics to see exactly what your changes do. It is complex and time\n>>consuming. Highly recommend PostgreSQL: High Performance (Ahmed & SMith)\n>>and Mastering Postgres (Shonig) for valuable background/tips - there\n>>really is just far too much to communicate effectively via email.\n>\n> Thank you for the suggestions on the books. I will go through these.\n\nBased on your responses, it sounds like you have done the 'easy' stuff\nwhich often results in improved performance. Now you are going to have\nto dig much harder. It might be worth looking more closely at how\nbuffers/caching is working (pg_buffercache extension might be useful),\nverifying where performance bottlenecks are (this can sometimes be\nsurprising - it may not be where you think it is. Don't forget to\nprofile your client, network/driver throughput, OS level disk I/O\netc). 
This is where books like PosgreSQL High Performance will be\nuseful.\n\nMy only word of caution is that you are likely to now begin looking at\noptions which can improve throughput, but often come with other 'costs',\nsuch as stability, data integrity or recovery time. These are things\nwhich can only be assessed on a per case basis and largely depend on\nbusiness priorities. It will take time and you will need to make changes\nslowly and do a lot of benchmarking.\n\nIt is really important to have a clear idea as to what would be\nacceptable performance rather than just a vague concept of making things\nfaster. For example, one application I have inserts 1.3+ billion rows\nper day. This represents two 'sets' of data. Our minimum requirement was\nthe ability to process 1 set, but if possible, 2 sets would be\nideal. Initially, with the original technology being used, it took\nbetween 23 and 26 hours to process 1 set. We were able to tune this to\nget it always to be under 24 hours, but there was no way we were going\nto get the level of improvement which would allow more than 1 set to be\nprocessed per day - not with the technology and design that was in\nplace.\n\nA decision was made to re-implement using a different technology and\ndesign. This was where we gained the improvements in performance we\nreally required. While the technology did play a part, it was really the\nre-design which gave us the performance improvement to reach our desired\ngoal of 2 sets per day. Even 3 sets per day is a possibility now.\n\nWe could have spent a lot of time tuning and re-spe'ing hardware etc to\nget to 1 set per day and we would have succeeded, but that would have\nbeen the absolute upper limit. I suspect it would have cost about the\nsame as the re-implementation, but with a much lower upper limit.\n\nRe-implementation of a solution is often a hard case to sell, but it\nmight be the only way to get the performance you want. The big positive\nto a re-implementation is that you usually get a better solution because\nyou are implementing with more knowledge and experience about the\nproblem domain. Design is often cleaner and as a result, easier to\nmaintain. It usually takes a lot less time than the original\nimplementation as well and can be the more economical solution compared\nto fighting a system which has fundamental design limitations that\nrestrict performance.\n\ngood luck,\n\nTim\n--\nTim Cross\n\n",
"msg_date": "Sun, 16 Sep 2018 12:19:31 +1000",
"msg_from": "Tim Cross <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC driver"
},
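The pg_buffercache extension mentioned above can show which relations are occupying shared_buffers; this is essentially the sample query from its documentation, assuming the extension can be installed on the host in question:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;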
{
"msg_contents": "Hello Tim,\n\n>Re-implementation of a solution is often a hard case to sell, but it \n>might be the only way to get the performance you want. The big positive \n>to a re-implementation is that you usually get a better solution because \n>you are implementing with more knowledge and experience about the \n>problem domain. Design is often cleaner and as a result, easier to \n>maintain. It usually takes a lot less time than the original \n>implementation as well and can be the more economical solution compared \n>to fighting a system which has fundamental design limitations that \n>restrict performance.\n\nThank you for the suggestions and advice. I will definitely look into\nre-implementation of certain parts of our solution as an option to improve\nperformance.\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 15 Sep 2018 22:29:24 -0700 (MST)",
"msg_from": "padusuma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of INSERT into temporary tables using psqlODBC\n driver"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm suggesting to link to:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n From either:\nhttps://wiki.postgresql.org/wiki/Main%20Page\nor:\nhttps://wiki.postgresql.org/wiki/Performance_Optimization\n\nI know it's a wiki, but it looks like I'm not allowed to edit the 'Main' page,\nso I'm asking here, which is prolly for the best anyway. Feel free to forward\nto or ask for opinion on the -perform list.\n\nThanks,\nJustin\n\n",
"msg_date": "Fri, 7 Sep 2018 20:29:57 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "link to Slow_Query_Questions from wiki/Main Page"
},
{
"msg_contents": "I asked few weeks ago [0] but didn't get a response on -docs so resending here\nfor wider review/discussion/.\n\n[0] https://www.postgresql.org/message-id/flat/20180908012957.GA15350%40telsasoft.com\n\nOn Fri, Sep 07, 2018 at 08:29:57PM -0500, Justin Pryzby wrote:\n> Hi,\n> \n> I'm suggesting to link to:\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n> \n> From either:\n> https://wiki.postgresql.org/wiki/Main%20Page\n> or:\n> https://wiki.postgresql.org/wiki/Performance_Optimization\n> \n> I know it's a wiki, but it looks like I'm not allowed to edit the 'Main' page,\n> so I'm asking here, which is prolly for the best anyway. Feel free to forward\n> to or ask for opinion on the -perform list.\n> \n> Thanks,\n> Justin\n\n",
"msg_date": "Tue, 25 Sep 2018 14:35:43 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: link to Slow_Query_Questions from wiki/Main Page"
},
{
"msg_contents": "On 2018-Sep-25, Justin Pryzby wrote:\n\n> I asked few weeks ago [0] but didn't get a response on -docs so resending here\n> for wider review/discussion/.\n\nI support the idea of adding a link to \"Performance Optimization\".\nThat's not a protected page, so you should be able to do it.\n\n\n> [0] https://www.postgresql.org/message-id/flat/20180908012957.GA15350%40telsasoft.com\n> \n> On Fri, Sep 07, 2018 at 08:29:57PM -0500, Justin Pryzby wrote:\n> > Hi,\n> > \n> > I'm suggesting to link to:\n> > https://wiki.postgresql.org/wiki/Slow_Query_Questions\n> > \n> > From either:\n> > https://wiki.postgresql.org/wiki/Main%20Page\n> > or:\n> > https://wiki.postgresql.org/wiki/Performance_Optimization\n> > \n> > I know it's a wiki, but it looks like I'm not allowed to edit the 'Main' page,\n> > so I'm asking here, which is prolly for the best anyway. Feel free to forward\n> > to or ask for opinion on the -perform list.\n> > \n> > Thanks,\n> > Justin\n> \n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 25 Sep 2018 16:38:51 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: link to Slow_Query_Questions from wiki/Main Page"
}
] |
[
{
"msg_contents": "Based on my research in the forums and Google , it is described in multiple places that ‘select count(*)’ is expected to be slow in Postgres because of the MVCC controls imposed upon the query leading a table scan. Also, the elapsed time increase linearly with table size. \n\nHowever, I do not know if elapsed time I’m getting is to be expected. \n\nTable reltuples in pg_class = 2,266,649,344 (pretty close)\nQuery = select count(*) from jim.sttyations ;\nElapsed time (ET) = 18.5 hrs\n\nThis is an Aurora cluster running on r4.2xlarge (8 vCPU, 61g). CPU usage during count run hovers around 20% with 20g of freeable memory. \n\nIs this ET expected? If not, what could be slowing it down? I’m currently running explain analyze and I’ll share the final output when done. \n\nI’m familiar with the ideas listed here https://www.citusdata.com/blog/2016/10/12/count-performance/ \n\nTable \"jim.sttyations\"\n Column | Type | Modifiers | Storage | Stats target | Description \n-------------------+--------------------------+----------------------------+----------+--------------+-------------\n stty_id | bigint | not null | plain | | \n stty_hitlist_line | text | not null | extended | | \n stty_status | text | not null default 'Y'::text | extended | | \n stty_status_date | timestamp with time zone | not null | plain | | \n vs_number | integer | not null | plain | | \n stty_date_created | timestamp with time zone | not null | plain | | \n stty_stty_id | bigint | | plain | | \n stty_position | bigint | | plain | | \n mstty_id | bigint | | plain | | \n vsr_number | integer | | plain | | \n stty_date_modified | timestamp with time zone | | plain | | \n stty_stored | text | not null default 'N'::text | extended | | \n stty_sequence | text | | extended | | \n stty_hash | text | | extended | | \nIndexes:\n \"stty_pk\" PRIMARY KEY, btree (stty_id)\n \"stty_indx_fk01\" btree (stty_stty_id)\n \"stty_indx_fk03\" btree (vsr_number)\n \"stty_indx_fk04\" btree (vs_number)\n \"stty_indx_pr01\" btree (mstty_id, stty_id)\nCheck constraints:\n \"stty_cnst_ck01\" CHECK (stty_status = ANY (ARRAY['Y'::text, 'N'::text]))\n \"stty_cnst_ck02\" CHECK (stty_stored = ANY (ARRAY['N'::text, 'Y'::text]))\nForeign-key constraints:\n \"stty_cnst_fk01\" FOREIGN KEY (stty_stty_id) REFERENCES sttyations(stty_id) NOT VALID\n \"stty_cnst_fk02\" FOREIGN KEY (mstty_id) REFERENCES master_sttyations(mstty_id)\n \"stty_cnst_fk03\" FOREIGN KEY (vsr_number) REFERENCES valid_status_reasons(vsr_number)\n\n----------------\nThank you\n\n\nrefpep-> select count(*) from jim.sttyations; \n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=73451291.77..73451291.78 rows=1 width=8)\n Output: count(*)\n -> Index Only Scan using stty_indx_fk03 on jim.sttyations (cost=0.58..67784668.41 rows=2266649344 width=0)\n Output: vsr_number\n(4 rows)\n\n\nBased on my research in the forums and Google , it is described in multiple places that ‘select count(*)’ is expected to be slow in Postgres because of the MVCC controls imposed upon the query leading a table scan. Also, the elapsed time increase linearly with table size. However, I do not know if elapsed time I’m getting is to be expected. Table reltuples in pg_class = 2,266,649,344 (pretty close)Query = select count(*) from jim.sttyations ;Elapsed time (ET) = 18.5 hrs This is an Aurora cluster running on r4.2xlarge (8 vCPU, 61g). CPU usage during count run hovers around 20% with 20g of freeable memory. 
Is this ET expected? If not, what could be slowing it down? I’m currently running explain analyze and I’ll share the final output when done. I’m familiar with the ideas listed here https://www.citusdata.com/blog/2016/10/12/count-performance/ Table \"jim.sttyations\" Column | Type | Modifiers | Storage | Stats target | Description -------------------+--------------------------+----------------------------+----------+--------------+------------- stty_id | bigint | not null | plain | | stty_hitlist_line | text | not null | extended | | stty_status | text | not null default 'Y'::text | extended | | stty_status_date | timestamp with time zone | not null | plain | | vs_number | integer | not null | plain | | stty_date_created | timestamp with time zone | not null | plain | | stty_stty_id | bigint | | plain | | stty_position | bigint | | plain | | mstty_id | bigint | | plain | | vsr_number | integer | | plain | | stty_date_modified | timestamp with time zone | | plain | | stty_stored | text | not null default 'N'::text | extended | | stty_sequence | text | | extended | | stty_hash | text | | extended | | Indexes: \"stty_pk\" PRIMARY KEY, btree (stty_id) \"stty_indx_fk01\" btree (stty_stty_id) \"stty_indx_fk03\" btree (vsr_number) \"stty_indx_fk04\" btree (vs_number) \"stty_indx_pr01\" btree (mstty_id, stty_id)Check constraints: \"stty_cnst_ck01\" CHECK (stty_status = ANY (ARRAY['Y'::text, 'N'::text])) \"stty_cnst_ck02\" CHECK (stty_stored = ANY (ARRAY['N'::text, 'Y'::text]))Foreign-key constraints: \"stty_cnst_fk01\" FOREIGN KEY (stty_stty_id) REFERENCES sttyations(stty_id) NOT VALID \"stty_cnst_fk02\" FOREIGN KEY (mstty_id) REFERENCES master_sttyations(mstty_id) \"stty_cnst_fk03\" FOREIGN KEY (vsr_number) REFERENCES valid_status_reasons(vsr_number) ----------------Thank you refpep-> select count(*) from jim.sttyations; QUERY PLAN ------------------------------------------------------------------------------------------------------------------ Aggregate (cost=73451291.77..73451291.78 rows=1 width=8) Output: count(*) -> Index Only Scan using stty_indx_fk03 on jim.sttyations (cost=0.58..67784668.41 rows=2266649344 width=0) Output: vsr_number(4 rows)",
"msg_date": "Thu, 13 Sep 2018 13:33:54 -0400",
"msg_from": "Fd Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select count(*) on a 2B Rows Tables Takes ~20 Hours"
},
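Since the question already compares against pg_class.reltuples, it is worth noting that when an exact figure is not strictly required, the planner's estimate can be read directly and returns immediately:

SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE oid = 'jim.sttyations'::regclass;

The estimate is only as fresh as the last VACUUM or ANALYZE, which matches the "pretty close" remark above.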
{
"msg_contents": "On Thu, Sep 13, 2018 at 01:33:54PM -0400, Fd Habash wrote:\n> Is this ET expected? If not, what could be slowing it down? I’m currently running explain analyze and I’ll share the final output when done. \n\nexplain(analyze,BUFFERS) is what's probably interesting\n\nYou're getting an index-only-scan, but maybe still making many accesses to the\nheap (table) for pages which aren't all-visible. You can maybe improve by\nvacuuming (perhaps by daily cronjob or by ALTER TABLE SET autovacuum threshold\nor scale factor).\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n",
"msg_date": "Thu, 13 Sep 2018 13:05:32 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours"
},
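To see how much of the table the index-only scan can skip, the visibility map coverage can be read from pg_class; a plain VACUUM (not VACUUM FULL) is what brings it up to date. A sketch using the table name from the plan:

SELECT relpages, relallvisible,
       round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
FROM pg_class
WHERE oid = 'jim.sttyations'::regclass;

VACUUM (VERBOSE) jim.sttyations;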
{
"msg_contents": "Fd Habash <[email protected]> writes:\n> Based on my research in the forums and Google , it is described in multiple places that ‘select count(*)’ is expected to be slow in Postgres because of the MVCC controls imposed upon the query leading a table scan. Also, the elapsed time increase linearly with table size. \n> However, I do not know if elapsed time I’m getting is to be expected. \n\n> Table reltuples in pg_class = 2,266,649,344 (pretty close)\n> Query = select count(*) from jim.sttyations ;\n> Elapsed time (ET) = 18.5 hrs\n\nThat's pretty awful. My recollection is that in recent PG releases,\nSELECT COUNT(*) runs at something on the order of 100ns/row given an\nall-in-memory table. Evidently you're rather badly I/O bound.\n\n> This is an Aurora cluster running on r4.2xlarge (8 vCPU, 61g).\n\nDon't know much about Aurora, but I wonder whether you paid for\nguaranteed (provisioned) IOPS, and if so what service level.\n\n> refpep-> select count(*) from jim.sttyations; \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=73451291.77..73451291.78 rows=1 width=8)\n> Output: count(*)\n> -> Index Only Scan using stty_indx_fk03 on jim.sttyations (cost=0.58..67784668.41 rows=2266649344 width=0)\n> Output: vsr_number\n> (4 rows)\n\nOh, hmm ... the 100ns figure I mentioned was for a seqscan. IOS\ncould be a lot worse for a number of reasons, foremost being that\nif the table isn't mostly all-visible then it'd involve a lot of\nrandom heap access. It might be interesting to try forcing a\nseqscan plan (see enable_indexscan).\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 13 Sep 2018 14:12:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours"
},
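A sketch of the seqscan experiment Tom suggests, keeping the planner overrides local to one transaction so nothing else on the instance is affected:

BEGIN;
SET LOCAL enable_indexscan = off;
SET LOCAL enable_indexonlyscan = off;
SET LOCAL enable_bitmapscan = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM jim.sttyations;
ROLLBACK;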
{
"msg_contents": "Just checked metrics while the count was running …\n\nRead latency < 3.5 ms\nWrite latency < 4 ms\nRead throughput ~ 40 MB/sec with sporadic peaks at 100\nRead IOPS ~ 5000\nQDepth < 3\n\n\n----------------\nThank you\n\nFrom: Tom Lane\nSent: Thursday, September 13, 2018 2:12 PM\nTo: Fd Habash\nCc: [email protected]\nSubject: Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours\n\nFd Habash <[email protected]> writes:\n> Based on my research in the forums and Google , it is described in multiple places that ‘select count(*)’ is expected to be slow in Postgres because of the MVCC controls imposed upon the query leading a table scan. Also, the elapsed time increase linearly with table size. \n> However, I do not know if elapsed time I’m getting is to be expected. \n\n> Table reltuples in pg_class = 2,266,649,344 (pretty close)\n> Query = select count(*) from jim.sttyations ;\n> Elapsed time (ET) = 18.5 hrs\n\nThat's pretty awful. My recollection is that in recent PG releases,\nSELECT COUNT(*) runs at something on the order of 100ns/row given an\nall-in-memory table. Evidently you're rather badly I/O bound.\n\n> This is an Aurora cluster running on r4.2xlarge (8 vCPU, 61g).\n\nDon't know much about Aurora, but I wonder whether you paid for\nguaranteed (provisioned) IOPS, and if so what service level.\n\n> refpep-> select count(*) from jim.sttyations; \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=73451291.77..73451291.78 rows=1 width=8)\n> Output: count(*)\n> -> Index Only Scan using stty_indx_fk03 on jim.sttyations (cost=0.58..67784668.41 rows=2266649344 width=0)\n> Output: vsr_number\n> (4 rows)\n\nOh, hmm ... the 100ns figure I mentioned was for a seqscan. IOS\ncould be a lot worse for a number of reasons, foremost being that\nif the table isn't mostly all-visible then it'd involve a lot of\nrandom heap access. It might be interesting to try forcing a\nseqscan plan (see enable_indexscan).\n\n\t\t\tregards, tom lane\n\n\nJust checked metrics while the count was running … Read latency < 3.5 msWrite latency < 4 msRead throughput ~ 40 MB/sec with sporadic peaks at 100Read IOPS ~ 5000QDepth < 3 ----------------Thank you From: Tom LaneSent: Thursday, September 13, 2018 2:12 PMTo: Fd HabashCc: [email protected]: Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours Fd Habash <[email protected]> writes:> Based on my research in the forums and Google , it is described in multiple places that ‘select count(*)’ is expected to be slow in Postgres because of the MVCC controls imposed upon the query leading a table scan. Also, the elapsed time increase linearly with table size. > However, I do not know if elapsed time I’m getting is to be expected. > Table reltuples in pg_class = 2,266,649,344 (pretty close)> Query = select count(*) from jim.sttyations ;> Elapsed time (ET) = 18.5 hrs That's pretty awful. My recollection is that in recent PG releases,SELECT COUNT(*) runs at something on the order of 100ns/row given anall-in-memory table. Evidently you're rather badly I/O bound. > This is an Aurora cluster running on r4.2xlarge (8 vCPU, 61g). Don't know much about Aurora, but I wonder whether you paid forguaranteed (provisioned) IOPS, and if so what service level. 
> refpep-> select count(*) from jim.sttyations; > QUERY PLAN > ------------------------------------------------------------------------------------------------------------------> Aggregate (cost=73451291.77..73451291.78 rows=1 width=8)> Output: count(*)> -> Index Only Scan using stty_indx_fk03 on jim.sttyations (cost=0.58..67784668.41 rows=2266649344 width=0)> Output: vsr_number> (4 rows) Oh, hmm ... the 100ns figure I mentioned was for a seqscan. IOScould be a lot worse for a number of reasons, foremost being thatif the table isn't mostly all-visible then it'd involve a lot ofrandom heap access. It might be interesting to try forcing aseqscan plan (see enable_indexscan). regards, tom lane",
"msg_date": "Thu, 13 Sep 2018 15:35:23 -0400",
"msg_from": "Fd Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Select count(*) on a 2B Rows Tables Takes ~20 Hours"
},
{
"msg_contents": "Hi,\n\nOn 2018-09-13 14:12:02 -0400, Tom Lane wrote:\n> > This is an Aurora cluster running on r4.2xlarge (8 vCPU, 61g).\n> \n> Don't know much about Aurora, but I wonder whether you paid for\n> guaranteed (provisioned) IOPS, and if so what service level.\n\nGiven that aurora uses direct-io and has the storage layer largely\ncompletely replaced, I'm not sure how much we can help here. My\nunderstanding is that access to blocks can require page-level \"log\nreconciliation\", which can cause adverse IO patterns. The direct-IO\nmeans that cache configuration / prefetching is much more crucial. If a\nlot of those tuples aren't frozen (don't quite know how that works\nthere), the clog accesses will also kill you if the table was filled\nover many transactions, since clog's access characteristics to a lot of\nxids is pretty bad with DIO.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 13 Sep 2018 12:43:47 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours"
},
{
"msg_contents": "Buffers: shared hit=72620045 read=45,297,330\nI/O Timings: read=57,489,958.088\nExecution time: 61,141,110.516 ms\n\nIf I'm reading this correctly, it took 57M ms out of an elapsed time of 61M\nms to read 45M pages from the filesystem?\nIf the average service time per sarr is < 5 ms, Is this a case of bloated\nindex where re-indexing is warranted?\n\nThanks\n\nexplain (analyze,buffers,timing,verbose,costs)\nselect count(*) from jim.pitations ;\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=72893810.73..72893810.74 rows=1 width=8) (actual\ntime=61141110.437..61141110.437 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=72620045 read=45297330\n I/O Timings: read=57489958.088\n -> Index Only Scan using pit_indx_fk03 on jim.pitations\n(cost=0.58..67227187.37 rows=2266649344 width=0) (actual\ntime=42.327..60950272.189 rows=2269623575 loops=1)\n Output: vsr_number\n Heap Fetches: 499950392\n Buffers: shared hit=72620045 read=45297330\n I/O Timings: read=57489958.088\nPlanning time: 14.014 ms\nExecution time: 61,141,110.516 ms\n(11 rows)\nTime: 61141132.309 ms\nrefpep=>\nrefpep=>\nrefpep=>\nScreen session test_pg on ip-10-241-48-178 (system load: 0.00 0.00 0.00)\n\n Sun 16.09.2018 14:52\nScreen sess\n\nBuffers: shared hit=72620045 read=45,297,330I/O Timings: read=57,489,958.088Execution time: 61,141,110.516 ms If I'm reading this correctly, it took 57M ms out of an elapsed time of 61M ms to read 45M pages from the filesystem?If the average service time per sarr is < 5 ms, Is this a case of bloated index where re-indexing is warranted? Thanks explain (analyze,buffers,timing,verbose,costs)select count(*) from jim.pitations ; QUERY PLAN -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------Aggregate (cost=72893810.73..72893810.74 rows=1 width=8) (actual time=61141110.437..61141110.437 rows=1 loops=1) Output: count(*) Buffers: shared hit=72620045 read=45297330 I/O Timings: read=57489958.088 -> Index Only Scan using pit_indx_fk03 on jim.pitations (cost=0.58..67227187.37 rows=2266649344 width=0) (actual time=42.327..60950272.189 rows=2269623575 loops=1) Output: vsr_number Heap Fetches: 499950392 Buffers: shared hit=72620045 read=45297330 I/O Timings: read=57489958.088Planning time: 14.014 msExecution time: 61,141,110.516 ms(11 rows)Time: 61141132.309 msrefpep=>refpep=>refpep=>Screen session test_pg on ip-10-241-48-178 (system load: 0.00 0.00 0.00) Sun 16.09.2018 14:52 Screen sess",
"msg_date": "Mon, 17 Sep 2018 12:22:46 -0400",
"msg_from": "Fred Habash <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours"
},
{
"msg_contents": "Fred Habash wrote:\n> If I'm reading this correctly, it took 57M ms out of an elapsed time of 61M ms to read 45M pages from the filesystem?\n> If the average service time per sarr is < 5 ms, Is this a case of bloated index where re-indexing is warranted? \n> \n> explain (analyze,buffers,timing,verbose,costs)\n> select count(*) from jim.pitations ;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=72893810.73..72893810.74 rows=1 width=8) (actual time=61141110.437..61141110.437 rows=1 loops=1)\n> Output: count(*)\n> Buffers: shared hit=72620045 read=45297330\n> I/O Timings: read=57489958.088\n> -> Index Only Scan using pit_indx_fk03 on jim.pitations (cost=0.58..67227187.37 rows=2266649344 width=0) (actual time=42.327..60950272.189 rows=2269623575 loops=1)\n> Output: vsr_number\n> Heap Fetches: 499950392\n> Buffers: shared hit=72620045 read=45297330\n> I/O Timings: read=57489958.088\n> Planning time: 14.014 ms\n> Execution time: 61,141,110.516 ms\n> (11 rows)\n\n2269623575 / (45297330 + 72620045) ~ 20, so you have an average 20\nitems per block. That is few, and the index seems indeed bloated.\n\nLooking at the read times, you average out at about 1 ms per block\nread from I/O, but with that many blocks that's of course still a long time.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 17 Sep 2018 21:04:46 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours"
},
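If the index really is bloated, pgstattuple can quantify it, and on releases before v12 (no REINDEX CONCURRENTLY) the usual rebuild pattern is to create a replacement index concurrently and swap it in. A sketch, assuming pgstattuple is available on the platform and that the index is a plain index rather than one backing a constraint:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('jim.pit_indx_fk03');

-- pit_indx_fk03_new is a placeholder name for the replacement index
CREATE INDEX CONCURRENTLY pit_indx_fk03_new ON jim.pitations (vsr_number);
DROP INDEX CONCURRENTLY jim.pit_indx_fk03;
ALTER INDEX jim.pit_indx_fk03_new RENAME TO pit_indx_fk03;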
{
"msg_contents": "Aside from I/O going to a different kind of storage, I don't think anything Aurora-specific should be at play here.\n\nWould the 118 million buffer accesses (hits+reads) only include the index scan, or would that number also reflect buffers accessed for the 500 million heap fetches?\n\nWhile Aurora doesn't have a filesystem cache (since it's a different kind of storage), it does default the buffer_cache to 75% to offset this. It appears that as Laurenz has pointed out, this is simply a lot of I/O requests in a serial process. \n\nBTW that's 900GB of data that was read (118 million buffers of 8k each) - on a box with only 61GB of memory available for caching.\n\n-Jeremy\n\nSent from my TI-83\n\n> On Sep 17, 2018, at 12:04 PM, Laurenz Albe <[email protected]> wrote:\n> \n> Fred Habash wrote:\n>> If I'm reading this correctly, it took 57M ms out of an elapsed time of 61M ms to read 45M pages from the filesystem?\n>> If the average service time per sarr is < 5 ms, Is this a case of bloated index where re-indexing is warranted? \n>> \n>> explain (analyze,buffers,timing,verbose,costs)\n>> select count(*) from jim.pitations ;\n>> QUERY PLAN \n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=72893810.73..72893810.74 rows=1 width=8) (actual time=61141110.437..61141110.437 rows=1 loops=1)\n>> Output: count(*)\n>> Buffers: shared hit=72620045 read=45297330\n>> I/O Timings: read=57489958.088\n>> -> Index Only Scan using pit_indx_fk03 on jim.pitations (cost=0.58..67227187.37 rows=2266649344 width=0) (actual time=42.327..60950272.189 rows=2269623575 loops=1)\n>> Output: vsr_number\n>> Heap Fetches: 499950392\n>> Buffers: shared hit=72620045 read=45297330\n>> I/O Timings: read=57489958.088\n>> Planning time: 14.014 ms\n>> Execution time: 61,141,110.516 ms\n>> (11 rows)\n> \n> 2269623575 / (45297330 + 72620045) ~ 20, so you have an average 20\n> items per block. That is few, and the index seems indeed bloated.\n> \n> Looking at the read times, you average out at about 1 ms per block\n> read from I/O, but with that many blocks that's of course still a long time.\n> \n> Yours,\n> Laurenz Albe\n> -- \n> Cybertec | https://www.cybertec-postgresql.com\n> \n> \n\n",
"msg_date": "Wed, 19 Sep 2018 01:25:46 +0000",
"msg_from": "\"Schneider, Jeremy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*) on a 2B Rows Tables Takes ~20 Hours"
}
] |
[
{
"msg_contents": "In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. \n\nI’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it?\n\n\n----------------\nThank you\n\n\nIn API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it? ----------------Thank you",
"msg_date": "Thu, 13 Sep 2018 15:49:41 -0400",
"msg_from": "Fd Habash <[email protected]>",
"msg_from_op": true,
"msg_subject": "How Do You Associate a Query With its Invoking Procedure? "
},
{
"msg_contents": "Any ideas, please?\n\nOn Thu, Sep 13, 2018, 3:49 PM Fd Habash <[email protected]> wrote:\n\n> In API function may invoke 10 queries. Ideally, I would like to know what\n> queries are invoked by it and how long each took.\n>\n>\n>\n> I’m using pg_stat_statement. I can see the API function statement, but how\n> do I deterministically identify all queries invoked by it?\n>\n>\n>\n>\n>\n> ----------------\n> Thank you\n>\n>\n>\n\nAny ideas, please? On Thu, Sep 13, 2018, 3:49 PM Fd Habash <[email protected]> wrote:In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it? ----------------Thank you",
"msg_date": "Fri, 14 Sep 2018 11:38:12 -0400",
"msg_from": "Fred Habash <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Do You Associate a Query With its Invoking Procedure?"
},
{
"msg_contents": "On Thu, Sep 13, 2018 at 12:49 PM, Fd Habash <[email protected]> wrote:\n\n> In API function may invoke 10 queries. Ideally, I would like to know what\n> queries are invoked by it and how long each took.\n>\n>\n>\n> I’m using pg_stat_statement. I can see the API function statement, but how\n> do I deterministically identify all queries invoked by it?\n>\n\npg_stat_statement is a global tracker that throws away execution context,\nin this case the process id, needed to track the level of detail you\ndesire. I think the best you can do is log all statements and durations to\nthe log file and parse that.\n\nFor the \"what queries are invoked by it\" you can just read the source\ncode...\n\nAs there is no canned solution to provide the answer you seek the final\nsolution you come up with will be influenced by your access patterns,\nspecific needs, and (in)ability to write C code (though maybe there is an\nextension out there you could leverage...).\n\nDavid J.\n\nOn Thu, Sep 13, 2018 at 12:49 PM, Fd Habash <[email protected]> wrote:In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it?pg_stat_statement is a global tracker that throws away execution context, in this case the process id, needed to track the level of detail you desire. I think the best you can do is log all statements and durations to the log file and parse that.For the \"what queries are invoked by it\" you can just read the source code...As there is no canned solution to provide the answer you seek the final solution you come up with will be influenced by your access patterns, specific needs, and (in)ability to write C code (though maybe there is an extension out there you could leverage...).David J.",
"msg_date": "Fri, 14 Sep 2018 09:33:56 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Do You Associate a Query With its Invoking Procedure?"
},
{
"msg_contents": "On Fri, Sep 14, 2018 at 12:34 PM David G. Johnston <\[email protected]> wrote:\n\n> On Thu, Sep 13, 2018 at 12:49 PM, Fd Habash <[email protected]> wrote:\n>\n>> In API function may invoke 10 queries. Ideally, I would like to know what\n>> queries are invoked by it and how long each took.\n>>\n>>\n>>\n>> I’m using pg_stat_statement. I can see the API function statement, but\n>> how do I deterministically identify all queries invoked by it?\n>>\n>\n> pg_stat_statement is a global tracker that throws away execution context,\n> in this case the process id, needed to track the level of detail you\n> desire. I think the best you can do is log all statements and durations to\n> the log file and parse that.\n>\n>\nIf you have big queries you almost certainly will want to bump your\n\"track_activity_query_size\" value bigger to be able to capture the whole\nquery.\n\nYou are going to have to find the queries in the api source code. If they\nare not distinct enough to easily figure out which was which you can do\nthings to make them distinct. One of the easiest things is to add a\n\"literal\" column to the query:\n\nselect\n 'query_1',\n first_name,\n...\n\nThen when you look in the query statements in the database you can see that\nliteral column and tell which query it was that invoked it.\n\nYou can also make them unique by renaming columns:\n\nselect\n first_name as 'query1_first_name'\n...\n\nDepending on your ORM or whether your api calls queries directly, you could\nadd comments to the query as well:\nselect\n -- this one is query 1\n first_name,\n...\n\nUnfortunately there is no out of the box \"github hook\" that can\nautomatically connect a query from your postgresql logs to the lines of\ncode in your api.\n\nOn Fri, Sep 14, 2018 at 12:34 PM David G. Johnston <[email protected]> wrote:On Thu, Sep 13, 2018 at 12:49 PM, Fd Habash <[email protected]> wrote:In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it?pg_stat_statement is a global tracker that throws away execution context, in this case the process id, needed to track the level of detail you desire. I think the best you can do is log all statements and durations to the log file and parse that.If you have big queries you almost certainly will want to bump your \"track_activity_query_size\" value bigger to be able to capture the whole query.You are going to have to find the queries in the api source code. If they are not distinct enough to easily figure out which was which you can do things to make them distinct. One of the easiest things is to add a \"literal\" column to the query:select 'query_1', first_name,...Then when you look in the query statements in the database you can see that literal column and tell which query it was that invoked it.You can also make them unique by renaming columns:select first_name as 'query1_first_name'...Depending on your ORM or whether your api calls queries directly, you could add comments to the query as well:select -- this one is query 1 first_name,...Unfortunately there is no out of the box \"github hook\" that can automatically connect a query from your postgresql logs to the lines of code in your api.",
"msg_date": "Fri, 14 Sep 2018 13:14:12 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Do You Associate a Query With its Invoking Procedure?"
},
{
"msg_contents": "If you can change the application then one option is to set application_name so that it contains API function name. This should happen before the first call in API function hits the database. After the API function finishes it should reset application_name.\n\nThen you can enable logging of all queries and set the format to include application_name parameter. This way every query is logged and each log entry has an application name.\n\nSeveral things to keep in mind:\n1. logging everything may affect performance\n2. application_name is 64 chars by default\n\nRegards,\nRoman Konoval\[email protected]\n\n\n> On Sep 13, 2018, at 21:49, Fd Habash <[email protected]> wrote:\n> \n> In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. \n> \n> I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it?\n> \n> \n> ----------------\n> Thank you\n\n\nIf you can change the application then one option is to set application_name so that it contains API function name. This should happen before the first call in API function hits the database. After the API function finishes it should reset application_name.Then you can enable logging of all queries and set the format to include application_name parameter. This way every query is logged and each log entry has an application name.Several things to keep in mind:1. logging everything may affect performance2. application_name is 64 chars by default\nRegards,Roman [email protected] Sep 13, 2018, at 21:49, Fd Habash <[email protected]> wrote:In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it? ----------------Thank you",
"msg_date": "Fri, 14 Sep 2018 20:18:55 +0200",
"msg_from": "Roman Konoval <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Do You Associate a Query With its Invoking Procedure?"
},
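A concrete sketch of the application_name approach; the function name is a placeholder, and the two logging settings take effect on a configuration reload:

-- issued by the application before the API function's first query
SET application_name = 'api.get_documents';

-- include the application name (%a) in every log line and log statement durations
ALTER SYSTEM SET log_line_prefix = '%m [%p] app=%a ';
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();

-- the name is also visible while the statements run
SELECT pid, application_name, state, query FROM pg_stat_activity;

Logging every statement with log_min_duration_statement = 0 carries the overhead Roman warns about; a higher threshold is often the practical compromise.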
{
"msg_contents": "You might find application-level tracing a more practical answer - e.g.\ncheck out Datadog APM for a (commercial) plug and play approach or Jaeger\nfor a self-hostable option.\n\nPatrick\nOn Fri, Sep 14, 2018 at 4:38 PM Fred Habash <[email protected]> wrote:\n\n> Any ideas, please?\n>\n> On Thu, Sep 13, 2018, 3:49 PM Fd Habash <[email protected]> wrote:\n>\n>> In API function may invoke 10 queries. Ideally, I would like to know what\n>> queries are invoked by it and how long each took.\n>>\n>>\n>>\n>> I’m using pg_stat_statement. I can see the API function statement, but\n>> how do I deterministically identify all queries invoked by it?\n>>\n>>\n>>\n>>\n>>\n>> ----------------\n>> Thank you\n>>\n>>\n>>\n>\n\nYou might find application-level tracing a more practical answer - e.g. check out Datadog APM for a (commercial) plug and play approach or Jaeger for a self-hostable option.PatrickOn Fri, Sep 14, 2018 at 4:38 PM Fred Habash <[email protected]> wrote:Any ideas, please? On Thu, Sep 13, 2018, 3:49 PM Fd Habash <[email protected]> wrote:In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it? ----------------Thank you",
"msg_date": "Sat, 15 Sep 2018 10:24:30 +0100",
"msg_from": "Patrick Molgaard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Do You Associate a Query With its Invoking Procedure?"
},
{
"msg_contents": "All great ideas.\n\nI was thinking something similar to some other RDBMS engines where SQL is automatically tied to the invoking PROGRAM_ID with zero setup on the client side. I thought there could be something similar in PG somewhere in the catalog. \n\nAs always, great support. This level of support helps a lot in our migration to Postgres. \n\n————-\nThank you. \n\n> On Sep 15, 2018, at 5:24 AM, Patrick Molgaard <[email protected]> wrote:\n> \n> You might find application-level tracing a more practical answer - e.g. check out Datadog APM for a (commercial) plug and play approach or Jaeger for a self-hostable option.\n> \n> Patrick\n>> On Fri, Sep 14, 2018 at 4:38 PM Fred Habash <[email protected]> wrote:\n>> Any ideas, please? \n>> \n>>> On Thu, Sep 13, 2018, 3:49 PM Fd Habash <[email protected]> wrote:\n>>> In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took.\n>>> \n>>> \n>>> \n>>> I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it?\n>>> \n>>> \n>>> \n>>> \n>>> \n>>> ----------------\n>>> Thank you\n>>> \n>>> \n\nAll great ideas.I was thinking something similar to some other RDBMS engines where SQL is automatically tied to the invoking PROGRAM_ID with zero setup on the client side. I thought there could be something similar in PG somewhere in the catalog. As always, great support. This level of support helps a lot in our migration to Postgres. ————-Thank you. On Sep 15, 2018, at 5:24 AM, Patrick Molgaard <[email protected]> wrote:You might find application-level tracing a more practical answer - e.g. check out Datadog APM for a (commercial) plug and play approach or Jaeger for a self-hostable option.PatrickOn Fri, Sep 14, 2018 at 4:38 PM Fred Habash <[email protected]> wrote:Any ideas, please? On Thu, Sep 13, 2018, 3:49 PM Fd Habash <[email protected]> wrote:In API function may invoke 10 queries. Ideally, I would like to know what queries are invoked by it and how long each took. I’m using pg_stat_statement. I can see the API function statement, but how do I deterministically identify all queries invoked by it? ----------------Thank you",
"msg_date": "Sun, 16 Sep 2018 17:53:58 -0400",
"msg_from": "Fred Habash <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Do You Associate a Query With its Invoking Procedure?"
}
] |
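Neither pg_stat_statements nor the system catalogs link a statement to its calling function out of the box, but two settings get close to what this thread asks for. The sketch below is illustrative only: it assumes pg_stat_statements (and, for the second part, auto_explain) are already listed in shared_preload_libraries, that you have superuser rights, and that the column names are those of PostgreSQL 10/11.

  -- track statements executed inside functions, not just top-level queries
  ALTER SYSTEM SET pg_stat_statements.track = 'all';
  SELECT pg_reload_conf();

  -- optionally, log each nested statement with its duration so it can be
  -- matched to the calling function in the server log
  ALTER SYSTEM SET auto_explain.log_min_duration = '100ms';
  ALTER SYSTEM SET auto_explain.log_nested_statements = on;
  SELECT pg_reload_conf();

  -- nested statements now appear as separate rows; they are still not
  -- tagged with their caller, unlike Oracle's PROGRAM_ID
  SELECT query, calls, total_time
  FROM pg_stat_statements
  ORDER BY total_time DESC
  LIMIT 20;

For a true caller/callee association, the application-level tracing suggested above remains the more complete answer.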
[
{
"msg_contents": "It is some time since I've written to the postgres lists. My apologies\nif this is the wrong list to post this to.\n\nWe are looking to upgrade our current database server infrastructure so\nthat it is suitable for the next 3 years or so.\n\nPresently we have two physical servers with the same specs:\n\n - 220GB database partition on RAID10 SSD on HW RAID\n - 128GB RAM\n - 8 * Xeon E5-2609\n\n(The HW RAID card is a MegaRAID SAS 9361-8i with BBU)\n\nThe second server is a hot standby to the first, and we presently have\nabout 350 databases in the cluster. \n\nWe envisage needing about 800GB of primary database storage in the next\nthree years, with 1000 databases in the cluster.\n\nWe are imagining either splitting the cluster into two and (to have four\nmain servers) or increasing the disk capacity and RAM in each server.\nThe second seems preferable from a day-to-day management basis, but it\nwouldn't be too difficult to deploy our software upgrades across two\nmachines rather than one.\n\nResources on the main machines seem to be perfectly adequate at present\nbut it is difficult to know at what stage queries might start spilling\nto disk. We presently occasionally hit 45% CPU utilisation, load average\npeaking at 4.0 and we occasionally go into swap in a minor way (although\nwe can't determine the reason for going into swap). There is close to no\niowait in normal operation.\n\nIt also seems a bit incongruous writing about physical machines these\ndays, but I can't find pricing on a UK data protection compatible cloud\nprovider that beats physical price amortised over three years (including\nrack costs). The ability to more easily \"make\" machines to help with\nupgrades is attractive, though.\n\nSome comments and advice on how to approach this would be very\ngratefully received.\n\nThanks\nRory\n\n\n\n\n",
"msg_date": "Sun, 16 Sep 2018 13:23:36 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice on machine specs for growth"
}
] |
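On the question of when queries start spilling to disk, a rough way to watch for it is sketched below (assuming permission to change server settings; only standard system views are used):

  -- log every temporary file a query writes, together with its size
  ALTER SYSTEM SET log_temp_files = 0;
  SELECT pg_reload_conf();

  -- cumulative spill activity per database since the last stats reset
  SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
  FROM pg_stat_database
  ORDER BY temp_bytes DESC;

If temp_files starts climbing, raising work_mem (or adding RAM) is usually the first knob to look at.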
[
{
"msg_contents": "Hi ,\n\nI have a 10 TB size table with multiple bytea columns (image & doc)and\nmakes 20TB of DB size. I have a couple of issues to maintain the DB.\n\n1. I Would like to separate the image column from the 10TB size table,\nplace it in a separate schema. The change should not result in any query\nchange in the application. Is it possible? Doing this it should not affect\nthe performance.\n\n2. I can't maintain files on File system as the count is huge, so thinking\nof using any no-sql mostly mongo-DB, is it recommended? Or PostgreSQL\nitself can handle?\n\n3. Taking the backup of 20TB data, is big task. Any more feasible solution\nother than online backup/pg_dump?\n\nEach image retrieval is\nCurrently, we are on pg 9.4 and moving to 10.5 soon.\n\nThanks,\nGJ.\n\nHi ,I have a 10 TB size table with multiple bytea columns (image & doc)and makes 20TB of DB size. I have a couple of issues to maintain the DB.1. I Would like to separate the image column from the 10TB size table, place it in a separate schema. The change should not result in any query change in the application. Is it possible? Doing this it should not affect the performance. 2. I can't maintain files on File system as the count is huge, so thinking of using any no-sql mostly mongo-DB, is it recommended? Or PostgreSQL itself can handle? 3. Taking the backup of 20TB data, is big task. Any more feasible solution other than online backup/pg_dump?Each image retrieval is Currently, we are on pg 9.4 and moving to 10.5 soon. Thanks,GJ.",
"msg_date": "Mon, 17 Sep 2018 18:08:33 +0530",
"msg_from": "still Learner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Big image tables maintenance"
},
{
"msg_contents": "On 09/17/2018 07:38 AM, still Learner wrote:\n> Hi ,\n>\n> I have a 10 TB size table with multiple bytea columns (image & doc)and \n> makes 20TB of DB size. I have a couple of issues to maintain the DB.\n>\n> 1. I Would like to separate the image column from the 10TB size table, \n> place it in a separate schema. The change should not result in any query \n> change in the application. Is it possible? Doing this it should not \n> affect the performance.\n\nThat's called \"vertical partitioning\", which I don't think Postgres supports.\n\n>\n> 2. I can't maintain files on File system as the count is huge,\n\nEh? *You* aren't supposed to maintain the files on the filesystem; \n*Postgres* is.\n\n> so thinking of using any no-sql mostly mongo-DB, is it recommended? Or \n> PostgreSQL itself can handle?\n>\n> 3. Taking the backup of 20TB data, is big task. Any more feasible solution \n> other than online backup/pg_dump?\n\npgbackrest and barman are popular options.\n\n(We have a database like yours, though only 3TB, and have found that pg_dump \nruns a *lot* faster with \"--compress=0\". The backups are 2.25x larger than \nthe database, though...)\n\n>\n> Each image retrieval is\n> Currently, we are on pg 9.4 and moving to 10.5 soon.\n> Thanks,\n> GJ.\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n On 09/17/2018 07:38 AM, still Learner wrote:\n\n\nHi ,\n\n\nI have a 10 TB size table with multiple bytea columns\n (image & doc)and makes 20TB of DB size. I have a couple of\n issues to maintain the DB.\n\n\n1. I Would like to separate the image column from the 10TB\n size table, place it in a separate schema. The change should\n not result in any query change in the application. Is it\n possible? Doing this it should not affect the performance. \n\n\n\n\n That's called \"vertical partitioning\", which I don't think Postgres\n supports.\n\n\n\n\n\n2. I can't maintain files on File system\n as the count is huge, \n\n\n\n Eh? You aren't supposed to maintain the files on the\n filesystem; Postgres is.\n\n\n\nso thinking of using any no-sql mostly\n mongo-DB, is it recommended? Or PostgreSQL itself can handle? \n\n\n3. Taking the backup of 20TB data, is big task.\n Any more feasible solution other than online backup/pg_dump?\n\n\n\n pgbackrest and barman are popular options.\n\n (We have a database like yours, though only 3TB, and have found that\n pg_dump runs a lot faster with \"--compress=0\". The backups\n are 2.25x larger than the database, though...)\n\n\n\n\n\nEach image retrieval is \nCurrently, we are on pg 9.4 and moving to 10.5 soon.\n \n\n\n\nThanks,\nGJ.\n\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.",
"msg_date": "Mon, 17 Sep 2018 08:45:17 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big image tables maintenance"
},
{
"msg_contents": "Greetings,\n\n(limiting this to -admin, cross-posting this to a bunch of different\nlists really isn't helpful)\n\n* still Learner ([email protected]) wrote:\n> I have a 10 TB size table with multiple bytea columns (image & doc)and\n> makes 20TB of DB size. I have a couple of issues to maintain the DB.\n\n*What* are those issues..? That's really the first thing to discuss\nhere but you don't ask any questions about it or state what the issue is\n(except possibly for backups, but we have solutions for that, as\nmentioned below).\n\n> 1. I Would like to separate the image column from the 10TB size table,\n> place it in a separate schema. The change should not result in any query\n> change in the application. Is it possible? Doing this it should not affect\n> the performance.\n\nHow large are these images? PostgreSQL will already pull out large\ncolumn values and put them into a side-table for you, behind the scenes,\nusing a technique called TOAST. Documentation about TOAST is available\nhere:\n\nhttps://www.postgresql.org/docs/current/static/storage-toast.html\n\n> 2. I can't maintain files on File system as the count is huge, so thinking\n> of using any no-sql mostly mongo-DB, is it recommended? Or PostgreSQL\n> itself can handle?\n\nI suspect you'd find that your data size would end up being much, much\nlarger if you tried to store it as JSON or in a similar system, and\nyou're unlikely to get any performance improvement (much more likely the\nopposite, in fact).\n\n> 3. Taking the backup of 20TB data, is big task. Any more feasible solution\n> other than online backup/pg_dump?\n\nAbsolutely, I'd recommend using pgBackRest which supports parallel\nonline backup and restore. Using pg_dump for a large system like this\nis really not a good idea- your restore time would likely be\nparticularly terrible and you have no ability to do point-in-time\nrecovery. Using pgBackRest and a capable system, you'd be able to get a\ncomplete backup of 20TB in perhaps 6-12 hours, with similar time on the\nrecovery side. If you wish to be able to recover faster, running a\nreplica (as well as doing backups) may be a good idea, perhaps even a\ntime-delayed one.\n\n> Each image retrieval is\n\nUnfinished thought here..?\n\n> Currently, we are on pg 9.4 and moving to 10.5 soon.\n\nThat's definitely a good plan.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 17 Sep 2018 09:58:20 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big image tables maintenance"
},
{
"msg_contents": "On 09/17/2018 07:38 AM, still Learner wrote:\n> Hi ,\n> \n> I have a 10 TB size table with multiple bytea columns (image & doc)and makes 20TB of DB size. I have a couple of issues to maintain the DB.\n> \n> 1. I Would like to separate the image column from the 10TB size table, place it in a separate schema. The change should not result in any query change in the application. Is it possible? Doing this it should not affect the performance. \n\nThey're automatically stored separate, see https://www.postgresql.org/docs/current/static/storage-toast.html.\n\n> 2. I can't maintain files on File system as the count is huge,\n\nSo? I've stored millions of documents on a Mac mini. Real server hardware & OS should have no problem--10TB is really not all that much.\n\n> so thinking of using any no-sql mostly mongo-DB, is it recommended? Or PostgreSQL itself can handle? \n\nOnly if all you need is the document storage, none of everything else PG offers.\n\n> 3. Taking the backup of 20TB data, is big task. Any more feasible solution other than online backup/pg_dump?\n\nThat's an argument for keeping the presumably immutable files on the file system. (There are arguments against as well.)\n\n\n",
"msg_date": "Mon, 17 Sep 2018 07:59:58 -0600",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big image tables maintenance"
},
{
"msg_contents": "Greetings,\n\n* Ron ([email protected]) wrote:\n> On 09/17/2018 07:38 AM, still Learner wrote:\n> >I have a 10 TB size table with multiple bytea columns (image & doc)and\n> >makes 20TB of DB size. I have a couple of issues to maintain the DB.\n> >\n> >1. I Would like to separate the image column from the 10TB size table,\n> >place it in a separate schema. The change should not result in any query\n> >change in the application. Is it possible? Doing this it should not\n> >affect the performance.\n> \n> That's called \"vertical partitioning\", which I don't think Postgres supports.\n\nAs mentioned, PostgreSQL will already do this for you with TOAST, but\neven without that, you could certainly create a simple view..\n\n> >2. I can't maintain files on File system as the count is huge,\n> \n> Eh? *You* aren't supposed to maintain the files on the filesystem;\n> *Postgres* is.\n\nI believe the point being made here is that pushing the images out of PG\nand on to the filesystem would result in a huge number of files and that\nwould be difficult for the filesystem to handle and generally difficult\nto work with.\n\n> (We have a database like yours, though only 3TB, and have found that pg_dump\n> runs a *lot* faster with \"--compress=0\". The backups are 2.25x larger than\n> the database, though...)\n\nUnfortunately, your restore time with a pg_dump-based backup is very\nhigh and that's something that I don't think enough people think about.\n\nHaving both pgBackRest-based physical backups and pg_dump-based backups\nis nice as it allows you to do selective restore when you need it, and\nfast full restore when needed. Of course, that requires additional\nstorage.\n\nNote that pg_dump/pg_restore also support parallelism, which can help\nwith how long they take to run.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 17 Sep 2018 10:01:08 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big image tables maintenance"
},
{
"msg_contents": "Greetings,\n\n* Scott Ribe ([email protected]) wrote:\n> On 09/17/2018 07:38 AM, still Learner wrote:\n> > 3. Taking the backup of 20TB data, is big task. Any more feasible solution other than online backup/pg_dump?\n> \n> That's an argument for keeping the presumably immutable files on the file system. (There are arguments against as well.)\n\nWhile I'm not generally against the idea of keeping files on the\nfilesystem, I'm not sure how that really changes things when it comes to\nbackup..? If anything, having the files on the filesystem makes backing\nthings up much more awkward, since you don't have the transactional\nguarantees on the filesystem that you do in the database and if you push\nthe files out but keep the metadata and indexes in the database then you\nhave to deal with reconsiling the two, in general and particularly when\nperforming a backup/restore.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 17 Sep 2018 10:03:29 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big image tables maintenance"
},
{
"msg_contents": "On Mon, Sep 17, 2018, 19:28 Stephen Frost <[email protected]> wrote:\n\n> Greetings,\n>\n> (limiting this to -admin, cross-posting this to a bunch of different\n> lists really isn't helpful)\n>\n> * still Learner ([email protected]) wrote:\n> > I have a 10 TB size table with multiple bytea columns (image & doc)and\n> > makes 20TB of DB size. I have a couple of issues to maintain the DB.\n>\n> *What* are those issues..? That's really the first thing to discuss\n> here but you don't ask any questions about it or state what the issue is\n> (except possibly for backups, but we have solutions for that, as\n> mentioned below).\n\n> 1. I Would like to separate the image column from the 10TB size table,\n> > place it in a separate schema. The change should not result in any query\n> > change in the application. Is it possible? Doing this it should not\n> affect\n> > the performance.\n>\n> How large are these images? PostgreSQL will already pull out large\n> column values and put them into a side-table for you, behind the scenes,\n> using a technique called TOAST. Documentation about TOAST is available\n> here:\n>\n> https://www.postgresql.org/docs/current/static/storage-toast.htm\n> <https://www.postgresql.org/docs/current/static/storage-toast.html>\n\n\nImage size is restricted in two digit KBs\nonly, but we have very large volume of data. The main reason to split the\nimage to different schema is to avoid data loss in future if corruption\noccurs on the table. Also maintenance would be easier compared to now. The\nDb growth is much faster, I can say 1 Tb per quarter.\n\n\n> > 2. I can't maintain files on File system as the count is huge, so\n> thinking\n> > of using any no-sql mostly mongo-DB, is it recommended? Or PostgreSQL\n> > itself can handle?\n>\n> I suspect you'd find that your data size would end up being much, much\n> larger if you tried to store it as JSON or in a similar system, and\n> you're unlikely to get any performance improvement (much more likely the\n> opposite, in fact)\n>\n\n\nWe are also considering document management tools.\n\nFor these type of huge amount of data,is it advisable to keep the images in\nbytea type only or Jsonb( I haven't used yet) is also an option?\n\n> 3. Taking the backup of 20TB data, is big task. Any more feasible solution\n> > other than online backup/pg_dump?\n>\n> Absolutely, I'd recommend using pgBackRest which supports parallel\n> online backup and restore. Using pg_dump for a large system like this\n> is really not a good idea- your restore time would likely be\n> particularly terrible and you have no ability to do point-in-time\n> recovery. Using pgBackRest and a capable system, you'd be able to get a\n> complete backup of 20TB in perhaps 6-12 hours, with similar time on the\n> recovery side. If you wish to be able to recover faster, running a\n> replica (as well as doing backups) may be a good idea, perhaps even a\n> time-delayed one.\n>\n\nYeah I will try pgBackrest. We are already having time dealy replica.\n\n\n\n> > Each image retrieval is\n>\n> Unfinished thought here..?\n>\nSorry, some how I missed to complete. I supposed to say, image rerival\nratio would be 1:10, mean once each image inserted it would be retrieved by\nthe application more about 10 times for verification and prints etc.\n\nViewing the current data growth how long I mean till what size I can\nsurvive with this type of flow. 
In other words, just dont want to survive\nbut would like build a robust environment.\n\n\n> > Currently, we are on pg 9.4 and moving to 10.5 soon.\n>\n> That's definitely a good plan.\n>\n> Thanks!\n>\n> Stephen\n>\n\nOn Mon, Sep 17, 2018, 19:28 Stephen Frost <[email protected]> wrote:Greetings,\n\n(limiting this to -admin, cross-posting this to a bunch of different\nlists really isn't helpful)\n\n* still Learner ([email protected]) wrote:\n> I have a 10 TB size table with multiple bytea columns (image & doc)and\n> makes 20TB of DB size. I have a couple of issues to maintain the DB.\n\n*What* are those issues..? That's really the first thing to discuss\nhere but you don't ask any questions about it or state what the issue is\n(except possibly for backups, but we have solutions for that, as\nmentioned below).\n> 1. I Would like to separate the image column from the 10TB size table,\n> place it in a separate schema. The change should not result in any query\n> change in the application. Is it possible? Doing this it should not affect\n> the performance.\n\nHow large are these images? PostgreSQL will already pull out large\ncolumn values and put them into a side-table for you, behind the scenes,\nusing a technique called TOAST. Documentation about TOAST is available\nhere:\n\nhttps://www.postgresql.org/docs/current/static/storage-toast.htmImage size is restricted in two digit KBsonly, but we have very large volume of data. The main reason to split the image to different schema is to avoid data loss in future if corruption occurs on the table. Also maintenance would be easier compared to now. The Db growth is much faster, I can say 1 Tb per quarter.\n> 2. I can't maintain files on File system as the count is huge, so thinking\n> of using any no-sql mostly mongo-DB, is it recommended? Or PostgreSQL\n> itself can handle?\n\nI suspect you'd find that your data size would end up being much, much\nlarger if you tried to store it as JSON or in a similar system, and\nyou're unlikely to get any performance improvement (much more likely the\nopposite, in fact)We are also considering document management tools. For these type of huge amount of data,is it advisable to keep the images in bytea type only or Jsonb( I haven't used yet) is also an option?\n> 3. Taking the backup of 20TB data, is big task. Any more feasible solution\n> other than online backup/pg_dump?\n\nAbsolutely, I'd recommend using pgBackRest which supports parallel\nonline backup and restore. Using pg_dump for a large system like this\nis really not a good idea- your restore time would likely be\nparticularly terrible and you have no ability to do point-in-time\nrecovery. Using pgBackRest and a capable system, you'd be able to get a\ncomplete backup of 20TB in perhaps 6-12 hours, with similar time on the\nrecovery side. If you wish to be able to recover faster, running a\nreplica (as well as doing backups) may be a good idea, perhaps even a\ntime-delayed one.Yeah I will try pgBackrest. We are already having time dealy replica.\n\n> Each image retrieval is\n\nUnfinished thought here..?Sorry, some how I missed to complete. I supposed to say, image rerival ratio would be 1:10, mean once each image inserted it would be retrieved by the application more about 10 times for verification and prints etc.Viewing the current data growth how long I mean till what size I can survive with this type of flow. 
In other words, just dont want to survive but would like build a robust environment.\n\n> Currently, we are on pg 9.4 and moving to 10.5 soon.\n\nThat's definitely a good plan.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 18 Sep 2018 00:42:15 +0530",
"msg_from": "still Learner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Big image tables maintenance"
},
{
"msg_contents": "Greetings,\n\n* still Learner ([email protected]) wrote:\n> On Mon, Sep 17, 2018, 19:28 Stephen Frost <[email protected]> wrote:\n> > (limiting this to -admin, cross-posting this to a bunch of different\n> > lists really isn't helpful)\n> >\n> > * still Learner ([email protected]) wrote:\n> > > I have a 10 TB size table with multiple bytea columns (image & doc)and\n> > > makes 20TB of DB size. I have a couple of issues to maintain the DB.\n> >\n> > *What* are those issues..? That's really the first thing to discuss\n> > here but you don't ask any questions about it or state what the issue is\n> > (except possibly for backups, but we have solutions for that, as\n> > mentioned below).\n> \n> > 1. I Would like to separate the image column from the 10TB size table,\n> > > place it in a separate schema. The change should not result in any query\n> > > change in the application. Is it possible? Doing this it should not\n> > affect\n> > > the performance.\n> >\n> > How large are these images? PostgreSQL will already pull out large\n> > column values and put them into a side-table for you, behind the scenes,\n> > using a technique called TOAST. Documentation about TOAST is available\n> > here:\n> >\n> > https://www.postgresql.org/docs/current/static/storage-toast.htm\n> > <https://www.postgresql.org/docs/current/static/storage-toast.html>\n> \n> Image size is restricted in two digit KBs\n> only, but we have very large volume of data. The main reason to split the\n> image to different schema is to avoid data loss in future if corruption\n> occurs on the table. Also maintenance would be easier compared to now. The\n> Db growth is much faster, I can say 1 Tb per quarter.\n\n\"two digit KBs\" doesn't actually provide much enlightenment. I'd\nsuggest you check for and look at the size of the TOAST table for your\nenvironment.\n\nAs for growth, you'd probably be best off looking at partitioning the\nlarge data set once you've gotten the system up to 10.5, but that's\nmostly to make it easier to manage the data and to do things like expire\nout old data.\n\n> > > 2. I can't maintain files on File system as the count is huge, so\n> > thinking\n> > > of using any no-sql mostly mongo-DB, is it recommended? Or PostgreSQL\n> > > itself can handle?\n> >\n> > I suspect you'd find that your data size would end up being much, much\n> > larger if you tried to store it as JSON or in a similar system, and\n> > you're unlikely to get any performance improvement (much more likely the\n> > opposite, in fact)\n> \n> We are also considering document management tools.\n\nNot really sure what that changes here, but doesn't seem like much.\n\n> For these type of huge amount of data,is it advisable to keep the images in\n> bytea type only or Jsonb( I haven't used yet) is also an option?\n\nIf you go to JSONB then you'd likely end up seriously increasing the\nsize, so, no, I wouldn't suggest going there for binary image data.\n\n> > 3. Taking the backup of 20TB data, is big task. Any more feasible solution\n> > > other than online backup/pg_dump?\n> >\n> > Absolutely, I'd recommend using pgBackRest which supports parallel\n> > online backup and restore. Using pg_dump for a large system like this\n> > is really not a good idea- your restore time would likely be\n> > particularly terrible and you have no ability to do point-in-time\n> > recovery. Using pgBackRest and a capable system, you'd be able to get a\n> > complete backup of 20TB in perhaps 6-12 hours, with similar time on the\n> > recovery side. 
If you wish to be able to recover faster, running a\n> > replica (as well as doing backups) may be a good idea, perhaps even a\n> > time-delayed one.\n> \n> Yeah I will try pgBackrest. We are already having time dealy replica.\n\nThat's good.\n\n> > > Each image retrieval is\n> >\n> > Unfinished thought here..?\n>\n> Sorry, some how I missed to complete. I supposed to say, image rerival\n> ratio would be 1:10, mean once each image inserted it would be retrieved by\n> the application more about 10 times for verification and prints etc.\n\nIf there's an issue with the load associated with retriving the images\nthen I would suggest that you stand up a physical replica or two and\nthen move the read load to those systems.\n\n> Viewing the current data growth how long I mean till what size I can\n> survive with this type of flow. In other words, just dont want to survive\n> but would like build a robust environment.\n\nPostgreSQL is quite robust and can handle a very large amount of data.\nYou can improve on that by having physical replicas which are available\nfor failover and handling high read load. At 1TB/quarter, it seems very\nunlikely that you'll run into any serious limitations in PostgreSQL any\ntime soon.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 17 Sep 2018 15:34:20 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big image tables maintenance"
}
] |
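To see how much of such a table PostgreSQL has already moved into its TOAST side-table, a query along these lines can be used ('images' is a hypothetical table name standing in for the poster's 10 TB table):

  SELECT reltoastrelid::regclass                         AS toast_table,
         pg_size_pretty(pg_relation_size(oid))           AS main_heap,
         pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size,
         pg_size_pretty(pg_total_relation_size(oid))     AS total_incl_indexes
  FROM pg_class
  WHERE relname = 'images';

If most of the volume already sits in the TOAST table, splitting the bytea columns out into a separate schema would mostly be an organisational change rather than a performance one.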
[
{
"msg_contents": "Hi,\n\nFor a traditional LEFT JOIN, in case the SELECT does not mention a field\nfrom a joined table being unique , the planner removes the join. Eg:\n\nSELECT a, b --,c\nFROM table1\nLEFT JOIN (select a, c from table2 group by a) joined USING (a)\n\nHowever this behavior is not the same for LATERAL JOINS\n\nSELECT a, b --,c\nFROM table1\nLEFT JOIN LATERAL (select a, c from table2 where table1.a = table2.a group by a) joined ON TRUE\n\nIn this case, the planner still consider the joined table. My guess is\nit could remove it .\n\n\nAny thought ?\n\n-- \nnicolas\n\n",
"msg_date": "Tue, 18 Sep 2018 09:06:36 +0200",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "LEFT JOIN LATERAL optimisation at plan time"
},
{
"msg_contents": "Nicolas Paris <[email protected]> writes:\n> For a traditional LEFT JOIN, in case the SELECT does not mention a field\n> from a joined table being unique , the planner removes the join. Eg:\n\n> SELECT a, b --,c\n> FROM table1\n> LEFT JOIN (select a, c from table2 group by a) joined USING (a)\n\n> However this behavior is not the same for LATERAL JOINS\n\n> SELECT a, b --,c\n> FROM table1\n> LEFT JOIN LATERAL (select a, c from table2 where table1.a = table2.a group by a) joined ON TRUE\n\nThe way you've set that up, the constraint required to deduce uniqueness\n(i.e. the table1.a = table2.a clause) is hidden inside a non-trivial\nsubquery; and, where it's placed, it isn't actually guaranteeing anything\nso far as the inner query is concerned, ie the select from table2 could\neasily return multiple rows. I'm not too surprised that the outer planner\nlevel doesn't make this deduction.\n\n> In this case, the planner still consider the joined table. My guess is\n> it could remove it .\n\nIt looks to me like it would require a substantial amount of additional\ncode and plan-time effort to find cases like this. I'm not convinced\nthat the cost-benefit ratio is attractive.\n\nMaybe in some hypothetical future where we're able to flatten sub-selects\neven though they contain GROUP BY, it would get easier/cheaper to detect\nthis case. But that's just pie in the sky at the moment.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 18 Sep 2018 18:21:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LEFT JOIN LATERAL optimisation at plan time"
}
] |
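Tom's point about provable uniqueness can be seen with a small self-contained example (hypothetical tables t1 and t2): the plain LEFT JOIN below is removed because t2.a is unique and no t2 column is referenced, while the lateral form keeps the join.

  CREATE TABLE t1 (a int, b int);
  CREATE TABLE t2 (a int PRIMARY KEY, c int);

  -- join removal: the plan is a plain scan of t1, t2 never appears
  EXPLAIN SELECT t1.a, t1.b
  FROM t1
  LEFT JOIN t2 USING (a);

  -- the uniqueness is hidden inside the lateral subquery, so the join stays
  EXPLAIN SELECT t1.a, t1.b
  FROM t1
  LEFT JOIN LATERAL (SELECT a FROM t2 WHERE t2.a = t1.a GROUP BY a) s ON true;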
[
{
"msg_contents": "We have faced in issue in our Postgresql 9.5.13 cluster. Inserts into\nbtree index are too slow when strings contain Thai characters.Test\nscript and results are in the attachment. Test shows that insert Thai\nstring into index is more than 60x times slower than Chinese or\nRussian, for example. Tracing with perf showed that problem is in\nstrcoll_l() libc function (see thai-slow.svg). This function is used\nwhen locale is different from 'C'. For 'C' locale just simple\ncomparison is used (see thai-fast.graph) and performance is OK.\nOf course, I googled and thought that it is a bug in glibc\n(https://sourceware.org/bugzilla/show_bug.cgi?id=18441), but when I\ntried previous version of glibc (2.19 and 2.13) I found out that it\nstill reproduced. I know that I can upgrade PostgreSQL to 10 and user\nlibicu for string comparison but is there any way to fix that in\nPostgreSQL 9.5.13?\n\nP.S. I can provide COLLATE \"C\" for this column during its creation but\nit looks a little bit tricky.\n\n-- \nWith best regards, Andrey Zhidenkov",
"msg_date": "Tue, 18 Sep 2018 14:31:08 +0700",
"msg_from": "Andrey Zhidenkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problems with Thai language"
}
] |
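Two workarounds the poster alludes to, sketched for a hypothetical table docs(title text); the ICU variant assumes a PostgreSQL 10+ server built with ICU support, where collations such as "th-x-icu" are created by initdb.

  -- byte-wise comparisons (no strcoll call), at the cost of Thai
  -- dictionary sort order; this rewrites the table
  ALTER TABLE docs ALTER COLUMN title TYPE text COLLATE "C";

  -- or keep the column as-is and build an ICU-collated index; queries must
  -- then use the same collation (e.g. ORDER BY title COLLATE "th-x-icu")
  -- for the index to be usable
  CREATE INDEX docs_title_icu_idx ON docs (title COLLATE "th-x-icu");

ICU comparisons bypass glibc's strcoll() entirely, but whether they are actually faster for Thai strings would need to be measured on the same data.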
[
{
"msg_contents": "Hi,\n\npg_pub_decrypt() is ~10x slower when the priv/pub keys have been\ngenerated with gnupg version 2.x instead of version 1.x.\n\nWhat I do is:\n- Create keys with gpg\n- Export priv/pub keys\n- Store keys in binary form in a bytea\n- Create 32 byte random data and encrypt it with pg_pub_encrypt()\n- \\timing on\n- Decrypt with pg_pub_decrypt().\n\nI see ~8ms with v1 keys vs. ~100ms with v2 keys.\n\nI am using defaults everywhere, when generating keys as well as\nencrypting with pg_pub_encrypt().\n\nOutside postgresql, I've tested random file encryption/decryption\nwith gpg 2.x and with both the v1 keys against the v2 keys (both in\nthe gpg keyring) and cannot detect significant differences.\n\nWhat can I do to track that issue further down.\n\nThanks\n\n",
"msg_date": "Tue, 18 Sep 2018 16:28:18 +0200",
"msg_from": "\"Felix A. Kater\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_pub_decrypt: 10x performance hit with gpg v2"
}
] |
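No analysis of the gpg 2.x slowdown here, but for anyone wanting to reproduce the timing test, a minimal SQL sketch of it follows (the pgcrypto functions are spelled pgp_pub_encrypt/pgp_pub_decrypt; the keytest table and its contents are hypothetical, with the exported key material loaded by the application):

  CREATE EXTENSION IF NOT EXISTS pgcrypto;

  CREATE TABLE keytest (
      label   text,    -- e.g. 'gpg1' or 'gpg2'
      pubkey  bytea,   -- dearmor()'ed public key export
      privkey bytea,   -- dearmor()'ed private key export
      msg     bytea
  );

  -- encrypt 32 random bytes against each stored public key
  UPDATE keytest
     SET msg = pgp_pub_encrypt(encode(gen_random_bytes(32), 'hex'), pubkey);

  -- \timing on
  -- add the passphrase as a third argument if the private key is protected
  SELECT label, length(pgp_pub_decrypt(msg, privkey)) FROM keytest;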
[
{
"msg_contents": "Why the sql is not executed in parallel mode, does the sql has some problem?\nwith sql1 as\n(select a.*\n from snaps a\n where a.f_date between to_date('2018-03-05', 'yyyy-MM-dd') and\n to_date('2018-03-11', 'yyyy-MM-dd')\n ),\nsql2 as\n(select '1' as pId, PM_TO as pValue, type_code as typeCode, version_no as versionNo,\nbs as bs, l.order_rule as orderRule\n from sql1, qfpl l\n where PM_TO is not null\n and l.pid = 1\n union all\n select '2' as pId,\n PRTO as pValue,\n type_code as typeCode, version_no as versionNo,\nbs as bs, l.order_rule as orderRule\n from sql1, qfpl l\n where PRTO is not null\n and l.pid = 2\n union all\n select '3' as pId,\n PRATO as pValue,\n type_code as typeCode, version_no as versionNo,\nbs as bs, l.order_rule as orderRule\n from sql1, qfpl l\n where PRATO is not null\n and l.pid = 3\n ),\nsql4 as (\nselect typeCode, pId, orderRule, versionNo,\nrow_number() over(partition by pId, typeCode order by pValue) as rnn\n from sql2\n),\nsql5 as (\nselect sql4.typeCode as typeCode,\n sql4.pId as pId,\n sql4.orderRule as orderRule,\n t.pValue as pValue,\n sql4.versionNo as versionNo\nfrom sql4,\n(select sql2.typeCode,sql2.pId,sql2.orderRule,\n (case when sql2.orderRule = 1 then\nPERCENTILE_DISC(0.05) WITHIN GROUP(ORDER BY sql2.pValue)\n else\nPERCENTILE_DISC(0.95) WITHIN GROUP(ORDER BY sql2.pValue)\nend) as pValue,\n (case when sql2.orderRule = 1 then\n (case when round(count(1) * 0.05) - 1 < 0 then 1\n else round(count(1) * 0.05)\n end)\n else\n (case when round(count(1) * 0.95) - 1 < 0 then 1\n else round(count(1) * 0.95)\n end)\n end) as rnn\n from sql2\n group by sql2.typeCode, sql2.pId, sql2.orderRule) t\nwhere sql4.typeCode = t.typeCode\nand sql4.pId = t.pId\n and sql4.orderRule = t.orderRule\n and sql4.rnn = t.rnn\n),\nsql6 as (\nselect sql2.pId, sql2.typeCode as typeCode, count(1) as fCount\n from sql2, sql5\n where sql2.pId = sql5.pId\n and sql2.typeCode = sql5.typeCode\n and ((sql2.orderRule = 2 and sql2.pValue >= sql5.pValue) or\n (sql2.orderRule = 1 and sql2.pValue <= sql5.pValue))\n and sql2.pId != '22'\n group by sql2.pId, sql2.typeCode\n union \n select sql5.pId, sql5.typeCode, 0 as fCount\n from sql5\n where sql5.pId = '22'\n group by sql5.pId, sql5.typeCode\n)\nselect sql5.pId,\n sql5.typeCode,\n (case when sql5.pId = '22' then\n (select p.d_chn\n from qlp p\n where p.version_no = sql5.versionNo\n and p.cno = sql5.pValue\n and (p.typeCode = sql5.typeCode or p.typeCode is null))\n else \nsql5.pValue || ''\n end) pValue,\n sql6.fCount,\n (case when d.delta = 'Y' then d.dy_val\nelse d.y_val\nend) yVal,\n (case when d.is_delta = 'Y' then d.dr_val\nelse d.r_val\nend) rVal,\n f.p_no pNo,\n f.p_name ||(case when f.unit = '' then ''\nelse '('|| f.unit ||')'\n end) pName,\n f.pe_name || (case when f.unit = '' then ''\n else '(' || f.unit || ')'\n end) peName,\n c.fp_name fpName,\n f.order_rule as orderRule,\n f.pflag pFlag,\n f.pdesc as pDesc\n from sql5, sql6, qfpl f, qpa d,qfp c\n where sql5.pId = sql6.pId\n and sql5.typeCode = sql6.typeCode\n and sql5.pId = f.pid||''\n and f.deleted = 0\n and f.pid = d.pid\n and sql5.typeCode = d.typeCode\n and f.fp_id = c.fp_id\n order by f.t_sort, c.fp_id,f.p_no\nWhy the sql is not executed in parallel mode, does the sql has some problem?with sql1 as (select a.* from snaps a where a.f_date between to_date('2018-03-05', 'yyyy-MM-dd') and to_date('2018-03-11', 'yyyy-MM-dd') ), sql2 as (select '1' as pId, PM_TO as pValue, type_code as typeCode, version_no as versionNo, bs as bs, l.order_rule as orderRule from 
sql1, qfpl l where PM_TO is not null and l.pid = 1 union all select '2' as pId, PRTO as pValue, type_code as typeCode, version_no as versionNo, bs as bs, l.order_rule as orderRule from sql1, qfpl l where PRTO is not null and l.pid = 2 union all select '3' as pId, PRATO as pValue, type_code as typeCode, version_no as versionNo, bs as bs, l.order_rule as orderRule from sql1, qfpl l where PRATO is not null and l.pid = 3 ), sql4 as ( select typeCode, pId, orderRule, versionNo, row_number() over(partition by pId, typeCode order by pValue) as rnn from sql2 ), sql5 as ( select sql4.typeCode as typeCode, sql4.pId as pId, sql4.orderRule as orderRule, t.pValue as pValue, sql4.versionNo as versionNo from sql4, (select sql2.typeCode,sql2.pId,sql2.orderRule, (case when sql2.orderRule = 1 then PERCENTILE_DISC(0.05) WITHIN GROUP(ORDER BY sql2.pValue) else PERCENTILE_DISC(0.95) WITHIN GROUP(ORDER BY sql2.pValue) end) as pValue, (case when sql2.orderRule = 1 then (case when round(count(1) * 0.05) - 1 < 0 then 1 else round(count(1) * 0.05) end) else (case when round(count(1) * 0.95) - 1 < 0 then 1 else round(count(1) * 0.95) end) end) as rnn from sql2 group by sql2.typeCode, sql2.pId, sql2.orderRule) t where sql4.typeCode = t.typeCode and sql4.pId = t.pId and sql4.orderRule = t.orderRule and sql4.rnn = t.rnn ), sql6 as ( select sql2.pId, sql2.typeCode as typeCode, count(1) as fCount from sql2, sql5 where sql2.pId = sql5.pId and sql2.typeCode = sql5.typeCode and ((sql2.orderRule = 2 and sql2.pValue >= sql5.pValue) or (sql2.orderRule = 1 and sql2.pValue <= sql5.pValue)) and sql2.pId != '22' group by sql2.pId, sql2.typeCode union select sql5.pId, sql5.typeCode, 0 as fCount from sql5 where sql5.pId = '22' group by sql5.pId, sql5.typeCode ) select sql5.pId, sql5.typeCode, (case when sql5.pId = '22' then (select p.d_chn from qlp p where p.version_no = sql5.versionNo and p.cno = sql5.pValue and (p.typeCode = sql5.typeCode or p.typeCode is null)) else sql5.pValue || '' end) pValue, sql6.fCount, (case when d.delta = 'Y' then d.dy_val else d.y_val end) yVal, (case when d.is_delta = 'Y' then d.dr_val else d.r_val end) rVal, f.p_no pNo, f.p_name ||(case when f.unit = '' then '' else '('|| f.unit ||')' end) pName, f.pe_name || (case when f.unit = '' then '' else '(' || f.unit || ')' end) peName, c.fp_name fpName, f.order_rule as orderRule, f.pflag pFlag, f.pdesc as pDesc from sql5, sql6, qfpl f, qpa d,qfp c where sql5.pId = sql6.pId and sql5.typeCode = sql6.typeCode and sql5.pId = f.pid||'' and f.deleted = 0 and f.pid = d.pid and sql5.typeCode = d.typeCode and f.fp_id = c.fp_id order by f.t_sort, c.fp_id,f.p_no",
"msg_date": "Wed, 19 Sep 2018 09:53:28 +0800 (CST)",
"msg_from": "jimmy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why the sql is not executed in parallel mode"
},
{
"msg_contents": "On Wed, Sep 19, 2018 at 1:53 PM jimmy <[email protected]> wrote:\n>\n> Why the sql is not executed in parallel mode, does the sql has some problem?\n> with sql1 as\n\nHello Jimmy,\n\nWITH is the problem. From the manual[1]: \"The following operations\nare always parallel restricted. Scans of common table expressions\n(CTEs). ...\". That means that these CTEs can only be scanned in the\nleader process.\n\nIf you rewrite the query using sub selects it might do better. FWIW\nthere is a project to make WITH work like subselects automatically in\na future release of PostgreSQL:\n\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\n[1] https://www.postgresql.org/docs/10/static/parallel-safety.html\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Wed, 19 Sep 2018 15:21:53 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the sql is not executed in parallel mode"
},
{
"msg_contents": "Which version are you running?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html\n\n",
"msg_date": "Wed, 26 Sep 2018 09:47:27 -0700 (MST)",
"msg_from": "pinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the sql is not executed in parallel mode"
}
] |
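A stripped-down illustration of the rewrite Thomas suggests, using only the snaps table and f_date column from the original query: on 9.6/10 the CTE form is scanned only in the leader, while the flattened subquery form is eligible for a Parallel Seq Scan, other parallel settings permitting.

  -- CTE form: parallel restricted
  EXPLAIN
  WITH sql1 AS (
      SELECT * FROM snaps
      WHERE f_date BETWEEN DATE '2018-03-05' AND DATE '2018-03-11'
  )
  SELECT count(*) FROM sql1;

  -- subquery form: can be flattened and scanned in parallel
  EXPLAIN
  SELECT count(*) FROM (
      SELECT * FROM snaps
      WHERE f_date BETWEEN DATE '2018-03-05' AND DATE '2018-03-11'
  ) AS sql1;

From PostgreSQL 12 on, non-recursive CTEs referenced only once are inlined automatically, which removes this particular difference.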
[
{
"msg_contents": "Hi!\n\nI would have following question, if someone could help.\n\nQuestion 1: How to see/calculate size of index in memory?\nBTree, hash index.\n\nI can see size of index e.g. with pg_relation_size FROM pg_class (after reindex). Does that tell size of index on disk?\n\nI would be interested how big part of index is in memory. (Whole index?)\n\nPG10/PG11.\nBest Regards, Sam\n\n\nHi!I would have following question, if someone could help.Question 1: How to see/calculate size of index in memory?BTree, hash index.I can see size of index e.g. with pg_relation_size FROM pg_class (after reindex). Does that tell size of index on disk?I would be interested how big part of index is in memory. (Whole index?)PG10/PG11.Best Regards, Sam",
"msg_date": "Wed, 19 Sep 2018 08:30:49 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to see/calculate size of index in memory?"
},
{
"msg_contents": "Hello\nYou can use pg_buffercache contrib module: https://www.postgresql.org/docs/current/static/pgbuffercache.html\n\npg_relation_size - yes, its full size on disk regardless buffer cache\n\nregards, Sergei\n\n",
"msg_date": "Wed, 19 Sep 2018 11:42:48 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to see/calculate size of index in memory?"
}
] |
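A sketch of the pg_buffercache approach Sergei mentions, for a hypothetical index named my_index; it only shows pages held in shared_buffers, not whatever the OS page cache holds on top, and assumes the default 8 kB block size.

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  SELECT c.relname,
         count(*)                                AS buffers,
         pg_size_pretty(count(*) * 8192)         AS cached_size,
         pg_size_pretty(pg_relation_size(c.oid)) AS on_disk_size
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
   AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
  WHERE c.relname = 'my_index'
  GROUP BY c.relname, c.oid;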
[
{
"msg_contents": "Hi!\n\nRelated to my other email (size of index in memory), \n\nOther questions,\nQ: To keep _index(es)_ in memory, is large enough effective_cache_size enough?\nQ: Size of shared_buffers does not matter regarding keeping index in memory?\nOr have I missed something, does it matter (to keep indexes in memory)?\n\nBackground info: I have plans to use hash indexes: very large amount of data in db tables, but (e.g. hash) indexes could be kept in memory.\n\nI am using PostgreSQL 10. I could start to use PostgreSQL 11, after it has been released.\n\nBest Regards, Sam\n\nHi!Related to my other email (size of index in memory), Other questions,Q: To keep _index(es)_ in memory, is large enough effective_cache_size enough?Q: Size of shared_buffers does not matter regarding keeping index in memory?Or have I missed something, does it matter (to keep indexes in memory)?Background info: I have plans to use hash indexes: very large amount of data in db tables, but (e.g. hash) indexes could be kept in memory.I am using PostgreSQL 10. I could start to use PostgreSQL 11, after it has been released.Best Regards, Sam",
"msg_date": "Wed, 19 Sep 2018 08:35:38 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
{
"msg_contents": "Hi\n\neffective_cache_size is not cache. It is just approx value for query planner: how many data can be found in RAM (both in shared_buffers and OS page cache)\n\n> Q: Size of shared_buffers does not matter regarding keeping index in memory?\nshared_buffers is cache for both tables and indexes pages. All data in tables and indexes are split to chunks 8 kb each - pages (usually 8kb, it can be redefined during source compilation).\nShared buffers cache is fully automatic, active used pages keeps in memory, lower used pages may be evicted. You can not pin any table or index to shared buffers.\n\nregards, Sergei\n\n",
"msg_date": "Wed, 19 Sep 2018 12:10:52 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
{
"msg_contents": "Hi!\nIs is possible to force PostgreSQL to keep an index in memory? The data in db table columns is not needed to be kept in memory, only the index. (hash index.)\n\nIt would sound optimal in our scenario.I think Oracle has capability to keep index in memory (in-memory db functionality). But does PostgreSQL have such a functionality? (I keep searching.)\nI have read:\nTuning Your PostgreSQL Server - PostgreSQL wiki\n(effective_cache_size, shared_buffers)\nI have seen responses to:\nPostgreSQL Index Caching\n\n\n\nShould I actually set shared_buffers to tens of gigabytes also, if I want to keep one very big index in memory?\n\nI ma also reading a PG book.\n\nBest Regards, Sam\n\n\n \n\n On Wednesday, September 19, 2018 11:40 AM, Sam R. <[email protected]> wrote:\n \n\n Hi!\n\nRelated to my other email (size of index in memory), \n\nOther questions,\nQ: To keep _index(es)_ in memory, is large enough effective_cache_size enough?\nQ: Size of shared_buffers does not matter regarding keeping index in memory?\nOr have I missed something, does it matter (to keep indexes in memory)?\n\nBackground info: I have plans to use hash indexes: very large amount of data in db tables, but (e.g. hash) indexes could be kept in memory.\n\nI am using PostgreSQL 10. I could start to use PostgreSQL 11, after it has been released.\n\nBest Regards, Sam\n\n\n \nHi!Is is possible to force PostgreSQL to keep an index in memory? The data in db table columns is not needed to be kept in memory, only the index. (hash index.)It would sound optimal in our scenario.I think Oracle has capability to keep index in memory (in-memory db functionality). But does PostgreSQL have such a functionality? (I keep searching.)I have read:Tuning Your PostgreSQL Server - PostgreSQL wiki(effective_cache_size, shared_buffers)I have seen responses to:PostgreSQL Index CachingShould I actually set shared_buffers to tens of gigabytes also, if I want to keep one very big index in memory?I ma also reading a PG book.Best Regards, Sam On Wednesday, September 19, 2018 11:40 AM, Sam R. <[email protected]> wrote: Hi!Related to my other email (size of index in memory), Other questions,Q: To keep _index(es)_ in memory, is large enough effective_cache_size enough?Q: Size of shared_buffers does not matter regarding keeping index in memory?Or have I missed something, does it matter (to keep indexes in memory)?Background info: I have plans to use hash indexes: very large amount of data in db tables, but (e.g. hash) indexes could be kept in memory.I am using PostgreSQL 10. I could start to use PostgreSQL 11, after it has been released.Best Regards, Sam",
"msg_date": "Wed, 19 Sep 2018 09:15:03 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
{
"msg_contents": "Sergei wrote:\n> You can not pin any table or index to shared buffers.\nThanks, this is answer to my other question!\nIn our case, this might be an important feature. \n(Index in memory, other data / columns not.)\n> shared_buffers is cache for both tables and indexes pages.\nOk. So, we should set also shared_buffers big.\n\nBR Sam\n \n\n On Wednesday, September 19, 2018 12:10 PM, Sergei Kornilov <[email protected]> wrote:\n \n\n Hi\n\neffective_cache_size is not cache. It is just approx value for query planner: how many data can be found in RAM (both in shared_buffers and OS page cache)\n\n> Q: Size of shared_buffers does not matter regarding keeping index in memory?\nshared_buffers is cache for both tables and indexes pages. All data in tables and indexes are split to chunks 8 kb each - pages (usually 8kb, it can be redefined during source compilation).\nShared buffers cache is fully automatic, active used pages keeps in memory, lower used pages may be evicted. You can not pin any table or index to shared buffers.\n\nregards, Sergei\n\n\n \nSergei wrote:> You can not pin any table or index to shared buffers.Thanks, this is answer to my other question!In our case, this might be an important feature. (Index in memory, other data / columns not.)> shared_buffers is cache for both tables and indexes pages.Ok. So, we should set also shared_buffers big.BR Sam On Wednesday, September 19, 2018 12:10 PM, Sergei Kornilov <[email protected]> wrote: Hieffective_cache_size is not cache. It is just approx value for query planner: how many data can be found in RAM (both in shared_buffers and OS page cache)> Q: Size of shared_buffers does not matter regarding keeping index in memory?shared_buffers is cache for both tables and indexes pages. All data in tables and indexes are split to chunks 8 kb each - pages (usually 8kb, it can be redefined during source compilation).Shared buffers cache is fully automatic, active used pages keeps in memory, lower used pages may be evicted. You can not pin any table or index to shared buffers.regards, Sergei",
"msg_date": "Wed, 19 Sep 2018 09:18:24 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
{
"msg_contents": "On 19 September 2018 at 21:18, Sam R. <[email protected]> wrote:\n> Ok. So, we should set also shared_buffers big.\n\nIt might not be quite as beneficial as you might think. If your\ndatabase is larger than RAM often having a smaller shared_buffers\nsetting yields better performance. The reason is that if you have a\nvery large shared_buffers that the same buffers can end up cached in\nthe kernel page cache and shared buffers. If you have a smaller shared\nbuffers setting then the chances of that double buffering are reduced\nand the chances of finding a page cached somewhere increases.\n\nHowever, if your database is quite small and you can afford to fit all\nyour data in shared buffers, with enough free RAM for everything else,\nthen you might benefit from a large shared buffers, but it's important\nto also consider that some operations, such as DROP TABLE can become\nslow of shared buffers is very large.\n\nYou might get more specific recommendations if you mention how much\nRAM the server has and how big the data is now and will be in the\nfuture.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 19 Sep 2018 22:11:08 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
{
"msg_contents": "Does a large shared_buffers impact checkpoint performance negatively? I was\nunder the impression that everything inside shared_buffers must be written\nduring a checkpoint.\n\nDoes a large shared_buffers impact checkpoint performance negatively? I was under the impression that everything inside shared_buffers must be written during a checkpoint.",
"msg_date": "Wed, 19 Sep 2018 12:12:35 +0200",
"msg_from": "Kaixi Luo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
{
"msg_contents": "On 19 September 2018 at 22:12, Kaixi Luo <[email protected]> wrote:\n> Does a large shared_buffers impact checkpoint performance negatively? I was\n> under the impression that everything inside shared_buffers must be written\n> during a checkpoint.\n\nOnly the dirty buffers get written.\n\nAlso having too small a shared buffers can mean that buffers must be\nwritten more than they'd otherwise need to be. If a buffer must be\nevicted from shared buffers to make way for a new buffer then the\nchances of having to evict a dirty buffer increases with smaller\nshared buffers. Obviously, this dirty buffer needs to be written out\nbefore the new buffer can be loaded in. In a worst-case scenario, a\nbackend performing a query would have to do this. pg_stat_bgwriter is\nyour friend.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 19 Sep 2018 22:23:18 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
{
"msg_contents": "Hi!\nThanks for all of the comments!\nDavid wrote:> if you mention \n> how muchRAM the server has and how big the data is now\nLet's say for example:\nRAM: 64 GB\nData: 500 GB - 1.5 TB, for example. \n( RAM: Less would of course be better, e.g. 32 GB, but we could maybe go for an even little bit bigger value than 64 GB, if needed to. )\nBR Sam\n\n On Wednesday, September 19, 2018 1:11 PM, David Rowley <[email protected]> wrote:\n \n\n On 19 September 2018 at 21:18, Sam R. <[email protected]> wrote:\n> Ok. So, we should set also shared_buffers big.\n\nIt might not be quite as beneficial as you might think. If your\ndatabase is larger than RAM often having a smaller shared_buffers\nsetting yields better performance. The reason is that if you have a\nvery large shared_buffers that the same buffers can end up cached in\nthe kernel page cache and shared buffers. If you have a smaller shared\nbuffers setting then the chances of that double buffering are reduced\nand the chances of finding a page cached somewhere increases.\n\nHowever, if your database is quite small and you can afford to fit all\nyour data in shared buffers, with enough free RAM for everything else,\nthen you might benefit from a large shared buffers, but it's important\nto also consider that some operations, such as DROP TABLE can become\nslow of shared buffers is very large.\n\nYou might get more specific recommendations if you mention how much\nRAM the server has and how big the data is now and will be in the\nfuture.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n \nHi!Thanks for all of the comments!David wrote:> if you mention > how muchRAM the server has and how big the data is nowLet's say for example:RAM: 64 GBData: 500 GB - 1.5 TB, for example. ( RAM: Less would of course \nbe better, e.g. 32 GB, but we could maybe go for an even little bit \nbigger value than 64 GB, if needed to. )BR Sam On Wednesday, September 19, 2018 1:11 PM, David Rowley <[email protected]> wrote: On 19 September 2018 at 21:18, Sam R. <[email protected]> wrote:> Ok. So, we should set also shared_buffers big.It might not be quite as beneficial as you might think. If yourdatabase is larger than RAM often having a smaller shared_bufferssetting yields better performance. The reason is that if you have avery large shared_buffers that the same buffers can end up cached inthe kernel page cache and shared buffers. If you have a smaller sharedbuffers setting then the chances of that double buffering are reducedand the chances of finding a page cached somewhere increases.However, if your database is quite small and you can afford to fit allyour data in shared buffers, with enough free RAM for everything else,then you might benefit from a large shared buffers, but it's importantto also consider that some operations, such as DROP TABLE can becomeslow of shared buffers is very large.You might get more specific recommendations if you mention how muchRAM the server has and how big the data is now and will be in thefuture.-- David Rowley http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 19 Sep 2018 11:01:16 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
{
"msg_contents": "Size of the index of one huge table has been e.g. 16-20 GB (after REINDEX). \n\nSize of such an index is quite big.\n \nBR Samuli\n\n On Wednesday, September 19, 2018 2:01 PM, Sam R. <[email protected]> wrote:\n \n\n Hi!\nThanks for all of the comments!\nDavid wrote:> if you mention \n> how muchRAM the server has and how big the data is now\nLet's say for example:\nRAM: 64 GB\nData: 500 GB - 1.5 TB, for example. \n( RAM: Less would of course be better, e.g. 32 GB, but we could maybe go for an even little bit bigger value than 64 GB, if needed to. )\nBR Sam\n\n On Wednesday, September 19, 2018 1:11 PM, David Rowley <[email protected]> wrote:\n \n...\n \n\n\n\n \n\n \nSize of the index of one huge table has been e.g. 16-20 GB (after REINDEX). Size of such an index is quite big. BR Samuli On Wednesday, September 19, 2018 2:01 PM, Sam R. <[email protected]> wrote: Hi!Thanks for all of the comments!David wrote:> if you mention > how muchRAM the server has and how big the data is nowLet's say for example:RAM: 64 GBData: 500 GB - 1.5 TB, for example. ( RAM: Less would of course \nbe better, e.g. 32 GB, but we could maybe go for an even little bit \nbigger value than 64 GB, if needed to. )BR Sam On Wednesday, September 19, 2018 1:11 PM, David Rowley <[email protected]> wrote: ...",
"msg_date": "Wed, 19 Sep 2018 11:06:53 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
{
"msg_contents": "On Wed, Sep 19, 2018 at 5:19 AM Sam R. <[email protected]> wrote:\n\n> Hi!\n>\n> Is is possible to force PostgreSQL to keep an index in memory?\n>\n\nIt might be possible to put the indexes in a separate tablespace, then do\nsomething at the file-system level to to force the OS cache to keep pages\nfor that FS in memory.\n\n\n\n> The data in db table columns is not needed to be kept in memory, only the\n> index. (hash index.)\n>\n\nThis sounds like speculation. Do you have hard evidence that this is\nactually the case?\n\n\n>\n> It would sound optimal in our scenario.\n> I think Oracle has capability to keep index in memory (in-memory db\n> functionality). But does PostgreSQL have such a functionality? (I keep\n> searching.)\n>\n\nThere are a lot of Oracle capabilities which encourage people to\nmicromanage the server in ways that are almost never actually productive.\n\nShould I actually set shared_buffers to tens of gigabytes also, if I want\n> to keep one very big index in memory?\n>\n\nIf your entire database fits in RAM, then it could be useful to set\nshared_buffers high enough to fit the entire database.\n\nIf fitting the entire database in RAM is hopeless, 10s of gigabytes is\nprobably too much, unless you have 100s of GB of RAM. PostgreSQL doesn't do\ndirect IO, but rather uses the OS file cache extensively. This leads to\ndouble-buffering, where a page is read from disk and stored in the OS file\ncache, then handed over to PostgreSQL where it is also stored in\nshared_buffers. That means that 1/2 of RAM is often the worse value for\nshared_buffers. You would want it to be either something like 1/20 to 1/10\nof RAM, or something like 9/10 or 19/20 of RAM, so that you concentrate\npages into one of the caches or the other. The low fraction of RAM is the\nmore generally useful option. The high fraction of RAM is useful when you\nhave very high write loads, particularly intensive index updating--and in\nthat case you probably need someone to intensively monitor and baby-sit the\ndatabase.\n\nCheers,\n\nJeff\n\nOn Wed, Sep 19, 2018 at 5:19 AM Sam R. <[email protected]> wrote:Hi!Is is possible to force PostgreSQL to keep an index in memory? It might be possible to put the indexes in a separate tablespace, then do something at the file-system level to to force the OS cache to keep pages for that FS in memory. The data in db table columns is not needed to be kept in memory, only the index. (hash index.)This sounds like speculation. Do you have hard evidence that this is actually the case? It would sound optimal in our scenario.I think Oracle has capability to keep index in memory (in-memory db functionality). But does PostgreSQL have such a functionality? (I keep searching.)There are a lot of Oracle capabilities which encourage people to micromanage the server in ways that are almost never actually productive.Should I actually set shared_buffers to tens of gigabytes also, if I want to keep one very big index in memory?If your entire database fits in RAM, then it could be useful to set shared_buffers high enough to fit the entire database.If fitting the entire database in RAM is hopeless, 10s of gigabytes is probably too much, unless you have 100s of GB of RAM. PostgreSQL doesn't do direct IO, but rather uses the OS file cache extensively. This leads to double-buffering, where a page is read from disk and stored in the OS file cache, then handed over to PostgreSQL where it is also stored in shared_buffers. 
That means that 1/2 of RAM is often the worse value for shared_buffers. You would want it to be either something like 1/20 to 1/10 of RAM, or something like 9/10 or 19/20 of RAM, so that you concentrate pages into one of the caches or the other. The low fraction of RAM is the more generally useful option. The high fraction of RAM is useful when you have very high write loads, particularly intensive index updating--and in that case you probably need someone to intensively monitor and baby-sit the database.Cheers,Jeff",
"msg_date": "Wed, 19 Sep 2018 09:42:23 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
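As a concrete sketch of the two levers Jeff mentions (a dedicated tablespace for indexes, and shared_buffers sizing), something along these lines could be tried; the mount point, table and index names below are assumptions for illustration, not taken from the thread:

-- Put indexes on their own tablespace so they can live on (and be cached
-- from) a dedicated filesystem; the path and index name are assumed examples.
CREATE TABLESPACE index_space LOCATION '/mnt/fast_index';
ALTER INDEX big_table_pkey SET TABLESPACE index_space;

-- shared_buffers is cluster-wide and only takes effect after a restart.
ALTER SYSTEM SET shared_buffers = '16GB';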
{
"msg_contents": "Thanks for the comments!\nSam wrote:\n\n>> The data in db table columns is not needed to be kept in memory, only the index. (hash index.)\n\n\nJeff Janes wrote:\n> This sounds like speculation. Do you have hard evidence that this is actually the case?\nIn our case the \"ID\" is randomly generated random number. (Large ID.)\nIt is not a \"sequential\" number, but random.\n\nIn generation phase, it is a very large random number. Our application may not even generate the random ID.\n\nWe use hash index over the ID.\n\nAt the moment, in \"pure theory\", we will read randomly through the hash index.So, no one will be able to know what part of the data (from the table) should be kept in memory. \nSide note: Of course there may be (even many) use cases, where same data is read again and again. Still: I am thinking now from a very theoretical point of view (which we may still apply in practice).\n\nIn generic: \nI am not certain how PostgreSQL or hash indexes work in detail, so my claim / wish of keeping only the index in memory may be faulty. (This is one reason for these discussions.)\n\nBR Sam\n\n \n \nThanks for the comments!Sam wrote:>> The data in db table columns is not needed to be kept in memory, only the index. (hash index.)Jeff Janes wrote:> This sounds like speculation. Do you have hard evidence that this is actually the case?In our case the \"ID\" is randomly generated random number. (Large ID.)It is not a \"sequential\" number, but random.In generation phase, it is a very large random number. Our application may not even generate the random ID.We use hash index over the ID.At the moment, in \"pure theory\", we will read randomly through the hash index.So, no one will be able to know what part of the data (from the table) should be kept in memory. Side note: Of course there may be (even many) use cases, where same data is read again and again. Still: I am thinking now from a very theoretical point of view (which we may still apply in practice).In generic: I am not certain how PostgreSQL or hash indexes work in detail, so my claim / wish of keeping only the index in memory may be faulty. (This is one reason for these discussions.)BR Sam",
"msg_date": "Wed, 19 Sep 2018 14:45:39 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
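For readers following along, the setup Sam describes (a large random ID with a hash index over it) might look like the sketch below; the table and column names are assumptions, and hash indexes are only WAL-logged and crash-safe from PostgreSQL 10 on:

-- Hypothetical table keyed by a large random bigint, with a hash index on it.
CREATE TABLE events (
    id      bigint NOT NULL,
    payload jsonb  NOT NULL
);
CREATE INDEX events_id_hash ON events USING hash (id);

-- Compare index and heap sizes to see how much memory the index alone needs.
SELECT pg_size_pretty(pg_relation_size('events_id_hash')) AS index_size,
       pg_size_pretty(pg_relation_size('events'))         AS heap_size;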
{
"msg_contents": "I believe you can use pg_prewarm to pin index or table to cache.\n\nhttps://www.postgresql.org/docs/current/static/pgprewarm.html\n\nOn Wed, 19 Sep 2018 at 22:50, Sam R. <[email protected]> wrote:\n\n> Thanks for the comments!\n>\n> Sam wrote:\n>\n> >> The data in db table columns is not needed to be kept in memory, only\n> the index. (hash index.)\n>\n>\n> Jeff Janes wrote:\n> > This sounds like speculation. Do you have hard evidence that this is\n> actually the case?\n>\n> In our case the \"ID\" is randomly generated random number. (Large ID.)\n> It is not a \"sequential\" number, but random.\n>\n> In generation phase, it is a very large random number. Our application may\n> not even generate the random ID.\n>\n> We use hash index over the ID.\n>\n> At the moment, in \"pure theory\", we will read randomly through the hash\n> index.\n> So, no one will be able to know what part of the data (from the table)\n> should be kept in memory.\n>\n> Side note: Of course there may be (even many) use cases, where same data\n> is read again and again. Still: I am thinking now from a very theoretical\n> point of view (which we may still apply in practice).\n>\n> In generic:\n> I am not certain how PostgreSQL or hash indexes work in detail, so my\n> claim / wish of keeping only the index in memory may be faulty. (This is\n> one reason for these discussions.)\n>\n> BR Sam\n>\n>\n>\n\n-- \nRegards,\nAng Wei Shan\n\nI believe you can use pg_prewarm to pin index or table to cache.https://www.postgresql.org/docs/current/static/pgprewarm.htmlOn Wed, 19 Sep 2018 at 22:50, Sam R. <[email protected]> wrote:Thanks for the comments!Sam wrote:>> The data in db table columns is not needed to be kept in memory, only the index. (hash index.)Jeff Janes wrote:> This sounds like speculation. Do you have hard evidence that this is actually the case?In our case the \"ID\" is randomly generated random number. (Large ID.)It is not a \"sequential\" number, but random.In generation phase, it is a very large random number. Our application may not even generate the random ID.We use hash index over the ID.At the moment, in \"pure theory\", we will read randomly through the hash index.So, no one will be able to know what part of the data (from the table) should be kept in memory. Side note: Of course there may be (even many) use cases, where same data is read again and again. Still: I am thinking now from a very theoretical point of view (which we may still apply in practice).In generic: I am not certain how PostgreSQL or hash indexes work in detail, so my claim / wish of keeping only the index in memory may be faulty. (This is one reason for these discussions.)BR Sam -- Regards,Ang Wei Shan",
"msg_date": "Thu, 20 Sep 2018 11:19:43 +0800",
"msg_from": "Wei Shan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
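A minimal pg_prewarm invocation for the scenario above, using the index name assumed in the earlier sketch:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Default mode 'buffer' reads the blocks into shared_buffers.
SELECT pg_prewarm('events_id_hash');

-- 'prefetch' only asks the OS to pull the blocks into the kernel page cache.
SELECT pg_prewarm('events_id_hash', 'prefetch');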
{
"msg_contents": "On 20 September 2018 at 15:19, Wei Shan <[email protected]> wrote:\n> I believe you can use pg_prewarm to pin index or table to cache.\n>\n> https://www.postgresql.org/docs/current/static/pgprewarm.html\n\nI think the key sentence in the document you linked to is:\n\n\"Prewarmed data also enjoys no special protection from cache\nevictions, so it is possible that other system activity may evict the\nnewly prewarmed blocks shortly after they are read\"\n\nSo this is not pinning. It's merely loading buffers into shared\nbuffers in the hope that they might be around long enough for you to\nmake the most of that effort.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 20 Sep 2018 17:17:25 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
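To see whether previously prewarmed index blocks are still resident, the pg_buffercache extension can be queried; a sketch, again using the assumed index name:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count shared buffers currently holding blocks of the index.
SELECT count(*) AS buffers,
       pg_size_pretty(count(*) * current_setting('block_size')::bigint) AS cached
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('events_id_hash')
  AND reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database());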
{
"msg_contents": "Hi!\n\"Index in memory\" topic:\nAfter read operation starts,\nI think / it seems that a big part of an index gets loaded to memory quite quickly. A lot of IDs fit to one 8 KB page in PostgreSQL. When reading operation starts, pages start to be loaded to memory quickly.\n\nSo, this \"feature\" / PostgreSQL may work well in our very long \"random\" IDs cases still. \nIt is not needed / not possible to keep the whole index in memory: E.g. if there is not enough memory / size of index is bigger than memory, it is not even possible to keep whole index in memory. \n( Regarding in memory DB functionalities: I do not know would \"In-memory\" index / db work in such situations, if index would not fit in memory. We would like to keep most of the index in memory, but not whole index in all cases e.g. when there is not enough memory available. )\nSo, again, maybe PostgreSQL works well in our case.\nRegarding double buffering: I do not know how much double buffering would slow down operations. \nIt could also be possible to turn off kernel page cache on our DB server, to avoid double buffering. Although, we may still keep it in use.\n\nBR Sam\n\n On Wednesday, September 19, 2018 5:50 PM, Sam R. <[email protected]> wrote:\n \n\n Thanks for the comments!\nSam wrote:\n\n>> The data in db table columns is not needed to be kept in memory, only the index. (hash index.)\n\n\nJeff Janes wrote:\n> This sounds like speculation. Do you have hard evidence that this is actually the case?\nIn our case the \"ID\" is randomly generated random number. (Large ID.)\nIt is not a \"sequential\" number, but random.\n\nIn generation phase, it is a very large random number. Our application may not even generate the random ID.\n\nWe use hash index over the ID.\n\nAt the moment, in \"pure theory\", we will read randomly through the hash index.So, no one will be able to know what part of the data (from the table) should be kept in memory. \nSide note: Of course there may be (even many) use cases, where same data is read again and again. Still: I am thinking now from a very theoretical point of view (which we may still apply in practice).\n\nIn generic: \nI am not certain how PostgreSQL or hash indexes work in detail, so my claim / wish of keeping only the index in memory may be faulty. (This is one reason for these discussions.)\n\nBR Sam\n\n \n \n\n \nHi!\"Index in memory\" topic:After read operation starts,I think / it seems that a big part of an index gets loaded to memory quite quickly. A lot of IDs fit to one 8 KB page in PostgreSQL. When reading operation starts, pages start to be loaded to memory quickly.So, this \"feature\" / PostgreSQL may work well in our very long \"random\" IDs cases still. It is not needed / not possible to keep the whole index in memory: E.g. if there is not enough memory / size of index is bigger than memory, it is not even possible to keep whole index in memory. ( Regarding in memory DB functionalities: I do not know would \"In-memory\" index / db work in such situations, if index would not fit in memory. We would like to keep most of the index in memory, but not whole index in all cases e.g. when there is not enough memory available. )So, again, maybe PostgreSQL works well in our case.Regarding double buffering: I do not know how much double buffering would slow down operations. It could also be possible to turn off kernel page cache on our DB server, to avoid double buffering. Although, we may still keep it in use.BR Sam On Wednesday, September 19, 2018 5:50 PM, Sam R. 
<[email protected]> wrote: Thanks for the comments!Sam wrote:>> The data in db table columns is not needed to be kept in memory, only the index. (hash index.)Jeff Janes wrote:> This sounds like speculation. Do you have hard evidence that this is actually the case?In our case the \"ID\" is randomly generated random number. (Large ID.)It is not a \"sequential\" number, but random.In generation phase, it is a very large random number. Our application may not even generate the random ID.We use hash index over the ID.At the moment, in \"pure theory\", we will read randomly through the hash index.So, no one will be able to know what part of the data (from the table) should be kept in memory. Side note: Of course there may be (even many) use cases, where same data is read again and again. Still: I am thinking now from a very theoretical point of view (which we may still apply in practice).In generic: I am not certain how PostgreSQL or hash indexes work in detail, so my claim / wish of keeping only the index in memory may be faulty. (This is one reason for these discussions.)BR Sam",
"msg_date": "Tue, 25 Sep 2018 06:36:18 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
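To put a rough number on "a lot of IDs fit to one 8 KB page", the catalog already holds an estimate; a sketch (reltuples and relpages are planner estimates, and the index name is the assumed one from earlier):

-- Approximate index entries per 8 KB page.
SELECT reltuples::bigint                      AS est_entries,
       relpages                               AS pages,
       (reltuples / nullif(relpages, 0))::int AS est_entries_per_page
FROM pg_class
WHERE relname = 'events_id_hash';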
{
"msg_contents": "On Tue, 25 Sep 2018 at 18:36, Sam R. <[email protected]> wrote:\n> Regarding double buffering: I do not know how much double buffering would slow down operations.\n> It could also be possible to turn off kernel page cache on our DB server, to avoid double buffering. Although, we may still keep it in use.\n\nI think you've misunderstood double buffering. The double buffering\nitself does not slow anything down. If the buffer is in shared buffers\nalready then it does not need to look any further for it. Double\nbuffering only becomes an issue when buffers existing 2 times in\nmemory causes other useful buffers to appear 0 times.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 26 Sep 2018 08:55:24 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
},
{
"msg_contents": "Hi!\n> The double buffering> itself does not slow anything down. \n\nThat was what I was suspecting a little. Double buffering may not matter in our case, because the whole server is meant for PostgreSQL only.\nIn our case, we can e.g. reserve almost \"all memory\" for PostgreSQL (shared buffers etc.).\n\nPlease correct me if I am wrong.\nBR Sam\n\n\n \n \n On ti, syysk. 25, 2018 at 23:55, David Rowley<[email protected]> wrote: On Tue, 25 Sep 2018 at 18:36, Sam R. <[email protected]> wrote:\n> Regarding double buffering: I do not know how much double buffering would slow down operations.\n> It could also be possible to turn off kernel page cache on our DB server, to avoid double buffering. Although, we may still keep it in use.\n\nI think you've misunderstood double buffering. The double buffering\nitself does not slow anything down. If the buffer is in shared buffers\nalready then it does not need to look any further for it. Double\nbuffering only becomes an issue when buffers existing 2 times in\nmemory causes other useful buffers to appear 0 times.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n \n\nHi!> The double buffering> itself does not slow anything down. That was what I was suspecting a little. Double buffering may not matter in our case, because the whole server is meant for PostgreSQL only.In our case, we can e.g. reserve almost \"all memory\" for PostgreSQL (shared buffers etc.).Please correct me if I am wrong.BR Sam On ti, syysk. 25, 2018 at 23:55, David Rowley<[email protected]> wrote: On Tue, 25 Sep 2018 at 18:36, Sam R. <[email protected]> wrote:> Regarding double buffering: I do not know how much double buffering would slow down operations.> It could also be possible to turn off kernel page cache on our DB server, to avoid double buffering. Although, we may still keep it in use.I think you've misunderstood double buffering. The double bufferingitself does not slow anything down. If the buffer is in shared buffersalready then it does not need to look any further for it. Doublebuffering only becomes an issue when buffers existing 2 times inmemory causes other useful buffers to appear 0 times.-- David Rowley http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 28 Sep 2018 04:45:25 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: To keep indexes in memory, is large enough effective_cache_size\n enough?"
},
{
"msg_contents": "On 28 September 2018 at 16:45, Sam R. <[email protected]> wrote:\n> That was what I was suspecting a little. Double buffering may not matter in\n> our case, because the whole server is meant for PostgreSQL only.\n>\n> In our case, we can e.g. reserve almost \"all memory\" for PostgreSQL (shared\n> buffers etc.).\n>\n> Please correct me if I am wrong.\n\nYou mentioned above:\n\n> RAM: 64 GB\n> Data: 500 GB - 1.5 TB, for example.\n\nIf most of that data just sits on disk and is never read then you\nmight be right, but if the working set of the data is larger than RAM\nthen you might find you get better performance from smaller shared\nbuffers.\n\nI think the best thing you can go and do is to go and test this. Write\nsome code that mocks up a realistic production workload and see where\nyou get the best performance.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 28 Sep 2018 22:32:49 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: To keep indexes in memory,\n is large enough effective_cache_size enough?"
}
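One way to "mock up a realistic production workload", as suggested above, is a custom pgbench script of random single-key lookups; a sketch under the assumption of the hypothetical events table from earlier, run with something like pgbench -n -f lookup.sql -c 16 -T 600 while varying shared_buffers between runs:

-- lookup.sql: one random primary-key lookup per transaction.
\set id random(1, 100000000)
SELECT payload FROM events WHERE id = :id;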
] |
[
{
"msg_contents": "I am experiencing a strange performance problem when accessing JSONB\ncontent by primary key.\n\nMy DB version() is PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4,\n64-bit\npostgres.conf: https://justpaste.it/6pzz1\nuname -a: Linux postgresnlpslave 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18\n14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\nThe machine is virtual, running under Hyper-V.\nProcessor: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 1x1 cores\nDisk storage: the host has two vmdx drives, first shared between the root\npartition and an LVM PV, second is a single LVM PV. Both PVs are in a VG\ncontaining swap and postgres data partitions. The data is mostly on the\nfirst PV.\n\nI have such a table:\n\nCREATE TABLE articles\n(\n article_id bigint NOT NULL,\n content jsonb NOT NULL,\n published_at timestamp without time zone NOT NULL,\n appended_at timestamp without time zone NOT NULL,\n source_id integer NOT NULL,\n language character varying(2) NOT NULL,\n title text NOT NULL,\n topicstopic[] NOT NULL,\n objects object[] NOT NULL,\n cluster_id bigint NOT NULL,\n CONSTRAINT articles_pkey PRIMARY KEY (article_id)\n)\n\nWe have a Python lib (using psycopg2 driver) to access this table. It\nexecutes simple queries to the table, one of them is used for bulk\ndownloading of content and looks like this:\n\nselect content from articles where id between $1 and $2\n\nI noticed that with some IDs it works pretty fast while with other it is\n4-5 times slower. It is suitable to note, there are two main 'categories'\nof IDs in this table: first is range 270000000-500000000, and second is\nrange 10000000000-100030000000. For the first range it is 'fast' and for\nthe second it is 'slow'. Besides larger absolute numbers withdrawing them\nfrom int to bigint, values in the second range are more 'sparse', which\nmeans in the first range values are almost consequent (with very few\n'holes' of missing values) while in the second range there are much more\n'holes' (average filling is 35%). Total number of rows in the first range:\n~62M, in the second range: ~10M.\n\nI conducted several experiments to eliminate possible influence of\nlibrary's code and network throughput, I omit some of them. 
I ended up with\niterating over table with EXPLAIN to simulate read load:\n\nexplain (analyze, buffers)\nselect count(*), sum(length(content::text)) from articles where article_id\nbetween %s and %s\n\nSample output:\n\nAggregate (cost=8635.91..8635.92 rows=1 width=16) (actual\ntime=6625.993..6625.995 rows=1 loops=1)\n Buffers: shared hit=26847 read=3914\n -> Index Scan using articles_pkey on articles (cost=0.57..8573.35\nrows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n Index Cond: ((article_id >= 438000000) AND (article_id <=\n438005000))\n Buffers: shared hit=4342 read=671\nPlanning time: 0.393 ms\nExecution time: 6626.136 ms\n\nAggregate (cost=5533.02..5533.03 rows=1 width=16) (actual\ntime=33219.100..33219.102 rows=1 loops=1)\n Buffers: shared hit=6568 read=7104\n -> Index Scan using articles_pkey on articles (cost=0.57..5492.96\nrows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n Index Cond: ((article_id >= '100021000000'::bigint) AND (article_id\n<= '100021010000'::bigint))\n Buffers: shared hit=50 read=2378\nPlanning time: 0.517 ms\nExecution time: 33219.218 ms\n\nDuring iteration, I parse the result of EXPLAIN and collect series of\nfollowing metrics:\n\n- buffer hits/reads for the table,\n- buffer hits/reads for the index,\n- number of rows (from \"Index Scan...\"),\n- duration of execution.\n\nBased on metrics above I calculate inherited metrics:\n\n- disk read rate: (index reads + table reads) * 8192 / duration,\n- reads ratio: (index reads + table reads) / (index reads + table reads +\nindex hits + table hits),\n- data rate: (index reads + table reads + index hits + table hits) * 8192 /\nduration,\n- rows rate: number of rows / duration.\n\nSince \"density\" of IDs is different in \"small\" and \"big\" ranges, I adjusted\nsize of chunks in order to get around 5000 rows on each iteration in both\ncases, though my experiments show that chunk size does not really matter a\nlot.\n\nThe issue posted at the very beginning of my message was confirmed for the\n*whole* first and second ranges (so it was not just caused by randomly\ncached data).\n\nTo eliminate cache influence, I restarted Postgres server with flushing\nbuffers:\n\n/$ postgresql stop; sync; echo 3 > /proc/sys/vm/drop_caches; postgresql\nstart\n\nAfter this I repeated the test and got next-to-same picture.\n\n\"Small' range: disk read rate is around 10-11 MB/s uniformly across the\ntest. Output rate was 1300-1700 rows/s. Read ratio is around 13% (why?\nShouldn't it be ~ 100% after drop_caches?).\n\"Big\" range: In most of time disk read speed was about 2 MB/s but sometimes\nit jumped to 26-30 MB/s. Output rate was 70-80 rows/s (but varied a lot and\nreached 8000 rows/s). Read ratio also varied a lot.\n\nI rendered series from the last test into charts:\n\"Small\" range: https://i.stack.imgur.com/3Zfml.png\n\"Big\" range (insane): https://i.stack.imgur.com/VXdID.png\n\nDuring the tests I verified disk read speed with iotop and found its\nindications very close to ones calculated by me based on EXPLAIN BUFFERS. I\ncannot say I was monitoring it all the time, but I confirmed it when it was\n2 MB/s and 22 MB/s on the second range and 10 MB/s on the first range. I\nalso checked with htop that CPU was not a bottleneck and was around 3%\nduring the tests.\n\nThe issue is reproducible on both master and slave servers. 
My tests were\nconducted on slave, while there were no any other load on DBMS, or disk\nactivity on the host unrelated to DBMS.\n\nMy only assumption is that different fragments of data are being read with\ndifferent speed due to virtualization or something, but... why is it so\nstrictly bound to these ranges? Why is it the same on two different\nmachines?\n\nThe file system performance measured by dd:\n\nroot@postgresnlpslave:/# echo 3 > /proc/sys/vm/drop_caches\nroot@postgresnlpslave:/# dd if=/dev/mapper/postgresnlpslave--vg-root\nof=/dev/null bs=8K count=128K\n131072+0 records in\n131072+0 records out\n1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.12304 s, 506 MB/s\n\nAm I missing something? What else can I do to narrow down the cause?\n\nP.S. Initially posted on\nhttps://stackoverflow.com/questions/52105172/why-could-different-data-in-a-table-be-processed-with-different-performance\n\nRegards,\nVlad\n\nI am experiencing a strange performance problem when accessing JSONB content by primary key.My DB version() is PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bitpostgres.conf: https://justpaste.it/6pzz1uname -a: Linux postgresnlpslave 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/LinuxThe machine is virtual, running under Hyper-V.Processor: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 1x1 coresDisk storage: the host has two vmdx drives, first shared between the root partition and an LVM PV, second is a single LVM PV. Both PVs are in a VG containing swap and postgres data partitions. The data is mostly on the first PV.I have such a table:CREATE TABLE articles( article_id bigint NOT NULL, content jsonb NOT NULL, published_at timestamp without time zone NOT NULL, appended_at timestamp without time zone NOT NULL, source_id integer NOT NULL, language character varying(2) NOT NULL, title text NOT NULL, topicstopic[] NOT NULL, objects object[] NOT NULL, cluster_id bigint NOT NULL, CONSTRAINT articles_pkey PRIMARY KEY (article_id))We have a Python lib (using psycopg2 driver) to access this table. It executes simple queries to the table, one of them is used for bulk downloading of content and looks like this:select content from articles where id between $1 and $2I noticed that with some IDs it works pretty fast while with other it is 4-5 times slower. It is suitable to note, there are two main 'categories' of IDs in this table: first is range 270000000-500000000, and second is range 10000000000-100030000000. For the first range it is 'fast' and for the second it is 'slow'. Besides larger absolute numbers withdrawing them from int to bigint, values in the second range are more 'sparse', which means in the first range values are almost consequent (with very few 'holes' of missing values) while in the second range there are much more 'holes' (average filling is 35%). Total number of rows in the first range: ~62M, in the second range: ~10M.I conducted several experiments to eliminate possible influence of library's code and network throughput, I omit some of them. 
I ended up with iterating over table with EXPLAIN to simulate read load:explain (analyze, buffers)select count(*), sum(length(content::text)) from articles where article_id between %s and %sSample output:Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual time=6625.993..6625.995 rows=1 loops=1) Buffers: shared hit=26847 read=3914 -> Index Scan using articles_pkey on articles (cost=0.57..8573.35 rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1) Index Cond: ((article_id >= 438000000) AND (article_id <= 438005000)) Buffers: shared hit=4342 read=671Planning time: 0.393 msExecution time: 6626.136 msAggregate (cost=5533.02..5533.03 rows=1 width=16) (actual time=33219.100..33219.102 rows=1 loops=1) Buffers: shared hit=6568 read=7104 -> Index Scan using articles_pkey on articles (cost=0.57..5492.96 rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1) Index Cond: ((article_id >= '100021000000'::bigint) AND (article_id <= '100021010000'::bigint)) Buffers: shared hit=50 read=2378Planning time: 0.517 msExecution time: 33219.218 msDuring iteration, I parse the result of EXPLAIN and collect series of following metrics:- buffer hits/reads for the table,- buffer hits/reads for the index,- number of rows (from \"Index Scan...\"),- duration of execution.Based on metrics above I calculate inherited metrics:- disk read rate: (index reads + table reads) * 8192 / duration,- reads ratio: (index reads + table reads) / (index reads + table reads + index hits + table hits),- data rate: (index reads + table reads + index hits + table hits) * 8192 / duration,- rows rate: number of rows / duration.Since \"density\" of IDs is different in \"small\" and \"big\" ranges, I adjusted size of chunks in order to get around 5000 rows on each iteration in both cases, though my experiments show that chunk size does not really matter a lot.The issue posted at the very beginning of my message was confirmed for the *whole* first and second ranges (so it was not just caused by randomly cached data).To eliminate cache influence, I restarted Postgres server with flushing buffers:/$ postgresql stop; sync; echo 3 > /proc/sys/vm/drop_caches; postgresql startAfter this I repeated the test and got next-to-same picture.\"Small' range: disk read rate is around 10-11 MB/s uniformly across the test. Output rate was 1300-1700 rows/s. Read ratio is around 13% (why? Shouldn't it be ~ 100% after drop_caches?).\"Big\" range: In most of time disk read speed was about 2 MB/s but sometimes it jumped to 26-30 MB/s. Output rate was 70-80 rows/s (but varied a lot and reached 8000 rows/s). Read ratio also varied a lot.I rendered series from the last test into charts:\"Small\" range: https://i.stack.imgur.com/3Zfml.png\"Big\" range (insane): https://i.stack.imgur.com/VXdID.pngDuring the tests I verified disk read speed with iotop and found its indications very close to ones calculated by me based on EXPLAIN BUFFERS. I cannot say I was monitoring it all the time, but I confirmed it when it was 2 MB/s and 22 MB/s on the second range and 10 MB/s on the first range. I also checked with htop that CPU was not a bottleneck and was around 3% during the tests.The issue is reproducible on both master and slave servers. My tests were conducted on slave, while there were no any other load on DBMS, or disk activity on the host unrelated to DBMS.My only assumption is that different fragments of data are being read with different speed due to virtualization or something, but... why is it so strictly bound to these ranges? 
Why is it the same on two different machines?The file system performance measured by dd:root@postgresnlpslave:/# echo 3 > /proc/sys/vm/drop_caches root@postgresnlpslave:/# dd if=/dev/mapper/postgresnlpslave--vg-root of=/dev/null bs=8K count=128K131072+0 records in131072+0 records out1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.12304 s, 506 MB/sAm I missing something? What else can I do to narrow down the cause?P.S. Initially posted on https://stackoverflow.com/questions/52105172/why-could-different-data-in-a-table-be-processed-with-different-performanceRegards,Vlad",
"msg_date": "Thu, 20 Sep 2018 17:07:21 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why could different data in a table be processed with different\n performance?"
},
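Given that the two ID ranges were loaded at different times, one quick check is how well the physical heap order follows article_id, which the planner tracks as the correlation statistic (a whole-table figure, so only indicative per range); a sketch against the table from the original post:

-- Correlation near 1.0: index order matches heap order, few heap pages per ID range.
-- Values near 0: heap access through the index is scattered.
SELECT tablename, attname, correlation, n_distinct
FROM pg_stats
WHERE tablename = 'articles' AND attname = 'article_id';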
{
"msg_contents": "On Thu, Sep 20, 2018 at 05:07:21PM -0700, Vladimir Ryabtsev wrote:\n> I am experiencing a strange performance problem when accessing JSONB\n> content by primary key.\n\n> I noticed that with some IDs it works pretty fast while with other it is\n> 4-5 times slower. It is suitable to note, there are two main 'categories'\n> of IDs in this table: first is range 270000000-500000000, and second is\n> range 10000000000-100030000000. For the first range it is 'fast' and for\n> the second it is 'slow'.\n\nWas the data populated differently, too ?\nHas the table been reindexed (or pg_repack'ed) since loading (or vacuumed for\nthat matter) ?\nWere the tests run when the DB was otherwise idle?\n\nYou can see the index scan itself takes an additional 11sec, the \"heap\" portion\ntakes the remaining, additional 14sec (33s-12s-7s).\n\nSo it seems to me like the index itself is slow to scan. *And*, the heap\nreferenced by the index is slow to scan, probably due to being referenced by\nthe index less consecutively.\n\n> \"Small' range: disk read rate is around 10-11 MB/s uniformly across the\n> test. Output rate was 1300-1700 rows/s. Read ratio is around 13% (why?\n> Shouldn't it be ~ 100% after drop_caches?).\n\nI guess you mean buffers cache hit ratio: read/hit, which I think should\nactually be read/(hit+read).\n\nIt's because a given buffer can be requested multiple times. For example, if\nan index page is read which references multiple items on the same heap page,\neach heap access is counted separately. If the index is freshly built, that'd\nhappen nearly every item.\n\nJustin\n\n> Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual time=6625.993..6625.995 rows=1 loops=1)\n> Buffers: shared hit=26847 read=3914\n> -> Index Scan using articles_pkey on articles (cost=0.57..8573.35 rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> Index Cond: ((article_id >= 438000000) AND (article_id <= 438005000))\n> Buffers: shared hit=4342 read=671\n\n> Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual time=33219.100..33219.102 rows=1 loops=1)\n> Buffers: shared hit=6568 read=7104\n> -> Index Scan using articles_pkey on articles (cost=0.57..5492.96 rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> Index Cond: ((article_id >= '100021000000'::bigint) AND (article_id <= '100021010000'::bigint))\n> Buffers: shared hit=50 read=2378\n\n",
"msg_date": "Thu, 20 Sep 2018 19:42:32 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
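The read ratio Justin describes can also be taken from the cumulative statistics views instead of being parsed out of EXPLAIN; a sketch for the table in question (TOAST counters included, since large jsonb values live there):

SELECT relname,
       heap_blks_read, heap_blks_hit,
       toast_blks_read, toast_blks_hit,
       idx_blks_read, idx_blks_hit,
       round(heap_blks_read::numeric
             / nullif(heap_blks_read + heap_blks_hit, 0), 3) AS heap_miss_ratio
FROM pg_statio_user_tables
WHERE relname = 'articles';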
{
"msg_contents": "> Was the data populated differently, too ?\nHere is how new records were coming in last two month, by days:\nhttps://i.stack.imgur.com/zp9WP.png During a day, records come evenly (in\nboth ranges), slightly faster in Europe and American work time.\n\nSince Jul 1, 2018, when we started population by online records, trend was\napproximately same as before Aug 04, 2018 (see picture). Then it changed\nfor \"big\" range, we now in some transition period until it stabilizes.\n\nWe also have imported historical data massively from another system. First\npart was the range with big numbers, they were added in couple of days,\nsecond part was range with small numbers, it took around a week. Online\nrecords were coming uninterruptedly during the import.\n\nRows are updated rarely and almost never deleted.\n\nHere is distribution of JSONB field length (if converted to ::text) in last\n5 days:\n<10KB: 665066\n10-20KB: 225697\n20-30KB: 25640\n30-40KB: 6678\n40-50KB: 2100\n50-60KB: 1028\nOther (max 2.7MB): 2248 (only single exemplars larger than 250KB)\n\n> Has the table been reindexed (or pg_repack'ed) since loading (or vacuumed\nfor that matter) ?\nNot sure what you mean... We created indexes on some fields (on\nappended_at, published_at, source_id).\nWhen I came across the problem I noticed that table is not being vacuumed.\nI then ran VACUUM ANALYZE manually but it did not change anything about the\nissue.\n\n> Were the tests run when the DB was otherwise idle?\nYes, like I said, my test were performed on slave, the were no any other\nusers connected (only me monitoring sessions from pgAdmin), and I never\nnoticed any significant I/O from processes other than postgres (only light\nload from replication).\n\n> You can see the index scan itself takes an additional 11sec, the \"heap\"\nportion takes the remaining, additional 14sec (33s-12s-7s).\nSorry, I see 33 s total and 12 s for index, where do you see 7 s?\n\n> I guess you mean buffers cache hit ratio: read/hit, which I think should\nactually be read/(hit+read).\nI will quote myself:\n> reads ratio: (index reads + table reads) / (index reads + table reads +\nindex hits + table hits)\nSo yes, you are right, it is.\n\n+ Some extra info about my system from QA recommendations:\n\nOS version: Ubuntu 16.04.2 LTS / xenial\n\n~$ time dd if=/dev/mapper/postgresnlpslave--vg-root of=/dev/null bs=1M\ncount=32K skip=$((128*$RANDOM/32))\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB, 32 GiB) copied, 62.1574 s, 553 MB/s\n0.05user 23.13system 1:02.15elapsed 37%CPU (0avgtext+0avgdata\n3004maxresident)k\n67099496inputs+0outputs (0major+335minor)pagefaults 0swaps\n\nDBMS is accessed directly (no pgpool, pgbouncer, etc).\n\nRAM: 58972 MB\n\nOn physical device level RAID10 is used.\n\nTable metadata: (relname, relpages, reltuples, relallvisible, relkind,\nrelnatts, relhassubclass, reloptions, pg_table_size(oid)) = (articles,\n7824944, 6.74338e+07, 7635864, 10, false, 454570926080)\n\nRegards,\nVlad\n\nчт, 20 сент. 2018 г. в 17:42, Justin Pryzby <[email protected]>:\n\n> On Thu, Sep 20, 2018 at 05:07:21PM -0700, Vladimir Ryabtsev wrote:\n> > I am experiencing a strange performance problem when accessing JSONB\n> > content by primary key.\n>\n> > I noticed that with some IDs it works pretty fast while with other it is\n> > 4-5 times slower. It is suitable to note, there are two main 'categories'\n> > of IDs in this table: first is range 270000000-500000000, and second is\n> > range 10000000000-100030000000. 
For the first range it is 'fast' and for\n> > the second it is 'slow'.\n>\n> Was the data populated differently, too ?\n> Has the table been reindexed (or pg_repack'ed) since loading (or vacuumed\n> for\n> that matter) ?\n> Were the tests run when the DB was otherwise idle?\n>\n> You can see the index scan itself takes an additional 11sec, the \"heap\"\n> portion\n> takes the remaining, additional 14sec (33s-12s-7s).\n>\n> So it seems to me like the index itself is slow to scan. *And*, the heap\n> referenced by the index is slow to scan, probably due to being referenced\n> by\n> the index less consecutively.\n>\n> > \"Small' range: disk read rate is around 10-11 MB/s uniformly across the\n> > test. Output rate was 1300-1700 rows/s. Read ratio is around 13% (why?\n> > Shouldn't it be ~ 100% after drop_caches?).\n>\n> I guess you mean buffers cache hit ratio: read/hit, which I think should\n> actually be read/(hit+read).\n>\n> It's because a given buffer can be requested multiple times. For example,\n> if\n> an index page is read which references multiple items on the same heap\n> page,\n> each heap access is counted separately. If the index is freshly built,\n> that'd\n> happen nearly every item.\n>\n> Justin\n>\n> > Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual\n> time=6625.993..6625.995 rows=1 loops=1)\n> > Buffers: shared hit=26847 read=3914\n> > -> Index Scan using articles_pkey on articles (cost=0.57..8573.35\n> rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> > Index Cond: ((article_id >= 438000000) AND (article_id <=\n> 438005000))\n> > Buffers: shared hit=4342 read=671\n>\n> > Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual\n> time=33219.100..33219.102 rows=1 loops=1)\n> > Buffers: shared hit=6568 read=7104\n> > -> Index Scan using articles_pkey on articles (cost=0.57..5492.96\n> rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> > Index Cond: ((article_id >= '100021000000'::bigint) AND\n> (article_id <= '100021010000'::bigint))\n> > Buffers: shared hit=50 read=2378\n>\n\n> Was the data populated differently, too ?Here is how new records were coming in last two month, by days: https://i.stack.imgur.com/zp9WP.png During a day, records come evenly (in both ranges), slightly faster in Europe and American work time.Since Jul 1, 2018, when we started population by online records, trend was approximately same as before Aug 04, 2018 (see picture). Then it changed for \"big\" range, we now in some transition period until it stabilizes.We also have imported historical data massively from another system. First part was the range with big numbers, they were added in couple of days, second part was range with small numbers, it took around a week. Online records were coming uninterruptedly during the import.Rows are updated rarely and almost never deleted.Here is distribution of JSONB field length (if converted to ::text) in last 5 days:<10KB: 66506610-20KB: 22569720-30KB: 2564030-40KB: 667840-50KB: 210050-60KB: 1028Other (max 2.7MB): 2248 (only single exemplars larger than 250KB)> Has the table been reindexed (or pg_repack'ed) since loading (or vacuumed for that matter) ?Not sure what you mean... We created indexes on some fields (on appended_at, published_at, source_id).When I came across the problem I noticed that table is not being vacuumed. 
I then ran VACUUM ANALYZE manually but it did not change anything about the issue.> Were the tests run when the DB was otherwise idle?Yes, like I said, my test were performed on slave, the were no any other users connected (only me monitoring sessions from pgAdmin), and I never noticed any significant I/O from processes other than postgres (only light load from replication).> You can see the index scan itself takes an additional 11sec, the \"heap\" portion takes the remaining, additional 14sec (33s-12s-7s).Sorry, I see 33 s total and 12 s for index, where do you see 7 s?> I guess you mean buffers cache hit ratio: read/hit, which I think should actually be read/(hit+read).I will quote myself:> reads ratio: (index reads + table reads) / (index reads + table reads + index hits + table hits)So yes, you are right, it is.+ Some extra info about my system from QA recommendations:OS version: Ubuntu 16.04.2 LTS / xenial~$ time dd if=/dev/mapper/postgresnlpslave--vg-root of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32))32768+0 records in32768+0 records out34359738368 bytes (34 GB, 32 GiB) copied, 62.1574 s, 553 MB/s0.05user 23.13system 1:02.15elapsed 37%CPU (0avgtext+0avgdata 3004maxresident)k67099496inputs+0outputs (0major+335minor)pagefaults 0swapsDBMS is accessed directly (no pgpool, pgbouncer, etc).RAM: 58972 MBOn physical device level RAID10 is used.Table metadata: (relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid)) = (articles, 7824944, 6.74338e+07, 7635864, 10, false, 454570926080)Regards,Vladчт, 20 сент. 2018 г. в 17:42, Justin Pryzby <[email protected]>:On Thu, Sep 20, 2018 at 05:07:21PM -0700, Vladimir Ryabtsev wrote:\n> I am experiencing a strange performance problem when accessing JSONB\n> content by primary key.\n\n> I noticed that with some IDs it works pretty fast while with other it is\n> 4-5 times slower. It is suitable to note, there are two main 'categories'\n> of IDs in this table: first is range 270000000-500000000, and second is\n> range 10000000000-100030000000. For the first range it is 'fast' and for\n> the second it is 'slow'.\n\nWas the data populated differently, too ?\nHas the table been reindexed (or pg_repack'ed) since loading (or vacuumed for\nthat matter) ?\nWere the tests run when the DB was otherwise idle?\n\nYou can see the index scan itself takes an additional 11sec, the \"heap\" portion\ntakes the remaining, additional 14sec (33s-12s-7s).\n\nSo it seems to me like the index itself is slow to scan. *And*, the heap\nreferenced by the index is slow to scan, probably due to being referenced by\nthe index less consecutively.\n\n> \"Small' range: disk read rate is around 10-11 MB/s uniformly across the\n> test. Output rate was 1300-1700 rows/s. Read ratio is around 13% (why?\n> Shouldn't it be ~ 100% after drop_caches?).\n\nI guess you mean buffers cache hit ratio: read/hit, which I think should\nactually be read/(hit+read).\n\nIt's because a given buffer can be requested multiple times. For example, if\nan index page is read which references multiple items on the same heap page,\neach heap access is counted separately. 
If the index is freshly built, that'd\nhappen nearly every item.\n\nJustin\n\n> Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual time=6625.993..6625.995 rows=1 loops=1)\n> Buffers: shared hit=26847 read=3914\n> -> Index Scan using articles_pkey on articles (cost=0.57..8573.35 rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> Index Cond: ((article_id >= 438000000) AND (article_id <= 438005000))\n> Buffers: shared hit=4342 read=671\n\n> Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual time=33219.100..33219.102 rows=1 loops=1)\n> Buffers: shared hit=6568 read=7104\n> -> Index Scan using articles_pkey on articles (cost=0.57..5492.96 rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> Index Cond: ((article_id >= '100021000000'::bigint) AND (article_id <= '100021010000'::bigint))\n> Buffers: shared hit=50 read=2378",
"msg_date": "Thu, 20 Sep 2018 18:54:00 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
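The length histogram quoted above can be reproduced with width_bucket; a sketch with 10 KB buckets to match the figures in the post, using the appended_at column from the table definition for the 5-day window:

-- Distribution of content length in 10 KB buckets over the last 5 days.
SELECT width_bucket(length(content::text), 0, 60000, 6) AS bucket, count(*)
FROM articles
WHERE appended_at > now() - interval '5 days'
GROUP BY bucket
ORDER BY bucket;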
{
"msg_contents": "Sorry, dropped -performance.\n\n>>>> Has the table been reindexed (or pg_repack'ed) since loading (or vacuumed\n>>>> for that matter) ?\n>>> Not sure what you mean... We created indexes on some fields (on\n>> I mean REINDEX INDEX articles_pkey;\n>> Or (from \"contrib\"): /usr/pgsql-10/bin/pg_repack -i articles_pkey\n>I never did it... Do you recommend to try it? Which variant is preferable?\n\nREINDEX is likely to block access to the table [0], and pg_repack is \"online\"\n(except for briefly acquiring an exclusive lock).\n\n[0] https://www.postgresql.org/docs/10/static/sql-reindex.html\n\n>>>> You can see the index scan itself takes an additional 11sec, the \"heap\"\n>>>> portion takes the remaining, additional 14sec (33s-12s-7s).\n>>> Sorry, I see 33 s total and 12 s for index, where do you see 7 s?\n>> 6625 ms (for short query).\n>> So the heap component of the long query is 14512 ms slower.\n> Yes, I see, thanks.\n> So reindex can help only with index component? What should I do for heap?\n> May be reindex the corresponding toast table?\n\nI think reindex will improve the heap access..and maybe the index access too.\nI don't see why it would be bloated without UPDATE/DELETE, but you could check\nto see if its size changes significantly after reindex.\n\nJustin\n\n",
"msg_date": "Thu, 20 Sep 2018 21:29:53 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
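Whether a rebuild changes anything can be checked cheaply by comparing the index size before and after; a sketch (plain REINDEX blocks writes to the table and blocks reads that would use that index, which is why pg_repack is suggested as the online alternative):

SELECT pg_size_pretty(pg_relation_size('articles_pkey')) AS pkey_size_before;

REINDEX INDEX articles_pkey;   -- or, online: pg_repack -i articles_pkey

SELECT pg_size_pretty(pg_relation_size('articles_pkey')) AS pkey_size_after;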
{
"msg_contents": "Vladimir Ryabtsev wrote:\n> explain (analyze, buffers)\n> select count(*), sum(length(content::text)) from articles where article_id between %s and %s\n> \n> Sample output:\n> \n> Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual time=6625.993..6625.995 rows=1 loops=1)\n> Buffers: shared hit=26847 read=3914\n> -> Index Scan using articles_pkey on articles (cost=0.57..8573.35 rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> Index Cond: ((article_id >= 438000000) AND (article_id <= 438005000))\n> Buffers: shared hit=4342 read=671\n> Planning time: 0.393 ms\n> Execution time: 6626.136 ms\n> \n> Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual time=33219.100..33219.102 rows=1 loops=1)\n> Buffers: shared hit=6568 read=7104\n> -> Index Scan using articles_pkey on articles (cost=0.57..5492.96 rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> Index Cond: ((article_id >= '100021000000'::bigint) AND (article_id <= '100021010000'::bigint))\n> Buffers: shared hit=50 read=2378\n> Planning time: 0.517 ms\n> Execution time: 33219.218 ms\n> \n> During iteration, I parse the result of EXPLAIN and collect series of following metrics:\n> \n> - buffer hits/reads for the table,\n> - buffer hits/reads for the index,\n> - number of rows (from \"Index Scan...\"),\n> - duration of execution.\n> \n> Based on metrics above I calculate inherited metrics:\n> \n> - disk read rate: (index reads + table reads) * 8192 / duration,\n> - reads ratio: (index reads + table reads) / (index reads + table reads + index hits + table hits),\n> - data rate: (index reads + table reads + index hits + table hits) * 8192 / duration,\n> - rows rate: number of rows / duration.\n> \n> Since \"density\" of IDs is different in \"small\" and \"big\" ranges, I adjusted\n> size of chunks in order to get around 5000 rows on each iteration in both cases,\n> though my experiments show that chunk size does not really matter a lot.\n> \n> The issue posted at the very beginning of my message was confirmed for the\n> *whole* first and second ranges (so it was not just caused by randomly cached data).\n> \n> To eliminate cache influence, I restarted Postgres server with flushing buffers:\n> \n> /$ postgresql stop; sync; echo 3 > /proc/sys/vm/drop_caches; postgresql start\n> \n> After this I repeated the test and got next-to-same picture.\n> \n> \"Small' range: disk read rate is around 10-11 MB/s uniformly across the test.\n> Output rate was 1300-1700 rows/s. Read ratio is around 13% (why? Shouldn't it be\n> ~ 100% after drop_caches?).\n> \"Big\" range: In most of time disk read speed was about 2 MB/s but sometimes\n> it jumped to 26-30 MB/s. Output rate was 70-80 rows/s (but varied a lot and\n> reached 8000 rows/s). Read ratio also varied a lot.\n> \n> I rendered series from the last test into charts:\n> \"Small\" range: https://i.stack.imgur.com/3Zfml.png\n> \"Big\" range (insane): https://i.stack.imgur.com/VXdID.png\n> \n> During the tests I verified disk read speed with iotop and found its indications\n> very close to ones calculated by me based on EXPLAIN BUFFERS. I cannot say I was\n> monitoring it all the time, but I confirmed it when it was 2 MB/s and 22 MB/s on\n> the second range and 10 MB/s on the first range. I also checked with htop that\n> CPU was not a bottleneck and was around 3% during the tests.\n> \n> The issue is reproducible on both master and slave servers. 
My tests were conducted\n> on slave, while there were no any other load on DBMS, or disk activity on the\n> host unrelated to DBMS.\n> \n> My only assumption is that different fragments of data are being read with different\n> speed due to virtualization or something, but... why is it so strictly bound\n> to these ranges? Why is it the same on two different machines?\n\nWhat is the storage system?\n\nSetting \"track_io_timing = on\" should measure the time spent doing I/O\nmore accurately.\n\nOne problem with measuring read speed that way is that \"buffers read\" can\nmean \"buffers read from storage\" or \"buffers read from the file system cache\",\nbut you say you observe a difference even after dropping the cache.\n\nTo verify if the difference comes from the physical placement, you could\nrun VACUUM (FULL) which rewrites the table and see if that changes the behavior.\n\nAnother idea is that the operating system rearranges I/O in a way that\nis not ideal for your storage.\n\nTry a different I/O scheduler by running\n\necho deadline > /sys/block/sda/queue/scheduler\n\n(replace \"sda\" with the disk where your database resides)\n\nSee if that changes the observed I/O speed.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 21 Sep 2018 05:17:26 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
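With track_io_timing on, EXPLAIN (ANALYZE, BUFFERS) reports the time spent in reads per plan node, which separates "many buffers touched" from "slow storage"; a sketch using one of the ranges from the original post (the setting is superuser-only and adds some timing overhead on some platforms):

SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*), sum(length(content::text))
FROM articles
WHERE article_id BETWEEN 438000000 AND 438005000;
-- The plan output now carries "I/O Timings: read=..." lines.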
{
"msg_contents": "> Setting \"track_io_timing = on\" should measure the time spent doing I/O\nmore accurately.\nI see I/O timings after this. It shows that 96.5% of long queries is spent\non I/O. If I subtract I/O time from total I get ~1,4 s for 5000 rows, which\nis SAME for both ranges if I adjust segment borders accordingly (to match\n~5000 rows). Only I/O time differs, and differs significantly.\n\n> One problem with measuring read speed that way is that \"buffers read\" can\nmean \"buffers read from storage\" or \"buffers read from the file system\ncache\",\nI understand, that's why I conducted experiments with drop_caches.\n\n> but you say you observe a difference even after dropping the cache.\nNo, I say I see NO significant difference (accurate to measurement error)\nbetween \"with caches\" and after dropping caches. And this is explainable, I\nthink. Since I read consequently almost all data from the huge table, no\ncache can fit this data, thus it cannot influence significantly on results.\nAnd whilst the PK index *could* be cached (in theory) I think its data is\nbeing displaced from buffers by bulkier JSONB data.\n\nVlad\n\n> Setting \"track_io_timing = on\" should measure the time spent doing I/O more accurately.I see I/O timings after this. It shows that 96.5% of long queries is spent on I/O. If I subtract I/O time from total I get ~1,4 s for 5000 rows, which is SAME for both ranges if I adjust segment borders accordingly (to match ~5000 rows). Only I/O time differs, and differs significantly.> One problem with measuring read speed that way is that \"buffers read\" can mean \"buffers read from storage\" or \"buffers read from the file system cache\",I understand, that's why I conducted experiments with drop_caches.> but you say you observe a difference even after dropping the cache.No, I say I see NO significant difference (accurate to measurement error) between \"with caches\" and after dropping caches. And this is explainable, I think. Since I read consequently almost all data from the huge table, no cache can fit this data, thus it cannot influence significantly on results. And whilst the PK index *could* be cached (in theory) I think its data is being displaced from buffers by bulkier JSONB data.Vlad",
"msg_date": "Thu, 20 Sep 2018 23:28:27 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "Hi Vladimir,\n\n\n\nOn 09/21/2018 02:07 AM, Vladimir Ryabtsev wrote:\n\n> \n> I have such a table:\n> \n> CREATE TABLE articles\n> (\n> article_id bigint NOT NULL,\n> content jsonb NOT NULL,\n> published_at timestamp without time zone NOT NULL,\n> appended_at timestamp without time zone NOT NULL,\n> source_id integer NOT NULL,\n> language character varying(2) NOT NULL,\n> title text NOT NULL,\n> topicstopic[] NOT NULL,\n> objects object[] NOT NULL,\n> cluster_id bigint NOT NULL,\n> CONSTRAINT articles_pkey PRIMARY KEY (article_id)\n> )\n> \n> select content from articles where id between $1 and $2\n> \n> I noticed that with some IDs it works pretty fast while with other it is\n> 4-5 times slower. It is suitable to note, there are two main\n> 'categories' of IDs in this table: first is range 270000000-500000000,\n> and second is range 10000000000-100030000000. For the first range it is\n> 'fast' and for the second it is 'slow'. Besides larger absolute numbers\n> withdrawing them from int to bigint, values in the second range are more\n> 'sparse', which means in the first range values are almost consequent\n> (with very few 'holes' of missing values) while in the second range\n> there are much more 'holes' (average filling is 35%). Total number of\n> rows in the first range: ~62M, in the second range: ~10M.\n> \n> \n> explain (analyze, buffers)\n> select count(*), sum(length(content::text)) from articles where\n> article_id between %s and %s\n> \n\nis the length of the text equally distributed over the 2 partitions?\n\n> Sample output:\n> \n> Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual\n> time=6625.993..6625.995 rows=1 loops=1)\n> Buffers: shared hit=26847 read=3914\n> -> Index Scan using articles_pkey on articles (cost=0.57..8573.35\n> rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> Index Cond: ((article_id >= 438000000) AND (article_id <=\n> 438005000))\n> Buffers: shared hit=4342 read=671\n> Planning time: 0.393 ms\n> Execution time: 6626.136 ms\n> \n> Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual\n> time=33219.100..33219.102 rows=1 loops=1)\n> Buffers: shared hit=6568 read=7104\n> -> Index Scan using articles_pkey on articles (cost=0.57..5492.96\n> rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> Index Cond: ((article_id >= '100021000000'::bigint) AND\n> (article_id <= '100021010000'::bigint))\n> Buffers: shared hit=50 read=2378\n> Planning time: 0.517 ms\n> Execution time: 33219.218 ms\n> \n\n> \n> Since \"density\" of IDs is different in \"small\" and \"big\" ranges, I\n> adjusted size of chunks in order to get around 5000 rows on each\n> iteration in both cases, though my experiments show that chunk size does\n> not really matter a lot.\n> \n\n From what you posted, the first query retrieves 5005 rows, but the\nsecond 2416. It might be helpful if we are able to compare 5000 vs 5000\n\nAlso is worth noticing that the 'estimated' differs from 'actual' on the\nsecond query.\nI think that happens because data is differently distributed over the\nranges.\nProbably the analyzer does not have enough samples to understand the\nreal distribution. 
You might try to increase the number of samples (and\nrun analyze) or to create partial indexes on the 2 ranges.\nCan you give a try to both options and let us know?\n\n> The issue posted at the very beginning of my message was confirmed for\n> the *whole* first and second ranges (so it was not just caused by\n> randomly cached data).\n> \n> To eliminate cache influence, I restarted Postgres server with flushing\n> buffers:\n> \n> /$ postgresql stop; sync; echo 3 > /proc/sys/vm/drop_caches; postgresql\n> start\n> \n\ni would do a sync at the end, after dropping caches.\n\nBut the problem here is that you are virtualizing. I think that you\nmight want to consider the physical layer. Eg:\n\n- does the raid controller have a cache?\n\n- how big is the cache? (when you measure disk speed, that will\ninfluence the result very much, if you do not run the test on\nbig-enough data chunk) best if is disabled during your tests\n\n- is the OS caching disk blocks too? maybe you want to drop everything\nfrom there too.\n\nI think that you should be pragmatic and try to run the tests on a\nphysical machine. If results are then reproducible there too, then you\ncan exclude the whole virtual layer.\n\n> After this I repeated the test and got next-to-same picture.\n> \n> \"Small' range: disk read rate is around 10-11 MB/s uniformly across the\n> test. Output rate was 1300-1700 rows/s. Read ratio is around 13% (why?\n> Shouldn't it be ~ 100% after drop_caches?).\n> \"Big\" range: In most of time disk read speed was about 2 MB/s but\n> sometimes it jumped to 26-30 MB/s. Output rate was 70-80 rows/s (but\n> varied a lot and reached 8000 rows/s). Read ratio also varied a lot.\n> \n> I rendered series from the last test into charts:\n> \"Small\" range: https://i.stack.imgur.com/3Zfml.png\n> \"Big\" range (insane): https://i.stack.imgur.com/VXdID.png\n> \n> During the tests I verified disk read speed with iotop\n\non the VM or on the physical host?\n\n\nregards,\n\nfabio pardi\n\n",
"msg_date": "Fri, 21 Sep 2018 15:08:53 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
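[Editor's note: a minimal SQL sketch of the two options suggested above — raising the per-column statistics sample and creating partial indexes over the two article_id ranges quoted earlier in the thread. The statistics target of 1000 and the index names are illustrative, not from the original posts.]

-- Larger statistics sample for article_id (default target is 100)
ALTER TABLE articles ALTER COLUMN article_id SET STATISTICS 1000;
ANALYZE articles;

-- Partial indexes covering the "small" and "big" id ranges
CREATE INDEX articles_id_small_range_idx ON articles (article_id)
    WHERE article_id BETWEEN 270000000 AND 500000000;
CREATE INDEX articles_id_big_range_idx ON articles (article_id)
    WHERE article_id BETWEEN 10000000000 AND 100030000000;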
{
"msg_contents": "\n\nOn 09/21/2018 08:28 AM, Vladimir Ryabtsev wrote:\n\n>> but you say you observe a difference even after dropping the cache.\n> No, I say I see NO significant difference (accurate to measurement\n> error) between \"with caches\" and after dropping caches. And this is\n> explainable, I think. Since I read consequently almost all data from the\n> huge table, no cache can fit this data, thus it cannot influence\n> significantly on results. And whilst the PK index *could* be cached (in\n> theory) I think its data is being displaced from buffers by bulkier\n> JSONB data.\n> \n> Vlad\n\nI think this is not accurate. If you fetch from an index, then only the\nblocks containing the matching records are red from disk and therefore\ncached in RAM.\n\nregards,\n\nfabio pardi\n\n",
"msg_date": "Fri, 21 Sep 2018 15:12:58 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "> is the length of the text equally distributed over the 2 partitions?\nNot 100% equally, but to me it does not seem to be a big deal...\nConsidering the ranges independently:\nFirst range: ~70% < 10 KB, ~25% for 10-20 KB, ~3% for 20-30 KB, everything\nelse is less than 1% (with 10 KB steps).\nSecond range: ~80% < 10 KB, ~18% for 10-20 KB, ~2% for 20-30 KB, everything\nelse is less than 1% (with 10 KB steps).\n\n>From what you posted, the first query retrieves 5005 rows, but the second\n2416. It might be helpful if we are able to compare 5000 vs 5000\nYes it was just an example, here are the plans for approximately same\nnumber of rows:\n\nAggregate (cost=9210.12..9210.13 rows=1 width=16) (actual\ntime=4265.478..4265.479 rows=1 loops=1)\n Buffers: shared hit=27027 read=4311\n I/O Timings: read=2738.728\n -> Index Scan using articles_pkey on articles (cost=0.57..9143.40\nrows=5338 width=107) (actual time=12.254..873.081 rows=5001 loops=1)\n Index Cond: ((article_id >= 438030000) AND (article_id <=\n438035000))\n Buffers: shared hit=4282 read=710\n I/O Timings: read=852.547\nPlanning time: 0.235 ms\nExecution time: 4265.554 ms\n\nAggregate (cost=11794.59..11794.60 rows=1 width=16) (actual\ntime=62298.559..62298.559 rows=1 loops=1)\n Buffers: shared hit=15071 read=14847\n I/O Timings: read=60703.859\n -> Index Scan using articles_pkey on articles (cost=0.57..11709.13\nrows=6837 width=107) (actual time=24.686..24582.221 rows=5417 loops=1)\n Index Cond: ((article_id >= '100021040000'::bigint) AND (article_id\n<= '100021060000'::bigint))\n Buffers: shared hit=195 read=5244\n I/O Timings: read=24507.621\nPlanning time: 0.494 ms\nExecution time: 62298.630 ms\n\nIf we subtract I/O from total time, we get 1527 ms vs 1596 ms — very close\ntimings for other than I/O operations (considering slightly higher number\nof rows in second case). But I/O time differs dramatically.\n\n> Also is worth noticing that the 'estimated' differs from 'actual' on the\nsecond query. I think that happens because data is differently distributed\nover the ranges. Probably the analyzer does not have enough samples to\nunderstand the real distribution.\nI think we should not worry about it unless the planner chose poor plan,\nshould we? Statistics affects on picking a proper plan, but not on\nexecution of the plan, doesn't it?\n\n> You might try to increase the number of samples (and run analyze)\nTo be honest, I don't understand it... As I know, in Postgres we have two\noptions: set column target percentile and set n_distinct. We can't increase\nfraction of rows analyzed (like in other DBMSs we can set ANALYZE\npercentage explicitly). Moreover, in our case the problem column is PRIMARY\nKEY with all distinct values, Could you point me, what exactly should I do?\n\n> or to create partial indexes on the 2 ranges.\nSure, will try it with partial indexes. Should I drop existing PK index, or\nensuring that planner picks range index is enough?\n\n> i would do a sync at the end, after dropping caches.\nA bit off-topic, but why? Doing sync may put something to cache again.\nhttps://linux-mm.org/Drop_Caches\nhttps://unix.stackexchange.com/a/82164/309344\n\n> - does the raid controller have a cache?\n> - how big is the cache? 
(when you measure disk speed, that will influence\nthe result very much, if you do not run the test on big-enough data chunk)\nbest if is disabled during your tests\nI am pretty sure there is some, usually it's several tens of megabytes, but\nI ran disk read tests several times with chunks that could not be fit in\nthe cache and with random offset, so I am pretty sure that something around\n500 MB/s is enough reasonably accurate (but it is only for sequential read).\n\n> - is the OS caching disk blocks too? maybe you want to drop everything\nfrom there too.\nHow can I find it out? And how to drop it? Or you mean hypervisor OS?\nAnyway, don't you think that caching specifics could not really explain\nthese issues?\n\n> I think that you should be pragmatic and try to run the tests on a\nphysical machine.\nI wish I could do it, but hardly it is possible. In some future we may\nmigrate the DB to physical hosts, but now we need to make it work in\nvirtual.\n\n> on the VM or on the physical host?\nOn the VM. The physical host is Windows (no iotop) and I have no access to\nit.\n\nVlad\n\n> is the length of the text equally distributed over the 2 partitions?Not 100% equally, but to me it does not seem to be a big deal... Considering the ranges independently:First range: ~70% < 10 KB, ~25% for 10-20 KB, ~3% for 20-30 KB, everything else is less than 1% (with 10 KB steps).Second range: ~80% < 10 KB, ~18% for 10-20 KB, ~2% for 20-30 KB, everything else is less than 1% (with 10 KB steps).>From what you posted, the first query retrieves 5005 rows, but the second 2416. It might be helpful if we are able to compare 5000 vs 5000Yes it was just an example, here are the plans for approximately same number of rows:Aggregate (cost=9210.12..9210.13 rows=1 width=16) (actual time=4265.478..4265.479 rows=1 loops=1) Buffers: shared hit=27027 read=4311 I/O Timings: read=2738.728 -> Index Scan using articles_pkey on articles (cost=0.57..9143.40 rows=5338 width=107) (actual time=12.254..873.081 rows=5001 loops=1) Index Cond: ((article_id >= 438030000) AND (article_id <= 438035000)) Buffers: shared hit=4282 read=710 I/O Timings: read=852.547Planning time: 0.235 msExecution time: 4265.554 msAggregate (cost=11794.59..11794.60 rows=1 width=16) (actual time=62298.559..62298.559 rows=1 loops=1) Buffers: shared hit=15071 read=14847 I/O Timings: read=60703.859 -> Index Scan using articles_pkey on articles (cost=0.57..11709.13 rows=6837 width=107) (actual time=24.686..24582.221 rows=5417 loops=1) Index Cond: ((article_id >= '100021040000'::bigint) AND (article_id <= '100021060000'::bigint)) Buffers: shared hit=195 read=5244 I/O Timings: read=24507.621Planning time: 0.494 msExecution time: 62298.630 msIf we subtract I/O from total time, we get 1527 ms vs 1596 ms — very close timings for other than I/O operations (considering slightly higher number of rows in second case). But I/O time differs dramatically.> Also is worth noticing that the 'estimated' differs from 'actual' on the second query. I think that happens because data is differently distributed over the ranges. Probably the analyzer does not have enough samples to understand the real distribution.I think we should not worry about it unless the planner chose poor plan, should we? Statistics affects on picking a proper plan, but not on execution of the plan, doesn't it?> You might try to increase the number of samples (and run analyze)To be honest, I don't understand it... As I know, in Postgres we have two options: set column target percentile and set n_distinct. 
We can't increase fraction of rows analyzed (like in other DBMSs we can set ANALYZE percentage explicitly). Moreover, in our case the problem column is PRIMARY KEY with all distinct values, Could you point me, what exactly should I do?> or to create partial indexes on the 2 ranges.Sure, will try it with partial indexes. Should I drop existing PK index, or ensuring that planner picks range index is enough?> i would do a sync at the end, after dropping caches.A bit off-topic, but why? Doing sync may put something to cache again.https://linux-mm.org/Drop_Cacheshttps://unix.stackexchange.com/a/82164/309344> - does the raid controller have a cache?> - how big is the cache? (when you measure disk speed, that will influence the result very much, if you do not run the test on big-enough data chunk) best if is disabled during your tests I am pretty sure there is some, usually it's several tens of megabytes, but I ran disk read tests several times with chunks that could not be fit in the cache and with random offset, so I am pretty sure that something around 500 MB/s is enough reasonably accurate (but it is only for sequential read).> - is the OS caching disk blocks too? maybe you want to drop everything from there too.How can I find it out? And how to drop it? Or you mean hypervisor OS?Anyway, don't you think that caching specifics could not really explain these issues?> I think that you should be pragmatic and try to run the tests on a physical machine.I wish I could do it, but hardly it is possible. In some future we may migrate the DB to physical hosts, but now we need to make it work in virtual.> on the VM or on the physical host?On the VM. The physical host is Windows (no iotop) and I have no access to it.Vlad",
"msg_date": "Sat, 22 Sep 2018 02:19:40 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
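[Editor's note: on the question above of whether the PK index must be dropped — it can stay; an EXPLAIN of one of the test queries shows whether the planner picked the partial index. The index name below is the illustrative one from the previous note; the id range is one used elsewhere in the thread.]

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*), sum(length(content::text))
FROM articles
WHERE article_id BETWEEN 438030000 AND 438035000;
-- The Index Scan line names the index actually used, e.g.
-- "Index Scan using articles_id_small_range_idx on articles ..."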
{
"msg_contents": "> I think reindex will improve the heap access..and maybe the index access\ntoo. I don't see why it would be bloated without UPDATE/DELETE, but you\ncould check to see if its size changes significantly after reindex.\nI tried REINDEX, and size of PK index changed from 2579 to 1548 MB.\nBut test don't show any significant improvement from what it was. May be\nread speed for the \"big\" range became just slightly faster in average.\n\nVlad\n\n> I think reindex will improve the heap access..and maybe the index access too. I don't see why it would be bloated without UPDATE/DELETE, but you could check to see if its size changes significantly after reindex.I tried REINDEX, and size of PK index changed from 2579 to 1548 MB.But test don't show any significant improvement from what it was. May be read speed for the \"big\" range became just slightly faster in average.Vlad",
"msg_date": "Sat, 22 Sep 2018 03:32:01 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
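[Editor's note: a sketch of how the before/after index size mentioned above can be checked; the relation names are from the thread. Note that REINDEX blocks writes to the table while it runs.]

SELECT pg_size_pretty(pg_relation_size('articles_pkey'));
REINDEX INDEX articles_pkey;
SELECT pg_size_pretty(pg_relation_size('articles_pkey'));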
{
"msg_contents": "Hi,\nAssuming DB is quiescent.\n\nAnd if you run?\n\nselect count(*) from articles where article_id between %s and %s\n\nie without reading json, is your buffers hit count increasing?\n20 000 8K blocks *2 is 500MB , should be in RAM after the first run.\n\nFast:\nread=710 I/O Timings: read=852.547 ==> 1.3 ms /IO\n800 IO/s some memory, sequential reads or a good raid layout.\n\nSlow:\nread=5244 I/O Timings: read=24507.621 ==> 4.7 ms /IO\n200 IO/s more HD reads? more seeks? slower HD zones ?\n\nMaybe you can play with PG cache size.\n\n\nOn Sat, Sep 22, 2018 at 12:32 PM Vladimir Ryabtsev <[email protected]>\nwrote:\n\n> > I think reindex will improve the heap access..and maybe the index access\n> too. I don't see why it would be bloated without UPDATE/DELETE, but you\n> could check to see if its size changes significantly after reindex.\n> I tried REINDEX, and size of PK index changed from 2579 to 1548 MB.\n> But test don't show any significant improvement from what it was. May be\n> read speed for the \"big\" range became just slightly faster in average.\n>\n> Vlad\n>\n>\n\nHi,Assuming DB is quiescent.And if you run?select count(*) from articles where article_id between %s and %sie without reading json, is your buffers hit count increasing?20 000 8K blocks *2 is 500MB , should be in RAM after the first run.Fast:read=710 I/O Timings: read=852.547 ==> 1.3 ms /IO800 IO/s some memory, sequential reads or a good raid layout.Slow:read=5244 I/O Timings: read=24507.621 ==> 4.7 ms /IO200 IO/s more HD reads? more seeks? slower HD zones ?Maybe you can play with PG cache size.On Sat, Sep 22, 2018 at 12:32 PM Vladimir Ryabtsev <[email protected]> wrote:> I think reindex will improve the heap access..and maybe the index access too. I don't see why it would be bloated without UPDATE/DELETE, but you could check to see if its size changes significantly after reindex.I tried REINDEX, and size of PK index changed from 2579 to 1548 MB.But test don't show any significant improvement from what it was. May be read speed for the \"big\" range became just slightly faster in average.Vlad",
"msg_date": "Sat, 22 Sep 2018 16:49:58 +0200",
"msg_from": "didier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
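[Editor's note: a sketch of the count-only probe proposed above, using the "slow" range from the quoted plans. The I/O Timings line appears only when track_io_timing is on; the per-read latency is the ratio didier computes (e.g. 24507.621 ms / 5244 reads ≈ 4.7 ms per read).]

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM articles
WHERE article_id BETWEEN 100021040000 AND 100021060000;
-- Compare "shared hit" vs "read" between consecutive runs to see whether
-- the index pages stay in shared buffers.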
{
"msg_contents": "> Another idea is that the operating system rearranges I/O in a way that\nis not ideal for your storage.\n> Try a different I/O scheduler by running\necho deadline > /sys/block/sda/queue/scheduler\n\nMy scheduler was already \"deadline\".\nIn some places I read that in virtual environment sometimes \"noop\"\nscheduler is better, so I tried it. However the experiment shown NO\nnoticeable difference between them (look \"deadline\":\nhttps://i.stack.imgur.com/wCOJW.png, \"noop\":\nhttps://i.stack.imgur.com/lB33u.png). At the same time tests show almost\nsimilar patterns in changing read speed when going over the \"slow\" range.\n\nVlad\n\nчт, 20 сент. 2018 г. в 20:17, Laurenz Albe <[email protected]>:\n\n> Vladimir Ryabtsev wrote:\n> > explain (analyze, buffers)\n> > select count(*), sum(length(content::text)) from articles where\n> article_id between %s and %s\n> >\n> > Sample output:\n> >\n> > Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual\n> time=6625.993..6625.995 rows=1 loops=1)\n> > Buffers: shared hit=26847 read=3914\n> > -> Index Scan using articles_pkey on articles (cost=0.57..8573.35\n> rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> > Index Cond: ((article_id >= 438000000) AND (article_id <=\n> 438005000))\n> > Buffers: shared hit=4342 read=671\n> > Planning time: 0.393 ms\n> > Execution time: 6626.136 ms\n> >\n> > Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual\n> time=33219.100..33219.102 rows=1 loops=1)\n> > Buffers: shared hit=6568 read=7104\n> > -> Index Scan using articles_pkey on articles (cost=0.57..5492.96\n> rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> > Index Cond: ((article_id >= '100021000000'::bigint) AND\n> (article_id <= '100021010000'::bigint))\n> > Buffers: shared hit=50 read=2378\n> > Planning time: 0.517 ms\n> > Execution time: 33219.218 ms\n> >\n> > During iteration, I parse the result of EXPLAIN and collect series of\n> following metrics:\n> >\n> > - buffer hits/reads for the table,\n> > - buffer hits/reads for the index,\n> > - number of rows (from \"Index Scan...\"),\n> > - duration of execution.\n> >\n> > Based on metrics above I calculate inherited metrics:\n> >\n> > - disk read rate: (index reads + table reads) * 8192 / duration,\n> > - reads ratio: (index reads + table reads) / (index reads + table reads\n> + index hits + table hits),\n> > - data rate: (index reads + table reads + index hits + table hits) *\n> 8192 / duration,\n> > - rows rate: number of rows / duration.\n> >\n> > Since \"density\" of IDs is different in \"small\" and \"big\" ranges, I\n> adjusted\n> > size of chunks in order to get around 5000 rows on each iteration in\n> both cases,\n> > though my experiments show that chunk size does not really matter a lot.\n> >\n> > The issue posted at the very beginning of my message was confirmed for\n> the\n> > *whole* first and second ranges (so it was not just caused by randomly\n> cached data).\n> >\n> > To eliminate cache influence, I restarted Postgres server with flushing\n> buffers:\n> >\n> > /$ postgresql stop; sync; echo 3 > /proc/sys/vm/drop_caches; postgresql\n> start\n> >\n> > After this I repeated the test and got next-to-same picture.\n> >\n> > \"Small' range: disk read rate is around 10-11 MB/s uniformly across the\n> test.\n> > Output rate was 1300-1700 rows/s. 
Read ratio is around 13% (why?\n> Shouldn't it be\n> > ~ 100% after drop_caches?).\n> > \"Big\" range: In most of time disk read speed was about 2 MB/s but\n> sometimes\n> > it jumped to 26-30 MB/s. Output rate was 70-80 rows/s (but varied a lot\n> and\n> > reached 8000 rows/s). Read ratio also varied a lot.\n> >\n> > I rendered series from the last test into charts:\n> > \"Small\" range: https://i.stack.imgur.com/3Zfml.png\n> > \"Big\" range (insane): https://i.stack.imgur.com/VXdID.png\n> >\n> > During the tests I verified disk read speed with iotop and found its\n> indications\n> > very close to ones calculated by me based on EXPLAIN BUFFERS. I cannot\n> say I was\n> > monitoring it all the time, but I confirmed it when it was 2 MB/s and 22\n> MB/s on\n> > the second range and 10 MB/s on the first range. I also checked with\n> htop that\n> > CPU was not a bottleneck and was around 3% during the tests.\n> >\n> > The issue is reproducible on both master and slave servers. My tests\n> were conducted\n> > on slave, while there were no any other load on DBMS, or disk activity\n> on the\n> > host unrelated to DBMS.\n> >\n> > My only assumption is that different fragments of data are being read\n> with different\n> > speed due to virtualization or something, but... why is it so strictly\n> bound\n> > to these ranges? Why is it the same on two different machines?\n>\n> What is the storage system?\n>\n> Setting \"track_io_timing = on\" should measure the time spent doing I/O\n> more accurately.\n>\n> One problem with measuring read speed that way is that \"buffers read\" can\n> mean \"buffers read from storage\" or \"buffers read from the file system\n> cache\",\n> but you say you observe a difference even after dropping the cache.\n>\n> To verify if the difference comes from the physical placement, you could\n> run VACUUM (FULL) which rewrites the table and see if that changes the\n> behavior.\n>\n> Another idea is that the operating system rearranges I/O in a way that\n> is not ideal for your storage.\n>\n> Try a different I/O scheduler by running\n>\n> echo deadline > /sys/block/sda/queue/scheduler\n>\n> (replace \"sda\" with the disk where your database resides)\n>\n> See if that changes the observed I/O speed.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\n> Another idea is that the operating system rearranges I/O in a way thatis not ideal for your storage.> Try a different I/O scheduler by runningecho deadline > /sys/block/sda/queue/schedulerMy scheduler was already \"deadline\".In some places I read that in virtual environment sometimes \"noop\" scheduler is better, so I tried it. However the experiment shown NO noticeable difference between them (look \"deadline\": https://i.stack.imgur.com/wCOJW.png, \"noop\": https://i.stack.imgur.com/lB33u.png). At the same time tests show almost similar patterns in changing read speed when going over the \"slow\" range.Vladчт, 20 сент. 2018 г. 
в 20:17, Laurenz Albe <[email protected]>:Vladimir Ryabtsev wrote:\n> explain (analyze, buffers)\n> select count(*), sum(length(content::text)) from articles where article_id between %s and %s\n> \n> Sample output:\n> \n> Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual time=6625.993..6625.995 rows=1 loops=1)\n> Buffers: shared hit=26847 read=3914\n> -> Index Scan using articles_pkey on articles (cost=0.57..8573.35 rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> Index Cond: ((article_id >= 438000000) AND (article_id <= 438005000))\n> Buffers: shared hit=4342 read=671\n> Planning time: 0.393 ms\n> Execution time: 6626.136 ms\n> \n> Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual time=33219.100..33219.102 rows=1 loops=1)\n> Buffers: shared hit=6568 read=7104\n> -> Index Scan using articles_pkey on articles (cost=0.57..5492.96 rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> Index Cond: ((article_id >= '100021000000'::bigint) AND (article_id <= '100021010000'::bigint))\n> Buffers: shared hit=50 read=2378\n> Planning time: 0.517 ms\n> Execution time: 33219.218 ms\n> \n> During iteration, I parse the result of EXPLAIN and collect series of following metrics:\n> \n> - buffer hits/reads for the table,\n> - buffer hits/reads for the index,\n> - number of rows (from \"Index Scan...\"),\n> - duration of execution.\n> \n> Based on metrics above I calculate inherited metrics:\n> \n> - disk read rate: (index reads + table reads) * 8192 / duration,\n> - reads ratio: (index reads + table reads) / (index reads + table reads + index hits + table hits),\n> - data rate: (index reads + table reads + index hits + table hits) * 8192 / duration,\n> - rows rate: number of rows / duration.\n> \n> Since \"density\" of IDs is different in \"small\" and \"big\" ranges, I adjusted\n> size of chunks in order to get around 5000 rows on each iteration in both cases,\n> though my experiments show that chunk size does not really matter a lot.\n> \n> The issue posted at the very beginning of my message was confirmed for the\n> *whole* first and second ranges (so it was not just caused by randomly cached data).\n> \n> To eliminate cache influence, I restarted Postgres server with flushing buffers:\n> \n> /$ postgresql stop; sync; echo 3 > /proc/sys/vm/drop_caches; postgresql start\n> \n> After this I repeated the test and got next-to-same picture.\n> \n> \"Small' range: disk read rate is around 10-11 MB/s uniformly across the test.\n> Output rate was 1300-1700 rows/s. Read ratio is around 13% (why? Shouldn't it be\n> ~ 100% after drop_caches?).\n> \"Big\" range: In most of time disk read speed was about 2 MB/s but sometimes\n> it jumped to 26-30 MB/s. Output rate was 70-80 rows/s (but varied a lot and\n> reached 8000 rows/s). Read ratio also varied a lot.\n> \n> I rendered series from the last test into charts:\n> \"Small\" range: https://i.stack.imgur.com/3Zfml.png\n> \"Big\" range (insane): https://i.stack.imgur.com/VXdID.png\n> \n> During the tests I verified disk read speed with iotop and found its indications\n> very close to ones calculated by me based on EXPLAIN BUFFERS. I cannot say I was\n> monitoring it all the time, but I confirmed it when it was 2 MB/s and 22 MB/s on\n> the second range and 10 MB/s on the first range. I also checked with htop that\n> CPU was not a bottleneck and was around 3% during the tests.\n> \n> The issue is reproducible on both master and slave servers. 
My tests were conducted\n> on slave, while there were no any other load on DBMS, or disk activity on the\n> host unrelated to DBMS.\n> \n> My only assumption is that different fragments of data are being read with different\n> speed due to virtualization or something, but... why is it so strictly bound\n> to these ranges? Why is it the same on two different machines?\n\nWhat is the storage system?\n\nSetting \"track_io_timing = on\" should measure the time spent doing I/O\nmore accurately.\n\nOne problem with measuring read speed that way is that \"buffers read\" can\nmean \"buffers read from storage\" or \"buffers read from the file system cache\",\nbut you say you observe a difference even after dropping the cache.\n\nTo verify if the difference comes from the physical placement, you could\nrun VACUUM (FULL) which rewrites the table and see if that changes the behavior.\n\nAnother idea is that the operating system rearranges I/O in a way that\nis not ideal for your storage.\n\nTry a different I/O scheduler by running\n\necho deadline > /sys/block/sda/queue/scheduler\n\n(replace \"sda\" with the disk where your database resides)\n\nSee if that changes the observed I/O speed.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Mon, 24 Sep 2018 00:44:06 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
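[Editor's note: a sketch of enabling the track_io_timing setting referenced in the quoted advice; the later plans in the thread (the I/O Timings lines) rely on it. ALTER SYSTEM assumes superuser access; the setting can be changed with a reload, no restart needed.]

ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();
SHOW track_io_timing;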
{
"msg_contents": "Hi,\n\nanswers (and questions) in line here below\n\nOn 22/09/18 11:19, Vladimir Ryabtsev wrote:\n> > is the length of the text equally distributed over the 2 partitions?\n> Not 100% equally, but to me it does not seem to be a big deal... Considering the ranges independently:\n> First range: ~70% < 10 KB, ~25% for 10-20 KB, ~3% for 20-30 KB, everything else is less than 1% (with 10 KB steps).\n> Second range: ~80% < 10 KB, ~18% for 10-20 KB, ~2% for 20-30 KB, everything else is less than 1% (with 10 KB steps).\n>\nagree, should not play a role here\n\n> >From what you posted, the first query retrieves 5005 rows, but the second 2416. It might be helpful if we are able to compare 5000 vs 5000\n> Yes it was just an example, here are the plans for approximately same number of rows:\n>\n> Aggregate (cost=9210.12..9210.13 rows=1 width=16) (actual time=4265.478..4265.479 rows=1 loops=1)\n> Buffers: shared hit=27027 read=4311\n> I/O Timings: read=2738.728\n> -> Index Scan using articles_pkey on articles (cost=0.57..9143.40 rows=5338 width=107) (actual time=12.254..873.081 rows=5001 loops=1)\n> Index Cond: ((article_id >= 438030000) AND (article_id <= 438035000))\n> Buffers: shared hit=4282 read=710\n> I/O Timings: read=852.547\n> Planning time: 0.235 ms\n> Execution time: 4265.554 ms\n>\n> Aggregate (cost=11794.59..11794.60 rows=1 width=16) (actual time=62298.559..62298.559 rows=1 loops=1)\n> Buffers: shared hit=15071 read=14847\n> I/O Timings: read=60703.859\n> -> Index Scan using articles_pkey on articles (cost=0.57..11709.13 rows=6837 width=107) (actual time=24.686..24582.221 rows=5417 loops=1)\n> Index Cond: ((article_id >= '100021040000'::bigint) AND (article_id <= '100021060000'::bigint))\n> Buffers: shared hit=195 read=5244\n> I/O Timings: read=24507.621\n> Planning time: 0.494 ms\n> Execution time: 62298.630 ms\n>\n> If we subtract I/O from total time, we get 1527 ms vs 1596 ms — very close timings for other than I/O operations (considering slightly higher number of rows in second case). But I/O time differs dramatically.\n>\n> > Also is worth noticing that the 'estimated' differs from 'actual' on the second query. I think that happens because data is differently distributed over the ranges. Probably the analyzer does not have enough samples to understand the real distribution.\n> I think we should not worry about it unless the planner chose poor plan, should we? Statistics affects on picking a proper plan, but not on execution of the plan, doesn't it?\n>\n\nAgree, it was pure speculation\n\n>\n> > or to create partial indexes on the 2 ranges.\n> Sure, will try it with partial indexes. Should I drop existing PK index, or ensuring that planner picks range index is enough?\n>\nyou cannot drop it since is on a PKEY.\n\nYou can create 2 partial indexes and the planner will pick it up for you. (and the planning time will go a bit up)\n\n\n> - does the raid controller have a cache?\n> > - how big is the cache? (when you measure disk speed, that will influence the result very much, if you do not run the test on big-enough data chunk) best if is disabled during your tests \n> I am pretty sure there is some, usually it's several tens of megabytes, but I ran disk read tests several times with chunks that could not be fit in the cache and with random offset, so I am pretty sure that something around 500 MB/s is enough reasonably accurate (but it is only for sequential read).\n>\n\nit is not unusual to have 1GB cache or more... 
and do not forget to drop the cache between tests + do a sync\n\n\n> > - is the OS caching disk blocks too? maybe you want to drop everything from there too.\n> How can I find it out? And how to drop it? Or you mean hypervisor OS?\n> Anyway, don't you think that caching specifics could not really explain these issues?\n>\nSorry I meant the hypervisor OS.\n\nGiven that the most of the time is on the I/O then caching is maybe playing a role.\n\nI tried to reproduce your problem but I cannot go even closer to your results. Everything goes smooth with or without shared buffers, or OS cache.\n\nA few questions and considerations came to mind:\n\n- how big is your index?\n\n- how big is the table?\n\n- given the size of shared_buffers, almost 2M blocks should fit, but you say 2 consecutive runs still are hitting the disk. That's strange indeed since you are using way more than 2M blocks.\nDid you check that perhaps are there any other processes or cronjobs (on postgres and on the system) that are maybe reading data and flushing out the cache?\n\nYou can make use of pg_buffercache in order to see what is actually cached. That might help to have an overview of the content of it.\n\n- As Laurenz suggested (VACUUM FULL), you might want to move data around. You can try also a dump + restore to narrow the problem to data or disk\n\n- You might also want to try to see the disk graph of Windows, while you are running your tests. It can show you if data (and good to know how much) is actually fetching from disk or not.\n\nregards,\n\nfabio pardi\n\n\n\n\n\n\n\n Hi,\n\n answers (and questions) in line here below\n\nOn 22/09/18 11:19, Vladimir Ryabtsev\n wrote:\n\n\n\n> is the length of the text equally distributed\n over the 2 partitions?\n Not 100% equally, but to me it does not seem to be a big deal...\n Considering the ranges independently:\n First range: ~70% < 10 KB, ~25% for 10-20 KB, ~3% for\n 20-30 KB, everything else is less than 1% (with 10 KB steps).\nSecond range: ~80% < 10 KB, ~18% for 10-20 KB, ~2% for\n 20-30 KB, everything else is less than 1% (with 10 KB steps).\n\n\n\n\n agree, should not play a role here\n\n\n\n>From what you posted, the first query retrieves 5005\n rows, but the second 2416. 
It might be helpful if we are able\n to compare 5000 vs 5000\nYes it was just an example, here are the plans for\n approximately same number of rows:\n\n\n\nAggregate \n (cost=9210.12..9210.13 rows=1 width=16) (actual\n time=4265.478..4265.479 rows=1 loops=1)\n Buffers: shared\n hit=27027 read=4311\n I/O Timings:\n read=2738.728\n -> Index Scan\n using articles_pkey on articles (cost=0.57..9143.40\n rows=5338 width=107) (actual time=12.254..873.081\n rows=5001 loops=1)\n Index Cond:\n ((article_id >= 438030000) AND (article_id <=\n 438035000))\n Buffers: shared\n hit=4282 read=710\n I/O Timings:\n read=852.547\nPlanning time: 0.235 ms\nExecution time:\n 4265.554 ms\n\n\nAggregate \n (cost=11794.59..11794.60 rows=1 width=16) (actual\n time=62298.559..62298.559 rows=1 loops=1)\n Buffers: shared\n hit=15071 read=14847\n I/O Timings:\n read=60703.859\n -> Index Scan\n using articles_pkey on articles (cost=0.57..11709.13\n rows=6837 width=107) (actual time=24.686..24582.221\n rows=5417 loops=1)\n Index Cond:\n ((article_id >= '100021040000'::bigint) AND (article_id\n <= '100021060000'::bigint))\n Buffers: shared\n hit=195 read=5244\n I/O Timings:\n read=24507.621\nPlanning time: 0.494 ms\nExecution time:\n 62298.630 ms\n\n\n\nIf we subtract I/O from total time, we get 1527 ms vs 1596\n ms — very close timings for other than I/O operations\n (considering slightly higher number of rows in second case).\n But I/O time differs dramatically.\n\n\n> Also is worth noticing that the 'estimated' differs\n from 'actual' on the second query. I think that happens\n because data is differently distributed over the ranges.\n Probably the analyzer does not have enough samples to\n understand the real distribution.\nI think we should not worry about it unless the planner\n chose poor plan, should we? Statistics affects on picking a\n proper plan, but not on execution of the plan, doesn't it?\n\n\n\n\n\n Agree, it was pure speculation\n\n\n\n\n> or to create partial indexes on the 2 ranges.\n\nSure, will try it with partial indexes. Should I drop\n existing PK index, or ensuring that planner picks range\n index is enough?\n\n\n\n\n\n\n you cannot drop it since is on a PKEY.\n\n You can create 2 partial indexes and the planner will pick it up for\n you. (and the planning time will go a bit up)\n\n\n > - does the raid controller have a cache?\n\n\n\n> - how big is the cache? (when you measure disk\n speed, that will influence the result very much, if you do\n not run the test on big-enough data chunk) best if is\n disabled during your tests \nI am pretty sure there is some, usually it's several tens\n of megabytes, but I ran disk read tests several times with\n chunks that could not be fit in the cache and with random\n offset, so I am pretty sure that something around 500 MB/s\n is enough reasonably accurate (but it is only for sequential\n read).\n\n\n\n\n\n\n it is not unusual to have 1GB cache or more... and do not forget to\n drop the cache between tests + do a sync\n\n\n\n\n\n> - is the OS caching disk blocks too? maybe you want\n to drop everything from there too.\n\nHow can I find it out? And how to drop it? Or you mean\n hypervisor OS?\nAnyway, don't you think that caching specifics could not\n really explain these issues?\n\n\n\n\n Sorry I meant the hypervisor OS. \n\n Given that the most of the time is on the I/O then caching is maybe\n playing a role.\n\n I tried to reproduce your problem but I cannot go even closer to\n your results. 
Everything goes smooth with or without shared buffers,\n or OS cache.\n\n A few questions and considerations came to mind:\n\n - how big is your index? \n\n - how big is the table?\n\n - given the size of shared_buffers, almost 2M blocks should fit, but\n you say 2 consecutive runs still are hitting the disk. That's\n strange indeed since you are using way more than 2M blocks. \n Did you check that perhaps are there any other processes or cronjobs\n (on postgres and on the system) that are maybe reading data and\n flushing out the cache?\n\n You can make use of pg_buffercache in order to see what is actually\n cached. That might help to have an overview of the content of it.\n\n - As Laurenz suggested (VACUUM FULL), you might want to move data\n around. You can try also a dump + restore to narrow the problem to\n data or disk \n\n - You might also want to try to see the disk graph of Windows, while\n you are running your tests. It can show you if data (and good to\n know how much) is actually fetching from disk or not. \n\n regards,\n\n fabio pardi",
"msg_date": "Mon, 24 Sep 2018 16:47:42 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
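[Editor's note: a minimal sketch of the pg_buffercache inspection suggested above; the join follows the pattern from the contrib documentation, and the relation names are the ones reported in the thread.]

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- How many shared buffers currently hold pages of the table, its TOAST
-- table and the primary key index
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname IN ('articles', 'pg_toast_221558', 'articles_pkey')
GROUP BY c.relname
ORDER BY buffers DESC;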
{
"msg_contents": "> You can create 2 partial indexes and the planner will pick it up for you.\n(and the planning time will go a bit up).\nCreated two partial indexes and ensured planner uses it. But the result is\nstill the same, no noticeable difference.\n\n> it is not unusual to have 1GB cache or more... and do not forget to drop\nthe cache between tests + do a sync\nI conducted several long runs of dd, so I am sure that this numbers are\nfairly correct. However, what worries me is that I test sequential read\nspeed while during my experiments Postgres might need to read from random\nplaces thus reducing real read speed dramatically. I have a feeling that\nthis can be the reason.\nI also reviewed import scripts and found the import was done in DESCENDING\norder of IDs. It was so to get most recent records sooner, may be it caused\nsome inefficiency in the storage... But again, it was so for both ranges.\n\n> - how big is your index?\npg_table_size('articles_pkey') = 1561 MB\n\n> - how big is the table?\npg_table_size('articles') = 427 GB\npg_table_size('pg_toast.pg_toast_221558') = 359 GB\n\n> - given the size of shared_buffers, almost 2M blocks should fit, but you\nsay 2 consecutive runs still are hitting the disk. That's strange indeed\nsince you are using way more than 2M blocks.\nTBH, I cannot say I understand your calculations with number of blocks...\nBut to clarify: consecutive runs with SAME parameters do NOT hit the disk,\nonly the first one does, consequent ones read only from buffer cache.\n\n> Did you check that perhaps are there any other processes or cronjobs (on\npostgres and on the system) that are maybe reading data and flushing out\nthe cache?\nI checked with iotop than nothing else reads intensively from any disk in\nthe system. And again, the result is 100% reproducible and depends on ID\nrange only, if there were any thing like these I would have noticed some\nfluctuations in results.\n\n> You can make use of pg_buffercache in order to see what is actually\ncached.\nIt seems that there is no such a view in my DB, could it be that the module\nis not installed?\n\n> - As Laurenz suggested (VACUUM FULL), you might want to move data around.\nYou can try also a dump + restore to narrow the problem to data or disk\nI launched VACUUM FULL, but it ran very slowly, according to my calculation\nit might take 17 hours. I will try to do copy data into another table with\nthe same structure or spin up another server, and let you know.\n\n> - You might also want to try to see the disk graph of Windows, while you\nare running your tests. It can show you if data (and good to know how much)\nis actually fetching from disk or not.\nI wanted to do so but I don't have access to Hyper-V server, will try to\nrequest credentials from admins.\n\nCouple more observations:\n1) The result of my experiment is almost not affected by other server load.\nAnother user was running a query (over this table) with read speed ~130\nMB/s, while with my query read at 1.8-2 MB/s.\n2) iotop show higher IO % (~93-94%) with slower read speed (though it is\nnot quite clear what this field is). A process from example above had ~55%\nIO with 130 MB/s while my process had ~93% with ~2MB/s.\n\nRegards,\nVlad\n\n> You can create 2 partial indexes and the planner will pick it up for you. (and the planning time will go a bit up).Created two partial indexes and ensured planner uses it. But the result is still the same, no noticeable difference.> it is not unusual to have 1GB cache or more... 
and do not forget to drop the cache between tests + do a syncI conducted several long runs of dd, so I am sure that this numbers are fairly correct. However, what worries me is that I test sequential read speed while during my experiments Postgres might need to read from random places thus reducing real read speed dramatically. I have a feeling that this can be the reason.I also reviewed import scripts and found the import was done in DESCENDING order of IDs. It was so to get most recent records sooner, may be it caused some inefficiency in the storage... But again, it was so for both ranges.> - how big is your index? pg_table_size('articles_pkey') = 1561 MB> - how big is the table?pg_table_size('articles') = 427 GBpg_table_size('pg_toast.pg_toast_221558') = 359 GB> - given the size of shared_buffers, almost 2M blocks should fit, but you say 2 consecutive runs still are hitting the disk. That's strange indeed since you are using way more than 2M blocks.TBH, I cannot say I understand your calculations with number of blocks... But to clarify: consecutive runs with SAME parameters do NOT hit the disk, only the first one does, consequent ones read only from buffer cache.> Did you check that perhaps are there any other processes or cronjobs (on postgres and on the system) that are maybe reading data and flushing out the cache?I checked with iotop than nothing else reads intensively from any disk in the system. And again, the result is 100% reproducible and depends on ID range only, if there were any thing like these I would have noticed some fluctuations in results.> You can make use of pg_buffercache in order to see what is actually cached.It seems that there is no such a view in my DB, could it be that the module is not installed?> - As Laurenz suggested (VACUUM FULL), you might want to move data around. You can try also a dump + restore to narrow the problem to data or diskI launched VACUUM FULL, but it ran very slowly, according to my calculation it might take 17 hours. I will try to do copy data into another table with the same structure or spin up another server, and let you know.> - You might also want to try to see the disk graph of Windows, while you are running your tests. It can show you if data (and good to know how much) is actually fetching from disk or not.I wanted to do so but I don't have access to Hyper-V server, will try to request credentials from admins.Couple more observations:1) The result of my experiment is almost not affected by other server load. Another user was running a query (over this table) with read speed ~130 MB/s, while with my query read at 1.8-2 MB/s.2) iotop show higher IO % (~93-94%) with slower read speed (though it is not quite clear what this field is). A process from example above had ~55% IO with 130 MB/s while my process had ~93% with ~2MB/s.Regards,Vlad",
"msg_date": "Mon, 24 Sep 2018 15:28:15 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
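[Editor's note: a sketch of how the sizes quoted above can be broken down in one query; the functions are standard, the relation names are from the thread.]

SELECT pg_size_pretty(pg_relation_size('articles'))              AS heap_only,
       pg_size_pretty(pg_table_size('pg_toast.pg_toast_221558')) AS toast,
       pg_size_pretty(pg_relation_size('articles_pkey'))         AS pk_index,
       pg_size_pretty(pg_total_relation_size('articles'))        AS total_with_indexes;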
{
"msg_contents": "On Mon, Sep 24, 2018 at 03:28:15PM -0700, Vladimir Ryabtsev wrote:\n> > it is not unusual to have 1GB cache or more... and do not forget to drop\n> the cache between tests + do a sync\n> I also reviewed import scripts and found the import was done in DESCENDING\n> order of IDs.\n\nThis seems significant..it means the heap was probably written in backwards\norder relative to the IDs, and the OS readahead is ineffective when index\nscanning across a range of IDs. From memory, linux since (at least) 2.6.32 can\noptimize this. You mentioned you're using 4.4. Does your LVM have readahead\nramped up ? Try lvchange -r 65536 data/postgres (or similar).\n\nAlso..these might be an impractical solution for several reasons, but did you\ntry either 1) forcing a bitmap scan (of only one index), to force the heap\nreads to be ordered, if not sequential? SET enable_indexscan=off (and maybe\nSET enable_seqscan=off and others as needed).\n\nOr, 2) Using a brin index (scanning of which always results in bitmap heap\nscan).\n\n> > - how big is the table?\n> pg_table_size('articles') = 427 GB\n> pg_table_size('pg_toast.pg_toast_221558') = 359 GB\n\nOuch .. if it were me, I would definitely want to make that a partitioned table..\nOr perhaps two unioned together with a view? One each for the sparse and dense\nrange?\n\n> > You can make use of pg_buffercache in order to see what is actually\n> cached.\n> It seems that there is no such a view in my DB, could it be that the module\n> is not installed?\n\nRight, it's in the postgresql -contrib package.\nAnd you have to \"CREATE EXTENSION pg_buffercache\".\n\n> > - As Laurenz suggested (VACUUM FULL), you might want to move data around.\n> You can try also a dump + restore to narrow the problem to data or disk\n> I launched VACUUM FULL, but it ran very slowly, according to my calculation\n> it might take 17 hours. I will try to do copy data into another table with\n> the same structure or spin up another server, and let you know.\n\nI *suspect* VACUUM FULL won't help, since (AIUI) it copies all \"visible\" tuples\nfrom the source table into a new table (and updates indices as necessary). It\ncan resolve bloat due to historic DELETEs, but since I think your table was\nwritten in reverse order of pkey, I think it'll also copy it in reverse order.\nCLUSTER will fix that. You can use pg_repack to do so online...but it's going\nto be painful for a table+toast 1TiB in size: it'll take all day, and also\nrequire an additional 1TB while running (same as VAC FULL).\n\nJustin\n\n",
"msg_date": "Mon, 24 Sep 2018 18:34:23 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
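[Editor's note: a sketch of the two workarounds named above — forcing a bitmap heap scan for one session, and adding a BRIN index whose scans always produce bitmap heap scans. The BRIN index name is illustrative; the id range is the "slow" one from the quoted plans.]

-- 1) Steer the planner away from a plain index scan for this session only
SET enable_indexscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*), sum(length(content::text))
FROM articles
WHERE article_id BETWEEN 100021040000 AND 100021060000;
RESET enable_indexscan;

-- 2) A BRIN index on the id column
CREATE INDEX articles_id_brin ON articles USING brin (article_id);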
{
"msg_contents": "> And if you run?\n> select count(*) from articles where article_id between %s and %s\n> ie without reading json, is your buffers hit count increasing?\nTried this. This is somewhat interesting, too... Even index-only scan is\nfaster for the \"fast\" range. The results are consistently fast in it, with\nsmall and constant numbers of hits and reads. For the big one, in contrary,\nit shows huge number of hits (why? how it manages to do the same with\nlesser blocks access in \"fast\" range?) and the duration is \"jumping\" with\nhigher values in average.\n\"Fast\": https://i.stack.imgur.com/63I9k.png\n\"Slow\": https://i.stack.imgur.com/QzI3N.png\nNote that results on the charts are averaged by 1M, but particular values\nin \"slow\" range reached 4 s, while maximum execution time for the \"fast\"\nrange was only 0.3 s.\n\nRegards,\nVlad\n\n> And if you run?> select count(*) from articles where article_id between %s and %s> ie without reading json, is your buffers hit count increasing?Tried this. This is somewhat interesting, too... Even index-only scan is faster for the \"fast\" range. The results are consistently fast in it, with small and constant numbers of hits and reads. For the big one, in contrary, it shows huge number of hits (why? how it manages to do the same with lesser blocks access in \"fast\" range?) and the duration is \"jumping\" with higher values in average.\"Fast\": https://i.stack.imgur.com/63I9k.png\"Slow\": https://i.stack.imgur.com/QzI3N.pngNote that results on the charts are averaged by 1M, but particular values in \"slow\" range reached 4 s, while maximum execution time for the \"fast\" range was only 0.3 s.Regards,Vlad",
"msg_date": "Mon, 24 Sep 2018 17:21:26 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "> This seems significant..it means the heap was probably written in\nbackwards\norder relative to the IDs, and the OS readahead is ineffective when index\nscanning across a range of IDs.\nBut again, why is it different for one range and another? It was reversed\nfor both ranges.\n\n> I would definitely want to make that a partitioned table\nYes, I believe it will be partitioned in the future.\n\n> I *suspect* VACUUM FULL won't help, since (AIUI) it copies all \"visible\"\ntuples from the source table into a new table (and updates indices as\nnecessary). It can resolve bloat due to historic DELETEs, but since I\nthink your table was written in reverse order of pkey, I think it'll also\ncopy it in reverse order.\nI am going copy the slow range into a table nearby and see if it reproduces\n(I hope \"INSERT INTO t2 SELECT * FROM t1 WHERE ...\" will keep existing\norder of rows). Then I could try the same after CLUSTER.\n\nRegards,\nVlad\n\n> This seems significant..it means the heap was probably written in backwardsorder relative to the IDs, and the OS readahead is ineffective when indexscanning across a range of IDs.But again, why is it different for one range and another? It was reversed for both ranges.> I would definitely want to make that a partitioned tableYes, I believe it will be partitioned in the future.> I *suspect* VACUUM FULL won't help, since (AIUI) it copies all \"visible\" tuples from the source table into a new table (and updates indices as necessary). It can resolve bloat due to historic DELETEs, but since I think your table was written in reverse order of pkey, I think it'll also copy it in reverse order.I am going copy the slow range into a table nearby and see if it reproduces (I hope \"INSERT INTO t2 SELECT * FROM t1 WHERE ...\" will keep existing order of rows). Then I could try the same after CLUSTER.Regards,Vlad",
"msg_date": "Mon, 24 Sep 2018 17:59:12 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
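[Editor's note: a sketch of the copy-and-reorder experiment described above. The copy table name is hypothetical; the PK index name on the copy is the one PostgreSQL would generate when LIKE ... INCLUDING ALL copies the primary key.]

-- Hypothetical copy of the "slow" range, keeping columns, constraints and indexes
CREATE TABLE articles_slow_copy (LIKE articles INCLUDING ALL);

INSERT INTO articles_slow_copy
SELECT * FROM articles
WHERE article_id BETWEEN 10000000000 AND 100030000000;

-- Rewrite the copy in primary-key order to test whether heap ordering
-- is what makes this range slow
CLUSTER articles_slow_copy USING articles_slow_copy_pkey;
ANALYZE articles_slow_copy;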
{
"msg_contents": "> did you try either 1) forcing a bitmap scan (of only one index), to force\nthe heap reads to be ordered, if not sequential? SET enable_indexscan=off\n(and maybe SET enable_seqscan=off and others as needed).\nDisabling index scan made it bitmap.\nIt is surprising, but this increased read speed in both ranges.\nIt came two times for \"fast\" range and 3 times faster for \"slow\" range (for\ncertain segments of data I checked on, the whole experiment takes a while\nthough).\nBut there is still a difference between the ranges, it became now ~20 MB/s\nvs ~6 MB/s.\n\nVlad\n\n> did you try either 1) forcing a bitmap scan (of only one index), to force the heap reads to be ordered, if not sequential? SET enable_indexscan=off (and maybe SET enable_seqscan=off and others as needed).Disabling index scan made it bitmap.It is surprising, but this increased read speed in both ranges.It came two times for \"fast\" range and 3 times faster for \"slow\" range (for certain segments of data I checked on, the whole experiment takes a while though).But there is still a difference between the ranges, it became now ~20 MB/s vs ~6 MB/s.Vlad",
"msg_date": "Mon, 24 Sep 2018 18:11:12 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "On Mon, Sep 24, 2018 at 05:59:12PM -0700, Vladimir Ryabtsev wrote:\n> > I *suspect* VACUUM FULL won't help, since (AIUI) it copies all \"visible\"\n...\n> I am going copy the slow range into a table nearby and see if it reproduces\n> (I hope \"INSERT INTO t2 SELECT * FROM t1 WHERE ...\" will keep existing\n> order of rows). Then I could try the same after CLUSTER.\n\nIf it does an index scan, I think that will badly fail to keep the same order\nof heap TIDs - it'll be inserting rows in ID order rather than in (I guess)\nreverse ID order.\n\nJustin\n\n",
"msg_date": "Mon, 24 Sep 2018 21:19:54 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "> If it does an index scan, I think that will badly fail to keep the same\norder of heap TIDs - it'll be inserting rows in ID order rather than in (I\nguess) reverse ID order.\nAccording to the plan, it's gonna be seq. scan with filter.\n\nVlad\n\n> If it does an index scan, I think that will badly fail to keep the same order of heap TIDs - it'll be inserting rows in ID order rather than in (I guess) reverse ID order.According to the plan, it's gonna be seq. scan with filter.Vlad",
"msg_date": "Mon, 24 Sep 2018 19:31:06 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "On Mon, Sep 24, 2018 at 05:59:12PM -0700, Vladimir Ryabtsev wrote:\n> > This seems significant..it means the heap was probably written in\n> backwards\n> order relative to the IDs, and the OS readahead is ineffective when index\n> scanning across a range of IDs.\n> But again, why is it different for one range and another? It was reversed\n> for both ranges.\n\nI don't have an explaination for it.. but I'd be curious to know\npg_stats.correlation for the id column:\n\nSELECT schemaname, tablename, attname, correlation FROM pg_stats WHERE tablename='articles' AND column='article_id' LIMIT 1;\n\nJustin\n\n",
"msg_date": "Mon, 24 Sep 2018 21:38:52 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "> but I'd be curious to know\n> SELECT schemaname, tablename, attname, correlation FROM pg_stats WHERE\ntablename='articles' AND column='article_id' LIMIT 1;\nI think you meant 'attname'. It gives\nstorage articles article_id -0.77380306\n\nVlad\n\n> but I'd be curious to know> SELECT schemaname, tablename, attname, correlation FROM pg_stats WHERE tablename='articles' AND column='article_id' LIMIT 1;I think you meant 'attname'. It givesstorage articles article_id -0.77380306Vlad",
"msg_date": "Mon, 24 Sep 2018 20:40:28 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
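[Editor's note: the corrected form of the pg_stats query, with attname in place of column as Vlad points out above. A correlation near -0.77 suggests the heap is stored largely in descending article_id order, consistent with the reverse-order import mentioned earlier.]

SELECT schemaname, tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'articles' AND attname = 'article_id';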
{
"msg_contents": "Hi, Vladimir,\n\n\nReading the whole thread it seems you should look deeper into IO subsystem.\n\n1) Which file system are you using?\n\n2) What is the segment layout of the LVM PVs and LVs? See\nhttps://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/report_object_selection.html\nhow to check. If data is fragmented, maybe the disks are doing a lot of\nseeking?\n\n3) Do you use LVM for any \"extra\" features, such as snapshots?\n\n4) You can try using seekwatcher to see where on the disk the slowness\nis occurring. You get a chart similar to this\nhttp://kernel.dk/dd-md0-xfs-pdflush.png\n\n5) BCC is a collection of tools that might shed a light on what is\nhappening. https://github.com/iovisor/bcc\n\n\nKind regards,\n\nGasper\n\n\nOn 21. 09. 2018 02:07, Vladimir Ryabtsev wrote:\n> I am experiencing a strange performance problem when accessing JSONB\n> content by primary key.\n>\n> My DB version() is PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on\n> x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4)\n> 4.8.4, 64-bit\n> postgres.conf: https://justpaste.it/6pzz1\n> uname -a: Linux postgresnlpslave 4.4.0-62-generic #83-Ubuntu SMP Wed\n> Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\n> The machine is virtual, running under Hyper-V.\n> Processor: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 1x1 cores\n> Disk storage: the host has two vmdx drives, first shared between the\n> root partition and an LVM PV, second is a single LVM PV. Both PVs are\n> in a VG containing swap and postgres data partitions. The data is\n> mostly on the first PV.\n>\n> I have such a table:\n>\n> CREATE TABLE articles\n> (\n> article_id bigint NOT NULL,\n> content jsonb NOT NULL,\n> published_at timestamp without time zone NOT NULL,\n> appended_at timestamp without time zone NOT NULL,\n> source_id integer NOT NULL,\n> language character varying(2) NOT NULL,\n> title text NOT NULL,\n> topicstopic[] NOT NULL,\n> objects object[] NOT NULL,\n> cluster_id bigint NOT NULL,\n> CONSTRAINT articles_pkey PRIMARY KEY (article_id)\n> )\n>\n> We have a Python lib (using psycopg2 driver) to access this table. It\n> executes simple queries to the table, one of them is used for bulk\n> downloading of content and looks like this:\n>\n> select content from articles where id between $1 and $2\n>\n> I noticed that with some IDs it works pretty fast while with other it\n> is 4-5 times slower. It is suitable to note, there are two main\n> 'categories' of IDs in this table: first is range 270000000-500000000,\n> and second is range 10000000000-100030000000. For the first range it\n> is 'fast' and for the second it is 'slow'. Besides larger absolute\n> numbers withdrawing them from int to bigint, values in the second\n> range are more 'sparse', which means in the first range values are\n> almost consequent (with very few 'holes' of missing values) while in\n> the second range there are much more 'holes' (average filling is 35%).\n> Total number of rows in the first range: ~62M, in the second range: ~10M.\n>\n> I conducted several experiments to eliminate possible influence of\n> library's code and network throughput, I omit some of them. 
I ended up\n> with iterating over table with EXPLAIN to simulate read load:\n>\n> explain (analyze, buffers)\n> select count(*), sum(length(content::text)) from articles where\n> article_id between %s and %s\n>\n> Sample output:\n>\n> Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual\n> time=6625.993..6625.995 rows=1 loops=1)\n> Buffers: shared hit=26847 read=3914\n> -> Index Scan using articles_pkey on articles (cost=0.57..8573.35\n> rows=5005 width=107) (actual time=21.649..1128.004 rows=5000 loops=1)\n> Index Cond: ((article_id >= 438000000) AND (article_id <=\n> 438005000))\n> Buffers: shared hit=4342 read=671\n> Planning time: 0.393 ms\n> Execution time: 6626.136 ms\n>\n> Aggregate (cost=5533.02..5533.03 rows=1 width=16) (actual\n> time=33219.100..33219.102 rows=1 loops=1)\n> Buffers: shared hit=6568 read=7104\n> -> Index Scan using articles_pkey on articles (cost=0.57..5492.96\n> rows=3205 width=107) (actual time=22.167..12082.624 rows=2416 loops=1)\n> Index Cond: ((article_id >= '100021000000'::bigint) AND\n> (article_id <= '100021010000'::bigint))\n> Buffers: shared hit=50 read=2378\n> Planning time: 0.517 ms\n> Execution time: 33219.218 ms\n>\n> During iteration, I parse the result of EXPLAIN and collect series of\n> following metrics:\n>\n> - buffer hits/reads for the table,\n> - buffer hits/reads for the index,\n> - number of rows (from \"Index Scan...\"),\n> - duration of execution.\n>\n> Based on metrics above I calculate inherited metrics:\n>\n> - disk read rate: (index reads + table reads) * 8192 / duration,\n> - reads ratio: (index reads + table reads) / (index reads + table\n> reads + index hits + table hits),\n> - data rate: (index reads + table reads + index hits + table hits) *\n> 8192 / duration,\n> - rows rate: number of rows / duration.\n>\n> Since \"density\" of IDs is different in \"small\" and \"big\" ranges, I\n> adjusted size of chunks in order to get around 5000 rows on each\n> iteration in both cases, though my experiments show that chunk size\n> does not really matter a lot.\n>\n> The issue posted at the very beginning of my message was confirmed for\n> the *whole* first and second ranges (so it was not just caused by\n> randomly cached data).\n>\n> To eliminate cache influence, I restarted Postgres server with\n> flushing buffers:\n>\n> /$ postgresql stop; sync; echo 3 > /proc/sys/vm/drop_caches;\n> postgresql start\n>\n> After this I repeated the test and got next-to-same picture.\n>\n> \"Small' range: disk read rate is around 10-11 MB/s uniformly across\n> the test. Output rate was 1300-1700 rows/s. Read ratio is around 13%\n> (why? Shouldn't it be ~ 100% after drop_caches?).\n> \"Big\" range: In most of time disk read speed was about 2 MB/s but\n> sometimes it jumped to 26-30 MB/s. Output rate was 70-80 rows/s (but\n> varied a lot and reached 8000 rows/s). Read ratio also varied a lot.\n>\n> I rendered series from the last test into charts:\n> \"Small\" range: https://i.stack.imgur.com/3Zfml.png\n> \"Big\" range (insane): https://i.stack.imgur.com/VXdID.png\n>\n> During the tests I verified disk read speed with iotop and found its\n> indications very close to ones calculated by me based on EXPLAIN\n> BUFFERS. I cannot say I was monitoring it all the time, but I\n> confirmed it when it was 2 MB/s and 22 MB/s on the second range and 10\n> MB/s on the first range. I also checked with htop that CPU was not a\n> bottleneck and was around 3% during the tests.\n>\n> The issue is reproducible on both master and slave servers. 
My tests\n> were conducted on slave, while there were no any other load on DBMS,\n> or disk activity on the host unrelated to DBMS.\n>\n> My only assumption is that different fragments of data are being read\n> with different speed due to virtualization or something, but... why is\n> it so strictly bound to these ranges? Why is it the same on two\n> different machines?\n>\n> The file system performance measured by dd:\n>\n> root@postgresnlpslave:/# echo 3 > /proc/sys/vm/drop_caches \n> root@postgresnlpslave:/# dd if=/dev/mapper/postgresnlpslave--vg-root\n> of=/dev/null bs=8K count=128K\n> 131072+0 records in\n> 131072+0 records out\n> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.12304 s, 506 MB/s\n>\n> Am I missing something? What else can I do to narrow down the cause?\n>\n> P.S. Initially posted on\n> https://stackoverflow.com/questions/52105172/why-could-different-data-in-a-table-be-processed-with-different-performance\n>\n> Regards,\n> Vlad",
"msg_date": "Tue, 25 Sep 2018 08:32:09 +0200",
"msg_from": "Gasper Zejn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
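A minimal sketch of the cold-cache measurement loop described in the message above. The table, the query shape and the drop_caches sequence come from the thread itself; the database name, service command and the id range walked here are assumptions for illustration.

    # flush PostgreSQL and OS caches between runs, as in the message above
    sudo service postgresql stop
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    sudo service postgresql start

    # walk an id range in 5000-id chunks and collect EXPLAIN (ANALYZE, BUFFERS) output
    for start in $(seq 438000000 5000 438100000); do
        end=$((start + 5000))
        psql -d articles_db -Atc "EXPLAIN (ANALYZE, BUFFERS)
            SELECT count(*), sum(length(content::text))
            FROM articles WHERE article_id BETWEEN $start AND $end;"
    done

The "shared hit" and "read" counters parsed from each run are the raw numbers behind the per-chunk read-rate and hit-ratio series in the charts.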
{
"msg_contents": "On 25/09/18 00:28, Vladimir Ryabtsev wrote:\n>\n> > it is not unusual to have 1GB cache or more... and do not forget to drop the cache between tests + do a sync\n> I conducted several long runs of dd, so I am sure that this numbers are fairly correct. However, what worries me is that I test sequential read speed while during my experiments Postgres might need to read from random places thus reducing real read speed dramatically. I have a feeling that this can be the reason.\n> I also reviewed import scripts and found the import was done in DESCENDING order of IDs. It was so to get most recent records sooner, may be it caused some inefficiency in the storage... But again, it was so for both ranges.\n>\n> > - how big is your index?\n> pg_table_size('articles_pkey') = 1561 MB\n>\n> > - how big is the table?\n> pg_table_size('articles') = 427 GB\n> pg_table_size('pg_toast.pg_toast_221558') = 359 GB\n>\n\nSince you have a very big toast table, given you are using spinning disks, I think that increasing the block size will bring benefits. (Also partitioning is not a bad idea.)\n\nIf my understanding of TOAST is correct, if data will fit blocks of let's say 16 or 24 KB then one block retrieval from Postgres will result in less seeks on the disk and less possibility data gets sparse on your disk. (a very quick and dirty calculation, shows your average block size is 17KB)\n\nOne thing you might want to have a look at, is again the RAID controller and your OS. You might want to have all of them aligned in block size, or maybe have Postgres ones a multiple of what OS and RAID controller have.\n\n\n\n> > - given the size of shared_buffers, almost 2M blocks should fit, but you say 2 consecutive runs still are hitting the disk. That's strange indeed since you are using way more than 2M blocks.\n> TBH, I cannot say I understand your calculations with number of blocks...\nshared_buffers = 15GB IIRC (justpaste link is gone)\n\n15 * 1024 *1024 = 15728640 KB\n\nusing 8KB blocks = 1966080 total blocks\n\nif you query shared_buffers you should get the same number of total available blocks\n\n> But to clarify: consecutive runs with SAME parameters do NOT hit the disk, only the first one does, consequent ones read only from buffer cache.\n>\nI m a bit confused.. every query you pasted contains 'read':\n\n Buffers: shared hit=50 read=2378\n\nand 'read' means you are reading from disk (or OS cache). Or not?\n\n\n\n> > - As Laurenz suggested (VACUUM FULL), you might want to move data around. You can try also a dump + restore to narrow the problem to data or disk\n> I launched VACUUM FULL, but it ran very slowly, according to my calculation it might take 17 hours. I will try to do copy data into another table with the same structure or spin up another server, and let you know.\n>\ncool, that should also clarify if the reverse order matters or not\n\n> > - You might also want to try to see the disk graph of Windows, while you are running your tests. It can show you if data (and good to know how much) is actually fetching from disk or not.\n> I wanted to do so but I don't have access to Hyper-V server, will try to request credentials from admins.\n>\n> Couple more observations:\n> 1) The result of my experiment is almost not affected by other server load. Another user was running a query (over this table) with read speed ~130 MB/s, while with my query read at 1.8-2 MB/s.\n> 2) iotop show higher IO % (~93-94%) with slower read speed (though it is not quite clear what this field is). 
A process from example above had ~55% IO with 130 MB/s while my process had ~93% with ~2MB/s.\n>\nI think because you are looking at 'IO' column which indicates (from manual) '..the percentage of time the thread/process spent [..] \nwhile waiting on I/O.'\n\n> Regards,\n> Vlad\n>\n\nregards,\n\nfabio pardi",
"msg_date": "Tue, 25 Sep 2018 11:14:47 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
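The block arithmetic above can be read straight out of the catalog; pg_settings reports shared_buffers in 8 kB blocks, so no manual conversion is needed (a hedged illustration, not a query taken from the thread).

    SHOW block_size;       -- 8192 on a default build
    SHOW shared_buffers;   -- e.g. 15GB
    -- shared_buffers expressed in blocks: 15GB / 8KB = 1966080
    SELECT name, setting::bigint AS blocks, unit
    FROM pg_settings
    WHERE name = 'shared_buffers';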
{
"msg_contents": "> 1) Which file system are you using?\n From Linux's view it's ext4. Real vmdx file on Hyper-V is stored on NTFS,\nas far as I know.\n\n> 2) What is the segment layout of the LVM PVs and LVs?\nI am a bit lost with it. Is that what you are asking about?\nmaster:\n# pvs --segments\n PV VG Fmt Attr PSize PFree Start SSize\n /dev/sda5 ubuntu-vg lvm2 a-- 19.76g 20.00m 0 4926\n /dev/sda5 ubuntu-vg lvm2 a-- 19.76g 20.00m 4926 127\n /dev/sda5 ubuntu-vg lvm2 a-- 19.76g 20.00m 5053 5\n# lvs --segments\n LV VG Attr #Str Type SSize\n root ubuntu-vg -wi-ao--- 1 linear 19.24g\n swap_1 ubuntu-vg -wi-ao--- 1 linear 508.00m\n\nslave:\n# pvs --segments\n PV VG Fmt Attr PSize PFree Start SSize\n /dev/sda3 postgresnlpslave-vg lvm2 a-- 429.77g 0 0 110021\n /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 0 28392\n /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 28392 2199\n /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 30591 2560\n /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 33151 10246\n /dev/sdb1 postgresnlpslave-vg lvm2 a-- 512.00g 0 0 131071\n# lvs --segments\n LV VG Attr #Str Type SSize\n root postgresnlpslave-vg -wi-ao---- 1 linear 110.91g\n root postgresnlpslave-vg -wi-ao---- 1 linear 40.02g\n root postgresnlpslave-vg -wi-ao---- 1 linear 10.00g\n root postgresnlpslave-vg -wi-ao---- 1 linear 429.77g\n root postgresnlpslave-vg -wi-ao---- 1 linear 512.00g\n swap_1 postgresnlpslave-vg -wi-ao---- 1 linear 8.59g\n\n> 3) Do you use LVM for any \"extra\" features, such as snapshots?\nI don't think so, but how to check? vgs gives #SN = 0, is that it?\n\n> 4) You can try using seekwatcher to see where on the disk the slowness is\noccurring. You get a chart similar to this\nhttp://kernel.dk/dd-md0-xfs-pdflush.png\n> 5) BCC is a collection of tools that might shed a light on what is\nhappening. https://github.com/iovisor/bcc\nWill look into it.\n\nRegards,\nVlad\n\n> 1) Which file system are you using?From Linux's view it's ext4. Real vmdx file on Hyper-V is stored on NTFS, as far as I know.> 2) What is the segment layout of the LVM PVs and LVs?I am a bit lost with it. Is that what you are asking about?master:# pvs --segments PV VG Fmt Attr PSize PFree Start SSize /dev/sda5 ubuntu-vg lvm2 a-- 19.76g 20.00m 0 4926 /dev/sda5 ubuntu-vg lvm2 a-- 19.76g 20.00m 4926 127 /dev/sda5 ubuntu-vg lvm2 a-- 19.76g 20.00m 5053 5# lvs --segments LV VG Attr #Str Type SSize root ubuntu-vg -wi-ao--- 1 linear 19.24g swap_1 ubuntu-vg -wi-ao--- 1 linear 508.00mslave:# pvs --segments PV VG Fmt Attr PSize PFree Start SSize /dev/sda3 postgresnlpslave-vg lvm2 a-- 429.77g 0 0 110021 /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 0 28392 /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 28392 2199 /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 30591 2560 /dev/sda5 postgresnlpslave-vg lvm2 a-- 169.52g 0 33151 10246 /dev/sdb1 postgresnlpslave-vg lvm2 a-- 512.00g 0 0 131071# lvs --segments LV VG Attr #Str Type SSize root postgresnlpslave-vg -wi-ao---- 1 linear 110.91g root postgresnlpslave-vg -wi-ao---- 1 linear 40.02g root postgresnlpslave-vg -wi-ao---- 1 linear 10.00g root postgresnlpslave-vg -wi-ao---- 1 linear 429.77g root postgresnlpslave-vg -wi-ao---- 1 linear 512.00g swap_1 postgresnlpslave-vg -wi-ao---- 1 linear 8.59g> 3) Do you use LVM for any \"extra\" features, such as snapshots?I don't think so, but how to check? vgs gives #SN = 0, is that it?> 4) You can try using seekwatcher to see where on the disk the slowness is occurring. 
You get a chart similar to this http://kernel.dk/dd-md0-xfs-pdflush.png > 5) BCC is a collection of tools that might shed a light on what is happening. https://github.com/iovisor/bccWill look into it.Regards,Vlad",
"msg_date": "Tue, 25 Sep 2018 13:28:22 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
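One possible way to double-check the "no snapshots" guess above, reusing the VG name already shown; the report columns assume a reasonably recent lvm2.

    # snap_count > 0 would indicate snapshot LVs in the volume group
    vgs -o vg_name,lv_count,snap_count postgresnlpslave-vg
    # an 's' (or 'S') in the first character of lv_attr also marks a snapshot volume
    lvs -o lv_name,lv_attr,segtype,seg_count postgresnlpslave-vg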
{
"msg_contents": "> Since you have a very big toast table, given you are using spinning\ndisks, I think that increasing the block size will bring benefits.\nBut will it worsen caching? I will have lesser slots in cache. Also will it\naffect required storage space?\n\n>> consecutive runs with SAME parameters do NOT hit the disk, only the\nfirst one does, consequent ones read only from buffer cache.\n> I m a bit confused.. every query you pasted contains 'read':\n> Buffers: shared hit=50 read=2378\n> and 'read' means you are reading from disk (or OS cache). Or not?\nYes, sorry, it was just my misunderstanding of what is \"consecutive\". To\nmake it clear: I iterate over all data in table with one request and\ndifferent parameters on each iteration (e.g. + 5000 both borders), in this\ncase I get disk reads on each query run (much more reads on \"slow\" range).\nBut if I request data from an area queried previously, it reads from cache\nand does not hit disk (both ranges). E.g. iterating over 1M of records with\nempty cache takes ~11 minutes in \"fast\" range and ~1 hour in \"slow\" range,\nwhile on second time it takes only ~2 minutes for both ranges (if I don't\ndo drop_caches).\n\nRegards,\nVlad\n\n> Since you have a very big toast table, given you are using spinning disks, I think that increasing the block size will bring benefits.But will it worsen caching? I will have lesser slots in cache. Also will it affect required storage space?>> consecutive runs with SAME parameters do NOT hit the disk, only the first one does, consequent ones read only from buffer cache.> I m a bit confused.. every query you pasted contains 'read':> Buffers: shared hit=50 read=2378> and 'read' means you are reading from disk (or OS cache). Or not? Yes, sorry, it was just my misunderstanding of what is \"consecutive\". To make it clear: I iterate over all data in table with one request and different parameters on each iteration (e.g. + 5000 both borders), in this case I get disk reads on each query run (much more reads on \"slow\" range). But if I request data from an area queried previously, it reads from cache and does not hit disk (both ranges). E.g. iterating over 1M of records with empty cache takes ~11 minutes in \"fast\" range and ~1 hour in \"slow\" range, while on second time it takes only ~2 minutes for both ranges (if I don't do drop_caches).Regards,Vlad",
"msg_date": "Wed, 26 Sep 2018 10:15:15 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
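The caching behaviour described above can also be confirmed directly with the pg_buffercache extension instead of being inferred from timings; a sketch, assuming superuser access and the relation names used earlier in the thread.

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- how much of the table and its primary key currently sit in shared_buffers
    SELECT c.relname, count(*) AS buffers, count(*) * 8 / 1024 AS approx_mb
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid::regclass)
    WHERE c.relname IN ('articles', 'articles_pkey')
      AND b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
    GROUP BY c.relname;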
{
"msg_contents": "\n\nOn 09/26/2018 07:15 PM, Vladimir Ryabtsev wrote:\n>> Since you have a very big toast table, given you are using spinning\n> disks, I think that increasing the block size will bring benefits.\n> But will it worsen caching? I will have lesser slots in cache. Also will\n> it affect required storage space?\n\n\nI think in your case it will not worsen the cache. You will have lesser\nslots in the cache, but the total available cache will indeed be\nunchanged (half the blocks of double the size). It could affect space\nstorage, for the smaller blocks. Much depends which block size you\nchoose and how is actually your data distributed in the ranges you\nmentioned. (eg: range 10K -20 might be more on the 10 or more on the 20\nside.).\n\nImagine you request a record of 24 KB, and you are using 8KB blocks. It\nwill result in 3 different block lookup/request/returned. Those 3 blocks\nmight be displaced on disk, resulting maybe in 3 different lookups.\nHaving all in one block, avoids this problem.\nThe cons is that if you need to store 8KB of data, you will allocate 24KB.\nYou say you do not do updates, so it might also be the case that when\nyou write data all at once (24 KB in one go) it goes all together in a\ncontiguous strip. Therefore the block size change here will bring nothing.\nThis is very much data and usage driven. To change block size is a\npainful thing, because IIRC you do that at db initialization time\n\nSimilarly, if your RAID controller uses for instance 128KB blocks, each\ntime you are reading one block of 8KB, it will return to you a whole\n128KB chunk, which is quite a waste of resources.\n\nIf your 'slow' range is maybe fragmented here and there on the disk, not\nhaving a proper alignment between Postgres blocks/ Filesystem/RAID\nmight worsen the problem of orders of magnitude. This is very true on\nspinning disks, where the seek time is noticeable.\n\nNote that trying to set a very small block size has the opposite effect:\nyou might hit the IOPS of your hardware, and create a bottleneck. (been\nthere while benchmarking some new hardware)\n\nBut before going through all this, I would first try to reload the data\nwith dump+restore into a new machine, and see how it behaves.\n\nHope it helps.\n\nregards,\n\nfabio pardi\n\n> \n>>> consecutive runs with SAME parameters do NOT hit the disk, only the\n> first one does, consequent ones read only from buffer cache.\n>> I m a bit confused.. every query you pasted contains 'read':\n>> Buffers: shared hit=50 read=2378\n>> and 'read' means you are reading from disk (or OS cache). Or not? \n> Yes, sorry, it was just my misunderstanding of what is \"consecutive\". To\n> make it clear: I iterate over all data in table with one request and\n> different parameters on each iteration (e.g. + 5000 both borders), in\n> this case I get disk reads on each query run (much more reads on \"slow\"\n> range). But if I request data from an area queried previously, it reads\n> from cache and does not hit disk (both ranges). E.g. iterating over 1M\n> of records with empty cache takes ~11 minutes in \"fast\" range and ~1\n> hour in \"slow\" range, while on second time it takes only ~2 minutes for\n> both ranges (if I don't do drop_caches).\n> \n> Regards,\n> Vlad\n> \n\n",
"msg_date": "Thu, 27 Sep 2018 11:25:58 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
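The "average block size is 17KB" estimate above can be sanity-checked by counting how many ~2 kB TOAST chunks an average value occupies; the toast table name is the one quoted earlier in the thread, superuser access is assumed, and the LIMIT is only there because the full toast table is ~359 GB.

    SELECT avg(chunks) AS avg_chunks_per_value,
           avg(chunks) * 2 AS approx_kb_per_value
    FROM (
        SELECT chunk_id, count(*) AS chunks
        FROM pg_toast.pg_toast_221558
        GROUP BY chunk_id
        LIMIT 100000
    ) s;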
{
"msg_contents": "> Does your LVM have readahead\n> ramped up ? Try lvchange -r 65536 data/postgres (or similar).\n\nChanged this from 256 to 65536.\nIf it is supposed to take effect immediately (no server reboot or other\nchanges), then I've got no changes in performance. No at all.\n\nVlad\n\n> Does your LVM have readahead> ramped up ? Try lvchange -r 65536 data/postgres (or similar).Changed this from 256 to 65536.If it is supposed to take effect immediately (no server reboot or other changes), then I've got no changes in performance. No at all.Vlad",
"msg_date": "Fri, 28 Sep 2018 02:16:46 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
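If the lvchange had taken effect it should be visible both in the LVM metadata and on the device-mapper node itself; a quick check, reusing the device path from the dd test earlier in the thread.

    lvs -o lv_name,lv_read_ahead,lv_kernel_read_ahead postgresnlpslave-vg
    # reported in 512-byte sectors
    blockdev --getra /dev/mapper/postgresnlpslave--vg-root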
{
"msg_contents": "> You will have lesser\n> slots in the cache, but the total available cache will indeed be\n> unchanged (half the blocks of double the size).\nBut we have many other tables, queries to which may suffer from smaller\nnumber of blocks in buffer cache.\n\n> To change block size is a\n> painful thing, because IIRC you do that at db initialization time\nMy research shows that I can only change it in compile time.\nhttps://www.postgresql.org/docs/10/static/install-procedure.html\nAnd then initdb a new cluster...\nMoreover, this table/schema is not the only in the database, there is a\nbunch of other schemas. And we will need to dump-restore everything... So\nthis is super-painful.\n\n> It could affect space storage, for the smaller blocks.\nBut at which extent? As I understand it is not something about \"alignment\"\nto block size for rows? Is it only low-level IO thing with datafiles?\n\n> But before going through all this, I would first try to reload the data\n> with dump+restore into a new machine, and see how it behaves.\nYes, this is the plan, I'll be back once I find enough disk space for my\nfurther experiments.\n\nVlad\n\n> You will have lesser> slots in the cache, but the total available cache will indeed be> unchanged (half the blocks of double the size).But we have many other tables, queries to which may suffer from smaller number of blocks in buffer cache.> To change block size is a> painful thing, because IIRC you do that at db initialization timeMy research shows that I can only change it in compile time.https://www.postgresql.org/docs/10/static/install-procedure.htmlAnd then initdb a new cluster...Moreover, this table/schema is not the only in the database, there is a bunch of other schemas. And we will need to dump-restore everything... So this is super-painful.> It could affect space storage, for the smaller blocks.But at which extent? As I understand it is not something about \"alignment\" to block size for rows? Is it only low-level IO thing with datafiles?> But before going through all this, I would first try to reload the data> with dump+restore into a new machine, and see how it behaves.Yes, this is the plan, I'll be back once I find enough disk space for my further experiments.Vlad",
"msg_date": "Fri, 28 Sep 2018 02:56:24 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
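For reference, "change it in compile time" means rebuilding the server with a different --with-blocksize, initdb'ing a fresh cluster and dump/restoring into it; the prefix, value and paths below are placeholders, not a recommendation.

    ./configure --with-blocksize=16 --prefix=/opt/pgsql-16k
    make && make install
    /opt/pgsql-16k/bin/initdb -D /var/lib/postgresql/16k-cluster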
{
"msg_contents": "\n\nOn 28/09/18 11:56, Vladimir Ryabtsev wrote:\n>\n> > It could affect space storage, for the smaller blocks.\n> But at which extent? As I understand it is not something about \"alignment\" to block size for rows? Is it only low-level IO thing with datafiles?\n>\n\nMaybe 'for the smaller blocks' was not very meaningful.\nWhat i mean is 'in terms of wasted disk space: '\n\nIn an example:\n\ncreate table test_space (i int);\n\nempty table:\n\nselect pg_total_relation_size('test_space');\n pg_total_relation_size\n------------------------\n 0\n(1 row)\n\ninsert one single record:\n\ninsert into test_space values (1);\n\n\nselect pg_total_relation_size('test_space');\n pg_total_relation_size\n------------------------\n 8192\n\n\nselect pg_relation_filepath('test_space');\n pg_relation_filepath\n----------------------\n base/16384/179329\n\n\nls -alh base/16384/179329\n-rw------- 1 postgres postgres 8.0K Sep 28 16:09 base/16384/179329\n\nThat means, if your block size was bigger, then you would have bigger space allocated for one single record.\n\nregards,\n\nfabio aprdi\n\n",
"msg_date": "Fri, 28 Sep 2018 16:15:47 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "> That means, if your block size was bigger, then you would have bigger\nspace allocated for one single record.\nBut if I INSERT second, third ... hundredth record in the table, the size\nremains 8K.\nSo my point is that if one decides to increase block size, increasing\nstorage space is not so significant, because it does not set minimum\nstorage unit for a row.\n\nvlad\n\n> That means, if your block size was bigger, then you would have bigger space allocated for one single record.But if I INSERT second, third ... hundredth record in the table, the size remains 8K.So my point is that if one decides to increase block size, increasing storage space is not so significant, because it does not set minimum storage unit for a row.vlad",
"msg_date": "Fri, 28 Sep 2018 12:51:03 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "\n\nOn 28/09/18 21:51, Vladimir Ryabtsev wrote:\n> > That means, if your block size was bigger, then you would have bigger space allocated for one single record.\n> But if I INSERT second, third ... hundredth record in the table, the size remains 8K.\n\n\n> So my point is that if one decides to increase block size, increasing storage space is not so significant, because it does not set minimum storage unit for a row.\n>\nah, yes, correct. Now we are on the same page.\n\nGood luck with the rest of things you are going to try out, and let us know your findings.\n\nregards,\n\nfabio pardi\n\n> vlad\n\n\n",
"msg_date": "Mon, 1 Oct 2018 09:35:38 +0200",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
{
"msg_contents": "FYI, posting an intermediate update on the issue.\n\nI disabled index scans to keep existing order, and copied part of the\n\"slow\" range into another table (3M rows in 2.2 GB table + 17 GB toast). I\nwas able to reproduce slow readings from this copy. Then I performed\nCLUSTER of the copy using PK and everything improved significantly. Overall\ntime became 6 times faster with disk read speed (reported by iotop)\n30-60MB/s.\n\nI think we can take bad physical data distribution as the main hypothesis\nof the issue. I was not able to launch seekwatcher though (it does not work\nout of the box in Ubuntu and I failed to rebuild it) and confirm lots of\nseeks.\n\nI still don't have enough disk space to solve the problem with original\ntable, I am waiting for this from admin/devops team.\n\nMy plan is to partition the original table and CLUSTER every partition on\nprimary key once I have space.\n\nBest regards,\nVlad\n\nFYI, posting an intermediate update on the issue.I disabled index scans to keep existing order, and copied part of the \"slow\" range into another table (3M rows in 2.2 GB table + 17 GB toast). I was able to reproduce slow readings from this copy. Then I performed CLUSTER of the copy using PK and everything improved significantly. Overall time became 6 times faster with disk read speed (reported by iotop) 30-60MB/s.I think we can take bad physical data distribution as the main hypothesis of the issue. I was not able to launch seekwatcher though (it does not work out of the box in Ubuntu and I failed to rebuild it) and confirm lots of seeks.I still don't have enough disk space to solve the problem with original table, I am waiting for this from admin/devops team.My plan is to partition the original table and CLUSTER every partition on primary key once I have space.Best regards,Vlad",
"msg_date": "Wed, 10 Oct 2018 03:59:53 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why could different data in a table be processed with different\n performance?"
},
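A condensed sketch of the experiment reported above; the articles table, the "slow" range and the CLUSTER-on-PK step come from the message, while the copy's name and the exact bounds are illustrative.

    SET enable_indexscan = off;    -- keep the existing physical order while copying
    SET enable_bitmapscan = off;

    CREATE TABLE articles_slow_copy AS
    SELECT * FROM articles
    WHERE article_id BETWEEN 100021000000 AND 100029000000;

    ALTER TABLE articles_slow_copy ADD PRIMARY KEY (article_id);
    CLUSTER articles_slow_copy USING articles_slow_copy_pkey;
    ANALYZE articles_slow_copy;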
{
"msg_contents": "Hi!\n\nCould someone discuss about following? It would be great to hear comments!\nThere is a good storage. According to \"fio\", write speed could be e.g. 3 GB/s. (It is First time using the command for me, so I am not certain of the real speed with \"fio\". E.g. with --bs=100m, direct=1, in fio. The measurement result may be faulty also. So, checking still.)\nCurrently there is one table without partitioning. The table contains json data. In PostgreSQL, in Linux.\nWrite speed can be e.g. 300 - 600 MB/s, through PostgreSQL. Measured with dstat while inserting. Shared buffers is large is PostgreSQL.\nWith a storage/disk which \"scales\", is there some way to write faster to the disk in the system through PostgreSQL?\nInside same server.\nDoes splitting data help? Partitioned table / splitting to smaller tables? Should I test it?\nChange settings somewhere? Block sizes? 8 KB / 16 KB, ... \"Dangerous\" to change?\n2nd question, sharding:\nIf the storage / \"disk\" scales, could better *disk writing speed* be achieved (in total) with sharding kind of splitting of data? (Same NAS storage, which scales, in use in all shards.)Sharding or use only one server? From pure disk writing speed point of view.\n\nBR Sam\n\nHi!Could someone discuss about following? It would be great to hear comments!There is a good storage. According to \"fio\", write speed could be e.g. 3 GB/s. (It is First time using the command for me, so I am not certain of the real speed with \"fio\". E.g. with --bs=100m, direct=1, in fio. The measurement result may be faulty also. So, checking still.)Currently there is one table without partitioning. The table contains json data. In PostgreSQL, in Linux.Write speed can be e.g. 300 - 600 MB/s, through PostgreSQL. Measured with dstat while inserting. Shared buffers is large is PostgreSQL.With a storage/disk which \"scales\", is there some way to write faster to the disk in the system through PostgreSQL?Inside same server.Does splitting data help? Partitioned table / splitting to smaller tables? Should I test it?Change settings somewhere? Block sizes? 8 KB / 16 KB, ... \"Dangerous\" to change?2nd question, sharding:If the storage / \"disk\" scales, could better *disk writing speed* be achieved (in total) with sharding kind of splitting of data? (Same NAS storage, which scales, in use in all shards.)Sharding or use only one server? From pure disk writing speed point of view.BR Sam",
"msg_date": "Fri, 12 Oct 2018 16:27:45 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "One big table or split data? Writing data. From disk point of view.\n With a good storage (GBs/s, writing speed)"
},
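The 3 GB/s figure was measured with large sequential writes (--bs=100m), which is not what PostgreSQL does; a hedged fio invocation closer to an 8 kB random-write pattern may be more comparable (file path, size and job count are placeholders).

    fio --name=pg-like-write --filename=/pgdata/fio.test --size=8G \
        --rw=randwrite --bs=8k --direct=1 --ioengine=libaio --iodepth=32 \
        --numjobs=12 --group_reporting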
{
"msg_contents": "Hi!\nStill, \"writing\" speed to disk:\nWith \"fio\" 3 GB / s.With PostgreSQL: 350 -500 MB / s, also with a partitioned table.\n(and something similar with an other DBMS).Monitored with dstat.\nWith about 12 threads, then it does not get better anymore.Same results with data split to 4 partitions, in a partitioned table. CPU load increased, but not full yet.Same results with open_datasync.\nBR Sam\n\n \n \n On pe, lokak. 12, 2018 at 19:27, Sam R.<[email protected]> wrote: Hi!\n\nCould someone discuss about following? It would be great to hear comments!\nThere is a good storage. According to \"fio\", write speed could be e.g. 3 GB/s. (It is First time using the command for me, so I am not certain of the real speed with \"fio\". E.g. with --bs=100m, direct=1, in fio. The measurement result may be faulty also. So, checking still.)\nCurrently there is one table without partitioning. The table contains json data. In PostgreSQL, in Linux.\nWrite speed can be e.g. 300 - 600 MB/s, through PostgreSQL. Measured with dstat while inserting. Shared buffers is large is PostgreSQL.\nWith a storage/disk which \"scales\", is there some way to write faster to the disk in the system through PostgreSQL?\nInside same server.\nDoes splitting data help? Partitioned table / splitting to smaller tables? Should I test it?\nChange settings somewhere? Block sizes? 8 KB / 16 KB, ... \"Dangerous\" to change?\n2nd question, sharding:\nIf the storage / \"disk\" scales, could better *disk writing speed* be achieved (in total) with sharding kind of splitting of data? (Same NAS storage, which scales, in use in all shards.)Sharding or use only one server? From pure disk writing speed point of view.\n\nBR Sam\n \n\nHi!Still, \"writing\" speed to disk:With \"fio\" 3 GB / s.With PostgreSQL: 350 -500 MB / s, also with a partitioned table.(and something similar with an other DBMS).Monitored with dstat.With about 12 threads, then it does not get better anymore.Same results with data split to 4 partitions, in a partitioned table. CPU load increased, but not full yet.Same results with open_datasync.BR Sam On pe, lokak. 12, 2018 at 19:27, Sam R.<[email protected]> wrote: Hi!Could someone discuss about following? It would be great to hear comments!There is a good storage. According to \"fio\", write speed could be e.g. 3 GB/s. (It is First time using the command for me, so I am not certain of the real speed with \"fio\". E.g. with --bs=100m, direct=1, in fio. The measurement result may be faulty also. So, checking still.)Currently there is one table without partitioning. The table contains json data. In PostgreSQL, in Linux.Write speed can be e.g. 300 - 600 MB/s, through PostgreSQL. Measured with dstat while inserting. Shared buffers is large is PostgreSQL.With a storage/disk which \"scales\", is there some way to write faster to the disk in the system through PostgreSQL?Inside same server.Does splitting data help? Partitioned table / splitting to smaller tables? Should I test it?Change settings somewhere? Block sizes? 8 KB / 16 KB, ... \"Dangerous\" to change?2nd question, sharding:If the storage / \"disk\" scales, could better *disk writing speed* be achieved (in total) with sharding kind of splitting of data? (Same NAS storage, which scales, in use in all shards.)Sharding or use only one server? From pure disk writing speed point of view.BR Sam",
"msg_date": "Tue, 16 Oct 2018 03:47:39 +0000 (UTC)",
"msg_from": "\"Sam R.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: One big table or split data? Writing data. From disk point of\n view. With a good storage (GBs/s, writing speed)"
}
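For context, "data split to 4 partitions, in a partitioned table" on PostgreSQL 10 would look roughly like the declarative range partitioning below; the table, columns and boundaries are invented for the example.

    CREATE TABLE events (
        id      bigint    NOT NULL,
        payload jsonb     NOT NULL,
        ts      timestamp NOT NULL
    ) PARTITION BY RANGE (id);

    CREATE TABLE events_p0 PARTITION OF events FOR VALUES FROM (0)        TO (25000000);
    CREATE TABLE events_p1 PARTITION OF events FOR VALUES FROM (25000000) TO (50000000);
    CREATE TABLE events_p2 PARTITION OF events FOR VALUES FROM (50000000) TO (75000000);
    CREATE TABLE events_p3 PARTITION OF events FOR VALUES FROM (75000000) TO (100000000);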
] |
[
{
"msg_contents": "Hello,\nI have found that explain on tables with many (hundreds) columns\nare slow compare to nominal executions.\n\nThis can break application performances when using auto_explain or\npg_store_plans.\n\nHere is my test case (with 500 columns, can be pushed to 1000 or 1600)\n\ncreate table a();\n\nDECLARE\ni int;\nBEGIN\nfor i in 1..500\nloop\nexecute 'alter table a add column a'||i::text||' int';\nend loop;\nEND\n$$;\n\n#\\timing\n#select a500 from a;\n a500 \n------\n(0 rows)\nTime: 0,319 ms\n\n\n#explain analyze select a500 from a;\n QUERY PLAN \n--------------------------------------------------------------------------------------------\n Seq Scan on a (cost=0.00..10.40 rows=40 width=4) (actual time=0.010..0.010\nrows=0 loops=1)\n Planning time: 0.347 ms\n Execution time: 0.047 ms\n(3 rows)\nTime: 4,290 ms\n\n\nHere is a loop to try to understand where this comes from \n\nDO\n$$\nDECLARE\ni int;\nj int;\nBEGIN\nfor j in 1..100\nloop\nfor i in 1..500\nloop\nexecute 'explain select a'||i::text||' from a';\nend loop;\nend loop;\nEND\n$$;\n\nUsing perf top, most of the cpu time seems to come from relutils.c\ncolname_is_unique:\n\n 59,54% libc-2.26.so [.] __GI___strcmp_ssse3\n 26,11% postgres [.] colname_is_unique.isra.2\n 1,46% postgres [.] AllocSetAlloc\n 1,43% postgres [.] SearchCatCache3\n 0,70% postgres [.] set_relation_column_names\n 0,56% libc-2.26.so [.] __strlen_avx2\n\n\nselect version();\n PostgreSQL 11devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n7.2.0-8ubuntu3) 7.2.0, 64-bit\n\nCould this be improved ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Mon, 24 Sep 2018 12:22:28 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Explain is slow with tables having many columns"
},
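For context on why this bites in production: auto_explain (mentioned above) renders a plan for every statement slower than its threshold, so a slow plan printer is paid on every logged query. A typical postgresql.conf setup is sketched below; the threshold values are examples only.

    shared_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = '100ms'   # log plans of statements slower than this
    auto_explain.log_analyze = on             # include actual timings and row counts
    auto_explain.log_buffers = on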
{
"msg_contents": "On Mon, Sep 24, 2018 at 12:22:28PM -0700, legrand legrand wrote:\n> Hello,\n> I have found that explain on tables with many (hundreds) columns\n> are slow compare to nominal executions.\n\nSee also this thread from last month:\n\nhttps://www.postgresql.org/message-id/flat/CAEe%3DmRnNNL3RDKJDmY%3D_mpcpAb5ugYL9NcchELa6Qgtoz2NjCw%40mail.gmail.com\n\nJustin\n\n",
"msg_date": "Mon, 24 Sep 2018 14:30:44 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain is slow with tables having many columns"
},
{
"msg_contents": "Justin Pryzby wrote\n> On Mon, Sep 24, 2018 at 12:22:28PM -0700, legrand legrand wrote:\n>> Hello,\n>> I have found that explain on tables with many (hundreds) columns\n>> are slow compare to nominal executions.\n> \n> See also this thread from last month:\n> \n> https://www.postgresql.org/message-id/flat/CAEe%3DmRnNNL3RDKJDmY%3D_mpcpAb5ugYL9NcchELa6Qgtoz2NjCw%40mail.gmail.com\n> \n> Justin\n\nmaybe, I will check that patch ...\n\nI thought it would also have been related to\nhttps://www.postgresql.org/message-id/CAMkU%3D1xPqHP%3D7YPeChq6n1v_qd4WGf%2BZvtnR-b%2BgyzFqtJqMMQ%40mail.gmail.com\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Mon, 24 Sep 2018 12:43:44 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Explain is slow with tables having many columns"
},
{
"msg_contents": "Hi,\n\n(CCing -hackers)\n\nOn 2018-09-24 12:22:28 -0700, legrand legrand wrote:\n> I have found that explain on tables with many (hundreds) columns\n> are slow compare to nominal executions.\n\nYea, colname_is_unique() (called via make_colname_unique()) is\nessentially O(#total_columns) and rougly called once for each column in\na select list (or using or ...). IIRC we've hit this once when I was at\ncitus, too.\n\nWe really should be usign a more appropriate datastructure here - very\nlikely a hashtable. Unfortunately such a change would likely be a bit\ntoo much to backpatch...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 24 Sep 2018 12:50:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain is slow with tables having many columns"
},
{
"msg_contents": "Hi,\n\nOn 2018-09-24 12:43:44 -0700, legrand legrand wrote:\n> Justin Pryzby wrote\n> > On Mon, Sep 24, 2018 at 12:22:28PM -0700, legrand legrand wrote:\n> >> Hello,\n> >> I have found that explain on tables with many (hundreds) columns\n> >> are slow compare to nominal executions.\n> > \n> > See also this thread from last month:\n> > \n> > https://www.postgresql.org/message-id/flat/CAEe%3DmRnNNL3RDKJDmY%3D_mpcpAb5ugYL9NcchELa6Qgtoz2NjCw%40mail.gmail.com\n> > \n> > Justin\n> \n> maybe, I will check that patch ...\n> \n> I thought it would also have been related to\n> https://www.postgresql.org/message-id/CAMkU%3D1xPqHP%3D7YPeChq6n1v_qd4WGf%2BZvtnR-b%2BgyzFqtJqMMQ%40mail.gmail.com\n\nNeither of these are related to the problem.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 24 Sep 2018 12:51:53 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain is slow with tables having many columns"
}
] |
[
{
"msg_contents": "I'm hoping to find a document I once read about the write load.\n\nAs best I can recall, it looked something like this:\n\n. at beginning of (spread) checkpoint, larger than average write load to\n pg_wal/, due to full_page_writes;\n. during most of checkpoint, decreasing WAL due to FPW, \n. towards end of checkpoint, increased writes to table data base/, due to\n fsync();\n. assuming the next checkpoint doesn't start immediately, quiescent period, due\n to clean OS buffers;\n\nThis isn't very important, but I hadn't seen that described before, and \nI think there was more detail than I can remember.\n\nI've been hoping for awhile to run across it and not able to find it. It\nprobably dates back to 8.3/9.0 days and maybe disappeared.\n\nDoes anyone know what I'm talking about or where I can find it?\n\nThanks,\nJustin\n\n",
"msg_date": "Wed, 26 Sep 2018 15:05:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "reference regarding write load during different stages of checkpoint"
}
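The phase-by-phase pattern described above can at least be observed even without the original document; enabling checkpoint logging and watching the bgwriter counters is one hedged way to do it (not something referenced in the message).

    ALTER SYSTEM SET log_checkpoints = on;
    SELECT pg_reload_conf();

    -- buffers written by checkpoints vs. the background writer vs. backends,
    -- plus checkpoint write/sync phase timings, accumulate here
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend,
           checkpoint_write_time, checkpoint_sync_time
    FROM pg_stat_bgwriter;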
] |
[
{
"msg_contents": "I have a strange performance situation that I cannot resolve with my usual\nprocess.\n\nI have a SELECT statement that completes in about 12 seconds for the full\nresult (~1100 rows).\n\nIf I create an empty table first, and then INSERT with the SELECT query, it\ntakes 6.5 minutes.\n\nWhen I look at the EXPLAIN ANALYZE output, it seems that it's using a\ndrastically different query plan for the INSERT+SELECT than SELECT by\nitself.\n\nHere's the explain plan for the SELECT() by itself:\nhttps://explain.depesz.com/s/8Qmr\n\nHere's the explain plan for INSERT INTO x SELECT():\nhttps://explain.depesz.com/s/qifT\n\nI am running Postgresql 10(PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled\nby gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit).\n\nShared Buffers = 4gb\neffective_cache_size = 4gb\nwork_mem = 8gb\nwal_buffers = -1\nmax_wal_sze = 2gb\nwal_level = replica\narchiving on\nTotal RAM on machine: 252GB\n\nThis machine is VACUUM FULL,ANALYZE once a week. Autovac is ON with PG10\ndefault settings.\n\nThe machine has 12 Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz, and 15k RPM\ndisks for Postgres. I have tested write speed to all filesystems and\nspeeds are as expected. The pg_wal is on a separate disk resource,\nhowever, these disks are also 15k in speed and setup the same way as\nPostgres data disks.\n\nThe queries are sensitive so I had to obfuscate them in the explain plans.\nI am reluctant to provide full metadata for all the objects involved, but\nwill if it comes to that. I first want to understand why the query plan\nwould be so different for a SELECT vs INSERT into X SELECT. I also tried\nCREATE TABLE x as SELECT() but it also takes 6+ minutes.\n\nIs there any advice as to the general case on why SELECT can finish in\n10seconds but CREATE TABLE as SELECT() runs in 7 minutes?\n\nAny advice would be much appreciated.\n\nThanks,\nArjun Ranade\n\nI have a strange performance situation that I cannot resolve with my usual process. I have a SELECT statement that completes in about 12 seconds for the full result (~1100 rows). If I create an empty table first, and then INSERT with the SELECT query, it takes 6.5 minutes.When I look at the EXPLAIN ANALYZE output, it seems that it's using a drastically different query plan for the INSERT+SELECT than SELECT by itself.Here's the explain plan for the SELECT() by itself: https://explain.depesz.com/s/8QmrHere's the explain plan for INSERT INTO x SELECT(): https://explain.depesz.com/s/qifTI am running Postgresql 10(PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit). Shared Buffers = 4gbeffective_cache_size = 4gbwork_mem = 8gbwal_buffers = -1max_wal_sze = 2gbwal_level = replicaarchiving onTotal RAM on machine: 252GBThis machine is VACUUM FULL,ANALYZE once a week. Autovac is ON with PG10 default settings.The machine has 12 Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz, and 15k RPM disks for Postgres. I have tested write speed to all filesystems and speeds are as expected. The pg_wal is on a separate disk resource, however, these disks are also 15k in speed and setup the same way as Postgres data disks.The queries are sensitive so I had to obfuscate them in the explain plans. I am reluctant to provide full metadata for all the objects involved, but will if it comes to that. I first want to understand why the query plan would be so different for a SELECT vs INSERT into X SELECT. 
I also tried CREATE TABLE x as SELECT() but it also takes 6+ minutes.Is there any advice as to the general case on why SELECT can finish in 10seconds but CREATE TABLE as SELECT() runs in 7 minutes? Any advice would be much appreciated.Thanks,Arjun Ranade",
"msg_date": "Thu, 27 Sep 2018 13:08:05 -0400",
"msg_from": "Arjun Ranade <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT statement returns in 10seconds, but INSERT/CREATE TABLE AS\n with same SELECT takes 7 minutes"
},
{
"msg_contents": "Arjun Ranade <[email protected]> writes:\n> I have a strange performance situation that I cannot resolve with my usual\n> process.\n> I have a SELECT statement that completes in about 12 seconds for the full\n> result (~1100 rows).\n> If I create an empty table first, and then INSERT with the SELECT query, it\n> takes 6.5 minutes.\n\n> When I look at the EXPLAIN ANALYZE output, it seems that it's using a\n> drastically different query plan for the INSERT+SELECT than SELECT by\n> itself.\n\nThe reason for the plan shape difference is probably that the bare SELECT\nis allowed to use parallelism while INSERT/SELECT isn't. I'm not sure\nto what extent we could relax that without creating semantic gotchas.\n\nHowever, your real problem with either query is that the planner's\nrowcount estimates are off by several orders of magnitude. If you could\nimprove that, you'd likely get better plan choices in both cases.\n\nI also notice that this seems to be a 14-way join, which means you're\nprobably getting an artificially poor plan as a result of \nfrom_collapse_limit and/or join_collapse_limit constraining the planner's\nsearch space. Maybe raising those limits would help, although I'm not\nsure how much it'd help if the rowcount estimates aren't improved.\n\nSince you haven't told us much of anything about the actual query or the\ndata, it's hard to offer concrete advice beyond that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 27 Sep 2018 13:21:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT statement returns in 10seconds,\n but INSERT/CREATE TABLE AS with same SELECT takes 7 minutes"
},
{
"msg_contents": "On Thu, Sep 27, 2018 at 01:08:05PM -0400, Arjun Ranade wrote:\n> When I look at the EXPLAIN ANALYZE output, it seems that it's using a\n> drastically different query plan for the INSERT+SELECT than SELECT by\n> itself.\n\nThe fast, SELECT plan is using parallel query, which isn't available for INSERT+SELECT:\n\nhttps://www.postgresql.org/docs/current/static/when-can-parallel-query-be-used.html\n|Even when it is in general possible for parallel query plans to be generated, the planner will not generate them for a given query if any of the following are true:\n|The query writes any data or locks any database rows.\n\nUsing parallel query in this case happens to mitigate the effects of the bad\nplan.\n\nI see Tom responded, and you got an improvement by changing join threshold.\n\nBut I think you could perhaps get an better plan if the rowcount estimates were\nfixed. That's more important than probably anything else - changing settings\nis only a workaround for bad estimates.\n\nIn the slow/INSERT plan, this join is returning 55000x more rows than expected\n(not 55k more: 55k TIMES more).\n\n7. \t26,937.132 \t401,503.136 \t↓ 55,483.7 \t332,902 \t1 \t\nNested Loop (cost=1,516.620..42,244.240 rows=6 width=84) (actual time=311.021..401,503.136 rows=332,902 loops=1)\n Join Filter: (((papa_echo.oscar_bravo)::text = (five_hotel.tango_november)::text) AND ((papa_echo.lima_tango)::text = (five_hotel.lima_mike)::text) AND ((xray_juliet1.juliet)::text = (five_hotel.papa_victor)::text))\n Rows Removed by Join Filter: 351664882\n Buffers: shared hit=8570619 read=6\n\nFirst question is if all those conditions are independent? Or if one of those\nconditions also implies another, which is confusing the planner.\n\nJustin\n\n",
"msg_date": "Thu, 27 Sep 2018 13:52:24 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE\n AS with same SELECT takes 7 minutes"
},
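A sketch of the suggested workaround. Since vw2 is a view joining several tables, the expression index has to be created on whichever base table actually holds these columns, so "product_base" below is a stand-in name.

    CREATE INDEX CONCURRENTLY product_group_node_expr_idx
        ON product_base ( (product_group_name || '.' || product_node_name) );
    ANALYZE product_base;   -- gathers statistics on the indexed expression for the planner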
{
"msg_contents": "Hi Tom,\n\nThank you for your suggestions. I tried increasing from_collapse_limit and\njoin_collapse_limit to 16 in a specific session and that significantly\nimproved my query performance (it takes < 2s now). Now, my instinct is to\nincrease this globally but I'm sure there are some drawbacks to this so I\nwill need to read more about it.\n\nYour point about parallelism is interesting, I hadn't considered that.\n\nEven after working with Postgres for years, there really is a lot to learn\nabout query optimization that is new for me. I'd never heard of these\nparameters before your email since almost every performance issue I've had\nthus far was resolved by creating an index or smarter query re-writing.\n\nI'm reading the documentation regarding these specific parameters, but it's\nwritten as a reference page as opposed to an explanation into query\nplanning and optimization. I wonder if there is a class or book these\ndetails better.\n\nAnyway, thank you so much for pointing me in the right direction.\n\nBest,\nArjun\n\nOn Thu, Sep 27, 2018 at 1:21 PM Tom Lane <[email protected]> wrote:\n\n> Arjun Ranade <[email protected]> writes:\n> > I have a strange performance situation that I cannot resolve with my\n> usual\n> > process.\n> > I have a SELECT statement that completes in about 12 seconds for the full\n> > result (~1100 rows).\n> > If I create an empty table first, and then INSERT with the SELECT query,\n> it\n> > takes 6.5 minutes.\n>\n> > When I look at the EXPLAIN ANALYZE output, it seems that it's using a\n> > drastically different query plan for the INSERT+SELECT than SELECT by\n> > itself.\n>\n> The reason for the plan shape difference is probably that the bare SELECT\n> is allowed to use parallelism while INSERT/SELECT isn't. I'm not sure\n> to what extent we could relax that without creating semantic gotchas.\n>\n> However, your real problem with either query is that the planner's\n> rowcount estimates are off by several orders of magnitude. If you could\n> improve that, you'd likely get better plan choices in both cases.\n>\n> I also notice that this seems to be a 14-way join, which means you're\n> probably getting an artificially poor plan as a result of\n> from_collapse_limit and/or join_collapse_limit constraining the planner's\n> search space. Maybe raising those limits would help, although I'm not\n> sure how much it'd help if the rowcount estimates aren't improved.\n>\n> Since you haven't told us much of anything about the actual query or the\n> data, it's hard to offer concrete advice beyond that.\n>\n> regards, tom lane\n>\n\nHi Tom,Thank you for your suggestions. I tried increasing from_collapse_limit and join_collapse_limit to 16 in a specific session and that significantly improved my query performance (it takes < 2s now). Now, my instinct is to increase this globally but I'm sure there are some drawbacks to this so I will need to read more about it.Your point about parallelism is interesting, I hadn't considered that. Even after working with Postgres for years, there really is a lot to learn about query optimization that is new for me. I'd never heard of these parameters before your email since almost every performance issue I've had thus far was resolved by creating an index or smarter query re-writing.I'm reading the documentation regarding these specific parameters, but it's written as a reference page as opposed to an explanation into query planning and optimization. I wonder if there is a class or book these details better. 
Anyway, thank you so much for pointing me in the right direction.Best,ArjunOn Thu, Sep 27, 2018 at 1:21 PM Tom Lane <[email protected]> wrote:Arjun Ranade <[email protected]> writes:\n> I have a strange performance situation that I cannot resolve with my usual\n> process.\n> I have a SELECT statement that completes in about 12 seconds for the full\n> result (~1100 rows).\n> If I create an empty table first, and then INSERT with the SELECT query, it\n> takes 6.5 minutes.\n\n> When I look at the EXPLAIN ANALYZE output, it seems that it's using a\n> drastically different query plan for the INSERT+SELECT than SELECT by\n> itself.\n\nThe reason for the plan shape difference is probably that the bare SELECT\nis allowed to use parallelism while INSERT/SELECT isn't. I'm not sure\nto what extent we could relax that without creating semantic gotchas.\n\nHowever, your real problem with either query is that the planner's\nrowcount estimates are off by several orders of magnitude. If you could\nimprove that, you'd likely get better plan choices in both cases.\n\nI also notice that this seems to be a 14-way join, which means you're\nprobably getting an artificially poor plan as a result of \nfrom_collapse_limit and/or join_collapse_limit constraining the planner's\nsearch space. Maybe raising those limits would help, although I'm not\nsure how much it'd help if the rowcount estimates aren't improved.\n\nSince you haven't told us much of anything about the actual query or the\ndata, it's hard to offer concrete advice beyond that.\n\n regards, tom lane",
"msg_date": "Thu, 27 Sep 2018 14:58:28 -0400",
"msg_from": "Arjun Ranade <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE AS\n with same SELECT takes 7 minutes"
},
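For reference, a minimal sketch of the two ways those limits can be raised: per session for a single problematic query, or persistently via ALTER SYSTEM or ALTER ROLE. The value 16 is simply the one that worked in this thread, and the role name below is a placeholder; raising the limits lets the planner consider more join orders at the cost of extra planning time on queries with many relations.

-- Per session, right before the problematic statement:
SET from_collapse_limit = 16;
SET join_collapse_limit = 16;

-- Cluster-wide (requires superuser), followed by a configuration reload:
ALTER SYSTEM SET from_collapse_limit = 16;
ALTER SYSTEM SET join_collapse_limit = 16;
SELECT pg_reload_conf();

-- Or only for the role that runs these large reporting joins (name is hypothetical):
ALTER ROLE reporting_user SET join_collapse_limit = 16;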
{
"msg_contents": "On Thu, Sep 27, 2018 at 03:37:57PM -0400, Arjun Ranade wrote:\n> Yes, that join is concerning (red text below). The conditions all need to\n> be checked so they are independent.\n\nYou can play with the join conditions to see which test is getting such a bad\nestimate, or if it's a combination of tests (as I suspected) giving a bad\nestimate.\n\nThere's a good chance this one isn't doing very well:\n\n> vw2.product_group_name ||'.'|| vw2.product_node_name = i.product_node_name\n\nAs a workaround/test, you could maybe add an expression index\nON( (vw2.product_group_name ||'.'|| vw2.product_node_name) )\n\n..and then ANALYZE. Eventually, you'd want to consider splitting\ni.product_node_name into separate columns. \n\nJustin\n\n",
"msg_date": "Thu, 27 Sep 2018 14:33:14 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE\n AS with same SELECT takes 7 minutes"
},
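A sketch of what that workaround could look like. The thread never names the base table behind vw_product, so product_dim below is a hypothetical stand-in; the useful side effect is that ANALYZE then collects statistics on the concatenated expression itself, which is what can repair the estimate.

-- Hypothetical base table holding the two name columns behind the view:
CREATE INDEX idx_product_group_dot_node
    ON product_dim ((product_group_name || '.' || product_node_name));

ANALYZE product_dim;   -- gathers statistics for the indexed expression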
{
"msg_contents": "Yes, that join is concerning (red text below). The conditions all need to\nbe checked so they are independent.\n\nThe query (with consistent obfuscation) is below :\n\nselect distinct\n a.sale_id\n , a.test_date\n , a.product_id as original_product_id\n ,vw2.product_id\n , a.volume as volume\n ,b.pair_rank\nfrom not_sold_locations a\n inner join vw_product vw2 using\n(product_group_name,product_class_code,product_type_code,sale_end_date)\n inner join product_mapping b on a.product_group_name =\nb.left_product_group_name and\n a.product_node_name = b.left_product_node and\n a.product_type_code = b.left_product and\n vw2.product_node_name = b.right_product_node and\n vw2.product_group_name =\nb.right_product_group_name and\n vw2.product_type_code = b.right_product\n inner join mapping_ref i on vw2.product_group_name || '.' ||\nvw2.product_node_name = i.product_node_name and\n vw2.product_class_code = i.product_class_code and\n vw2.product_type_code = i.product_type_code and\n vw2.sale_end_date between i.first_product_date\nand i.last_product_date;\n\nnot_sold_locations(a) has 836 rows\nvw_product (vw2) has 785k rows and is a view that joins 11 tables\ntogether to have a consolidated view of all products, sales locations,\netc\n\nproduct_mapping (b) has 2520 rows\n\nmapping_ref (i) has 178 rows\n\n\n\nOn Thu, Sep 27, 2018 at 2:52 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Sep 27, 2018 at 01:08:05PM -0400, Arjun Ranade wrote:\n> > When I look at the EXPLAIN ANALYZE output, it seems that it's using a\n> > drastically different query plan for the INSERT+SELECT than SELECT by\n> > itself.\n>\n> The fast, SELECT plan is using parallel query, which isn't available for\n> INSERT+SELECT:\n>\n>\n> https://www.postgresql.org/docs/current/static/when-can-parallel-query-be-used.html\n> |Even when it is in general possible for parallel query plans to be\n> generated, the planner will not generate them for a given query if any of\n> the following are true:\n> |The query writes any data or locks any database rows.\n>\n> Using parallel query in this case happens to mitigate the effects of the\n> bad\n> plan.\n>\n> I see Tom responded, and you got an improvement by changing join threshold.\n>\n> But I think you could perhaps get an better plan if the rowcount estimates\n> were\n> fixed. That's more important than probably anything else - changing\n> settings\n> is only a workaround for bad estimates.\n>\n> In the slow/INSERT plan, this join is returning 55000x more rows than\n> expected\n> (not 55k more: 55k TIMES more).\n>\n> 7. 26,937.132 401,503.136 ↓ 55,483.7 332,902 1\n>\n> Nested Loop (cost=1,516.620..42,244.240 rows=6 width=84) (actual\n> time=311.021..401,503.136 rows=332,902 loops=1)\n> Join Filter: (((papa_echo.oscar_bravo)::text =\n> (five_hotel.tango_november)::text) AND ((papa_echo.lima_tango)::text =\n> (five_hotel.lima_mike)::text) AND ((xray_juliet1.juliet)::text =\n> (five_hotel.papa_victor)::text))\n> Rows Removed by Join Filter: 351664882\n> Buffers: shared hit=8570619 read=6\n>\n> First question is if all those conditions are independent? Or if one of\n> those\n> conditions also implies another, which is confusing the planner.\n>\n> Justin\n>\n\nYes, that join is concerning (red text below). 
The conditions all need to be checked so they are independent.The query (with consistent obfuscation) is below :select distinct a.sale_id , a.test_date , a.product_id as original_product_id ,vw2.product_id , a.volume as volume ,b.pair_rankfrom not_sold_locations a inner join vw_product vw2 using (product_group_name,product_class_code,product_type_code,sale_end_date) inner join product_mapping b on a.product_group_name = b.left_product_group_name and a.product_node_name = b.left_product_node and a.product_type_code = b.left_product and vw2.product_node_name = b.right_product_node and vw2.product_group_name = b.right_product_group_name and vw2.product_type_code = b.right_product inner join mapping_ref i on vw2.product_group_name || '.' || vw2.product_node_name = i.product_node_name and vw2.product_class_code = i.product_class_code and vw2.product_type_code = i.product_type_code and vw2.sale_end_date between i.first_product_date and i.last_product_date;not_sold_locations(a) has 836 rowsvw_product (vw2) has 785k rows and is a view that joins 11 tables together to have a consolidated view of all products, sales locations, etcproduct_mapping (b) has 2520 rowsmapping_ref (i) has 178 rowsOn Thu, Sep 27, 2018 at 2:52 PM Justin Pryzby <[email protected]> wrote:On Thu, Sep 27, 2018 at 01:08:05PM -0400, Arjun Ranade wrote:\n> When I look at the EXPLAIN ANALYZE output, it seems that it's using a\n> drastically different query plan for the INSERT+SELECT than SELECT by\n> itself.\n\nThe fast, SELECT plan is using parallel query, which isn't available for INSERT+SELECT:\n\nhttps://www.postgresql.org/docs/current/static/when-can-parallel-query-be-used.html\n|Even when it is in general possible for parallel query plans to be generated, the planner will not generate them for a given query if any of the following are true:\n|The query writes any data or locks any database rows.\n\nUsing parallel query in this case happens to mitigate the effects of the bad\nplan.\n\nI see Tom responded, and you got an improvement by changing join threshold.\n\nBut I think you could perhaps get an better plan if the rowcount estimates were\nfixed. That's more important than probably anything else - changing settings\nis only a workaround for bad estimates.\n\nIn the slow/INSERT plan, this join is returning 55000x more rows than expected\n(not 55k more: 55k TIMES more).\n\n7. 26,937.132 401,503.136 ↓ 55,483.7 332,902 1 \nNested Loop (cost=1,516.620..42,244.240 rows=6 width=84) (actual time=311.021..401,503.136 rows=332,902 loops=1)\n Join Filter: (((papa_echo.oscar_bravo)::text = (five_hotel.tango_november)::text) AND ((papa_echo.lima_tango)::text = (five_hotel.lima_mike)::text) AND ((xray_juliet1.juliet)::text = (five_hotel.papa_victor)::text))\n Rows Removed by Join Filter: 351664882\n Buffers: shared hit=8570619 read=6\n\nFirst question is if all those conditions are independent? Or if one of those\nconditions also implies another, which is confusing the planner.\n\nJustin",
"msg_date": "Thu, 27 Sep 2018 15:37:57 -0400",
"msg_from": "Arjun Ranade <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE AS\n with same SELECT takes 7 minutes"
},
{
"msg_contents": "\"As a workaround/test, you could maybe add an expression index\nON( (vw2.product_group_name ||'.'|| vw2.product_node_name) )\"\n\nUnfortunately, vw2 is a view, but I had a similar thought. I'm looking\ninto splitting i.product-node_name into separate columns though, thanks!\n\n\nOn Thu, Sep 27, 2018 at 3:33 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Sep 27, 2018 at 03:37:57PM -0400, Arjun Ranade wrote:\n> > Yes, that join is concerning (red text below). The conditions all need\n> to\n> > be checked so they are independent.\n>\n> You can play with the join conditions to see which test is getting such a\n> bad\n> estimate, or if it's a combination of tests (as I suspected) giving a bad\n> estimate.\n>\n> There's a good chance this one isn't doing very well:\n>\n> > vw2.product_group_name ||'.'|| vw2.product_node_name =\n> i.product_node_name\n>\n> As a workaround/test, you could maybe add an expression index\n> ON( (vw2.product_group_name ||'.'|| vw2.product_node_name) )\n>\n> ..and then ANALYZE. Eventually, you'd want to consider splitting\n> i.product_node_name into separate columns.\n>\n> Justin\n>\n\n\"As a workaround/test, you could maybe add an expression index\nON( (vw2.product_group_name ||'.'|| vw2.product_node_name) )\"Unfortunately, vw2 is a view, but I had a similar thought. I'm looking into splitting i.product-node_name into separate columns though, thanks!On Thu, Sep 27, 2018 at 3:33 PM Justin Pryzby <[email protected]> wrote:On Thu, Sep 27, 2018 at 03:37:57PM -0400, Arjun Ranade wrote:\n> Yes, that join is concerning (red text below). The conditions all need to\n> be checked so they are independent.\n\nYou can play with the join conditions to see which test is getting such a bad\nestimate, or if it's a combination of tests (as I suspected) giving a bad\nestimate.\n\nThere's a good chance this one isn't doing very well:\n\n> vw2.product_group_name ||'.'|| vw2.product_node_name = i.product_node_name\n\nAs a workaround/test, you could maybe add an expression index\nON( (vw2.product_group_name ||'.'|| vw2.product_node_name) )\n\n..and then ANALYZE. Eventually, you'd want to consider splitting\ni.product_node_name into separate columns. \n\nJustin",
"msg_date": "Thu, 27 Sep 2018 15:51:32 -0400",
"msg_from": "Arjun Ranade <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE AS\n with same SELECT takes 7 minutes"
},
{
"msg_contents": "> The reason for the plan shape difference is probably that the bare SELECT\n> is allowed to use parallelism while INSERT/SELECT isn't.\nIn case parallelism is used, should it report in the plan as something like\n\"workers planned: N\"?\n\nVlad\n\n> The reason for the plan shape difference is probably that the bare SELECT> is allowed to use parallelism while INSERT/SELECT isn't.In case parallelism is used, should it report in the plan as something like \"workers planned: N\"?Vlad",
"msg_date": "Thu, 27 Sep 2018 13:39:32 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE AS\n with same SELECT takes 7 minutes"
},
{
"msg_contents": "Vladimir Ryabtsev <[email protected]> writes:\n>> The reason for the plan shape difference is probably that the bare SELECT\n>> is allowed to use parallelism while INSERT/SELECT isn't.\n\n> In case parallelism is used, should it report in the plan as something like\n> \"workers planned: N\"?\n\nIt did --- see the Gather node.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 27 Sep 2018 16:41:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT statement returns in 10seconds,\n but INSERT/CREATE TABLE AS with same SELECT takes 7 minutes"
},
{
"msg_contents": "> It did --- see the Gather node.\nBut \"workers launched: 1\"...\nTo my opinion, such a dramatic difference cannot be explained with avoiding\nparallelism, the query was just stuck in a very inefficient plan (even\nthough almost all source data is read from cache).\n\nAdditionally, I think author can try CREATE STATISTICS on the bunch of\ncolumns used in join. Very low rows estimate for this join may come from\nmultiplying selectivities for each column assuming they are independent.\n\nVlad\n\n> It did --- see the Gather node.But \"workers launched: 1\"...To my opinion, such a dramatic difference cannot be explained with avoiding parallelism, the query was just stuck in a very inefficient plan (even though almost all source data is read from cache).Additionally, I think author can try CREATE STATISTICS on the bunch of columns used in join. Very low rows estimate for this join may come from multiplying selectivities for each column assuming they are independent.Vlad",
"msg_date": "Thu, 27 Sep 2018 16:50:36 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE AS\n with same SELECT takes 7 minutes"
},
{
"msg_contents": "On Thu, Sep 27, 2018 at 04:50:36PM -0700, Vladimir Ryabtsev wrote:\n> Additionally, I think author can try CREATE STATISTICS on the bunch of\n> columns used in join. Very low rows estimate for this join may come from\n> multiplying selectivities for each column assuming they are independent.\n\nMV statistics don't currently help for joins:\nhttps://www.postgresql.org/message-id/flat/CAKJS1f-6B7KnDFrh6SFhYn-YbHYOXmDDAfd0XC%3DjJKZMCrfQyg%40mail.gmail.com#925e19951fabc9a480b804d661d83be8\n\nJustin\n\n",
"msg_date": "Thu, 27 Sep 2018 19:12:08 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT statement returns in 10seconds, but INSERT/CREATE TABLE\n AS with same SELECT takes 7 minutes"
}
] |
[
{
"msg_contents": "Hi team,\n\nI need your help to resolve an issue psql: fe_sendauth: no password supplied which we are getting by nagios plugin , we are trying to monitor the disk space and via .pgpass for an authentication.\n\n[root@viicinga-02 ~]# /mnt/common/local/linux/local/bin/check_postgres/check_postgres_disk_space.sh -H vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net -f /var/spool/icinga/.pgpass_stby -w '60' -c '70'\nConnection to vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net 5432 port [tcp/postgres] succeeded!\npsql: fe_sendauth: no password supplied\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\n\nHi team,\n \nI need your help to resolve an issue psql: fe_sendauth: no password supplied which we are getting by nagios plugin , we are trying to monitor the disk space and via .pgpass for an authentication.\n \n[root@viicinga-02 ~]# /mnt/common/local/linux/local/bin/check_postgres/check_postgres_disk_space.sh -H vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net -f /var/spool/icinga/.pgpass_stby -w\n '60' -c '70'\nConnection to vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net 5432 port [tcp/postgres] succeeded!\npsql: fe_sendauth: no password supplied\n\n \nRegards,\nDaulat",
"msg_date": "Sun, 30 Sep 2018 23:36:48 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql: fe_sendauth: no password supplied"
},
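For context, libpq only uses a password file whose permissions are 0600 or stricter (otherwise it is ignored with a warning) and whose line matches the host, port, database and user of the attempted connection; psql reads ~/.pgpass of the invoking user unless the PGPASSFILE environment variable points elsewhere, and whether the wrapper script exports that variable for the -f path is not shown here. Each line has this fixed format, where any field may be the * wildcard (the user and password below are placeholders):

vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net:5432:*:nagios:secret_password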
{
"msg_contents": "It's a question better directed to the -general list.\n\nYou'll either want to specify the DB name and DB user in check_postgres\ninvocation, or otherwise add an entry to pg_hba.conf, like:\nhost ts nagios 192.168.122.1/32 trust\n\nhttps://bucardo.org/check_postgres/check_postgres.pl.html#database_connection_options\n\nOn Sun, Sep 30, 2018 at 6:37 PM Daulat Ram <[email protected]>\nwrote:\n\n> Hi team,\n>\n>\n>\n> I need your help to resolve an issue psql: fe_sendauth: no password\n> supplied which we are getting by nagios plugin , we are trying to monitor\n> the disk space and via .pgpass for an authentication.\n>\n>\n>\n> [root@viicinga-02 ~]#\n> /mnt/common/local/linux/local/bin/check_postgres/check_postgres_disk_space.sh\n> -H vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net -f /var/spool/icinga/.pgpass_stby\n> -w '60' -c '70'\n> Connection to vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net 5432\n> port [tcp/postgres] succeeded!\n> psql: fe_sendauth: no password supplied\n>\n>\n>\n> Regards,\n>\n> Daulat\n>\n>\n>\n\nIt's a question better directed to the -general list.You'll either want to specify the DB name and DB user in check_postgres invocation, or otherwise add an entry to pg_hba.conf, like:host ts nagios 192.168.122.1/32 trusthttps://bucardo.org/check_postgres/check_postgres.pl.html#database_connection_optionsOn Sun, Sep 30, 2018 at 6:37 PM Daulat Ram <[email protected]> wrote:\n\n\nHi team,\n \nI need your help to resolve an issue psql: fe_sendauth: no password supplied which we are getting by nagios plugin , we are trying to monitor the disk space and via .pgpass for an authentication.\n \n[root@viicinga-02 ~]# /mnt/common/local/linux/local/bin/check_postgres/check_postgres_disk_space.sh -H vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net -f /var/spool/icinga/.pgpass_stby -w\n '60' -c '70'\nConnection to vikbcn-db2.vpc.prod.scl1.us.tribalfusion.net 5432 port [tcp/postgres] succeeded!\npsql: fe_sendauth: no password supplied\n\n \nRegards,\nDaulat",
"msg_date": "Sun, 30 Sep 2018 18:49:08 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fe_sendauth: no password supplied"
}
] |
[
{
"msg_contents": "Hi\nI would like to submit the following problem to the PostgreSQL community. In my company, we have data encryption needs.\nSo I decided to use the following procedure :\n\n\n(1) Creating a table with a bytea type column to store the encrypted data\nCREATE TABLE cartedecredit(card_id SERIAL PRIMARY KEY, username VARCHAR(100), cc bytea);\n\n\n\n(2) inserting encrypted data\nINSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\n\n\n(3) Querying the table\nSELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\n\npgp_sym_decrypt\n\n-----------------\n\ntest value 32\n\n(1 row)\n\n\n\nTime: 115735.035 ms (01:55.735)\n-> the execution time is very long. So, I decide to create an index\n\n\n\n(4) Creating an index on encrypted data\nCREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cc);\n\n\n(5) Querying the table again\n\nSELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\npgp_sym_decrypt\n\n-----------------\n\ntest value 32\n\n(1 row)\n\n\n\nTime: 118558.485 ms (01:58.558) -> almost 2 minutes !!\npostgres=# explain analyze SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------\n\nSeq Scan on cartedecredit (cost=0.00..3647.25 rows=500 width=32) (actual time=60711.787..102920.509 rows=1 loops=1)\n\n Filter: (pgp_sym_decrypt(cc, 'motdepasse'::text) = 'test value 32'::text)\n\n Rows Removed by Filter: 99999\n\nPlanning time: 0.112 ms\n\nExecution time: 102920.585 ms\n\n(5 rows)\n\n\n\n? the index is not used in the execution plan. maybe because of the use of a function in the WHERE clause. I decide to modify the SQL query\n\n\n(6) Querying the table\nSELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\npgp_sym_decrypt\n\n-----------------\n\n(0 rows)\n\n\n\nTime: 52659.571 ms (00:52.660)\n\n\n? The execution time is very long and I get no result (!?)\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------\n\nSeq Scan on cartedecredit (cost=0.00..3646.00 rows=1 width=32) (actual time=61219.989..61219.989 rows=0 loops=1)\n\n Filter: (cc = pgp_sym_encrypt('test value 32'::text, 'motdepasse'::text))\n\n Rows Removed by Filter: 100000\n\nPlanning time: 0.157 ms\n\nExecution time: 61220.035 ms\n\n(5 rows)\n\n\n\n? My index is not used.\n\nQUESTIONS :\n- why I get no result ?\n\n- why the index is not used?\n\nThanks in advance\n\nBest Regards\nDidier\n\n\n\n[cid:[email protected]]\n\n\nDidier ROS\nExpertise SGBD\nDS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. 
Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Sat, 6 Oct 2018 09:57:25 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why the index is not used ?"
},
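The behaviour in step (6) follows from pgp_sym_encrypt() being non-deterministic: it generates a fresh random session key and salt on every call, so encrypting the same clear text twice produces different bytea values, and an equality comparison against a newly encrypted literal can never match the stored ciphertext. A quick way to see this:

-- Two encryptions of the same value differ:
SELECT pgp_sym_encrypt('test value 32', 'motdepasse')
     = pgp_sym_encrypt('test value 32', 'motdepasse') AS same_ciphertext;   -- returns false

-- Decryption still round-trips either ciphertext back to the clear text:
SELECT pgp_sym_decrypt(pgp_sym_encrypt('test value 32', 'motdepasse'), 'motdepasse');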
{
"msg_contents": "so 6. 10. 2018 v 11:57 odesílatel ROS Didier <[email protected]> napsal:\n\n> Hi\n>\n> I would like to submit the following problem to the PostgreSQL community.\n> In my company, we have data encryption needs.\n> So I decided to use the following procedure :\n>\n>\n>\n> (1) Creating a table with a bytea type column to store the encrypted\n> data\n> CREATE TABLE cartedecredit(card_id SERIAL PRIMARY KEY, username\n> VARCHAR(100), cc bytea);\n>\n>\n>\n> (2) inserting encrypted data\n> INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id,\n> pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2,\n> cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\n>\n>\n>\n> (3) Querying the table\n> SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE\n> pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\n>\n> pgp_sym_decrypt\n>\n> -----------------\n>\n> test value 32\n>\n> (1 row)\n>\n>\n>\n> Time: 115735.035 ms (01:55.735)\n> -> the execution time is very long. So, I decide to create an index\n>\n>\n>\n> (4) Creating an index on encrypted data\n> CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cc);\n>\n\nthis index cannot to help.\n\nbut functional index can cartedecredit(pgp_sym_decrypt(cc, 'motdepasse').\nUnfortunately index file will be decrypted in this case.\n\nCREATE INDEX ON\n\n\n>\n>\n> (5) Querying the table again\n>\n> SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE\n> pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\n> pgp_sym_decrypt\n>\n> -----------------\n>\n> test value 32\n>\n> (1 row)\n>\n>\n>\n> Time: 118558.485 ms (01:58.558) -> almost 2 minutes !!\n> postgres=# explain analyze SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM\n> cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------\n>\n> Seq Scan on cartedecredit (cost=0.00..3647.25 rows=500 width=32) (actual\n> time=60711.787..102920.509 rows=1 loops=1)\n>\n> Filter: (pgp_sym_decrypt(cc, 'motdepasse'::text) = 'test value\n> 32'::text)\n>\n> Rows Removed by Filter: 99999\n>\n> Planning time: 0.112 ms\n>\n> Execution time: 102920.585 ms\n>\n> (5 rows)\n>\n>\n>\n> è the index is not used in the execution plan. maybe because of the use\n> of a function in the WHERE clause. 
I decide to modify the SQL query\n>\n>\n>\n> (6) Querying the table\n> SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE *cc*=pgp_sym_encrypt('test\n> value 32', 'motdepasse');\n>\n\nit is strange - this should to use index, when there is usual index over cc\ncolumn.\n\nWhat is result of explain analyze when you penalize seq scan by\n\nset enable_seqscan to off\n\n\n\n> pgp_sym_decrypt\n>\n> -----------------\n>\n> (0 rows)\n>\n>\n>\n> Time: 52659.571 ms (00:52.660)\n>\n> è The execution time is very long and I get no result (!?)\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------\n>\n> Seq Scan on cartedecredit (cost=0.00..3646.00 rows=1 width=32) (actual\n> time=61219.989..61219.989 rows=0 loops=1)\n>\n> Filter: (cc = pgp_sym_encrypt('test value 32'::text,\n> 'motdepasse'::text))\n>\n> Rows Removed by Filter: 100000\n>\n> Planning time: 0.157 ms\n>\n> Execution time: 61220.035 ms\n>\n> (5 rows)\n>\n>\n>\n> è My index is not used.\n>\n>\n> QUESTIONS :\n> - why I get no result ?\n>\n> - why the index is not used?\n>\n> Thanks in advance\n>\n>\n>\n> Best Regards\n> Didier\n>\n>\n>\n>\n>\n> [image: cid:[email protected]]\n>\n>\n> * Didier ROS*\n> * Expertise SGBD*\n>\n>\n> *DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD *\n>\n>\n>\n>\n>\n>\n> Ce message et toutes les pièces jointes (ci-après le 'Message') sont\n> établis à l'intention exclusive des destinataires et les informations qui y\n> figurent sont strictement confidentielles. Toute utilisation de ce Message\n> non conforme à sa destination, toute diffusion ou toute publication totale\n> ou partielle, est interdite sauf autorisation expresse.\n>\n> Si vous n'êtes pas le destinataire de ce Message, il vous est interdit de\n> le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou\n> partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de\n> votre système, ainsi que toutes ses copies, et de n'en garder aucune trace\n> sur quelque support que ce soit. Nous vous remercions également d'en\n> avertir immédiatement l'expéditeur par retour du message.\n>\n> Il est impossible de garantir que les communications par messagerie\n> électronique arrivent en temps utile, sont sécurisées ou dénuées de toute\n> erreur ou virus.\n> ____________________________________________________\n>\n> This message and any attachments (the 'Message') are intended solely for\n> the addressees. The information contained in this Message is confidential.\n> Any use of information contained in this Message not in accord with its\n> purpose, any dissemination or disclosure, either whole or partial, is\n> prohibited except formal approval.\n>\n> If you are not the addressee, you may not copy, forward, disclose or use\n> any part of it. If you have received this message in error, please delete\n> it and all copies from your system and notify the sender immediately by\n> return message.\n>\n> E-mail communication cannot be guaranteed to be timely secure, error or\n> virus-free.\n>",
"msg_date": "Sat, 6 Oct 2018 12:13:31 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
{
"msg_contents": "Hello Didier,\n\n(3), (5) to find the match, you decrypt the whole table, apparently this\ntake quite a long time.\nIndex cannot help here because indexes work on exact match of type and\nvalue, but you compare mapped value, not indexed. Functional index should\nhelp, but like it was said, it against the idea of encrypted storage.\n\n(6) I never used pgp_sym_encrypt() but I see that in INSERT INTO you\nsupplied additional parameter 'compress-algo=2, cipher-algo=aes256' while\nin (6) you did not. Probably this is the reason.\n\nIn general matching indexed bytea column should use index, you can ensure\nin this populating the column unencrypted and using 'test value 32'::bytea\nfor match.\nIn you case I believe pgp_sym_encrypt() is not marked as STABLE or\nIMMUTABLE that's why it will be evaluated for each row (very inefficient)\nand cannot use index. From documentation:\n\n\"Since an index scan will evaluate the comparison value only once, not once\nat each row, it is not valid to use a VOLATILE function in an index scan\ncondition.\"\nhttps://www.postgresql.org/docs/10/static/xfunc-volatility.html\n\nIf you cannot add STABLE/IMMUTABLE to pgp_sym_encrypt() (which apparently\nshould be there), you can encrypt searched value as a separate operation\nand then search in the table using basic value match.\n\nVlad",
"msg_date": "Sat, 6 Oct 2018 09:51:24 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
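The volatility marking mentioned above can be checked directly in the catalog; the expression index created later in the thread is only accepted because the decrypt function is classified as immutable. A small sketch:

SELECT proname, provolatile     -- 'i' = immutable, 's' = stable, 'v' = volatile
FROM pg_proc
WHERE proname IN ('pgp_sym_encrypt', 'pgp_sym_decrypt');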
{
"msg_contents": "I haven’t looked up what pgp_sym_encrypt() does but assuming it does encryption the way you should be for credit card data then it will be using a random salt and the same input value won’t encrypt to the same output value so\n====\nWHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\n====\nwouldn’t work because the value generated by the function when you are searching on isn’t the same value as when you stored it.\n\n\n\nPaul\n\n> On 6 Oct 2018, at 19:57, ROS Didier <[email protected]> wrote:\n> \n> WHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\n\nI haven’t looked up what pgp_sym_encrypt() does but assuming it does encryption the way you should be for credit card data then it will be using a random salt and the same input value won’t encrypt to the same output value so====WHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');====wouldn’t work because the value generated by the function when you are searching on isn’t the same value as when you stored it.PaulOn 6 Oct 2018, at 19:57, ROS Didier <[email protected]> wrote:WHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');",
"msg_date": "Sun, 7 Oct 2018 13:20:53 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
{
"msg_contents": "Hi Pavel\r\n\r\n Thanks you for your answer. here is a procedure that works :\r\n\r\n- CREATE TABLE cartedecredit(card_id SERIAL PRIMARY KEY, username VARCHAR(100), cc bytea);\r\n\r\n- INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\r\n\r\n- CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\r\n\r\n- SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256')='test value 32';\r\npgp_sym_decrypt\r\n-----------------\r\ntest value 32\r\n(1 row)\r\n\r\nTime: 2.237 ms\r\n\r\n- explain analyze SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256')='test value 32';\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------------\r\nIndex Scan using idx_cartedecredit_cc02 on cartedecredit (cost=0.42..8.44 rows=1 width=32) (actual time=1.545..1.546 rows=1 loops=1)\r\n Index Cond: (pgp_sym_decrypt(cc, 'motdepasse'::text, 'compress-algo=2, cipher-algo=aes256'::text) = 'test value 32'::text)\r\nPlanning time: 0.330 ms\r\nExecution time: 1.580 ms\r\n(4 rows)\r\n\r\nOK that works great.\r\nThank you for the recommendation\r\n\r\nBest Regards\r\n\r\n[cid:[email protected]]\r\n\r\n\r\nDidier ROS\r\nExpertise SGBD\r\nDS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD\r\nNanterre Picasso - E2 565D (aile nord-est)\r\n32 Avenue Pablo Picasso\r\n92000 Nanterre\r\[email protected]<mailto:[email protected]>\r\[email protected]<mailto:[email protected]>\r\[email protected]<mailto:[email protected]>\r\nTél. : 01 78 66 61 14\r\nTél. mobile : 06 49 51 11 88\r\nLync : [email protected]<sip:[email protected]>\r\n\r\n\r\n\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : samedi 6 octobre 2018 12:14\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\n\r\nso 6. 10. 2018 v 11:57 odesílatel ROS Didier <[email protected]<mailto:[email protected]>> napsal:\r\nHi\r\nI would like to submit the following problem to the PostgreSQL community. In my company, we have data encryption needs.\r\nSo I decided to use the following procedure :\r\n\r\n\r\n(1) Creating a table with a bytea type column to store the encrypted data\r\nCREATE TABLE cartedecredit(card_id SERIAL PRIMARY KEY, username VARCHAR(100), cc bytea);\r\n\r\n\r\n\r\n(2) inserting encrypted data\r\nINSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id<http://x.id>, pgp_sym_encrypt('test value ' || x.id<http://x.id>, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\r\n\r\n\r\n(3) Querying the table\r\nSELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\r\n\r\npgp_sym_decrypt\r\n\r\n-----------------\r\n\r\ntest value 32\r\n\r\n(1 row)\r\n\r\n\r\n\r\nTime: 115735.035 ms (01:55.735)\r\n-> the execution time is very long. 
So, I decide to create an index\r\n\r\n\r\n\r\n(4) Creating an index on encrypted data\r\nCREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cc);\r\n\r\nthis index cannot to help.\r\n\r\nbut functional index can cartedecredit(pgp_sym_decrypt(cc, 'motdepasse'). Unfortunately index file will be decrypted in this case.\r\n\r\nCREATE INDEX ON\r\n\r\n\r\n\r\n(5) Querying the table again\r\n\r\nSELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\r\npgp_sym_decrypt\r\n\r\n-----------------\r\n\r\ntest value 32\r\n\r\n(1 row)\r\n\r\n\r\n\r\nTime: 118558.485 ms (01:58.558) -> almost 2 minutes !!\r\npostgres=# explain analyze SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';\r\n\r\n QUERY PLAN\r\n\r\n----------------------------------------------------------------------------------------------------------------------\r\n\r\nSeq Scan on cartedecredit (cost=0.00..3647.25 rows=500 width=32) (actual time=60711.787..102920.509 rows=1 loops=1)\r\n\r\n Filter: (pgp_sym_decrypt(cc, 'motdepasse'::text) = 'test value 32'::text)\r\n\r\n Rows Removed by Filter: 99999\r\n\r\nPlanning time: 0.112 ms\r\n\r\nExecution time: 102920.585 ms\r\n\r\n(5 rows)\r\n\r\n\r\n\r\n==> the index is not used in the execution plan. maybe because of the use of a function in the WHERE clause. I decide to modify the SQL query\r\n\r\n\r\n(6) Querying the table\r\nSELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\r\n\r\nit is strange - this should to use index, when there is usual index over cc column.\r\n\r\nWhat is result of explain analyze when you penalize seq scan by\r\n\r\nset enable_seqscan to off\r\n\r\n\r\n\r\npgp_sym_decrypt\r\n\r\n-----------------\r\n\r\n(0 rows)\r\n\r\n\r\n\r\nTime: 52659.571 ms (00:52.660)\r\n\r\n==> The execution time is very long and I get no result (!?)\r\n\r\n QUERY PLAN\r\n\r\n-------------------------------------------------------------------------------------------------------------------\r\n\r\nSeq Scan on cartedecredit (cost=0.00..3646.00 rows=1 width=32) (actual time=61219.989..61219.989 rows=0 loops=1)\r\n\r\n Filter: (cc = pgp_sym_encrypt('test value 32'::text, 'motdepasse'::text))\r\n\r\n Rows Removed by Filter: 100000\r\n\r\nPlanning time: 0.157 ms\r\n\r\nExecution time: 61220.035 ms\r\n\r\n(5 rows)\r\n\r\n\r\n\r\n==> My index is not used.\r\n\r\nQUESTIONS :\r\n- why I get no result ?\r\n\r\n- why the index is not used?\r\nThanks in advance\r\n\r\nBest Regards\r\nDidier\r\n\r\n\r\n[cid:[email protected]]\r\n\r\n\r\nDidier ROS\r\nExpertise SGBD\r\nDS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD\r\n\r\n\r\n\r\n\r\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\r\n\r\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. 
Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\r\n\r\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\r\n____________________________________________________\r\n\r\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\r\n\r\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\r\n\r\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Sun, 7 Oct 2018 13:13:24 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
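One detail worth noting about that index: the planner matches an expression index only when the query spells the expression exactly as indexed, including the options string, so the two-argument form of pgp_sym_decrypt() would fall back to a sequential scan. A sketch:

-- Matches the index definition, can use idx_cartedecredit_cc02:
EXPLAIN SELECT card_id FROM cartedecredit
WHERE pgp_sym_decrypt(cc, 'motdepasse', 'compress-algo=2, cipher-algo=aes256') = 'test value 32';

-- Different expression (no options argument), so the index is not considered:
EXPLAIN SELECT card_id FROM cartedecredit
WHERE pgp_sym_decrypt(cc, 'motdepasse') = 'test value 32';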
{
"msg_contents": "Hi Paul\r\n\r\n Thanks for the explanation. I think you are right.\r\n I understand why the WHERE clause “cc=pgp_sym_encrypt('test value 32', 'motdepasse');” does not bring anything back.\r\n\r\nBest Regards\r\nDidier ROS\r\n\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : dimanche 7 octobre 2018 04:21\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nI haven’t looked up what pgp_sym_encrypt() does but assuming it does encryption the way you should be for credit card data then it will be using a random salt and the same input value won’t encrypt to the same output value so\r\n====\r\nWHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\r\n====\r\nwouldn’t work because the value generated by the function when you are searching on isn’t the same value as when you stored it.\r\n\r\n\r\nPaul\r\n\r\nOn 6 Oct 2018, at 19:57, ROS Didier <[email protected]<mailto:[email protected]>> wrote:\r\nWHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n\n\n\n\n\n\n\nHi Paul\n \n \r\nThanks for the explanation. 
I think you are right.\n I understand why the WHERE clause “cc=pgp_sym_encrypt('test value 32', 'motdepasse');” does not bring anything\r\n back.\n \nBest Regards\nDidier ROS\n \n\n\nDe : [email protected] [mailto:[email protected]]\r\n\nEnvoyé : dimanche 7 octobre 2018 04:21\nÀ : ROS Didier <[email protected]>\nCc : [email protected]; [email protected]; [email protected]\nObjet : Re: Why the index is not used ?\n\n\n \n\nI haven’t looked up what pgp_sym_encrypt() does but assuming it does encryption the way you should be for credit card data then it will be using a random salt and the same input value won’t encrypt to the same output value so\n\n====\n\nWHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\n\n\n====\n\n\nwouldn’t work because the value generated by the function when you are searching on isn’t the same value as when you stored it.\n\n\n \n\n\n \n\nPaul\n\n\n\r\nOn 6 Oct 2018, at 19:57, ROS Didier <[email protected]> wrote:\n\n\n\nWHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse');\n\n\n\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Sun, 7 Oct 2018 13:20:02 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "ROS:\n\nOn Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\n....\n> - INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\n> - CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\n\nIf my french is not too rusty you are encrypting a credit-card, and\nthen storing an UNENCRYPTED copy in the index. So, getting it from the\nserver is trivial for anyone with filesystem access.\n\nFrancisco Olarte.\n\n",
"msg_date": "Sun, 7 Oct 2018 17:58:29 +0200",
"msg_from": "Francisco Olarte <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
{
"msg_contents": "Hi Francisco\r\n\r\n\tThank you for your remark. \r\n\tYou're right, but it's the only procedure I found to make search on encrypted fields with good response times (using index) !\r\n\r\n\tRegarding access to the file system, our servers are in protected network areas. few people can connect to it.\r\n\r\n\tit's not the best solution, but we have data encryption needs and good performance needs too. I do not know how to do it except the specified procedure..\r\n\tif anyone has any proposals to put this in place, I'm interested.\r\n\r\n\tThanks in advance\r\n\r\nBest Regards\r\nDidier ROS\r\n\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]] \r\nEnvoyé : dimanche 7 octobre 2018 17:58\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nROS:\r\n\r\nOn Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\r\n....\r\n> - INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\r\n> - CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\r\n\r\nIf my french is not too rusty you are encrypting a credit-card, and then storing an UNENCRYPTED copy in the index. So, getting it from the server is trivial for anyone with filesystem access.\r\n\r\nFrancisco Olarte.\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n",
"msg_date": "Sun, 7 Oct 2018 18:32:38 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
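One pattern sometimes used for this trade-off, offered here only as a hedged sketch and not something proposed in the thread: keep the pgcrypto ciphertext as is, but store and index a keyed hash (HMAC) of the clear text, computed with a key separate from the encryption passphrase. Equality searches then hash the search value and use an ordinary btree, and neither the extra column nor its index contains anything reversible without the HMAC key. Column and key names below are placeholders, and the one-off UPDATE assumes the data can be decrypted once to backfill the hash (in practice the application would compute it at insert time).

ALTER TABLE cartedecredit ADD COLUMN cc_hmac bytea;

UPDATE cartedecredit
   SET cc_hmac = hmac(pgp_sym_decrypt(cc, 'motdepasse'), 'separate_hmac_key', 'sha256');

CREATE INDEX idx_cartedecredit_cc_hmac ON cartedecredit (cc_hmac);

-- Equality lookup without decrypting the table and without clear text in any index:
SELECT pgp_sym_decrypt(cc, 'motdepasse')
FROM cartedecredit
WHERE cc_hmac = hmac('test value 32', 'separate_hmac_key', 'sha256');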
{
"msg_contents": "You can consider outside DB encryption which is less of worry for performance and data at rest will be encrypted.\r\n\r\nRegards,\r\nVirendra\r\n-----Original Message-----\r\nFrom: ROS Didier [mailto:[email protected]]\r\nSent: Sunday, October 07, 2018 2:33 PM\r\nTo: [email protected]\r\nCc: [email protected]; [email protected]; [email protected]; [email protected]\r\nSubject: RE: Why the index is not used ?\r\n\r\nHi Francisco\r\n\r\nThank you for your remark.\r\nYou're right, but it's the only procedure I found to make search on encrypted fields with good response times (using index) !\r\n\r\nRegarding access to the file system, our servers are in protected network areas. few people can connect to it.\r\n\r\nit's not the best solution, but we have data encryption needs and good performance needs too. I do not know how to do it except the specified procedure..\r\nif anyone has any proposals to put this in place, I'm interested.\r\n\r\nThanks in advance\r\n\r\nBest Regards\r\nDidier ROS\r\n\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : dimanche 7 octobre 2018 17:58\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nROS:\r\n\r\nOn Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\r\n....\r\n> - INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\r\n> - CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\r\n\r\nIf my french is not too rusty you are encrypting a credit-card, and then storing an UNENCRYPTED copy in the index. So, getting it from the server is trivial for anyone with filesystem access.\r\n\r\nFrancisco Olarte.\r\n\r\n\r\n\r\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\r\n\r\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\r\n\r\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\r\n____________________________________________________\r\n\r\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\r\n\r\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. 
If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\r\n\r\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\r\n\r\n________________________________\r\n\r\nThis message is intended only for the use of the addressee and may contain\r\ninformation that is PRIVILEGED AND CONFIDENTIAL.\r\n\r\nIf you are not the intended recipient, you are hereby notified that any\r\ndissemination of this communication is strictly prohibited. If you have\r\nreceived this communication in error, please erase all copies of the message\r\nand its attachments and notify the sender immediately. Thank you.\r\n",
"msg_date": "Sun, 7 Oct 2018 18:41:25 +0000",
"msg_from": "\"Kumar, Virendra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "Didier,\n\nyou was given a few things to check in another my message on the same day.\nYou have not provided any feedback.\nIt is up to you how to implement your system, but you can with no doubt\nconsider your database as not encrypted with your approach. You (or\nprobably your management) have no understanding from which risks you\nprotect your data.\n\nRegards,\nVlad\n\n\nвс, 7 окт. 2018 г. в 11:33, ROS Didier <[email protected]>:\n\n> Hi Francisco\n>\n> Thank you for your remark.\n> You're right, but it's the only procedure I found to make search\n> on encrypted fields with good response times (using index) !\n>\n> Regarding access to the file system, our servers are in protected\n> network areas. few people can connect to it.\n>\n> it's not the best solution, but we have data encryption needs and\n> good performance needs too. I do not know how to do it except the specified\n> procedure..\n> if anyone has any proposals to put this in place, I'm interested.\n>\n> Thanks in advance\n>\n> Best Regards\n> Didier ROS\n>\n> -----Message d'origine-----\n> De : [email protected] [mailto:[email protected]]\n> Envoyé : dimanche 7 octobre 2018 17:58\n> À : ROS Didier <[email protected]>\n> Cc : [email protected]; [email protected];\n> [email protected]; [email protected]\n> Objet : Re: Why the index is not used ?\n>\n> ROS:\n>\n> On Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\n> ....\n> > - INSERT INTO cartedecredit(username,cc) SELECT 'individu ' ||\n> x.id, pgp_sym_encrypt('test value ' || x.id,\n> 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM\n> generate_series(1,100000) AS x(id);\n> > - CREATE INDEX idx_cartedecredit_cc02 ON\n> cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2,\n> cipher-algo=aes256'));\n>\n> If my french is not too rusty you are encrypting a credit-card, and then\n> storing an UNENCRYPTED copy in the index. So, getting it from the server is\n> trivial for anyone with filesystem access.\n>\n> Francisco Olarte.\n>\n>\n>\n> Ce message et toutes les pièces jointes (ci-après le 'Message') sont\n> établis à l'intention exclusive des destinataires et les informations qui y\n> figurent sont strictement confidentielles. Toute utilisation de ce Message\n> non conforme à sa destination, toute diffusion ou toute publication totale\n> ou partielle, est interdite sauf autorisation expresse.\n>\n> Si vous n'êtes pas le destinataire de ce Message, il vous est interdit de\n> le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou\n> partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de\n> votre système, ainsi que toutes ses copies, et de n'en garder aucune trace\n> sur quelque support que ce soit. Nous vous remercions également d'en\n> avertir immédiatement l'expéditeur par retour du message.\n>\n> Il est impossible de garantir que les communications par messagerie\n> électronique arrivent en temps utile, sont sécurisées ou dénuées de toute\n> erreur ou virus.\n> ____________________________________________________\n>\n> This message and any attachments (the 'Message') are intended solely for\n> the addressees. The information contained in this Message is confidential.\n> Any use of information contained in this Message not in accord with its\n> purpose, any dissemination or disclosure, either whole or partial, is\n> prohibited except formal approval.\n>\n> If you are not the addressee, you may not copy, forward, disclose or use\n> any part of it. 
"msg_date": "Sun, 7 Oct 2018 11:48:20 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
{
"msg_contents": "Additionally it is not clear why you want to search in table on encrypted\ndata. Usually you match user with it's unpersonalized data (such as login,\nuser ID) and then decrypt personalized data. If you need to store user\nidentifying data encrypted as well (e.g. bank account number) you can use a\ndeterministic algorithm for it (without salt) because it is guaranteed to\nbe unique and you don't need to have different encrypted data for two same\ninput strings.\n\nVlad\n\nAdditionally it is not clear why you want to search in table on encrypted data. Usually you match user with it's unpersonalized data (such as login, user ID) and then decrypt personalized data. If you need to store user identifying data encrypted as well (e.g. bank account number) you can use a deterministic algorithm for it (without salt) because it is guaranteed to be unique and you don't need to have different encrypted data for two same input strings.Vlad",
"msg_date": "Sun, 7 Oct 2018 12:32:46 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
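A small sketch of the lookup pattern Vlad describes, reusing the cartedecredit schema and pgcrypto options from earlier in the thread: the row is located through a non-sensitive, indexed key, and only the matched row's ciphertext is decrypted (the card_id value is illustrative).

    -- locate the row by an unpersonalized, indexed key; decrypt only what was matched
    SELECT username,
           pgp_sym_decrypt(cc, 'motdepasse', 'compress-algo=2, cipher-algo=aes256') AS cc_plain
    FROM cartedecredit
    WHERE card_id = 42;   -- primary-key index scan; no decryption is needed to find the row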
{
"msg_contents": "Hi,\n\nOn 10/07/2018 08:32 PM, ROS Didier wrote:\n> Hi Francisco\n> \n> \tThank you for your remark. \n> You're right, but it's the only procedure I found to make search on\n> encrypted fields with good response times (using index) !\n> \n\nUnfortunately, that kinda invalidates the whole purpose of in-database\nencryption - you'll have encrypted on-disk data in one place, and then\nplaintext right next to it. If you're dealing with credit card numbers,\nthen you presumably care about PCI DSS, and this is likely a direct\nviolation of that.\n\n> Regarding access to the file system, our servers are in protected\nnetwork areas. few people can connect to it.\n> \n\nThen why do you need encryption at all? If you assume access to the\nfilesystem / storage is protected, why do you bother with encryption?\nWhat is your threat model?\n\n> it's not the best solution, but we have data encryption needs and\n> good performance needs too. I do not know how to do it except the\n> specified procedure..\n>\n> if anyone has any proposals to put this in place, I'm interested.\n> \n\nOne thing you could do is hashing the value and then searching by the\nhash. So aside from having the encrypted column you'll also have a short\nhash, and you may use it in the query *together* with the original\ncondition. It does not need to be unique (in fact it should not be to\nmake it impossible to reverse the hash), but it needs to have enough\ndistinct values to make the index efficient. Say, 10k values should be\nenough, because that means 0.01% selectivity.\n\nSo the function might look like this, for example:\n\n CREATE FUNCTION cchash(text) RETURNS int AS $$\n SELECT abs(hashtext($1)) % 10000;\n $$ LANGUAGE sql;\n\nand then be used like this:\n\n CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cchash(cc));\n\nand in the query\n\n SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit\n WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32'\n AND cchash(cc) = cchash('test value 32');\n\nObviously, this does not really solve the issues with having to pass the\npassword to the query, making it visible in pg_stat_activity, various\nlogs etc.\n\nWhich is why people generally use FDE for the whole disk, which is\ntransparent and provides the same level of protection.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 7 Oct 2018 22:07:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
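One detail worth noting about the hashing idea: pgp_sym_encrypt() generates a fresh random session key on every call, so two encryptions of the same plaintext produce different bytea values, and a hash computed over the stored ciphertext cannot match a hash of the search string. A minimal sketch of the same bucketing idea, assuming the hash is instead computed from the plaintext at insert time and kept in its own indexed column (the cc_bucket column and index name are illustrative, not from the thread):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    CREATE FUNCTION cchash(text) RETURNS int AS $$
      SELECT abs(hashtext($1)) % 10000;   -- coarse, deliberately non-unique bucket
    $$ LANGUAGE sql IMMUTABLE;

    CREATE TABLE cartedecredit(
      card_id   SERIAL PRIMARY KEY,
      username  VARCHAR(100),
      cc        bytea,     -- pgp_sym_encrypt() ciphertext
      cc_bucket int        -- hash of the plaintext, computed while the plaintext is available
    );

    INSERT INTO cartedecredit(username, cc, cc_bucket)
    SELECT 'individu ' || x.id,
           pgp_sym_encrypt('test value ' || x.id, 'motdepasse', 'compress-algo=2, cipher-algo=aes256'),
           cchash('test value ' || x.id)
    FROM generate_series(1,100000) AS x(id);

    CREATE INDEX idx_cartedecredit_bucket ON cartedecredit(cc_bucket);

    -- the bucket index narrows the scan to roughly 1/10000 of the rows;
    -- only those candidate rows are decrypted and re-checked
    SELECT pgp_sym_decrypt(cc, 'motdepasse', 'compress-algo=2, cipher-algo=aes256')
    FROM cartedecredit
    WHERE cc_bucket = cchash('test value 32')
      AND pgp_sym_decrypt(cc, 'motdepasse', 'compress-algo=2, cipher-algo=aes256') = 'test value 32';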
{
"msg_contents": "Hi Didier,\n\nI’m sorry to tell you that you are probably doing something (ie handling/storing credit cards) which would mean you have to comply with PCI DSS requirements.\n\nAs such you should probably have a QSA (auditor) who you can run any proposed solution by (so you know they will be comfortable with it when they do their audit).\n\nI think your current solution would be frowned upon because:\n- cards are effectively stored in plaintext in the index.\n- your encryption/decryption is being done in database, rather than by something with that as its sole role.\n\nPeople have already mentioned the former so I won’t go into it further\n\nBut for the second part if someone can do a \n\n>> Select pgp_sym_decrypt(cc)\n\nthen you are one sql injection away from having your card data stolen. You do have encryption, but in practice everything is available unencrypted so in practice the encryption is more of a tick in a box than an actual defence against bad things happening. In a properly segmented system even your DBA should not be able to access decrypted card data.\n\nYou probably should look into doing something like:\n\n- store the first 6 and last 4 digits of the card unencrypted.\n- store the remaining card digits encrypted\n- have the encryption/decryption done by a seperate service called by your application code outside the db.\n\nYou haven’t gone into what your requirements re search are (or I missed them) but while the above won’t give you a fast exact cc lookup in practice being able to search using the first 6 and last 4 can get you a small enough subset than can then be filtered after decrypting the middle. \n\nWe are straying a little off PostgreSQL topic here but if you and/or your management aren’t already looking at PCI DSS compliance I’d strongly recommend you do so. It can seem like a pain but it is much better to take that pain up front rather than having to reengineer everything later. There are important security aspects it helps make sure you cover but maybe some business aspects (ie possible partners who won’t be able to deal with you without your compliance sign off documentation).\n\n\nThe alternative, if storing cc data isn’t a core requirement, is to not store the credit card data at all. That is generally the best solution if it meets your needs, ie if you just want to accept payments then use a third party who is PCI compliant to handle the cc part.\n\nI hope that helps a little.\n\nPaul\n\n\n\n\nSent from my iPhone\n\n> On 8 Oct 2018, at 05:32, ROS Didier <[email protected]> wrote:\n> \n> Hi Francisco\n> \n> Thank you for your remark. \n> You're right, but it's the only procedure I found to make search on encrypted fields with good response times (using index) !\n> \n> Regarding access to the file system, our servers are in protected network areas. few people can connect to it.\n> \n> it's not the best solution, but we have data encryption needs and good performance needs too. 
I do not know how to do it except the specified procedure..\n> if anyone has any proposals to put this in place, I'm interested.\n> \n> Thanks in advance\n> \n> Best Regards\n> Didier ROS\n> \n> -----Message d'origine-----\n> De : [email protected] [mailto:[email protected]] \n> Envoyé : dimanche 7 octobre 2018 17:58\n> À : ROS Didier <[email protected]>\n> Cc : [email protected]; [email protected]; [email protected]; [email protected]\n> Objet : Re: Why the index is not used ?\n> \n> ROS:\n> \n>> On Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\n>> ....\n>> - INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\n>> - CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\n> \n> If my french is not too rusty you are encrypting a credit-card, and then storing an UNENCRYPTED copy in the index. So, getting it from the server is trivial for anyone with filesystem access.\n> \n> Francisco Olarte.\n> \n> \n> \n> Ce message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n> \n> Si vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n> \n> Il est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n> ____________________________________________________\n> \n> This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n> \n> If you are not the addressee, you may not copy, forward, disclose or use any part of it. 
"msg_date": "Mon, 8 Oct 2018 09:10:30 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
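A minimal sketch of the layout Paul describes, under the assumption that the middle digits are encrypted and decrypted by a separate service outside the database and only the resulting ciphertext is stored (all table, column, and index names here are illustrative):

    CREATE TABLE carte_pan(
      pan_id      bigserial PRIMARY KEY,
      pan_first6  char(6) NOT NULL,   -- issuer prefix, kept in clear
      pan_last4   char(4) NOT NULL,   -- kept in clear for display and coarse search
      pan_mid_enc bytea   NOT NULL    -- middle digits, encrypted by the external service
    );

    CREATE INDEX idx_carte_pan_lookup ON carte_pan(pan_first6, pan_last4);

    -- coarse lookup: the composite index reduces the candidates to a small set,
    -- which the application decrypts and filters outside the database
    SELECT pan_id, pan_mid_enc
    FROM carte_pan
    WHERE pan_first6 = '497010'
      AND pan_last4  = '0123';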
{
"msg_contents": "Hi Vlad\r\n\r\nYour remark is very interesting. You want to say that it's better to run SQL queries on unpersonalized data, and then retrieve the encrypted data for those records.\r\nOK, I take this recommendation into account and I will forward it to my company's projects.\r\n\r\nNevertheless, you say that it is possible, in spite of everything, to use indexes on the encrypted data by using deterministic algorithms.\r\nCan you tell me some examples of these algorithms?\r\n\r\nThanks in advance\r\n\r\nBest Regards\r\n\r\n[cid:[email protected]]\r\n\r\n\r\nDidier ROS\r\nExpertise SGBD\r\n\r\n\r\n\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : dimanche 7 octobre 2018 21:33\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nAdditionally it is not clear why you want to search in table on encrypted data. Usually you match user with it's unpersonalized data (such as login, user ID) and then decrypt personalized data. If you need to store user identifying data encrypted as well (e.g. bank account number) you can use a deterministic algorithm for it (without salt) because it is guaranteed to be unique and you don't need to have different encrypted data for two same input strings.\r\n\r\nVlad\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Mon, 8 Oct 2018 06:29:57 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "Hi Virendra \r\n\r\n\tYou think that outside encryption of the database is the best solution\t ?\r\n How do you manage the encryption key ?\r\n\tCan you give me some examples of this kind of solution.\r\n\r\nBest Regards\r\nDidier ROS\r\n\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]] \r\nEnvoyé : dimanche 7 octobre 2018 20:41\r\nÀ : ROS Didier <[email protected]>; [email protected]\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : RE: Why the index is not used ?\r\n\r\nYou can consider outside DB encryption which is less of worry for performance and data at rest will be encrypted.\r\n\r\nRegards,\r\nVirendra\r\n-----Original Message-----\r\nFrom: ROS Didier [mailto:[email protected]]\r\nSent: Sunday, October 07, 2018 2:33 PM\r\nTo: [email protected]\r\nCc: [email protected]; [email protected]; [email protected]; [email protected]\r\nSubject: RE: Why the index is not used ?\r\n\r\nHi Francisco\r\n\r\nThank you for your remark.\r\nYou're right, but it's the only procedure I found to make search on encrypted fields with good response times (using index) !\r\n\r\nRegarding access to the file system, our servers are in protected network areas. few people can connect to it.\r\n\r\nit's not the best solution, but we have data encryption needs and good performance needs too. I do not know how to do it except the specified procedure..\r\nif anyone has any proposals to put this in place, I'm interested.\r\n\r\nThanks in advance\r\n\r\nBest Regards\r\nDidier ROS\r\n\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]] Envoyé : dimanche 7 octobre 2018 17:58 À : ROS Didier <[email protected]> Cc : [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nROS:\r\n\r\nOn Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\r\n....\r\n> - INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\r\n> - CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\r\n\r\nIf my french is not too rusty you are encrypting a credit-card, and then storing an UNENCRYPTED copy in the index. So, getting it from the server is trivial for anyone with filesystem access.\r\n\r\nFrancisco Olarte.\r\n\r\n\r\n\r\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\r\n\r\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. 
"msg_date": "Mon, 8 Oct 2018 08:32:36 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "Dear Didier,\n\n\nLe lundi 08 octobre 2018 ᅵ 08:32 +0000, ROS Didier a ᅵcrit :\n> Hi Virendra \n> \tYou think that outside encryption of the database is the best solution\t ?\n> How do you manage the encryption key ?\n> \tCan you give me some examples of this kind of solution.\n> Best Regards\n> Didier ROS\n\nIf I understand your need well, you need to store credit card information into your database.\n\nThis is ruled by the Payment Card Industry Data Security Standard (aka PCI DSS).\n\nI attend some years ago a real good presentation from Denish Patel, a well known community member.\n\nI saw this talk in pgconf 2015 in Moscow: https://pgconf.ru/media2015c/patel.pdf\n\nI recommend you read it, if you had not already? It shows code examples, etc.\n\n\nMy 2 cents...\n\n\n-- \nJean-Paul Argudo\n\n\n",
"msg_date": "Mon, 08 Oct 2018 10:44:22 +0200",
"msg_from": "Jean-Paul Argudo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
{
"msg_contents": "Hi Vlad\r\nSorry for this delay, but apparently the subject is of interest to many people in the community. I received a lot of comments and answers.\r\nI wrote my answers in the body of your message below\r\n\r\nBest Regards\r\nDidier\r\n\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : samedi 6 octobre 2018 18:51\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nHello Didier,\r\n\r\n>>\r\n(3), (5) to find the match, you decrypt the whole table, apparently this take quite a long time.\r\nIndex cannot help here because indexes work on exact match of type and value, but you compare mapped value, not indexed. Functional index should help, but like it was said, it against the idea of encrypted storage.\r\n<<\r\nI tested the solution of the functional index. It works very well, but the data is no longer encrypted. This is not the right solution\r\n>>\r\n(6) I never used pgp_sym_encrypt() but I see that in INSERT INTO you supplied additional parameter 'compress-algo=2, cipher-algo=aes256' while in (6) you did not. Probably this is the reason.\r\n\r\nIn general matching indexed bytea column should use index, you can ensure in this populating the column unencrypted and using 'test value 32'::bytea for match.\r\nIn you case I believe pgp_sym_encrypt() is not marked as STABLE or IMMUTABLE that's why it will be evaluated for each row (very inefficient) and cannot use index. From documentation:\r\n\r\n\"Since an index scan will evaluate the comparison value only once, not once at each row, it is not valid to use a VOLATILE function in an index scan condition.\"\r\nhttps://www.postgresql.org/docs/10/static/xfunc-volatility.html\r\n\r\nIf you cannot add STABLE/IMMUTABLE to pgp_sym_encrypt() (which apparently should be there), you can encrypt searched value as a separate operation and then search in the table using basic value match.\r\n>>\r\nyou're right about the missing parameter 'compress-algo=2, cipher-algo=aes256'. I agree with you.\r\n(1) I have tested your proposal :\r\nDROP TABLE cartedecredit;\r\nCREATE TABLE cartedecredit(card_id SERIAL PRIMARY KEY, username VARCHAR(100), cc bytea);\r\nINSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, decode('test value ' || x.id,'escape') FROM generate_series(1,100000) AS x(id);\r\n\r\nè I inserted unencrypted data into the bytea column\r\npostgres=# select * from cartedecredit limit 5 ;\r\ncard_id | username | cc\r\n---------+-------------+------------------------------\r\n 1 | individu 1 | \\x746573742076616c75652031\r\n 2 | individu 2 | \\x746573742076616c75652032\r\n 3 | individu 3 | \\x746573742076616c75652033\r\n 4 | individu 4 | \\x746573742076616c75652034\r\n 5 | individu 5 | \\x746573742076616c75652035\r\nCREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cc);\r\nSELECT encode(cc,'escape') FROM cartedecredit WHERE cc=decode('test value 32','escape');\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\nIndex Only Scan using idx_cartedecredit_cc02 on cartedecredit (cost=0.42..8.44 rows=1 width=32) (actual time=0.033..0.034 rows=1 loops=1)\r\n Index Cond: (cc = '\\x746573742076616c7565203332'::bytea)\r\n Heap Fetches: 1\r\nPlanning time: 0.130 ms\r\nExecution time: 0.059 ms\r\n(5 rows)\r\n\r\nè It works but the data is not encrypted. 
everyone can have access to the data\r\n(2) 2nd test :\r\nDROP TABLE cartedecredit;\r\nCREATE TABLE cartedecredit(card_id SERIAL PRIMARY KEY, username VARCHAR(100), cc bytea);\r\nINSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\r\npostgres=# select * from cartedecredit limit 5 ;\r\n>>\r\ncard_id | username | cc\r\n---------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n-------------------\r\n 1 | individu 1 | \\xc30d0409030296304d007bf50ed768d2480153cd4a4e2d240249f94b31ec168391515ea80947f97970f7a4e058bff648f752df194498dd480c3b8a5c0d2942f90c6dde21a6b9bf4e9fd7986c6f986e3783\r\n647e7a6205b48c03\r\n 2 | individu 2 | \\xc30d0409030257b50bc0e6bcd8d270d248010984b60126af01ba922da27e2e78c33110f223f0210cf34da77243277305254cba374708d447fc7d653dd9e00ff9a96803a2c47ee95269534f2c24fab1c9dc\r\n31f7909ca7adeaf0\r\n 3 | individu 3 | \\xc30d040903023c5f8cb688c7945275d24801a518d70c6cc2d4a31f99f3738e736c5312f78bb9c3cc187a65d0cf7f893dbc9448825d39b79df5d0460508fc93336c2bec7794893bb08a290afd649ae15fe2\r\n2b0433eff89222f7\r\n 4 | individu 4 | \\xc30d04090302dcc3bb49a41b297578d2480167f17b09004e7dacc0891fc0cc7276dd551273eec72644520f8d0543abe8e795af7c1b84fc8e5b4adc33994c479d5ff17988e60bf446dc8c77caf3f3b008c1\r\nc06bf0a3c4df41ae\r\n 5 | individu 5 | \\xc30d04090302a8c3552fb4b297b567d24801c060fb9241355b49717479107ff59d2928b3c0d9001dabd0035a0419b1a54c0b15f1907a981f08a4227784ac5cf3994b32ba594eff35933825730ac42af8ca\r\n76bd497c5079b127\r\nCREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cc);\r\nSELECT pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM cartedecredit WHERE cc=pgp_sym_encrypt('test value 32', 'motdepasse'::text,'compress-algo=2, cipher-algo=aes256');\r\npgp_sym_decrypt\r\n-----------------\r\n(0 rows)\r\n\r\nè No row returned !\r\nTime: 116185.300 ms (01:56.185)\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------\r\nSeq Scan on cartedecredit (cost=0.00..3309.00 rows=1 width=32) (actual time=105969.099..105969.099 rows=0 loops=1)\r\n Filter: (cc = pgp_sym_encrypt('test value 32'::text, 'motdepasse'::text, 'compress-algo=2, cipher-algo=aes256'::text))\r\n Rows Removed by Filter: 100000\r\nPlanning time: 0.150 ms\r\nExecution time: 105969.166 ms\r\n(5 rows)\r\nTime: 105969.912 ms (01:45.970)\r\n-> Index is not used .\r\nBest Regards\r\nVlad\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. 
"msg_date": "Mon, 8 Oct 2018 11:47:06 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
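The zero-row result in the second test above follows directly from pgp_sym_encrypt() being non-deterministic: each call picks a fresh random session key, so re-encrypting the search value can never equal the ciphertext already stored in the table, regardless of indexing. A quick way to confirm this, assuming pgcrypto is installed:

    -- two encryptions of the same plaintext with the same password differ,
    -- so WHERE cc = pgp_sym_encrypt(...) cannot match any stored row
    SELECT pgp_sym_encrypt('test value 32', 'motdepasse') =
           pgp_sym_encrypt('test value 32', 'motdepasse') AS same_ciphertext;   -- false

    -- decryption still works, because the session key travels inside the ciphertext
    SELECT pgp_sym_decrypt(pgp_sym_encrypt('test value 32', 'motdepasse'), 'motdepasse');   -- 'test value 32'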
{
"msg_contents": "Hi Phil\r\n\r\n\tThank you for this recommendation, but I posted on this public list only generic examples that have nothing to do with the works done in my company.\r\n\tThese examples serve me only to discuss about the subject of data encryption and performance\r\n\tMy answers to your remarks :\r\n\r\n>>\r\nWhy do you need to search by credit card number?\r\n<<\r\n Again, this is just an example. I just want to find a solution to query a column containing encrypted data with good performance.\r\n\r\n>>\r\none option is to use an encryption function that doesn't salt the data\r\n<<\r\nI am interested. Can you give some examples of these encryption function that doesn't salt the data.\r\n\r\nBest Regards\r\nDidier ROS\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]] \r\nEnvoyé : dimanche 7 octobre 2018 21:17\r\nÀ : ROS Didier <[email protected]>; [email protected]\r\nObjet : RE: Why the index is not used ?\r\n\r\nHello Didier,\r\n\r\nYour email is [email protected]. Are you working at Electricite de France, and storing actual customers' credit card details? How many millions of them?\r\n\r\nNote that this mailing list is public; people looking for targets with poor security from which they can harvest credit card numbers might be reading it.\r\nAnd after you are hacked and all your customers' credit card details are made public, someone will find this thread.\r\n\r\n> it's not the best solution, but we have data encryption needs and good \r\n> performance needs too. I do not know how to do it except the specified \r\n> procedure..\r\n\r\nYou should probably employ someone who knows what they are doing.\r\n\r\nSorry for being so direct, but really... storing large quantities of credit card details is the text book example of something that has to be done correctly.\r\n\r\n> if anyone has any proposals to put this in place, I'm interested.\r\n\r\nWhy do you need to search by credit card number?\r\n\r\nIf you really really need to do that, then one option is to use an encryption function that doesn't salt the data. Or you could store part of the number (last 4 digits?), or an unsalted hash of the number, unencrypted and indexed, and then you need only to sequentially decrypt (using the salted encryption) e.g. 1/10000 of the card numbers. But there are complex security issues and tradeoffs involved here. You probably need to comply with regulations (e.g. \"PCI standards\") which will specify what is allowed and what isn't. And if you didn't already know that, you shouldn't be doing this.\r\n\r\n\r\nGood luck, I suppose.\r\n\r\nPhil.\r\n\r\nP.S. It seems that you were asking about this a year ago, and got the same answers...\r\n\r\n\r\n\r\n\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. 
"msg_date": "Mon, 8 Oct 2018 12:02:50 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "Hi Vlad\r\n OK, I take into account your remark about the need to do research on encrypted data.\r\nMy answers to your remarks :\r\n>>\r\nyou can use a deterministic algorithm for it (without salt)\r\n<<\r\nCan you give me on of these deterministic algorithms(without salt) ?\r\n\r\nBest Regards\r\n\r\nDidier\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : dimanche 7 octobre 2018 21:33\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nAdditionally it is not clear why you want to search in table on encrypted data. Usually you match user with it's unpersonalized data (such as login, user ID) and then decrypt personalized data. If you need to store user identifying data encrypted as well (e.g. bank account number) you can use a deterministic algorithm for it (without salt) because it is guaranteed to be unique and you don't need to have different encrypted data for two same input strings.\r\n\r\nVlad\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n\n\n\n\n\n\n\nHi Vlad\r\n\n \r\nOK, I take into account your remark about the need to do research on encrypted data.\nMy answers to your remarks :\n>> \nyou can use a deterministic algorithm for it (without salt)\n<< \nCan you give me on of these deterministic algorithms(without salt) ?\n \nBest Regards\n \nDidier\nDe : [email protected] [mailto:[email protected]]\r\n\nEnvoyé : dimanche 7 octobre 2018 21:33\nÀ : ROS Didier <[email protected]>\nCc : [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\nObjet : Re: Why the index is not used ?\n \n\nAdditionally it is not clear why you want to search in table on encrypted data. Usually you match user with it's unpersonalized data (such as login, user ID) and then decrypt personalized data. 
"msg_date": "Mon, 8 Oct 2018 12:07:44 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "ROS Didier wrote:\n> Can you give some examples of these encryption function \n> that doesn't salt the data.\n\nencrypt(data, 'motdepass', 'aes')\n\n\nRegards, Phil.\n\n\n\n\n\n",
"msg_date": "Mon, 08 Oct 2018 14:14:45 +0100",
"msg_from": "\"Phil Endecott\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Why the index is not used ?"
},
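A minimal sketch of how the raw, unsalted encrypt() that Phil mentions could be combined with an ordinary btree index: because the same plaintext and key always produce the same ciphertext, the search value can be encrypted once and matched by equality. The caveats already raised in the thread still apply (deterministic encryption reveals which rows share a plaintext, the key appears in the SQL text, and PCI DSS constraints remain); the table and index names are illustrative.

    CREATE TABLE cartedecredit_det(
      card_id  SERIAL PRIMARY KEY,
      username VARCHAR(100),
      cc       bytea
    );

    INSERT INTO cartedecredit_det(username, cc)
    SELECT 'individu ' || x.id,
           encrypt(convert_to('test value ' || x.id, 'UTF8'), 'motdepasse', 'aes')
    FROM generate_series(1,100000) AS x(id);

    CREATE INDEX idx_cartedecredit_det_cc ON cartedecredit_det(cc);

    -- the right-hand side is computed once, and the btree index on cc can be used
    SELECT card_id, username,
           convert_from(decrypt(cc, 'motdepasse', 'aes'), 'UTF8') AS cc_plain
    FROM cartedecredit_det
    WHERE cc = encrypt(convert_to('test value 32', 'UTF8'), 'motdepasse', 'aes');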
{
"msg_contents": "Hi Tomas\r\n\r\n Thank you for your answer and recommendation which is very interesting. I'm going to study the PCI DSS document right now.\r\n- Here are my answer to your question :\r\n>>\r\nWhat is your threat model?\r\n<<\r\nwe want to prevent access to sensitive data for everyone except those who have the encryption key.\r\nin case of files theft, backups theft, dumps theft, we do not want anyone to access sensitive data.\r\n\r\n- I have tested the solution you proposed, it works great.\r\n\r\nBest Regards\r\n\r\nDidier ROS\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : dimanche 7 octobre 2018 22:08\r\nÀ : ROS Didier <[email protected]>; [email protected]\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nHi,\r\n\r\nOn 10/07/2018 08:32 PM, ROS Didier wrote:\r\n> Hi Francisco\r\n>\r\n> Thank you for your remark.\r\n> You're right, but it's the only procedure I found to make search on\r\n> encrypted fields with good response times (using index) !\r\n>\r\n\r\nUnfortunately, that kinda invalidates the whole purpose of in-database encryption - you'll have encrypted on-disk data in one place, and then plaintext right next to it. If you're dealing with credit card numbers, then you presumably care about PCI DSS, and this is likely a direct violation of that.\r\n\r\n> Regarding access to the file system, our servers are in protected\r\nnetwork areas. few people can connect to it.\r\n>\r\n\r\nThen why do you need encryption at all? If you assume access to the filesystem / storage is protected, why do you bother with encryption?\r\nWhat is your threat model?\r\n\r\n> it's not the best solution, but we have data encryption needs and good\r\n> performance needs too. I do not know how to do it except the specified\r\n> procedure..\r\n>\r\n> if anyone has any proposals to put this in place, I'm interested.\r\n>\r\n\r\nOne thing you could do is hashing the value and then searching by the hash. So aside from having the encrypted column you'll also have a short hash, and you may use it in the query *together* with the original condition. It does not need to be unique (in fact it should not be to make it impossible to reverse the hash), but it needs to have enough distinct values to make the index efficient. 
Say, 10k values should be enough, because that means 0.01% selectivity.\r\n\r\nSo the function might look like this, for example:\r\n\r\n  CREATE FUNCTION cchash(text) RETURNS int AS $$\r\n    SELECT abs(hashtext($1)) % 10000;\r\n  $$ LANGUAGE sql;\r\n\r\nand then be used like this:\r\n\r\n  CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cchash(cc));\r\n\r\nand in the query\r\n\r\n  SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit\r\n  WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32'\r\n  AND cchash(cc) = cchash('test value 32');\r\n\r\nObviously, this does not really solve the issues with having to pass the password to the query, making it visible in pg_stat_activity, various logs etc.\r\n\r\nWhich is why people generally use FDE for the whole disk, which is transparent and provides the same level of protection.\r\n\r\nregards\r\n\r\n--\r\nTomas Vondra                  http://www.2ndQuadrant.com\r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 8 Oct 2018 14:10:41 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "Hi Paul\r\n\r\n Thank you very much for your feedback which is very informative.\r\n I understand that concerning the encryption of credit card numbers, it is imperative to respect the PCI DSS document. I am going to study it.\r\n However, I would like to say that I chose my example badly by using a table storing credit card numbers. In fact, my problem is more generic.\r\nI want to implement a solution that encrypts “sensitive” data and can retrieve data with good performance (by using an index).\r\nI find that the solution you propose is very interesting and I am going to test it.\r\n\r\nBest Regards\r\nDidier ROS\r\n\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : lundi 8 octobre 2018 00:11\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nHi Didier,\r\n\r\nI’m sorry to tell you that you are probably doing something (ie handling/storing credit cards) which would mean you have to comply with PCI DSS requirements.\r\n\r\nAs such you should probably have a QSA (auditor) who you can run any proposed solution by (so you know they will be comfortable with it when they do their audit).\r\n\r\nI think your current solution would be frowned upon because:\r\n- cards are effectively stored in plaintext in the index.\r\n- your encryption/decryption is being done in database, rather than by something with that as its sole role.\r\n\r\nPeople have already mentioned the former so I won’t go into it further\r\n\r\nBut for the second part if someone can do a\r\n\r\nSelect pgp_sym_decrypt(cc)\r\n\r\nthen you are one sql injection away from having your card data stolen. You do have encryption, but in practice everything is available unencrypted so in practice the encryption is more of a tick in a box than an actual defence against bad things happening. In a properly segmented system even your DBA should not be able to access decrypted card data.\r\n\r\nYou probably should look into doing something like:\r\n\r\n- store the first 6 and last 4 digits of the card unencrypted.\r\n- store the remaining card digits encrypted\r\n- have the encryption/decryption done by a seperate service called by your application code outside the db.\r\n\r\nYou haven’t gone into what your requirements re search are (or I missed them) but while the above won’t give you a fast exact cc lookup in practice being able to search using the first 6 and last 4 can get you a small enough subset than can then be filtered after decrypting the middle.\r\n\r\nWe are straying a little off PostgreSQL topic here but if you and/or your management aren’t already looking at PCI DSS compliance I’d strongly recommend you do so. It can seem like a pain but it is much better to take that pain up front rather than having to reengineer everything later. There are important security aspects it helps make sure you cover but maybe some business aspects (ie possible partners who won’t be able to deal with you without your compliance sign off documentation).\r\n\r\n\r\nThe alternative, if storing cc data isn’t a core requirement, is to not store the credit card data at all. 
That is generally the best solution if it meets your needs, ie if you just want to accept payments then use a third party who is PCI compliant to handle the cc part.\r\n\r\nI hope that helps a little.\r\n\r\nPaul\r\n\r\n\r\nSent from my iPhone\r\n\r\nOn 8 Oct 2018, at 05:32, ROS Didier <[email protected]> wrote:\r\nHi Francisco\r\n\r\n Thank you for your remark.\r\n You're right, but it's the only procedure I found to make search on encrypted fields with good response times (using index) !\r\n\r\n Regarding access to the file system, our servers are in protected network areas. few people can connect to it.\r\n\r\n it's not the best solution, but we have data encryption needs and good performance needs too. I do not know how to do it except the specified procedure..\r\n if anyone has any proposals to put this in place, I'm interested.\r\n\r\n Thanks in advance\r\n\r\nBest Regards\r\nDidier ROS\r\n\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]]\r\nEnvoyé : dimanche 7 octobre 2018 17:58\r\nÀ : ROS Didier <[email protected]>\r\nCc : [email protected]; [email protected]; [email protected]; [email protected]\r\nObjet : Re: Why the index is not used ?\r\n\r\nROS:\r\n\r\nOn Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\r\n....\r\n\r\n- INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\r\n- CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\r\n\r\nIf my french is not too rusty you are encrypting a credit-card, and then storing an UNENCRYPTED copy in the index. So, getting it from the server is trivial for anyone with filesystem access.\r\n\r\nFrancisco Olarte.",
"msg_date": "Mon, 8 Oct 2018 14:29:26 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
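A rough sketch of the layout Paul describes, to make it concrete: the first 6 and last 4 digits stay searchable in clear, and the middle digits are held only by an external encryption service. Every name below is illustrative rather than taken from the thread:

    -- the issuer BIN (first 6) and last 4 digits stay searchable; the middle digits never enter the database
    CREATE TABLE card_reference (
        id        bigserial PRIMARY KEY,
        bin       char(6) NOT NULL,   -- first 6 digits
        last4     char(4) NOT NULL,   -- last 4 digits
        vault_ref uuid    NOT NULL    -- token returned by the external encryption service
    );

    CREATE INDEX card_reference_bin_last4_idx ON card_reference (bin, last4);

    -- narrow the candidates with the clear parts, then resolve the few vault_ref tokens
    -- through the external service and do the final exact match in the application
    SELECT id, vault_ref
    FROM card_reference
    WHERE bin = '411111' AND last4 = '1111';

Because bin and last4 together are highly selective, the application only has to decrypt a handful of candidate rows to find an exact match.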
{
"msg_contents": "Dear Jean-Paul\n\n\tThank you very much for this link which is actually very interesting. I am going to study it carefully.\n\tBut my problem is more generic: \n\tHow to set up the encryption of sensitive data and have good performance (using an index by example) ?. \n\tApparently it is not obvious as that.\n\nBest Regards\n\nDidier ROS\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] \nEnvoyé : lundi 8 octobre 2018 10:44\nÀ : [email protected]\nObjet : Re: Why the index is not used ?\n\nDear Didier,\n\n\nLe lundi 08 octobre 2018 à 08:32 +0000, ROS Didier a écrit :\n> Hi Virendra \n> \tYou think that outside encryption of the database is the best solution\t ?\n> How do you manage the encryption key ?\n> \tCan you give me some examples of this kind of solution.\n> Best Regards\n> Didier ROS\n\nIf I understand your need well, you need to store credit card information into your database.\n\nThis is ruled by the Payment Card Industry Data Security Standard (aka PCI DSS).\n\nI attend some years ago a real good presentation from Denish Patel, a well known community member.\n\nI saw this talk in pgconf 2015 in Moscow: https://pgconf.ru/media2015c/patel.pdf\n\nI recommend you read it, if you had not already? It shows code examples, etc.\n\n\nMy 2 cents...\n\n\n-- \nJean-Paul Argudo\n\n\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n",
"msg_date": "Mon, 8 Oct 2018 15:32:45 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Why the index is not used ?"
},
{
"msg_contents": "Hi,\n\nOn 10/08/2018 04:10 PM, ROS Didier wrote:\n> Hi Tomas\n> \n> Thank you for your answer and recommendation which is very\n> interesting. I'm going to study the PCI DSS document right now.\n> \n> * Here are my answer to your question :\n> \n> />>/\n> /What is your threat model?/\n> /<</\n> we want to prevent access to sensitive data for everyone except those\n> who have the encryption key.\n> in case of files theft, backups theft, dumps theft, we do not want\n> anyone to access sensitive data.\n> \n\nThe thing is - encryption is not panacea. The interesting question is\nwhether this improves security compared to simply using FDE and regular\naccess rights (which are grantable at the column level).\n\nUsing those two pieces properly may very well be a better defense than\nnot well designed encryption scheme - and based on this discussion, it\ndoes not seem very polished / resilient.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 8 Oct 2018 22:00:15 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
},
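The column-level access rights Tomas refers to are ordinary GRANTs restricted to a column list. A minimal sketch against the cartedecredit table from this thread, with two hypothetical roles (reporting and billing_app):

    -- start from no access at all
    REVOKE ALL ON cartedecredit FROM PUBLIC;

    -- reporting users may read usernames but never the card column
    GRANT SELECT (username) ON cartedecredit TO reporting;

    -- only the application role can read and write the cc column
    GRANT SELECT (username, cc), INSERT (username, cc) ON cartedecredit TO billing_app;

Combined with full-disk encryption this covers theft of the underlying storage without keeping a plaintext copy of the data next to the ciphertext inside the database.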
{
"msg_contents": "Hi Didier,\n\nYes, credit cards are a very specific space that probably gets people who are familiar with it going a bit. By the time you factor in general security practices, specific PCI requirements, your threat model and likely business requirements (needing relatively free access to parts of the card number) the acceptable solution space narrows considerably.\n\nMore generally though I’d recommend reading:\n\nhttps://paragonie.com/blog/2017/05/building-searchable-encrypted-databases-with-php-and-sql\n\nas (even if you aren’t using PHP) it discusses several strategies and what makes them good/bad for different use cases and how to implement them well.\n\nI don’t think I’d consider the main solution discussed there particularly applicable to credit card data (mostly because the low entropy of card data makes it difficult to handle safely without additional per-row randomness added, though as always, consult your QSA) but it is generally interesting.\n\nPaul\n\nSent from my iPhone\n\n> On 9 Oct 2018, at 01:29, ROS Didier <[email protected]> wrote:\n> \n> Hi Paul\n> \n> Thank you very much for your feedback which is very informative.\n> I understand that concerning the encryption of credit card numbers, it is imperative to respect the PCI DSS document. I am going to study it.\n> However, I would like to say that I chose my example badly by using a table storing credit card numbers. In fact, my problem is more generic.\n> I want to implement a solution that encrypts “sensitive” data and can retrieve data with good performance (by using an index).\n> I find that the solution you propose is very interesting and I am going to test it.\n> \n> Best Regards\n> Didier ROS\n> \n> De : [email protected] [mailto:[email protected]] \n> Envoyé : lundi 8 octobre 2018 00:11\n> À : ROS Didier <[email protected]>\n> Cc : [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\n> Objet : Re: Why the index is not used ?\n> \n> Hi Didier,\n> \n> I’m sorry to tell you that you are probably doing something (ie handling/storing credit cards) which would mean you have to comply with PCI DSS requirements.\n> \n> As such you should probably have a QSA (auditor) who you can run any proposed solution by (so you know they will be comfortable with it when they do their audit).\n> \n> I think your current solution would be frowned upon because:\n> - cards are effectively stored in plaintext in the index.\n> - your encryption/decryption is being done in database, rather than by something with that as its sole role.\n> \n> People have already mentioned the former so I won’t go into it further\n> \n> But for the second part if someone can do a \n> \n> Select pgp_sym_decrypt(cc)\n> \n> then you are one sql injection away from having your card data stolen. You do have encryption, but in practice everything is available unencrypted so in practice the encryption is more of a tick in a box than an actual defence against bad things happening. 
In a properly segmented system even your DBA should not be able to access decrypted card data.\n> \n> You probably should look into doing something like:\n> \n> - store the first 6 and last 4 digits of the card unencrypted.\n> - store the remaining card digits encrypted\n> - have the encryption/decryption done by a seperate service called by your application code outside the db.\n> \n> You haven’t gone into what your requirements re search are (or I missed them) but while the above won’t give you a fast exact cc lookup in practice being able to search using the first 6 and last 4 can get you a small enough subset than can then be filtered after decrypting the middle. \n> \n> We are straying a little off PostgreSQL topic here but if you and/or your management aren’t already looking at PCI DSS compliance I’d strongly recommend you do so. It can seem like a pain but it is much better to take that pain up front rather than having to reengineer everything later. There are important security aspects it helps make sure you cover but maybe some business aspects (ie possible partners who won’t be able to deal with you without your compliance sign off documentation).\n> \n> \n> The alternative, if storing cc data isn’t a core requirement, is to not store the credit card data at all. That is generally the best solution if it meets your needs, ie if you just want to accept payments then use a third party who is PCI compliant to handle the cc part.\n> \n> I hope that helps a little.\n> \n> Paul\n> \n> \n> Sent from my iPhone\n> \n> On 8 Oct 2018, at 05:32, ROS Didier <[email protected]> wrote:\n> \n> Hi Francisco\n> \n> Thank you for your remark. \n> You're right, but it's the only procedure I found to make search on encrypted fields with good response times (using index) !\n> \n> Regarding access to the file system, our servers are in protected network areas. few people can connect to it.\n> \n> it's not the best solution, but we have data encryption needs and good performance needs too. I do not know how to do it except the specified procedure..\n> if anyone has any proposals to put this in place, I'm interested.\n> \n> Thanks in advance\n> \n> Best Regards\n> Didier ROS\n> \n> -----Message d'origine-----\n> De : [email protected] [mailto:[email protected]] \n> Envoyé : dimanche 7 octobre 2018 17:58\n> À : ROS Didier <[email protected]>\n> Cc : [email protected]; [email protected]; [email protected]; [email protected]\n> Objet : Re: Why the index is not used ?\n> \n> ROS:\n> \n> On Sun, Oct 7, 2018 at 3:13 PM, ROS Didier <[email protected]> wrote:\n> ....\n> \n> - INSERT INTO cartedecredit(username,cc) SELECT 'individu ' || x.id, pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2, cipher-algo=aes256') FROM generate_series(1,100000) AS x(id);\n> - CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(pgp_sym_decrypt(cc, 'motdepasse','compress-algo=2, cipher-algo=aes256'));\n> \n> If my french is not too rusty you are encrypting a credit-card, and then storing an UNENCRYPTED copy in the index. So, getting it from the server is trivial for anyone with filesystem access.\n> \n> Francisco Olarte.",
"msg_date": "Tue, 9 Oct 2018 08:34:51 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why the index is not used ?"
}
] |
[
{
"msg_contents": "Hi,\nI'm trying to understand the execution plan that is chosen for my query\nwhen I run a select on a partition table . I have on my main partition\ntable rules that redirect the insert to the right son table.\n\nMy scheme :\nPostgresql 9.6.8\n\nmydb=# \\d comments_daily\n Table \"public.fw_log_daily\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n log_server_id | bigint | not null\n comment_id | bigint | not null\n date | date | not null\n\nRules:\n comments_daily_1 AS\n ON INSERT TO fw_log_daily\n WHERE new.log_server_id = 1::bigint DO INSTEAD INSERT INTO\ncomments_daily_1 (log_server_id,comment_id, date)\n VALUES (new.log_server_id, new.comment_id, new.date)\n\n comments_daily_2 AS\n ON INSERT TO fw_log_daily\n WHERE new.log_server_id = 1::bigint DO INSTEAD INSERT INTO\ncomments_daily_2 (log_server_id, comment_id, date)\n VALUES (new.log_server_id, new.comment_id, new.date)\n\n and so on...\n\n\nThe son table structure :\nmydb=# \\d comments_daily_247\n Table \"public.comments_daily_247\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n log_server_id | bigint | not null\n comment_id | bigint | not null\n date | date | not null\n\nIndexes:\n \"comments_daily_247_date_device_id_idx\" btree (date, device_id)\nCheck constraints:\n \"comments_daily_247_log_server_id_check\" CHECK (log_server_id =\n247::bigint)\nInherits: comments_daily\n\n\n\nthe query :\nmydb=# explain\nSELECT * FROM comments_daily\nwhere\nlog_server_id in (247)\nAND\ncomments_daily.date >= '2017-04-12'\nAND\ncomments_daily.date <= '2017-04-12'\nAND\ncomment_id IN (1256);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Append (cost=0.00..47368.49 rows=2 width=186)\n -> Seq Scan on comments_daily (cost=0.00..47360.30 rows=1 width=186)\n Filter: ((date >= '2017-04-12'::date) AND (date <=\n'2017-04-12'::date) AND (log_server_id = 247) AND (comment_id = 1256))\n -> Index Scan using comments_daily_247_date_comment_id_idx on\ncomments_daily_247 (cost=0.15..8.19 rows=1 width=186)\n Index Cond: ((date >= '2017-04-12'::date) AND (date <=\n'2017-04-12'::date) AND (comment_id = 1256))\n Filter: (log_server_id = 247)\n(6 rows)\n\ntraffic_log_db=#\n\nI had 2 questions :\n1)Why the filtering on the main comments_daily table is according to all\nthe where clause and not only according the log_server_id?\n2)Why the filtering on the son table is according to the log_server_id ? Is\nit because of the check constraint ?\n3)Should I create another index to improve the performance ?\n4)Any suggestions ?\n\nHi,I'm trying to understand the execution plan that is chosen for my query when I run a select on a partition table . 
I have on my main partition table rules that redirect the insert to the right son table.My scheme : Postgresql 9.6.8mydb=# \\d comments_daily Table \"public.fw_log_daily\" Column | Type | Modifiers---------------+-----------------------+----------- log_server_id | bigint | not null comment_id | bigint | not null date | date | not null Rules: comments_daily_1 AS ON INSERT TO fw_log_daily WHERE new.log_server_id = 1::bigint DO INSTEAD INSERT INTO comments_daily_1 (log_server_id,comment_id, date) VALUES (new.log_server_id, new.comment_id, new.date) comments_daily_2 AS ON INSERT TO fw_log_daily WHERE new.log_server_id = 1::bigint DO INSTEAD INSERT INTO comments_daily_2 (log_server_id, comment_id, date) VALUES (new.log_server_id, new.comment_id, new.date) and so on...The son table structure : mydb=# \\d comments_daily_247 Table \"public.comments_daily_247\" Column | Type | Modifiers---------------+-----------------------+----------- log_server_id | bigint | not null comment_id | bigint | not null date | date | not null Indexes: \"comments_daily_247_date_device_id_idx\" btree (date, device_id)Check constraints: \"comments_daily_247_log_server_id_check\" CHECK (log_server_id = 247::bigint)Inherits: comments_dailythe query : mydb=# explainSELECT * FROM comments_dailywherelog_server_id in (247)ANDcomments_daily.date >= '2017-04-12'ANDcomments_daily.date <= '2017-04-12'ANDcomment_id IN (1256); QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------- Append (cost=0.00..47368.49 rows=2 width=186) -> Seq Scan on comments_daily (cost=0.00..47360.30 rows=1 width=186) Filter: ((date >= '2017-04-12'::date) AND (date <= '2017-04-12'::date) AND (log_server_id = 247) AND (comment_id = 1256)) -> Index Scan using comments_daily_247_date_comment_id_idx on comments_daily_247 (cost=0.15..8.19 rows=1 width=186) Index Cond: ((date >= '2017-04-12'::date) AND (date <= '2017-04-12'::date) AND (comment_id = 1256)) Filter: (log_server_id = 247)(6 rows)traffic_log_db=#I had 2 questions : 1)Why the filtering on the main comments_daily table is according to all the where clause and not only according the log_server_id?2)Why the filtering on the son table is according to the log_server_id ? Is it because of the check constraint ?3)Should I create another index to improve the performance ?4)Any suggestions ?",
"msg_date": "Tue, 9 Oct 2018 12:19:56 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "understand query on partition table"
},
{
"msg_contents": "Dear Mariel, 1,4. Could you please check all child tables whether they all have check constraints or not? Does your main table store any data? Also could you please share output of following command.show constraint_exclusion; 2. Filtering on comments_daily_247 table over log_server_id is not big issue for your situation. Postgres applies filtering on your results because comments_daily_247_date_comment_id_idx composite index does not contain log_server_id. 3. I think it is not related with indexes. Best regards.İyi çalışmalar.Samed YILDIRIM 09.10.2018, 12:20, \"Mariel Cherkassky\" <[email protected]>:Hi,I'm trying to understand the execution plan that is chosen for my query when I run a select on a partition table . I have on my main partition table rules that redirect the insert to the right son table. My scheme : Postgresql 9.6.8 mydb=# \\d comments_daily Table \"public.fw_log_daily\" Column | Type | Modifiers---------------+-----------------------+----------- log_server_id | bigint | not null comment_id | bigint | not null date | date | not null Rules: comments_daily_1 AS ON INSERT TO fw_log_daily WHERE new.log_server_id = 1::bigint DO INSTEAD INSERT INTO comments_daily_1 (log_server_id,comment_id, date) VALUES (new.log_server_id, new.comment_id, new.date) comments_daily_2 AS ON INSERT TO fw_log_daily WHERE new.log_server_id = 1::bigint DO INSTEAD INSERT INTO comments_daily_2 (log_server_id, comment_id, date) VALUES (new.log_server_id, new.comment_id, new.date) and so on... The son table structure : mydb=# \\d comments_daily_247 Table \"public.comments_daily_247\" Column | Type | Modifiers---------------+-----------------------+----------- log_server_id | bigint | not null comment_id | bigint | not null date | date | not null Indexes: \"comments_daily_247_date_device_id_idx\" btree (date, device_id)Check constraints: \"comments_daily_247_log_server_id_check\" CHECK (log_server_id = 247::bigint)Inherits: comments_daily the query : mydb=# explainSELECT * FROM comments_dailywherelog_server_id in (247)ANDcomments_daily.date >= '2017-04-12'ANDcomments_daily.date <= '2017-04-12'ANDcomment_id IN (1256); QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------- Append (cost=0.00..47368.49 rows=2 width=186) -> Seq Scan on comments_daily (cost=0.00..47360.30 rows=1 width=186) Filter: ((date >= '2017-04-12'::date) AND (date <= '2017-04-12'::date) AND (log_server_id = 247) AND (comment_id = 1256)) -> Index Scan using comments_daily_247_date_comment_id_idx on comments_daily_247 (cost=0.15..8.19 rows=1 width=186) Index Cond: ((date >= '2017-04-12'::date) AND (date <= '2017-04-12'::date) AND (comment_id = 1256)) Filter: (log_server_id = 247)(6 rows) traffic_log_db=# I had 2 questions : 1)Why the filtering on the main comments_daily table is according to all the where clause and not only according the log_server_id?2)Why the filtering on the son table is according to the log_server_id ? Is it because of the check constraint ?3)Should I create another index to improve the performance ?4)Any suggestions ?",
"msg_date": "Tue, 09 Oct 2018 16:12:08 +0300",
"msg_from": "Samed YILDIRIM <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understand query on partition table"
}
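A hedged sketch of how point 1 can be checked in one pass, listing every child of the parent together with its CHECK constraint (children showing a NULL constraint are the ones that defeat constraint exclusion); only the parent table name is taken from the thread:

SELECT c.relname, con.conname, pg_get_constraintdef(con.oid) AS check_def
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
LEFT JOIN pg_constraint con ON con.conrelid = c.oid AND con.contype = 'c'
WHERE i.inhparent = 'comments_daily'::regclass
ORDER BY c.relname;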
] |
[
{
"msg_contents": "Hi,\nDoes the work mem is used when I do sorts or hash operations on temp\ntables ? Or the temp_buffer that is allocated at the beginning of the\nsession is used for it ?\n\nAt one of our apps, the app create a temp table and run on it some\noperations (joins,sum,count,avg ) and so on.. I saw in the postgresql.conf\nthat fs temp files are generated so I guested that the memory buffer that\nwas allocated for that session was too small. The question is should I\nincrease the work_mem or the temp_buffers ?\n\nThanks , Mariel.\n\nHi,Does the work mem is used when I do sorts or hash operations on temp tables ? Or the temp_buffer that is allocated at the beginning of the session is used for it ?At one of our apps, the app create a temp table and run on it some operations (joins,sum,count,avg ) and so on.. I saw in the postgresql.conf that fs temp files are generated so I guested that the memory buffer that was allocated for that session was too small. The question is should I increase the work_mem or the temp_buffers ?Thanks , Mariel.",
"msg_date": "Thu, 11 Oct 2018 09:56:48 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "does work_mem is used on temp tables?"
},
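One way to see which operations are actually spilling to disk before deciding which setting to raise (a sketch; the parameter usually requires superuser, the threshold is in kB, and 0 means log every file):

SET log_temp_files = 0;  -- log every temporary file this session creates, with its size and the statement that caused it
-- re-run the temp-table workload, then check the server log for 'temporary file' entries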
{
"msg_contents": ">>>>> \"Mariel\" == Mariel Cherkassky <[email protected]> writes:\n\n Mariel> Hi,\n Mariel> Does the work mem is used when I do sorts or hash operations on\n Mariel> temp tables ? Or the temp_buffer that is allocated at the\n Mariel> beginning of the session is used for it ?\n\nwork_mem is used for sorts and hashes regardless of what type the\nunderlying table (if any) is.\n\ntemp_buffers is used to buffer the actual _content_ of temp tables.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Thu, 11 Oct 2018 08:41:56 +0100",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does work_mem is used on temp tables?"
},
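A minimal sketch of how the two settings divide the work, following the explanation above (the sizes and the table are illustrative only):

SET temp_buffers = '256MB';  -- buffers the temp table's pages; only changeable before the session first touches a temp table
SET work_mem = '64MB';       -- per sort/hash memory, regardless of whether the input is a temp table
CREATE TEMP TABLE tmp_data AS
    SELECT g AS id, random() AS val FROM generate_series(1, 100000) g;
SELECT id, sum(val) FROM tmp_data GROUP BY id ORDER BY sum(val) DESC;  -- the aggregate and sort here draw on work_mem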
{
"msg_contents": "great, thanks !\n\nבתאריך יום ה׳, 11 באוק׳ 2018 ב-10:42 מאת Andrew Gierth <\[email protected]>:\n\n> >>>>> \"Mariel\" == Mariel Cherkassky <[email protected]> writes:\n>\n> Mariel> Hi,\n> Mariel> Does the work mem is used when I do sorts or hash operations on\n> Mariel> temp tables ? Or the temp_buffer that is allocated at the\n> Mariel> beginning of the session is used for it ?\n>\n> work_mem is used for sorts and hashes regardless of what type the\n> underlying table (if any) is.\n>\n> temp_buffers is used to buffer the actual _content_ of temp tables.\n>\n> --\n> Andrew (irc:RhodiumToad)\n>\n\ngreat, thanks !בתאריך יום ה׳, 11 באוק׳ 2018 ב-10:42 מאת Andrew Gierth <[email protected]>:>>>>> \"Mariel\" == Mariel Cherkassky <[email protected]> writes:\n\n Mariel> Hi,\n Mariel> Does the work mem is used when I do sorts or hash operations on\n Mariel> temp tables ? Or the temp_buffer that is allocated at the\n Mariel> beginning of the session is used for it ?\n\nwork_mem is used for sorts and hashes regardless of what type the\nunderlying table (if any) is.\n\ntemp_buffers is used to buffer the actual _content_ of temp tables.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Thu, 11 Oct 2018 10:58:06 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: does work_mem is used on temp tables?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a CSV file having only 400 records.\n\nI have to import it in DB table, it's working fine but why it's importing 1047303 rows as I have only 400 records are present in that file.\nCould you please help me on this?\n\n[cid:[email protected]]\n\nRegards,\nDinesh Chandra\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Mon, 15 Oct 2018 05:42:28 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Import csv in PostgreSQL"
},
{
"msg_contents": "it must be something in your data. either you have ',' (which you specified\nas your delimiter) in the data itself or you have end of line chars\nembedded in the data. Try a file with one row only and see what happens. If\nit's ok try a few more - possibly the problem lies in some other row. add\nmore lines until you can see the problem happening and then identify the\nproblem row(s) / char(s).\nPossibly also try using the encoding parameter for the copy command or for\nthe query/procedure that you use to issue the data to file\n\n\nOn Mon, Oct 15, 2018 at 8:43 AM Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> I have a CSV file having only 400 records.\n>\n>\n>\n> I have to import it in DB table, it’s working fine but why it’s importing\n> 1047303 rows as I have only 400 records are present in that file.\n>\n> Could you please help me on this?\n>\n>\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n>\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>",
"msg_date": "Mon, 15 Oct 2018 09:09:18 +0300",
"msg_from": "Gabi D <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Import csv in PostgreSQL"
}
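For reference, a hedged example of loading the file with the CSV options spelled out (the table, file name and encoding are placeholders): with FORMAT csv, quoting is honoured, so commas or line breaks inside quoted fields no longer split one record into extra rows.

\copy my_table FROM 'data.csv' WITH (FORMAT csv, HEADER, DELIMITER ',', ENCODING 'UTF8')
SELECT count(*) FROM my_table;  -- compare against the 400 records expected from the file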
] |
[
{
"msg_contents": "Hi all,\n\nI have a problem with my query. Query always using parallel bitmap heap scan. I've created an index with all where conditions and id but query does not this index and continue to use bitmapscan. So I decided disable bitmap scan for testing. And after that, things became strange. Cost is higher, execution time is lower.\nBut I want to use index_only_scan because index have all column that query need. No need to access table.\nIt is doing index_only_scan when disabling bitmap scan but I cannot disable bitmap scan for cluster wide. There are other queries...\nCan you help me to solve the issue?\n\nPostgreSQL Version: PostgreSQL 10.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit\n\n\n\nHere my query:\n\nexplain analyze with ids as (\nselect g.id,g.kdv,g.tutar from\ndbs.gider g\nleft join dbs.gider_belge gb\non gb.id=g.gider_belge_id\nwhere gb.mukellef_id='0123456789' and g.deleted is not true and gb.deleted is not true and gb.sube_no='-13' and gb.defter='sm' and gb.kayit_tarihi>='2018-01-01 00:00:00'),\ntotals as (select sum(kdv) tkdv,sum(tutar) ttutar from ids)\nselect ids.id,totals.tkdv,totals.ttutar from ids,totals;\n\nHere default explain analyze output:\n\nNested Loop (cost=25939.84..26244.15 rows=10143 width=72) (actual time=92.936..94.708 rows=12768 loops=1)\n CTE ids\n -> Gather (cost=1317.56..25686.25 rows=10143 width=20) (actual time=12.774..87.854 rows=12768 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop (cost=317.56..23671.95 rows=4226 width=20) (actual time=5.382..80.240 rows=4256 loops=3)\n -> Parallel Bitmap Heap Scan on gider_belge gb (cost=316.99..10366.28 rows=3835 width=8) (actual time=5.223..29.208 rows=4077 loops=3)\n Recheck Cond: (((mukellef_id)::text = '0123456789'::text) AND (kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND ((defter)::text\n = 'sm'::text) AND (deleted IS NOT TRUE))\n Heap Blocks: exact=7053\n -> Bitmap Index Scan on idx_gider_belge_mukellef_id_kayit_tarihi_sube_no_defter_id (cost=0.00..314.69 rows=9205 width=0) (actual time=8.086..8.086 rows=12230 loops=1)\n Index Cond: (((mukellef_id)::text = '0123456789'::text) AND (kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND ((defter)::\ntext = 'sm'::text))\n -> Index Scan using idx_gider_gider_belge_id on gider g (cost=0.56..3.41 rows=6 width=28) (actual time=0.012..0.012 rows=1 loops=12230)\n Index Cond: (gider_belge_id = gb.id)\n Filter: (deleted IS NOT TRUE)\n Rows Removed by Filter: 0\n CTE totals\n -> Aggregate (cost=253.58..253.59 rows=1 width=64) (actual time=92.925..92.925 rows=1 loops=1)\n -> CTE Scan on ids ids_1 (cost=0.00..202.86 rows=10143 width=40) (actual time=12.776..90.976 rows=12768 loops=1)\n -> CTE Scan on totals (cost=0.00..0.02 rows=1 width=64) (actual time=92.926..92.927 rows=1 loops=1)\n -> CTE Scan on ids (cost=0.00..202.86 rows=10143 width=8) (actual time=0.001..0.820 rows=12768 loops=1)\n Planning time: 0.691 ms\n Execution time: 113.107 ms\n\nHere explain analyze output after disabling bitmapscan:\n\nNested Loop (cost=31493.51..31797.85 rows=10144 width=72) (actual time=73.359..75.107 rows=12768 loops=1)\n CTE ids\n -> Gather (cost=1001.13..31239.89 rows=10144 width=20) (actual time=0.741..67.391 rows=12768 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n -> Nested Loop (cost=1.13..29225.49 rows=5967 width=20) (actual time=0.185..62.422 rows=6384 loops=2)\n -> Parallel Index Only 
Scan using idx_gider_belge_mukellef_id_kayit_tarihi_sube_no_defter_id on gider_belge gb (cost=0.56..10437.97 rows=5415 width=8) (actual time=0.092..15.913 rows=6115 loops=2)\n                     Index Cond: ((mukellef_id = '0123456789'::text) AND (kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND (defter = 'sm'::text))\n                     Heap Fetches: 9010\n               ->  Index Scan using idx_gider_gider_belge_id on gider g (cost=0.56..3.41 rows=6 width=28) (actual time=0.007..0.007 rows=1 loops=12230)\n                     Index Cond: (gider_belge_id = gb.id)\n                     Filter: (deleted IS NOT TRUE)\n                     Rows Removed by Filter: 0\n   CTE totals\n     ->  Aggregate (cost=253.60..253.61 rows=1 width=64) (actual time=73.354..73.354 rows=1 loops=1)\n           ->  CTE Scan on ids ids_1 (cost=0.00..202.88 rows=10144 width=40) (actual time=0.743..70.975 rows=12768 loops=1)\n   ->  CTE Scan on totals (cost=0.00..0.02 rows=1 width=64) (actual time=73.356..73.357 rows=1 loops=1)\n   ->  CTE Scan on ids (cost=0.00..202.88 rows=10144 width=8) (actual time=0.001..0.820 rows=12768 loops=1)\n Planning time: 0.723 ms\n Execution time: 82.995 ms\n\n\nHere my index:\n\ndbs=# \\d dbs.idx_gider_belge_mukellef_id_kayit_tarihi_sube_no_defter_id\nIndex \"dbs.idx_gider_belge_mukellef_id_kayit_tarihi_sube_no_defter_id\"\n    Column    |            Type             |  Definition\n--------------+-----------------------------+--------------\n mukellef_id  | character varying(12)       | mukellef_id\n kayit_tarihi | timestamp without time zone | kayit_tarihi\n sube_no      | integer                     | sube_no\n defter       | character varying(4)        | defter\n id           | bigint                      | id\nbtree, for table \"dbs.gider_belge\", predicate (deleted IS NOT TRUE)",
"msg_date": "Fri, 19 Oct 2018 07:19:12 +0000",
"msg_from": "Yavuz Selim Sertoglu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Gained %20 performance after disabling bitmapscan"
},
{
"msg_contents": "On Fri, Oct 19, 2018 at 07:19:12AM +0000, Yavuz Selim Sertoglu wrote:\n> I have a problem with my query. Query always using parallel bitmap heap scan. I've created an index with all where conditions and id but query does not this index and continue to use bitmapscan. So I decided disable bitmap scan for testing. And after that, things became strange. Cost is higher, execution time is lower.\n> But I want to use index_only_scan because index have all column that query need. No need to access table.\n> It is doing index_only_scan when disabling bitmap scan but I cannot disable bitmap scan for cluster wide. There are other queries...\n\nMy first comment is that bitmap IOS is supported on PG11, which was\nreleased..yesterday:\n\nhttps://www.postgresql.org/docs/11/static/release-11.html\n|Allow bitmap scans to perform index-only scans when possible (Alexander Kuzmenkov)\n\nAlso, I wonder whether parallel query is helping here or hurting (SET\nmax_parallel_workers_per_gather=0)? If it's hurting, should you adjust cost\nparameters or perhaps disable it globally ?\n\nJustin\n\n",
"msg_date": "Fri, 19 Oct 2018 08:44:55 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gained %20 performance after disabling bitmapscan"
},
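A session-local way to try the second suggestion without changing anything cluster wide (a sketch):

SET max_parallel_workers_per_gather = 0;  -- disable parallel plans for this session only
-- re-run EXPLAIN (ANALYZE, BUFFERS) on the query here and compare the plans
RESET max_parallel_workers_per_gather;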
{
"msg_contents": "Yavuz Selim Sertoglu <[email protected]> writes:\n> I have a problem with my query. Query always using parallel bitmap heap scan.\n\nHave you messed with the parallel cost parameters? It seems a bit\nsurprising that this query wants to use parallelism at all.\n\n> Index Cond: (((mukellef_id)::text = '0123456789'::text) AND (kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND ((defter)::text = 'sm'::text))\n\nIf that's your normal query pattern, then this isn't a very good\nindex design:\n\n> Column | Type | Definition\n> --------------+-----------------------------+--------------\n> mukellef_id | character varying(12) | mukellef_id\n> kayit_tarihi | timestamp without time zone | kayit_tarihi\n> sube_no | integer | sube_no\n> defter | character varying(4) | defter\n> id | bigint | id\n\nThe column order should be mukellef_id, sube_no, defter, kayit_tarihi, id\nso that the index entries you want are adjacent in the index.\n\nOf course, if you have other queries using this index, you might need\nto leave it as-is --- but this is the query you're complaining about...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 19 Oct 2018 09:52:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gained %20 performance after disabling bitmapscan"
},
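A sketch of the reordered index described above, keeping the partial predicate of the existing index; the index name is made up here:

CREATE INDEX idx_gider_belge_mid_sube_defter_tarihi_id
    ON dbs.gider_belge (mukellef_id, sube_no, defter, kayit_tarihi, id)
    WHERE deleted IS NOT TRUE;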
{
"msg_contents": "Yavuz, cannot add much to other points but as for index-only scan, an\n(auto)vacuum must be run in order to optimizer understand it can utilize\nindex-only scan. Please check if autovacuum was run on the table after\nindex creation and if no, run it manually.\n\nVlad\n\nYavuz, cannot add much to other points but as for index-only scan, an (auto)vacuum must be run in order to optimizer understand it can utilize index-only scan. Please check if autovacuum was run on the table after index creation and if no, run it manually.Vlad",
"msg_date": "Fri, 19 Oct 2018 11:09:03 -0700",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gained %20 performance after disabling bitmapscan"
},
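A minimal way to act on this, assuming the table name from the thread; the VACUUM refreshes the visibility map that index-only scans rely on, and ANALYZE refreshes the row estimates at the same time:

VACUUM (ANALYZE, VERBOSE) dbs.gider_belge;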
{
"msg_contents": "On Fri, Oct 19, 2018 at 3:19 AM Yavuz Selim Sertoglu <\[email protected]> wrote:\n\n> Hi all,\n>\n> I have a problem with my query. Query always using parallel bitmap heap\n> scan. I've created an index with all where conditions and id but query does\n> not this index and continue to use bitmapscan. So I decided disable bitmap\n> scan for testing. And after that, things became strange. Cost is higher,\n> execution time is lower.\n>\n\nA 20% difference in speed is unlikely to make or break you. Is it even\nworth worrying about?\n\n\n> But I want to use index_only_scan because index have all column that query\n> need. No need to access table.\n>\n\nYour table is not very well vacuumed, so there is need to access it (9010\ntimes to get 6115 rows, which seems like quite an anti-feat; but I don't\nknow which of those numbers are averaged over loops/parallel workers,\nversus summed over them). Vacuuming your table will not only make the\nindex-only scan look faster to the planner, but also actually be faster.\n\nThe difference in timing could easily be down to one query warming the\ncache for the other. Are these timings fully reproducible altering\nexecution orders back and forth? And they have different degrees of\nparallelism, what happens if you disable parallelism to simplify the\nanalysis?\n\n\n> It is doing index_only_scan when disabling bitmap scan but I cannot\n> disable bitmap scan for cluster wide. There are other queries...\n> Can you help me to solve the issue?\n>\n>\nCranking up effective_cache_size can make index scans look better in\ncomparison to bitmap scans, without changing a lot of other stuff. This\nstill holds even for index-only-scan, in cases where the planner knows the\ntable to be poorly vacuumed.\n\nBut moving the column tested for inequality to the end of the index would\nbe probably make much more of a difference, regardless of which plan it\nchooses.\n\nCheers,\n\nJeff\n\n>\n\nOn Fri, Oct 19, 2018 at 3:19 AM Yavuz Selim Sertoglu <[email protected]> wrote:\n\n\n\nHi all,\n\nI have a problem with my query. Query always using parallel bitmap heap scan. I've created an index with all where conditions and id but query does not this index and continue to use bitmapscan. So I decided disable bitmap scan for testing. And after that,\n things became strange. Cost is higher, execution time is lower.A 20% difference in speed is unlikely to make or break you. Is it even worth worrying about? \nBut I want to use index_only_scan because index have all column that query need. No need to access table.Your table is not very well vacuumed, so there is need to access it (9010 times to get 6115 rows, which seems like quite an anti-feat; but I don't know which of those numbers are averaged over loops/parallel workers, versus summed over them). Vacuuming your table will not only make the index-only scan look faster to the planner, but also actually be faster.The difference in timing could easily be down to one query warming the cache for the other. Are these timings fully reproducible altering execution orders back and forth? And they have different degrees of parallelism, what happens if you disable parallelism to simplify the analysis? \nIt is doing index_only_scan when disabling bitmap scan but I cannot disable bitmap scan for cluster wide. There are other queries...\nCan you help me to solve the issue?\nCranking up effective_cache_size can make index scans look better in comparison to bitmap scans, without changing a lot of other stuff. 
This still holds even for index-only-scan, in cases where the planner knows the table to be poorly vacuumed. But moving the column tested for inequality to the end of the index would be probably make much more of a difference, regardless of which plan it chooses.Cheers,Jeff",
"msg_date": "Fri, 19 Oct 2018 15:40:57 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gained %20 performance after disabling bitmapscan"
},
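To see how well vacuumed the table currently looks to the planner (a sketch using the table name from the thread; relallvisible close to relpages is what makes index-only scans cheap, and a large n_dead_tup suggests another vacuum is due):

SELECT c.relname, c.relpages, c.relallvisible, s.n_dead_tup, s.last_autovacuum
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relname = 'gider_belge';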
{
"msg_contents": "Thanks for reply Tom,\n\nAFAIK nothing changed with planner. Only max_parallel_*\n\n[postgres@db-server ~]$ psql -c\"show all\" | grep parallel\n force_parallel_mode | off | Forces use of parallel query facilities.\n max_parallel_workers | 192 | Sets the maximum number of parallel workers than can be active at one time.\n max_parallel_workers_per_gather | 96 | Sets the maximum number of parallel processes per executor node.\n min_parallel_index_scan_size | 512kB | Sets the minimum amount of index data for a parallel scan.\n min_parallel_table_scan_size | 8MB | Sets the minimum amount of table data for a parallel scan.\n parallel_setup_cost | 1000 | Sets the planner's estimate of the cost of starting up worker processes for parallel query.\n parallel_tuple_cost | 0.1 | Sets the planner's estimate of the cost of passing each tuple (row) from worker to master backend.\n\nQueries written by developer team, I can only recommend them your suggestion.\n\n________________________________\nGönderen: Tom Lane <[email protected]>\nGönderildi: 19 Ekim 2018 Cuma 16:52:04\nKime: Yavuz Selim Sertoglu\nBilgi: [email protected]\nKonu: Re: Gained %20 performance after disabling bitmapscan\n\nYavuz Selim Sertoglu <[email protected]> writes:\n> I have a problem with my query. Query always using parallel bitmap heap scan.\n\nHave you messed with the parallel cost parameters? It seems a bit\nsurprising that this query wants to use parallelism at all.\n\n> Index Cond: (((mukellef_id)::text = '0123456789'::text) AND (kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND ((defter)::text = 'sm'::text))\n\nIf that's your normal query pattern, then this isn't a very good\nindex design:\n\n> Column | Type | Definition\n> --------------+-----------------------------+--------------\n> mukellef_id | character varying(12) | mukellef_id\n> kayit_tarihi | timestamp without time zone | kayit_tarihi\n> sube_no | integer | sube_no\n> defter | character varying(4) | defter\n> id | bigint | id\n\nThe column order should be mukellef_id, sube_no, defter, kayit_tarihi, id\nso that the index entries you want are adjacent in the index.\n\nOf course, if you have other queries using this index, you might need\nto leave it as-is --- but this is the query you're complaining about...\n\n regards, tom lane\n________________________________\nYASAL UYARI:\nBu E-mail mesaji ve ekleri, isimleri yazili alicilar disindaki kisilere aciklanmamasi, dagitilmamasi ve iletilmemesi gereken kisiye ozel ve gizli bilgiler icerebilir. Mesajin muhatabi degilseniz lutfen gonderici ile irtibat kurunuz, mesaj ve eklerini siliniz.\nE-mail sistemlerinin tasidigi guvenlik risklerinden dolayi, mesajlarin gizlilikleri ve butunlukleri bozulabilir, mesaj virus icerebilir. Bilinen viruslere karsi kontrolleri yapilmis olarak yollanan mesajin sisteminizde yaratabilecegi olasi zararlardan Sirketimiz sorumlu tutulamaz.\nDISCLAIMER:\nThis email and its attachments may contain private and confidential information intended for the use of the addressee only, which should not be announced, copied or forwarded. If you are not the intended recipient, please contact the sender, delete the message and its attachments. Due to security risks of email systems, the confidentiality and integrity of the message may be damaged, the message may contain viruses. 
This message is scanned for known viruses and our Company will not be liable for possible system damages caused by the message.\n\n\n\n\n\n\n\n\nThanks for reply Tom,\n\nAFAIK nothing changed with planner. Only max_parallel_*\n\n[postgres@db-server ~]$ psql -c\"show all\" | grep parallel\n force_parallel_mode | off | Forces use of parallel query facilities.\n max_parallel_workers | 192 | Sets the maximum number of parallel workers than can be active at one time.\n max_parallel_workers_per_gather | 96 | Sets the maximum number of parallel processes per executor node.\n min_parallel_index_scan_size | 512kB | Sets the minimum amount of index data for a parallel scan.\n min_parallel_table_scan_size | 8MB | Sets the minimum amount of table data for a parallel scan.\n parallel_setup_cost | 1000 | Sets the planner's estimate of the cost of starting up worker processes for parallel query.\n parallel_tuple_cost | 0.1 | Sets the planner's estimate of the cost of passing each tuple (row) from worker to master backend.\n\n\nQueries written by developer team, I can only recommend them your suggestion.\n\n\n\nGönderen: Tom Lane <[email protected]>\nGönderildi: 19 Ekim 2018 Cuma 16:52:04\nKime: Yavuz Selim Sertoglu\nBilgi: [email protected]\nKonu: Re: Gained %20 performance after disabling bitmapscan\n \n\n\nYavuz Selim Sertoglu <[email protected]> writes:\n> I have a problem with my query. Query always using parallel bitmap heap scan.\n\nHave you messed with the parallel cost parameters? It seems a bit\nsurprising that this query wants to use parallelism at all.\n\n> Index Cond: (((mukellef_id)::text = '0123456789'::text) AND (kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND ((defter)::text = 'sm'::text))\n\nIf that's your normal query pattern, then this isn't a very good\nindex design:\n\n> Column | Type | Definition\n> --------------+-----------------------------+--------------\n> mukellef_id | character varying(12) | mukellef_id\n> kayit_tarihi | timestamp without time zone | kayit_tarihi\n> sube_no | integer | sube_no\n> defter | character varying(4) | defter\n> id | bigint | id\n\nThe column order should be mukellef_id, sube_no, defter, kayit_tarihi, id\nso that the index entries you want are adjacent in the index.\n\nOf course, if you have other queries using this index, you might need\nto leave it as-is --- but this is the query you're complaining about...\n\n regards, tom lane\n\n\n\n\nYASAL UYARI:\n\nBu E-mail mesaji ve ekleri, isimleri yazili alicilar disindaki kisilere aciklanmamasi, dagitilmamasi ve iletilmemesi gereken kisiye ozel ve gizli bilgiler icerebilir. Mesajin muhatabi degilseniz lutfen gonderici ile irtibat kurunuz, mesaj ve eklerini siliniz.\n\n\nE-mail sistemlerinin tasidigi guvenlik risklerinden dolayi, mesajlarin gizlilikleri ve butunlukleri bozulabilir, mesaj virus icerebilir. Bilinen viruslere karsi kontrolleri yapilmis olarak yollanan mesajin sisteminizde yaratabilecegi olasi zararlardan Sirketimiz\n sorumlu tutulamaz. \nDISCLAIMER:\n\nThis email and its attachments may contain private and confidential information intended for the use of the addressee only, which should not be announced, copied or forwarded. If you are not the intended recipient, please contact the sender, delete the message\n and its attachments. Due to security risks of email systems, the confidentiality and integrity of the message may be damaged, the message may contain viruses. 
This message is scanned for known viruses and our Company will not be liable for possible system\n damages caused by the message.",
"msg_date": "Mon, 22 Oct 2018 06:52:53 +0000",
"msg_from": "Yavuz Selim Sertoglu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ynt: Gained %20 performance after disabling bitmapscan"
},
{
"msg_contents": "Thanks for the reply Vladimir,\n\nI thought explain analyze is enough. I run vacuum analyze manually but it didn't work either.\n\n________________________________\nGönderen: Vladimir Ryabtsev <[email protected]>\nGönderildi: 19 Ekim 2018 Cuma 21:09:03\nKime: Yavuz Selim Sertoglu\nBilgi: [email protected]\nKonu: Re: Gained %20 performance after disabling bitmapscan\n\nYavuz, cannot add much to other points but as for index-only scan, an (auto)vacuum must be run in order to optimizer understand it can utilize index-only scan. Please check if autovacuum was run on the table after index creation and if no, run it manually.\n\nVlad\n________________________________\nYASAL UYARI:\nBu E-mail mesaji ve ekleri, isimleri yazili alicilar disindaki kisilere aciklanmamasi, dagitilmamasi ve iletilmemesi gereken kisiye ozel ve gizli bilgiler icerebilir. Mesajin muhatabi degilseniz lutfen gonderici ile irtibat kurunuz, mesaj ve eklerini siliniz.\nE-mail sistemlerinin tasidigi guvenlik risklerinden dolayi, mesajlarin gizlilikleri ve butunlukleri bozulabilir, mesaj virus icerebilir. Bilinen viruslere karsi kontrolleri yapilmis olarak yollanan mesajin sisteminizde yaratabilecegi olasi zararlardan Sirketimiz sorumlu tutulamaz.\nDISCLAIMER:\nThis email and its attachments may contain private and confidential information intended for the use of the addressee only, which should not be announced, copied or forwarded. If you are not the intended recipient, please contact the sender, delete the message and its attachments. Due to security risks of email systems, the confidentiality and integrity of the message may be damaged, the message may contain viruses. This message is scanned for known viruses and our Company will not be liable for possible system damages caused by the message.\n\n\n\n\n\n\n\n\nThanks for the reply Vladimir,\n\nI thought explain analyze is enough. I run vacuum analyze manually but it didn't work either.\n\n\n\nGönderen: Vladimir Ryabtsev <[email protected]>\nGönderildi: 19 Ekim 2018 Cuma 21:09:03\nKime: Yavuz Selim Sertoglu\nBilgi: [email protected]\nKonu: Re: Gained %20 performance after disabling bitmapscan\n \n\n\n\nYavuz, cannot add much to other points but as for index-only scan, an (auto)vacuum must be run in order to optimizer understand it can utilize index-only scan. Please check if autovacuum was run on the table after index creation and if no, run\n it manually.\n\n\nVlad\n\n\n\n\nYASAL UYARI:\n\nBu E-mail mesaji ve ekleri, isimleri yazili alicilar disindaki kisilere aciklanmamasi, dagitilmamasi ve iletilmemesi gereken kisiye ozel ve gizli bilgiler icerebilir. Mesajin muhatabi degilseniz lutfen gonderici ile irtibat kurunuz, mesaj ve eklerini siliniz.\n\n\nE-mail sistemlerinin tasidigi guvenlik risklerinden dolayi, mesajlarin gizlilikleri ve butunlukleri bozulabilir, mesaj virus icerebilir. Bilinen viruslere karsi kontrolleri yapilmis olarak yollanan mesajin sisteminizde yaratabilecegi olasi zararlardan Sirketimiz\n sorumlu tutulamaz. \nDISCLAIMER:\n\nThis email and its attachments may contain private and confidential information intended for the use of the addressee only, which should not be announced, copied or forwarded. If you are not the intended recipient, please contact the sender, delete the message\n and its attachments. Due to security risks of email systems, the confidentiality and integrity of the message may be damaged, the message may contain viruses. 
This message is scanned for known viruses and our Company will not be liable for possible system\n damages caused by the message.",
"msg_date": "Mon, 22 Oct 2018 06:55:13 +0000",
"msg_from": "Yavuz Selim Sertoglu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ynt: Gained %20 performance after disabling bitmapscan"
},
{
"msg_contents": "Thanks for the reply Jeff,\n\nI know 20ms is nothing but it shows me that there is a problem with my configuration. I want to find it.\n\nI've vacuumed table but it didn't work either.\nAfter vacuum, query start to using another index.\n\nI run query a few times so result comes from cache with both query.\n\nIf I set max_parallel_workers_per_gather to 0, it is using index scan.\n\nHere is new explain;\n\nselect id,kdv,tutar from dbs.gider_kayitlar where mukellef_id='3800433276' and deleted is not true and sube_no='-13' and defter='sm' and kayit_tarihi>='2018-01-01 00:00:00'),\ntotals as (select sum(kdv) tkdv,sum(tutar) ttutar from ids)\nselect ids.id,totals.tkdv,totals.ttutar from ids,totals;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=27505.85..27676.06 rows=5673 width=72) (actual time=83.704..85.395 rows=12768 loops=1)\n CTE ids\n -> Nested Loop (cost=1.13..27364.01 rows=5673 width=46) (actual time=0.063..77.898 rows=12768 loops=1)\n -> Index Scan using idx_gider_belge_mukellef_id on gider_belge (cost=0.56..8998.87 rows=5335 width=8) (actual time=0.045..23.261 rows=12369 loops=1)\n Index Cond: ((mukellef_id)::text = '0123456789'::text)\n Filter: ((kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND ((defter)::text = 'sm'::text))\n -> Index Scan using idx_gider_gider_belge_id on gider (cost=0.56..3.37 rows=7 width=30) (actual time=0.004..0.004 rows=1 loops=12369)\n Index Cond: (gider_belge_id = gider_belge.id)\n Filter: (deleted IS NOT TRUE)\n Rows Removed by Filter: 0\n CTE totals\n -> Aggregate (cost=141.83..141.84 rows=1 width=64) (actual time=83.700..83.700 rows=1 loops=1)\n -> CTE Scan on ids ids_1 (cost=0.00..113.46 rows=5673 width=52) (actual time=0.065..81.463 rows=12768 loops=1)\n -> CTE Scan on totals (cost=0.00..0.02 rows=1 width=64) (actual time=83.702..83.702 rows=1 loops=1)\n -> CTE Scan on ids (cost=0.00..113.46 rows=5673 width=8) (actual time=0.001..0.796 rows=12768 loops=1)\n Planning time: 0.909 ms\n Execution time: 85.839 ms\n\nshared_buffers is 256G\neffective_cache_size is 768G\nDatabase size about 90G\n\n________________________________\nGönderen: Jeff Janes <[email protected]>\nGönderildi: 19 Ekim 2018 Cuma 22:40:57\nKime: Yavuz Selim Sertoglu\nBilgi: [email protected]\nKonu: Re: Gained %20 performance after disabling bitmapscan\n\nOn Fri, Oct 19, 2018 at 3:19 AM Yavuz Selim Sertoglu <[email protected]<mailto:[email protected]>> wrote:\n\nHi all,\n\nI have a problem with my query. Query always using parallel bitmap heap scan. I've created an index with all where conditions and id but query does not this index and continue to use bitmapscan. So I decided disable bitmap scan for testing. And after that, things became strange. Cost is higher, execution time is lower.\n\nA 20% difference in speed is unlikely to make or break you. Is it even worth worrying about?\n\nBut I want to use index_only_scan because index have all column that query need. No need to access table.\n\nYour table is not very well vacuumed, so there is need to access it (9010 times to get 6115 rows, which seems like quite an anti-feat; but I don't know which of those numbers are averaged over loops/parallel workers, versus summed over them). 
Vacuuming your table will not only make the index-only scan look faster to the planner, but also actually be faster.\n\nThe difference in timing could easily be down to one query warming the cache for the other. Are these timings fully reproducible altering execution orders back and forth? And they have different degrees of parallelism, what happens if you disable parallelism to simplify the analysis?\n\nIt is doing index_only_scan when disabling bitmap scan but I cannot disable bitmap scan for cluster wide. There are other queries...\nCan you help me to solve the issue?\n\n\nCranking up effective_cache_size can make index scans look better in comparison to bitmap scans, without changing a lot of other stuff. This still holds even for index-only-scan, in cases where the planner knows the table to be poorly vacuumed.\n\nBut moving the column tested for inequality to the end of the index would be probably make much more of a difference, regardless of which plan it chooses.\n\nCheers,\n\nJeff\n________________________________\nYASAL UYARI:\nBu E-mail mesaji ve ekleri, isimleri yazili alicilar disindaki kisilere aciklanmamasi, dagitilmamasi ve iletilmemesi gereken kisiye ozel ve gizli bilgiler icerebilir. Mesajin muhatabi degilseniz lutfen gonderici ile irtibat kurunuz, mesaj ve eklerini siliniz.\nE-mail sistemlerinin tasidigi guvenlik risklerinden dolayi, mesajlarin gizlilikleri ve butunlukleri bozulabilir, mesaj virus icerebilir. Bilinen viruslere karsi kontrolleri yapilmis olarak yollanan mesajin sisteminizde yaratabilecegi olasi zararlardan Sirketimiz sorumlu tutulamaz.\nDISCLAIMER:\nThis email and its attachments may contain private and confidential information intended for the use of the addressee only, which should not be announced, copied or forwarded. If you are not the intended recipient, please contact the sender, delete the message and its attachments. Due to security risks of email systems, the confidentiality and integrity of the message may be damaged, the message may contain viruses. This message is scanned for known viruses and our Company will not be liable for possible system damages caused by the message.\n\n\n\n\n\n\n\n\nThanks for the reply Jeff,\n\nI know 20ms is nothing but it shows me that there is a problem with my configuration. 
I want to find it.\n\nI've vacuumed table but it didn't work either.\nAfter vacuum, query start to using another index.\n\nI run query a few times so result comes from cache with both query.\n\nIf I set max_parallel_workers_per_gather to 0, it is using index scan.\n\nHere is new explain;\n\n\nselect id,kdv,tutar from dbs.gider_kayitlar where mukellef_id='3800433276' and deleted is not true and sube_no='-13' and defter='sm' and kayit_tarihi>='2018-01-01 00:00:00'),\ntotals as (select sum(kdv) tkdv,sum(tutar) ttutar from ids)\nselect ids.id,totals.tkdv,totals.ttutar from ids,totals;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=27505.85..27676.06 rows=5673 width=72) (actual time=83.704..85.395 rows=12768 loops=1)\n CTE ids\n -> Nested Loop (cost=1.13..27364.01 rows=5673 width=46) (actual time=0.063..77.898 rows=12768 loops=1)\n -> Index Scan using idx_gider_belge_mukellef_id on gider_belge (cost=0.56..8998.87 rows=5335 width=8) (actual time=0.045..23.261 rows=12369 loops=1)\n Index Cond: ((mukellef_id)::text = '0123456789'::text)\n Filter: ((kayit_tarihi >= '2018-01-01 00:00:00'::timestamp without time zone) AND (sube_no = '-13'::integer) AND ((defter)::text = 'sm'::text))\n -> Index Scan using idx_gider_gider_belge_id on gider (cost=0.56..3.37 rows=7 width=30) (actual time=0.004..0.004 rows=1 loops=12369)\n Index Cond: (gider_belge_id = gider_belge.id)\n Filter: (deleted IS NOT TRUE)\n Rows Removed by Filter: 0\n CTE totals\n -> Aggregate (cost=141.83..141.84 rows=1 width=64) (actual time=83.700..83.700 rows=1 loops=1)\n -> CTE Scan on ids ids_1 (cost=0.00..113.46 rows=5673 width=52) (actual time=0.065..81.463 rows=12768 loops=1)\n -> CTE Scan on totals (cost=0.00..0.02 rows=1 width=64) (actual time=83.702..83.702 rows=1 loops=1)\n -> CTE Scan on ids (cost=0.00..113.46 rows=5673 width=8) (actual time=0.001..0.796 rows=12768 loops=1)\n Planning time: 0.909 ms\n Execution time: 85.839 ms\n\n\n\n\nshared_buffers is 256G\neffective_cache_size is 768G\nDatabase size about 90G\n\n\n\nGönderen: Jeff Janes <[email protected]>\nGönderildi: 19 Ekim 2018 Cuma 22:40:57\nKime: Yavuz Selim Sertoglu\nBilgi: [email protected]\nKonu: Re: Gained %20 performance after disabling bitmapscan\n \n\n\n\n\n\nOn Fri, Oct 19, 2018 at 3:19 AM Yavuz Selim Sertoglu <[email protected]> wrote:\n\n\n\n\n\nHi all,\n\nI have a problem with my query. Query always using parallel bitmap heap scan. I've created an index with all where conditions and id but query does not this index and continue to use bitmapscan. So I decided disable bitmap scan for testing. And after that,\n things became strange. Cost is higher, execution time is lower.\n\n\n\n\n\n\nA 20% difference in speed is unlikely to make or break you. Is it even worth worrying about?\n \n\n\n\nBut I want to use index_only_scan because index have all column that query need. No need to access table.\n\n\n\n\n\nYour table is not very well vacuumed, so there is need to access it (9010 times to get 6115 rows, which seems like quite an anti-feat; but I don't know which of those numbers are averaged over loops/parallel workers, versus summed over them). Vacuuming\n your table will not only make the index-only scan look faster to the planner, but also actually be faster.\n\n\nThe difference in timing could easily be down to one query warming the cache for the other. 
Are these timings fully reproducible altering execution orders back and forth? And they have different degrees of parallelism, what happens if you disable parallelism\n to simplify the analysis?\n \n\n\n\nIt is doing index_only_scan when disabling bitmap scan but I cannot disable bitmap scan for cluster wide. There are other queries...\nCan you help me to solve the issue?\n\n\n\n\n\n\n\nCranking up effective_cache_size can make index scans look better in comparison to bitmap scans, without changing a lot of other stuff. This still holds even for index-only-scan, in cases where the planner knows the table to be poorly vacuumed. \n\n\nBut moving the column tested for inequality to the end of the index would be probably make much more of a difference, regardless of which plan it chooses.\n\n\nCheers,\n\n\nJeff\n\n\n\n\n\n\n\nYASAL UYARI:\n\nBu E-mail mesaji ve ekleri, isimleri yazili alicilar disindaki kisilere aciklanmamasi, dagitilmamasi ve iletilmemesi gereken kisiye ozel ve gizli bilgiler icerebilir. Mesajin muhatabi degilseniz lutfen gonderici ile irtibat kurunuz, mesaj ve eklerini siliniz.\n\n\nE-mail sistemlerinin tasidigi guvenlik risklerinden dolayi, mesajlarin gizlilikleri ve butunlukleri bozulabilir, mesaj virus icerebilir. Bilinen viruslere karsi kontrolleri yapilmis olarak yollanan mesajin sisteminizde yaratabilecegi olasi zararlardan Sirketimiz\n sorumlu tutulamaz. \nDISCLAIMER:\n\nThis email and its attachments may contain private and confidential information intended for the use of the addressee only, which should not be announced, copied or forwarded. If you are not the intended recipient, please contact the sender, delete the message\n and its attachments. Due to security risks of email systems, the confidentiality and integrity of the message may be damaged, the message may contain viruses. This message is scanned for known viruses and our Company will not be liable for possible system\n damages caused by the message.",
"msg_date": "Mon, 22 Oct 2018 07:20:38 +0000",
"msg_from": "Yavuz Selim Sertoglu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ynt: Gained %20 performance after disabling bitmapscan"
},
{
"msg_contents": "On Mon, Oct 22, 2018 at 3:20 AM Yavuz Selim Sertoglu <\[email protected]> wrote:\n\n> Thanks for the reply Jeff,\n>\n> I know 20ms is nothing but it shows me that there is a problem with my\n> configuration. I want to find it.\n>\n\nThis is a dangerous assumption. This is no configuration you can come up\nwith which will cause the planner to be within 20% of perfection in all\ncases. Given the other plans you've shown and discussed, I think this is\njust chasing our own tail.\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, Oct 22, 2018 at 3:20 AM Yavuz Selim Sertoglu <[email protected]> wrote:\n\n\nThanks for the reply Jeff,\n\nI know 20ms is nothing but it shows me that there is a problem with my configuration. I want to find it.This is a dangerous assumption. This is no configuration you can come up with which will cause the planner to be within 20% of perfection in all cases. Given the other plans you've shown and discussed, I think this is just chasing our own tail.Cheers,Jeff",
"msg_date": "Fri, 26 Oct 2018 13:54:32 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gained %20 performance after disabling bitmapscan"
},
{
"msg_contents": "changing parameters can have surprising effects. fyi, I tried disabling\nbitmapscan and running the 113 queries (iirc) of the Join Order Benchmark\nagainst them. Several improved. Here the 'Baseline' is the best previously\nknown plan, and 'Baseline+1' is the plan with enable_bitmapscan = false. \nNotably:\n\nNOTICE: Baseline [Planning time 90.349 ms, Execution time 12531.577\nms]\nNOTICE: Baseline+1 [Planning time 81.473 ms, Execution time 7646.242\nms]\nNOTICE: Total time benefit: 4894.211 ms, Execution time benefit:\n4885.335 ms\n\nshaved 4.9s off a 12.5s query, and:\n\nNOTICE: Baseline [Planning time 198.983 ms, Execution time 2715.75\nms]\nNOTICE: Baseline+1 [Planning time 183.204 ms, Execution time 1395.494\nms]\nNOTICE: Total time benefit: 1336.035 ms, Execution time benefit:\n1320.256 ms\n\ngained nicely in percentage terms, and:\n\nNOTICE: Baseline [Planning time 91.527 ms, Execution time 12480.151\nms]\nNOTICE: Baseline+1 [Planning time 84.192 ms, Execution time 7918.974\nms]\nNOTICE: Total time benefit: 4568.512 ms, Execution time benefit:\n4561.177 ms\n\nalso had a nice 4.5s+ improvement, among other plan diffs. \n\nThis just shows that when you inject a new planning constraint into a\nworkload, at least some of the plans will probably get faster. In this case\na few of them got significantly faster either in absolute terms or in\npercentage terms. Unsurprisingly, the great majority got slower.\n\n /Jim\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 29 Dec 2018 13:11:16 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gained %20 performance after disabling bitmapscan"
}
] |
[
{
"msg_contents": "Hi there,\n\nI'm running PostgreSQL 9.6.10 on Debian and reported some performance\nissues with \"SET ROLE\" a while ago:\nhttps://www.postgresql.org/message-id/CABZYQR%2BKu%2BiLFhqwY89QrrnKG9wKxckmssDG2rYKESojiohRgQ%40mail.gmail.com\n\nThose long running queries, especially \"SET ROLE\", still persist until\ntoday.\n\nI just recently increased the number of PostgreSQL instances and added an\nAWS Aurora Cluster (9.6.10 with db.r4.2xlarge). AWS offers a tool named\n\"Performance Insights\" and shows some really high CPU usage for \"SET ROLE\"\nqueries. See attached image or just click here:\nhttps://i.imgur.com/UNkhFLr.png\n\nThe setup is the same as reported in the above mentioned post: I use more\nthan a thousand roles per PostgreSQL instance and set the role for every\nconnection before executing actual statements. My pg_class consists\nof 1,557,824 rows as every role has its own schema with more than 300\ntables.\n\nI'm currently building a simple docker test setup with pg_bench to\nreproduce the decreased performance when executing \"SET ROLE\".\n\nI'm aware that AWS Aurora is a proprietary version of PostgreSQL. But I\nsomehow have the feeling that my experienced abnormalities with \"SET ROLE\"\nin vanilla PostgreSQL occur in AWS Aurora as well. And the high CPU usage\nreported by \"Performance Insights\" may be a hint of a performance issue in\nPostgreSQL.\n\nI'm quite lost when reading the PostgreSQL source code (due to my inability\nto read it :)) but maybe one of you guys has an idea about that?\n\nRegards,\nUlf",
"msg_date": "Mon, 22 Oct 2018 15:44:16 +0200",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "High CPU Usage of \"SET ROLE\""
},
{
"msg_contents": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> I'm running PostgreSQL 9.6.10 on Debian and reported some performance\n> issues with \"SET ROLE\" a while ago:\n> https://www.postgresql.org/message-id/CABZYQR%2BKu%2BiLFhqwY89QrrnKG9wKxckmssDG2rYKESojiohRgQ%40mail.gmail.com\n> ...\n> The setup is the same as reported in the above mentioned post: I use more\n> than a thousand roles per PostgreSQL instance and set the role for every\n> connection before executing actual statements. My pg_class consists\n> of 1,557,824 rows as every role has its own schema with more than 300\n> tables.\n\nIt seems plausible to guess that you've hit some behavior that's O(N^2)\nin the number of objects (for some object type or other). Perhaps \"perf\"\nor a similar tool would give some insight into where the bottleneck is.\n\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 22 Oct 2018 09:57:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Usage of \"SET ROLE\""
},
{
"msg_contents": ">\n> It seems plausible to guess that you've hit some behavior that's O(N^2)\n> in the number of objects (for some object type or other). Perhaps \"perf\"\n> or a similar tool would give some insight into where the bottleneck is.\n>\n> https://wiki.postgresql.org/wiki/Profiling_with_perf\n\n\nThanks for your quick reply!\n\nI haven't used \"perf\" yet and decided to investigate a bit further with the\ntools I am more familiar with:\n\nAs I mentioned in my other post, I use the combination of \"SET ROLE ...;\"\nand \"SET search_path = ...;\" to switch to the requested tenant before\nexecuting actual statements. A typical request to my webapp executes the\nfollowing sql statements:\n\n1. SET ROLE tenant1;\n2. SET search_path = tenant1;\n3. -- execute various tenant1 related sql statements here\n4. SET search_path = DEAULT;\n5. RESET ROLE;\n\nI activated logging of all statements for around 6 minutes in production\nand analyzed the duration of parse, bind and execute for the statements 1,\n2, 4 and 5 above. I just summed parse, bind and execute and calculated the\naverage of them.\n\n\"SET ROLE ...;\" -> 7.109 ms (!)\n\"SET search_path = ...;\" -> 0.026 ms\n\"SET search_path = DEAULT;\" -> 0.059 ms\n\"RESET ROLE;\" -> 0.026 ms\n\nSo \"SET ROLE ...;\" is more than 260 times slower than \"SET search_path =\n...;\"! 7.109 vs. 0.026 ms.\n\nI was curious to see what happens when I change the order of statements as\nfollows (\"SET ROLE ...;\" happens after executing \"SET search_path = ...;\"):\n\n1. SET search_path = tenant1;\n2. SET ROLE tenant1;\n3. -- execute various tenant1 related sql statements here\n4. SET search_path = DEAULT;\n5. RESET ROLE;\n\nLogging of all statements was again enabled in production for around 6\nminutes. And these were the results:\n\n\"SET search_path = ...;\" -> 7.444 ms (!)\n\"SET ROLE ...;\" -> 0.141 ms\n\"SET search_path = DEAULT;\" -> 0.036 ms\n\"RESET ROLE;\" -> 0.025 ms\n\nAnd guess what? Now \"SET search_path = ...;\" takes more than 7 ms on\naverage is more than 50 times slower than \"SET ROLE ...;\"! 7.444 vs. 0.141\nms.\n\nI think I have found something here. It looks like that the order of\nstatements is affecting their duration. I somehow have the feeling that the\nfirst statement after \"RESET ROLE;\" experiences a performance degradation.\n\nWhen I use the psql cli on the same database I can see via \"\\timing\" that\nthe first statement after \"RESET ROLE;\" is significantly slower. I was even\nable to strip it down to two statements (\"SET ROLE ...;\" and \"RESET ROLE;\"):\n\nmydb=> set role tenant1;\nSET\nTime: 0.516 ms\nmydb=> reset role;\nRESET\nTime: 0.483 ms\nmydb=> set role tenant1; <-- first statement after \"reset role;\"\nSET\nTime: 10.177 ms <-- significantly slower\nmydb=> reset role;\nRESET\nTime: 0.523 ms\nmydb=> set role tenant1; <-- first statement after \"reset role;\"\nSET\nTime: 12.119 ms <-- significantly slower\nmydb=> reset role;\nRESET\nTime: 0.462 ms\nmydb=> set role tenant1; <-- first statement after \"reset role;\"\nSET\nTime: 19.533 ms <-- significantly slower\nmydb=>\n\nMaybe my observations here are already sufficient to find out what happens\nhere? 
I guess that my setup with 1k rows in pg_roles and 1.5m rows in\npg_class is probably the cause.\n\nDoes it help when I create a test setup with a docker image that contains a\ndatabase with that many entries in pg_roles and pg_class and share it here?\n\nRegards,\nUlf\n\n",
"msg_date": "Tue, 30 Oct 2018 20:49:32 +0100",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU Usage of \"SET ROLE\""
},
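A minimal sketch for timing this pattern from a single session (it assumes a role named tenant1 exists and that the current user may SET ROLE to it; timings inside one DO block are not exactly comparable to separate top-level statements, so treat the numbers as indicative only):

DO $$
DECLARE
    t0 timestamptz;
    i  int;
BEGIN
    FOR i IN 1..5 LOOP
        RESET ROLE;
        t0 := clock_timestamp();
        SET ROLE tenant1;  -- first statement after RESET ROLE
        RAISE NOTICE 'iteration %: SET ROLE took % ms',
            i, round((extract(epoch FROM clock_timestamp() - t0) * 1000)::numeric, 3);
    END LOOP;
    RESET ROLE;
END
$$;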
{
"msg_contents": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> I think I have found something here. It looks like that the order of\n> statements is affecting their duration. I somehow have the feeling that the\n> first statement after \"RESET ROLE;\" experiences a performance degradation.\n\nHm. It's well known that the first query executed in a *session* takes\na pretty big performance hit, because of the need to populate the\nbackend's catalog caches. I'm not very sure however why \"RESET ROLE\"\nwould result in a mass cache flush, if indeed that's what's happening.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 30 Oct 2018 16:27:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Usage of \"SET ROLE\""
},
{
"msg_contents": "On Tue, Oct 30, 2018 at 3:50 PM Ulf Lohbrügge <[email protected]>\nwrote:\n\n\n> When I use the psql cli on the same database I can see via \"\\timing\" that\n> the first statement after \"RESET ROLE;\" is significantly slower. I was even\n> able to strip it down to two statements (\"SET ROLE ...;\" and \"RESET ROLE;\"):\n>\n> ...\n>\nMaybe my observations here are already sufficient to find out what happens\n> here? I guess that my setup with 1k rows in pg_roles and 1.5m rows in\n> pg_class is probably the cause.\n>\n\nIt would probably be enough if it were reproducible, but I can't reproduce\nit.\n\n-- set up\nperl -le 'print \"create user foo$_;\" foreach 1..1000'|psql\nperl -le 'foreach $r (1..1000) {print \"create schema foo$r authorization\nfoo$r;\"}'|psql\nperl -le 'foreach $r (reverse 1..1000) {print \"set role foo$r;\"; print\n\"create table foo$r.foo$_ (x serial primary key);\" foreach 1..1000;}'|psql\n> out\n\n-- test\nperl -le 'print \"set role foo$_;\\nreset role;\" foreach 1..1000'|psql\n\nDoes it help when I create a test setup with a docker image that contains a\n> database with that many entries in pg_roles and pg_class and share it here?\n>\n\nIf you have a script to create the database, I'd be more likely to play\naround with that than with a docker image. (Which I have to guess would be\nquite large anyway, with 1.5 rows in pg_class)\n\nCheers,\n\nJeff\n\n>\n\nOn Tue, Oct 30, 2018 at 3:50 PM Ulf Lohbrügge <[email protected]> wrote: When I use the psql cli on the same database I can see via \"\\timing\" that the first statement after \"RESET ROLE;\" is significantly slower. I was even able to strip it down to two statements (\"SET ROLE ...;\" and \"RESET ROLE;\"):... Maybe my observations here are already sufficient to find out what happens here? I guess that my setup with 1k rows in pg_roles and 1.5m rows in pg_class is probably the cause.It would probably be enough if it were reproducible, but I can't reproduce it.-- set upperl -le 'print \"create user foo$_;\" foreach 1..1000'|psqlperl -le 'foreach $r (1..1000) {print \"create schema foo$r authorization foo$r;\"}'|psqlperl -le 'foreach $r (reverse 1..1000) {print \"set role foo$r;\"; print \"create table foo$r.foo$_ (x serial primary key);\" foreach 1..1000;}'|psql > out-- testperl -le 'print \"set role foo$_;\\nreset role;\" foreach 1..1000'|psqlDoes it help when I create a test setup with a docker image that contains a database with that many entries in pg_roles and pg_class and share it here?If you have a script to create the database, I'd be more likely to play around with that than with a docker image. (Which I have to guess would be quite large anyway, with 1.5 rows in pg_class)Cheers,Jeff",
"msg_date": "Tue, 30 Oct 2018 18:34:28 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Usage of \"SET ROLE\""
}
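For reference, a scaled-down, pure-SQL sketch of the same setup (the foo* names are the ones from the perl script above; the original report had 1000 roles with 300+ tables each, so adjust the loop bounds accordingly -- note this runs in a single transaction):

DO $$
DECLARE
    r int;
    t int;
BEGIN
    FOR r IN 1..100 LOOP                       -- number of tenants/roles
        EXECUTE format('CREATE USER foo%s', r);
        EXECUTE format('CREATE SCHEMA foo%s AUTHORIZATION foo%s', r, r);
        FOR t IN 1..100 LOOP                   -- tables per schema
            EXECUTE format('CREATE TABLE foo%s.foo%s (x serial PRIMARY KEY)', r, t);
        END LOOP;
    END LOOP;
END
$$;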
] |
[
{
"msg_contents": "If SELECT is confident enough to limit itself to one partition, why isn't\nDELETE (or UPDATE)?\n\nAlso, I note in the query plan shown below it thinks the rows in the\nirrelevant partitions is something other than 0, which is impossible.\n(presumably, SELECT correctly determined this, and eliminated the\nirrelevant partitions from the plan, but DELETE doesn't seem to be doing\nthis).\n\nAnd, if it isn't impossible for some reason, then why isn't SELECT checking\nall partitions?\n\nIt also appears UPDATE has the same problem.\n\nThis is for HASH partitions, I don't know if this issue is present with the\nother types.\n\nPostgreSQL 11.0 (Ubuntu 11.0-1.pgdg16.04+2) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit\n\nexplain select * from history where itemid=537021;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Append (cost=4.79..143.63 rows=48 width=21)\n -> Bitmap Heap Scan on history_0028 (cost=4.79..143.39 rows=48\nwidth=21)\n Recheck Cond: (itemid = 537021)\n -> Bitmap Index Scan on history_0028_itemid_clock_idx\n(cost=0.00..4.78 rows=48 width=0)\n Index Cond: (itemid = 537021)\n(5 rows)\n\nexplain delete from history where itemid=537021;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Delete on history (cost=4.77..13987.62 rows=4629 width=6)\n Delete on history_0000\n Delete on history_0001\n Delete on history_0002\n Delete on history_0003\n Delete on history_0004\n Delete on history_0005\n Delete on history_0006\n...\n -> Bitmap Heap Scan on public.history_0000 (cost=4.79..144.20 rows=48\nwidth=6)\n Output: history_0000.ctid\n Recheck Cond: (history_0000.itemid = 537021)\n -> Bitmap Index Scan on history_0000_itemid_clock_idx\n(cost=0.00..4.78 rows=48 width=0)\n Index Cond: (history_0000.itemid = 537021)\n -> Bitmap Heap Scan on public.history_0001 (cost=4.79..148.77 rows=48\nwidth=6)\n Output: history_0001.ctid\n Recheck Cond: (history_0001.itemid = 537021)\n -> Bitmap Index Scan on history_0001_itemid_clock_idx\n(cost=0.00..4.78 rows=48 width=0)\n Index Cond: (history_0001.itemid = 537021)\n...\n\n \\d+ history\n Table \"public.history\"\n Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n--------+---------------+-----------+----------+---------+---------+--------------+-------------\n itemid | bigint | | not null | | plain\n| |\n clock | integer | | not null | 0 | plain\n| |\n value | numeric(16,4) | | not null | 0.0 | main\n| |\n ns | integer | | not null | 0 | plain\n| |\nPartition key: HASH (itemid)\nIndexes:\n \"history_itemid_clock_idx\" btree (itemid, clock) WITH (fillfactor='20')\nPartitions: history_0000 FOR VALUES WITH (modulus 100, remainder 0),\n history_0001 FOR VALUES WITH (modulus 100, remainder 1),\n history_0002 FOR VALUES WITH (modulus 100, remainder 2),\n history_0003 FOR VALUES WITH (modulus 100, remainder 3),\n history_0004 FOR VALUES WITH (modulus 100, remainder 4),\n history_0005 FOR VALUES WITH (modulus 100, remainder 5),\n...\n\nselect count(*),count(distinct itemid),tableoid from history group by\ntableoid order by tableoid;\n count | count | tableoid\n--------+-------+----------\n 64,762 | 356 | 20,531\n 80,649 | 351 | 20,537\n 61,424 | 340 | 20,543\n 57,290 | 365 | 20,549\n 69,146 | 344 | 20,555\n 68,357 | 372 | 20,561\n 69,319 | 329 | 20,567\n 60,846 | 332 | 20,573\n 62,021 | 346 | 20,579\n 66,328 | 362 | 20,585\n 
69,385 | 361 | 20,591\n 63,304 | 332 | 20,597\n...\n\n select count(*),count(distinct itemid) from history;\n count | count\n-----------+--------\n 6,607,298 | 34,885\n(1 row)\n...\nexplain verbose update history set clock =4 where itemid=537021;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Update on public.history (cost=4.80..15043.41 rows=4992 width=27)\n Update on public.history_0000\n Update on public.history_0001\n Update on public.history_0002\n Update on public.history_0003\n Update on public.history_0004\n\n",
"msg_date": "Thu, 25 Oct 2018 10:43:10 -0600",
"msg_from": "Dave E Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "DELETE / UPDATE from partition not optimized (11.0)"
},
{
"msg_contents": "On Thu, Oct 25, 2018 at 10:43:10AM -0600, Dave E Martin wrote:\n> If SELECT is confident enough to limit itself to one partition, why isn't\n> DELETE (or UPDATE)?\n\nBecause of this limitation:\n\nhttps://www.postgresql.org/docs/current/static/ddl-partitioning.html#DDL-PARTITION-PRUNING\n|Currently, pruning of partitions during the planning of an UPDATE or DELETE\n|command is implemented using the constraint exclusion method (however, it is\n|controlled by the enable_partition_pruning rather than constraint_exclusion) —\n|see the following section for details and caveats that apply.\n\nJustin\n\n",
"msg_date": "Fri, 26 Oct 2018 10:45:40 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE / UPDATE from partition not optimized (11.0)"
},
{
"msg_contents": "On Fri, Oct 26, 2018 at 10:45:40AM -0500, Justin Pryzby wrote:\n> On Thu, Oct 25, 2018 at 10:43:10AM -0600, Dave E Martin wrote:\n> > If SELECT is confident enough to limit itself to one partition, why isn't\n> > DELETE (or UPDATE)?\n> \n> Because of this limitation:\n> \n> https://www.postgresql.org/docs/current/static/ddl-partitioning.html#DDL-PARTITION-PRUNING\n> |Currently, pruning of partitions during the planning of an UPDATE or DELETE\n> |command is implemented using the constraint exclusion method (however, it is\n> |controlled by the enable_partition_pruning rather than constraint_exclusion) —\n> |see the following section for details and caveats that apply.\n\nI meant to add that one can use a redundant constraints in addition to the\npartition bounds, both specifying the same condition. That also allows\ndetaching and re-attaching the partition without a table scan (which is why we\ndo it).\n\nJustin\n\n",
"msg_date": "Sun, 28 Oct 2018 16:49:24 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE / UPDATE from partition not optimized (11.0)"
}
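A sketch of that redundant-constraint trick on a hypothetical range-partitioned table: the CHECK constraint states exactly the same condition as the partition bound, which lets ATTACH PARTITION skip the validation scan. For HASH partitions like the ones in the original report the bound cannot be expressed as a simple CHECK, so this is mainly useful with range or list partitioning.

CREATE TABLE events (id bigint NOT NULL, created date NOT NULL)
    PARTITION BY RANGE (created);

CREATE TABLE events_2018_10 (LIKE events INCLUDING ALL);

-- redundant constraint: same condition as the partition bound below
ALTER TABLE events_2018_10
    ADD CONSTRAINT events_2018_10_created_check
    CHECK (created >= DATE '2018-10-01' AND created < DATE '2018-11-01');

-- the existing CHECK already proves the bound, so no full-table scan is needed
ALTER TABLE events
    ATTACH PARTITION events_2018_10
    FOR VALUES FROM ('2018-10-01') TO ('2018-11-01');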
] |
[
{
"msg_contents": "Hi,\n\nI have searched in many postgres blogs for Sequential UUID generation,\nwhich can avoid Fragmentation issue.\n\nI did a POC(in postgres) with sequential UUID against Non sequential which\nhas shown lot of different in space utilization and index size. Sql server\nhas \"newsequentialid\" which generates sequential UUID. I have created C\nfunction which can generate a sequential UUID, but I am not sure how best I\ncan use that in postgres.\n\nI would really like to contribute to Postgres, If I can. Please let me know\nyour thoughts or plans regarding UUID generation.\n\nRegards,\nUday\n\nHi,I have searched in many postgres blogs for Sequential UUID generation, which can avoid Fragmentation issue.I did a POC(in postgres) with sequential UUID against Non sequential which has shown lot of different in space utilization and index size. Sql server has \"newsequentialid\" which generates sequential UUID. I have created C function which can generate a sequential UUID, but I am not sure how best I can use that in postgres.I would really like to contribute to Postgres, If I can. Please let me know your thoughts or plans regarding UUID generation.Regards,Uday",
"msg_date": "Mon, 29 Oct 2018 18:59:40 +0530",
"msg_from": "Uday Bhaskar V <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexes on UUID - Fragmentation Issue"
},
{
"msg_contents": "On Mon, Oct 29, 2018 at 9:18 AM Uday Bhaskar V\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I have searched in many postgres blogs for Sequential UUID generation, which can avoid Fragmentation issue.\n>\n> I did a POC(in postgres) with sequential UUID against Non sequential which has shown lot of different in space utilization and index size. Sql server has \"newsequentialid\" which generates sequential UUID. I have created C function which can generate a sequential UUID, but I am not sure how best I can use that in postgres.\n>\n> I would really like to contribute to Postgres, If I can. Please let me know your thoughts or plans regarding UUID generation.\n\nI think the right approach here is to build a custom extension. There\nare lots of examples of extensions within contrib and on pgxn.\nhttps://pgxn.org/ I guess there might be some utility for this type\nas UUID fragmetnation is a major problem (it's one of the reasons I\ndiscourage the use off UUID type indexes).\n\nmerlin\n\n",
"msg_date": "Mon, 29 Oct 2018 09:28:44 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes on UUID - Fragmentation Issue"
},
{
"msg_contents": "On 10/29/2018 02:29 PM, Uday Bhaskar V wrote:> I have\n> created C function which can generate a sequential UUID, but I am not \n> sure how best I can use that in postgres.\n> \n> I would really like to contribute to Postgres, If I can. Please let me \n> know your thoughts or plans regarding UUID generation.\n\nHow is it implemented? I can personally see two ways of generating \nsequential UUID:s. Either you use something like PostgreSQL's sequences \nor you can implement something based on the system time plus some few \nrandom bits which means they will be mostly sequential.\n\nIt could be worth checking on the hackers mailing list if there is any \ninterest in this feature, but if it works like a sequence it should also \nprobably be a sequence if it is ever going to be accepted into the core.\n\nFor your own use I recommend doing like Merlin suggested and write an \nextension. As long as you know a bit of C they are easy to write.\n\nAndreas\n\n",
"msg_date": "Mon, 29 Oct 2018 15:52:21 +0100",
"msg_from": "Andreas Karlsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes on UUID - Fragmentation Issue"
},
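A minimal sketch of the second approach (system time plus random bits) as a plain SQL function rather than C: the first 12 hex digits are the epoch in milliseconds, the remaining 20 are random, so generated values are mostly ordered by creation time. Note these are not standard version-4 UUIDs and they leak the creation timestamp; the function name seq_uuid is purely illustrative.

CREATE OR REPLACE FUNCTION seq_uuid() RETURNS uuid AS $$
    SELECT (
        lpad(to_hex((extract(epoch FROM clock_timestamp()) * 1000)::bigint), 12, '0')
        || substr(md5(random()::text), 1, 20)
    )::uuid;
$$ LANGUAGE sql VOLATILE;

-- usage, e.g.: CREATE TABLE t (id uuid PRIMARY KEY DEFAULT seq_uuid(), payload text);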
{
"msg_contents": "We have migrated our Database from Oracle to Postgresql there because of\nreplication we went for UUIDs. I have C function ready, will try.\nThanks,\nUday\n\nOn Mon, Oct 29, 2018 at 7:58 PM Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Oct 29, 2018 at 9:18 AM Uday Bhaskar V\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > I have searched in many postgres blogs for Sequential UUID generation,\n> which can avoid Fragmentation issue.\n> >\n> > I did a POC(in postgres) with sequential UUID against Non sequential\n> which has shown lot of different in space utilization and index size. Sql\n> server has \"newsequentialid\" which generates sequential UUID. I have\n> created C function which can generate a sequential UUID, but I am not sure\n> how best I can use that in postgres.\n> >\n> > I would really like to contribute to Postgres, If I can. Please let me\n> know your thoughts or plans regarding UUID generation.\n>\n> I think the right approach here is to build a custom extension. There\n> are lots of examples of extensions within contrib and on pgxn.\n> https://pgxn.org/ I guess there might be some utility for this type\n> as UUID fragmetnation is a major problem (it's one of the reasons I\n> discourage the use off UUID type indexes).\n>\n> merlin\n>\n\nWe have migrated our Database from Oracle to Postgresql there because of replication we went for UUIDs. I have C function ready, will try.Thanks,UdayOn Mon, Oct 29, 2018 at 7:58 PM Merlin Moncure <[email protected]> wrote:On Mon, Oct 29, 2018 at 9:18 AM Uday Bhaskar V\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I have searched in many postgres blogs for Sequential UUID generation, which can avoid Fragmentation issue.\n>\n> I did a POC(in postgres) with sequential UUID against Non sequential which has shown lot of different in space utilization and index size. Sql server has \"newsequentialid\" which generates sequential UUID. I have created C function which can generate a sequential UUID, but I am not sure how best I can use that in postgres.\n>\n> I would really like to contribute to Postgres, If I can. Please let me know your thoughts or plans regarding UUID generation.\n\nI think the right approach here is to build a custom extension. There\nare lots of examples of extensions within contrib and on pgxn.\nhttps://pgxn.org/ I guess there might be some utility for this type\nas UUID fragmetnation is a major problem (it's one of the reasons I\ndiscourage the use off UUID type indexes).\n\nmerlin",
"msg_date": "Mon, 29 Oct 2018 20:26:29 +0530",
"msg_from": "Uday Bhaskar V <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Indexes on UUID - Fragmentation Issue"
},
{
"msg_contents": "or prepend the UUID with a timestamp?\n\nRegards,\nMichael Vitale\n\n> Andreas Karlsson <mailto:[email protected]>\n> Monday, October 29, 2018 10:52 AM\n> On 10/29/2018 02:29 PM, Uday Bhaskar V wrote:> I have\n>\n> How is it implemented? I can personally see two ways of generating \n> sequential UUID:s. Either you use something like PostgreSQL's \n> sequences or you can implement something based on the system time plus \n> some few random bits which means they will be mostly sequential.\n>\n> It could be worth checking on the hackers mailing list if there is any \n> interest in this feature, but if it works like a sequence it should \n> also probably be a sequence if it is ever going to be accepted into \n> the core.\n>\n> For your own use I recommend doing like Merlin suggested and write an \n> extension. As long as you know a bit of C they are easy to write.\n>\n> Andreas\n>\n> Uday Bhaskar V <mailto:[email protected]>\n> Monday, October 29, 2018 9:29 AM\n> Hi,\n>\n> I have searched in many postgres blogs for Sequential UUID generation, \n> which can avoid Fragmentation issue.\n>\n> I did a POC(in postgres) with sequential UUID against Non sequential \n> which has shown lot of different in space utilization and index size. \n> Sql server has \"newsequentialid\" which generates sequential UUID. I \n> have created C function which can generate a sequential UUID, but I am \n> not sure how best I can use that in postgres.\n>\n> I would really like to contribute to Postgres, If I can. Please let me \n> know your thoughts or plans regarding UUID generation.\n>\n> Regards,\n> Uday\n\n\n\n\nor prepend the UUID with a\n timestamp?\n\nRegards,\nMichael Vitale\n\n\n\n \nAndreas Karlsson Monday,\n October 29, 2018 10:52 AM \nOn 10/29/2018 02:29 PM, Uday \nBhaskar V wrote:> I have\n\nHow is it implemented? I can personally see two ways of generating \nsequential UUID:s. Either you use something like PostgreSQL's sequences \nor you can implement something based on the system time plus some few \nrandom bits which means they will be mostly sequential.\n\nIt could be worth checking on the hackers mailing list if there is \nany \ninterest in this feature, but if it works like a sequence it should also\n \nprobably be a sequence if it is ever going to be accepted into the core.\n\nFor your own use I recommend doing like Merlin suggested and write \nan \nextension. As long as you know a bit of C they are easy to write.\n\nAndreas\n\n\n \nUday Bhaskar V Monday,\n October 29, 2018 9:29 AM \nHi,I\n have searched in many postgres blogs for Sequential UUID generation, \nwhich can avoid Fragmentation issue.I did a \nPOC(in postgres) with sequential UUID against Non sequential which has \nshown lot of different in space utilization and index size. Sql server \nhas \"newsequentialid\" which generates sequential UUID. I have created C \nfunction which can generate a sequential UUID, but I am not sure how \nbest I can use that in postgres.I would really\n like to contribute to Postgres, If I can. Please let me know your \nthoughts or plans regarding UUID generation.Regards,Uday",
"msg_date": "Mon, 29 Oct 2018 15:12:09 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes on UUID - Fragmentation Issue"
}
] |
[
{
"msg_contents": "I am using pgadmin4 version 3.4 with PG 11.0 and I get this error when I \ntry to connect with scram authorization:\n\nUser \"myuser\" does not have a valid SCRAM verifier.\n\nHow do I get around this? And also how would I do this for psql?\n\nRegards,\nMichael Vitale\n\n\n\nI am using pgadmin4 version 3.4 \nwith PG 11.0 \nand I get this error when I \ntry to connect with scram authorization:\n \n\nUser \"myuser\" does not have a valid SCRAM verifier.\n \n\nHow do I get around this?��\nAnd also how would I do this for psql?\n\nRegards,\n \nMichael Vitale",
"msg_date": "Tue, 30 Oct 2018 13:51:57 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "SCRAM question"
},
{
"msg_contents": "On 10/30/18 10:51 AM, MichaelDBA wrote:\n> I am using pgadmin4 version 3.4 with PG 11.0 and I get this error when \n> I try to connect with scram authorization:\n>\n> User \"myuser\" does not have a valid SCRAM verifier.\n>\n> How do I get around this? And also how would I do this for psql?\n\nYou need to update the password using SCRAM I believe...\n\n|See here: \nhttps://paquier.xyz/postgresql-2/postgres-10-scram-authentication/|\n\n|\n|\n\n|JD\n|\n\n>\n> Regards,\n> Michael Vitale \n\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\n*** A fault and talent of mine is to tell it exactly how it is. ***\nPostgreSQL centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Learn: https://postgresconf.org\n***** Unless otherwise stated, opinions are my own. *****\n\n\n\n\n\n\n\nOn 10/30/18 10:51 AM, MichaelDBA wrote:\n\n\n\n I am using pgadmin4 version 3.4\n with PG 11.0\n and I get this error when I try to connect with scram\n authorization: \n\n User \"myuser\" does not have a valid SCRAM verifier. \n\n How do I get around this?��\n And also how would I do this for psql?\n\nYou need to update the password using SCRAM I believe...\nSee here:\n https://paquier.xyz/postgresql-2/postgres-10-scram-authentication/\n\n\nJD\n\n \n Regards, \n Michael Vitale\n \n\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\n*** A fault and talent of mine is to tell it exactly how it is. ***\nPostgreSQL centered full stack support, consulting and development. \nAdvocate: @amplifypostgres || Learn: https://postgresconf.org\n***** Unless otherwise stated, opinions are my own. *****",
"msg_date": "Tue, 30 Oct 2018 11:03:14 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SCRAM question"
},
{
"msg_contents": "On Tue, Oct 30, 2018 at 11:03:14AM -0700, Joshua D. Drake wrote:\n> On 10/30/18 10:51 AM, MichaelDBA wrote:\n>> I am using pgadmin4 version 3.4 with PG 11.0 and I get this error when I\n>> try to connect with scram authorization:\n>> \n>> User \"myuser\" does not have a valid SCRAM verifier.\n>> \n>> How do I get around this? And also how would I do this for psql?\n> \n> You need to update the password using SCRAM I believe...\n> \n> |See here:\n> https://paquier.xyz/postgresql-2/postgres-10-scram-authentication/|\n\nIn order to do that, you would basically need to:\n1) Switch password_encryption to 'scram-sha-256' in the server\nconfiguration.\n2) Issue an ALTER ROLE command to update the password (likely it is\nbetter to use \\password from psql as this would send a hashed password\nto the server, which relies on the server setting for\npassword_encryption).\n3) Make sure that pg_hba.conf is using properly scram-sha-256 where it\nshould as authentication method.\n\nIf you do not want to upgrade to SCRAM, it is of course possible to\nstill remain with md5.\n--\nMichael",
"msg_date": "Wed, 31 Oct 2018 18:55:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SCRAM question"
}
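The three steps above as a sketch in SQL, using the role name from the original question; the pg_hba.conf change happens outside SQL and is shown as a comment:

-- 1. store new passwords as SCRAM verifiers
ALTER SYSTEM SET password_encryption = 'scram-sha-256';
SELECT pg_reload_conf();

-- 2. re-set the password so a SCRAM verifier is written
--    (\password myuser in psql is preferable, as noted above)
ALTER ROLE myuser PASSWORD 'change-me';

-- 3. in pg_hba.conf, switch the auth method where appropriate, e.g.:
--    host  all  all  0.0.0.0/0  scram-sha-256
--    then reload the server configuration again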
] |
[
{
"msg_contents": "Hi,\n\nI'm running a performance test for our application and encountered a\nparticular query with high planning time compared to the execution. Please\nrefer to attached explain.out for the explain analyze output.\n\nFormatted explain: https://explain.depesz.com/s/R834\n\nThe test was performed with Jmeter sending requests to the database, query\nwas generated by Hibernate which consists of a 133 table UNION. Also\nattached are some diagnostic info (database version, database settings,\ntable definitions and maintenance related information).\n\nDue to the extremely large query text, I'm choosing to provide information\nvia attachments instead of pasting in the email body.\n\nBelow are some additional OS information on the database server:\nCPU: 8\nRAM: 24GB\nDisk: SSD\nOS: CentOS Linux release 7.4.1708 (Core)\n\n[root@kvrh7os202 ~]# uname -a\nLinux kvrh7os202.comptel.com 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7\n19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\n[root@kvrh7os202 ~]#\n\nThings I tried:\n1. Setting random_page_cost = 1.1 and effective_io_concurrency = 200 - no\neffect on planning time\n2. Create materialized view for big UNION query - planning time reduced\nsignificantly but not a viable solution\n\nWhat are my other options to improve the query planning time?\n\n\n\nRegards,\nRichard Lee",
"msg_date": "Fri, 2 Nov 2018 17:36:41 +0800",
"msg_from": "Richard Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Query with high planning time compared to execution time"
},
{
"msg_contents": "On 11/02/2018 10:36 AM, Richard Lee wrote:\n> Hi,\n> \n> I'm running a performance test for our application and encountered a\n> particular query with high planning time compared to the execution.\n> Please refer to attached explain.out for the explain analyze output.\n> \n> Formatted explain: https://explain.depesz.com/s/R834\n> \n> The test was performed with Jmeter sending requests to the database,\n> query was generated by Hibernate which consists of a 133 table UNION.\n> Also attached are some diagnostic info (database version, database\n> settings, table definitions and maintenance related information).\n> \n> Due to the extremely large query text, I'm choosing to provide\n> information via attachments instead of pasting in the email body.\n> \n> Below are some additional OS information on the database server:\n> CPU: 8\n> RAM: 24GB\n> Disk: SSD\n> OS: CentOS Linux release 7.4.1708 (Core)\n> \n> [root@kvrh7os202 ~]# uname -a\n> Linux kvrh7os202.comptel.com <http://kvrh7os202.comptel.com>\n> 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64\n> x86_64 x86_64 GNU/Linux\n> [root@kvrh7os202 ~]#\n> \n> Things I tried:\n> 1. Setting random_page_cost = 1.1 and effective_io_concurrency = 200 -\n> no effect on planning time\n> 2. Create materialized view for big UNION query - planning time reduced\n> significantly but not a viable solution\n> \n\nThose changes likely affect the query costing and execution, but the\nnumber of plans to consider is probably not going to change much. So\nplanning taking about the same time is kinda expected here.\n\n> What are my other options to improve the query planning time?\n> \n\nCan you do a bit of profiling, to determine which part of the query\nplanning process is slow here? That is:\n\n1) make sure you have the debug symbols installed\n2) do `perf record`\n3) run the benchmark for a while (a minute or so)\n4) stop the perf record using Ctrl-C\n5) generate a profile using `perf report` and share the result\n\nPossibly do the same thing with `perf record -g` to collect call-graph\ninformation, but that's probably going way larger.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 2 Nov 2018 14:55:17 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Query with high planning time compared to execution time"
},
{
"msg_contents": "Tomas Vondra-4 wrote\n> On 11/02/2018 10:36 AM, Richard Lee wrote:\n> [...]\n> \n>> What are my other options to improve the query planning time?\n>> \n> \n> Can you do a bit of profiling, to determine which part of the query\n> planning process is slow here? \n> [...]\n\nAfter planning profiling, (or in //), you can try to limit the number of\nplans \nthat the planner has to evaluate:\n\nsetting enable_mergejoin to off, or some others from \nhttps://www.postgresql.org/docs/10/static/runtime-config-query.html\nbut you will have to check that the chosen plan is still good\n\nAn other way is maybe reducing the number of indexes (you have so many \nones ...). Usually, needed indexes are PK, UK, indexes for FK, and a \"few\" \nmore.\n\nCould you provide the SQL query to check that ?\n\nRegards\nPAscal \n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Fri, 2 Nov 2018 15:51:40 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Query with high planning time compared to execution time"
},
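A sketch for the index-reduction suggestion: the standard pg_stat_user_indexes view shows indexes the planner has never used, which are candidates for review (whether they are safe to drop still needs case-by-case judgment), followed by the session-level planner experiment mentioned above.

SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

-- affects the current session only:
SET enable_mergejoin = off;
-- ... re-run EXPLAIN ANALYZE on the problem query here and compare
--     both the planning time and the chosen plan ...
RESET enable_mergejoin;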
{
"msg_contents": "Hi,\n\nDebug symbols can only be enabled during configure? How about when\nPostgresql is running?\n\nRegards,\nRichard Lee\n\nOn Fri, Nov 2, 2018 at 9:55 PM Tomas Vondra <[email protected]>\nwrote:\n\n> On 11/02/2018 10:36 AM, Richard Lee wrote:\n> > Hi,\n> >\n> > I'm running a performance test for our application and encountered a\n> > particular query with high planning time compared to the execution.\n> > Please refer to attached explain.out for the explain analyze output.\n> >\n> > Formatted explain: https://explain.depesz.com/s/R834\n> >\n> > The test was performed with Jmeter sending requests to the database,\n> > query was generated by Hibernate which consists of a 133 table UNION.\n> > Also attached are some diagnostic info (database version, database\n> > settings, table definitions and maintenance related information).\n> >\n> > Due to the extremely large query text, I'm choosing to provide\n> > information via attachments instead of pasting in the email body.\n> >\n> > Below are some additional OS information on the database server:\n> > CPU: 8\n> > RAM: 24GB\n> > Disk: SSD\n> > OS: CentOS Linux release 7.4.1708 (Core)\n> >\n> > [root@kvrh7os202 ~]# uname -a\n> > Linux kvrh7os202.comptel.com <http://kvrh7os202.comptel.com>\n> > 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64\n> > x86_64 x86_64 GNU/Linux\n> > [root@kvrh7os202 ~]#\n> >\n> > Things I tried:\n> > 1. Setting random_page_cost = 1.1 and effective_io_concurrency = 200 -\n> > no effect on planning time\n> > 2. Create materialized view for big UNION query - planning time reduced\n> > significantly but not a viable solution\n> >\n>\n> Those changes likely affect the query costing and execution, but the\n> number of plans to consider is probably not going to change much. So\n> planning taking about the same time is kinda expected here.\n>\n> > What are my other options to improve the query planning time?\n> >\n>\n> Can you do a bit of profiling, to determine which part of the query\n> planning process is slow here? That is:\n>\n> 1) make sure you have the debug symbols installed\n> 2) do `perf record`\n> 3) run the benchmark for a while (a minute or so)\n> 4) stop the perf record using Ctrl-C\n> 5) generate a profile using `perf report` and share the result\n>\n> Possibly do the same thing with `perf record -g` to collect call-graph\n> information, but that's probably going way larger.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nHi,Debug symbols can only be enabled during configure? 
How about when Postgresql is running?Regards,Richard LeeOn Fri, Nov 2, 2018 at 9:55 PM Tomas Vondra <[email protected]> wrote:On 11/02/2018 10:36 AM, Richard Lee wrote:\n> Hi,\n> \n> I'm running a performance test for our application and encountered a\n> particular query with high planning time compared to the execution.\n> Please refer to attached explain.out for the explain analyze output.\n> \n> Formatted explain: https://explain.depesz.com/s/R834\n> \n> The test was performed with Jmeter sending requests to the database,\n> query was generated by Hibernate which consists of a 133 table UNION.\n> Also attached are some diagnostic info (database version, database\n> settings, table definitions and maintenance related information).\n> \n> Due to the extremely large query text, I'm choosing to provide\n> information via attachments instead of pasting in the email body.\n> \n> Below are some additional OS information on the database server:\n> CPU: 8\n> RAM: 24GB\n> Disk: SSD\n> OS: CentOS Linux release 7.4.1708 (Core)\n> \n> [root@kvrh7os202 ~]# uname -a\n> Linux kvrh7os202.comptel.com <http://kvrh7os202.comptel.com>\n> 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64\n> x86_64 x86_64 GNU/Linux\n> [root@kvrh7os202 ~]#\n> \n> Things I tried:\n> 1. Setting random_page_cost = 1.1 and effective_io_concurrency = 200 -\n> no effect on planning time\n> 2. Create materialized view for big UNION query - planning time reduced\n> significantly but not a viable solution\n> \n\nThose changes likely affect the query costing and execution, but the\nnumber of plans to consider is probably not going to change much. So\nplanning taking about the same time is kinda expected here.\n\n> What are my other options to improve the query planning time?\n> \n\nCan you do a bit of profiling, to determine which part of the query\nplanning process is slow here? That is:\n\n1) make sure you have the debug symbols installed\n2) do `perf record`\n3) run the benchmark for a while (a minute or so)\n4) stop the perf record using Ctrl-C\n5) generate a profile using `perf report` and share the result\n\nPossibly do the same thing with `perf record -g` to collect call-graph\ninformation, but that's probably going way larger.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 5 Nov 2018 11:36:46 +0800",
"msg_from": "Richard Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Query with high planning time compared to execution time"
},
{
"msg_contents": "On Mon, Nov 05, 2018 at 11:36:46AM +0800, Richard Lee wrote:\n> Hi,\n> \n> Debug symbols can only be enabled during configure? How about when\n> Postgresql is running?\n\nIf you're running from RPMs (maybe from yum.postgresql.org), you can install\npostgresql10-debuginfo (maybe using: \"debuginfo-install postgresql10\").\n\nJust be sure it installs the debuginfo for exactly the same RPM (rather than\ninstalling a new minor version, for example).\n\nIf you compiled it yourself, you can probably point GDB to the un-stripped\nbinaries. Again, assuming you have binaries from exactly the same source\nversion, no additional patches, etc.\n\nJustin\n\n",
"msg_date": "Sun, 4 Nov 2018 22:10:09 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Query with high planning time compared to execution time"
},
{
"msg_contents": "Hi,\n\nI managed to install postgresql10-debuginfo:\n[root@kvrh7os202 /]# yum install -y postgresql10-debuginfo.x86_64\n\nExecuted perf-record and perf-report:\n-bash-4.2$ perf record -g -- psql -U sri sri <\n/var/lib/pgsql/10/data/pg_log/1-b10/query.txt\n< ... snipped ... >\n Planning time: 1817.355 ms\n Execution time: 31.849 ms\n(480 rows)\n\n[ perf record: Woken up 1 times to write data ]\n[ perf record: Captured and wrote 0.025 MB perf.data (136 samples) ]\n-bash-4.2$\n-bash-4.2$ perf report -g > perf_report_20181105_1452.out\n\nPlease refer to the attached for the perf report output (not sure if it was\ndone correctly).\n\nRegards,\nRichard Lee\n\nOn Mon, Nov 5, 2018 at 12:10 PM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Nov 05, 2018 at 11:36:46AM +0800, Richard Lee wrote:\n> > Hi,\n> >\n> > Debug symbols can only be enabled during configure? How about when\n> > Postgresql is running?\n>\n> If you're running from RPMs (maybe from yum.postgresql.org), you can\n> install\n> postgresql10-debuginfo (maybe using: \"debuginfo-install postgresql10\").\n>\n> Just be sure it installs the debuginfo for exactly the same RPM (rather\n> than\n> installing a new minor version, for example).\n>\n> If you compiled it yourself, you can probably point GDB to the un-stripped\n> binaries. Again, assuming you have binaries from exactly the same source\n> version, no additional patches, etc.\n>\n> Justin\n>",
"msg_date": "Mon, 5 Nov 2018 15:04:29 +0800",
"msg_from": "Richard Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Query with high planning time compared to execution time"
},
{
"msg_contents": "On Mon, Nov 05, 2018 at 03:04:29PM +0800, Richard Lee wrote:\n> Executed perf-record and perf-report:\n> -bash-4.2$ perf record -g -- psql -U sri sri <\n> /var/lib/pgsql/10/data/pg_log/1-b10/query.txt\n> < ... snipped ... >\n\nThat's showing perf output for the psql client. What you want is output for\nthe server process (essentially all the client does is move data between the\nuser to the server).\n\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\nJustin\n\n",
"msg_date": "Mon, 5 Nov 2018 07:55:19 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Query with high planning time compared to execution time"
},
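To target the right process, the PID of the backend serving the current session (the one perf should attach to, e.g. with perf record -p <pid> as described on the wiki page above) can be obtained with standard functions:

SELECT pg_backend_pid();

-- or, from another session, list active backends for this database
SELECT pid, state, query
FROM pg_stat_activity
WHERE datname = current_database();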
{
"msg_contents": "Hi,\n\nAh, apologize for the mistake. The entire will take several hours to\ncomplete and the problem query won't be executed until about halfway\nthrough the benchmark. Should I do `perf record` when the query appears? Or\none `perf record` at the start of the test and another one when the query\nappears? I imagine doing a `perf record` of the entire benchmark will fill\nthe storage (only about 100GB of space on the server).\n\nRegards,\nRichard Lee\n\nOn Mon, Nov 5, 2018 at 9:55 PM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Nov 05, 2018 at 03:04:29PM +0800, Richard Lee wrote:\n> > Executed perf-record and perf-report:\n> > -bash-4.2$ perf record -g -- psql -U sri sri <\n> > /var/lib/pgsql/10/data/pg_log/1-b10/query.txt\n> > < ... snipped ... >\n>\n> That's showing perf output for the psql client. What you want is output\n> for\n> the server process (essentially all the client does is move data between\n> the\n> user to the server).\n>\n> https://wiki.postgresql.org/wiki/Profiling_with_perf\n>\n> Justin\n>\n\nHi,Ah, apologize for the mistake. The entire will take several hours to complete and the problem query won't be executed until about halfway through the benchmark. Should I do `perf record` when the query appears? Or one `perf record` at the start of the test and another one when the query appears? I imagine doing a `perf record` of the entire benchmark will fill the storage (only about 100GB of space on the server).Regards,Richard LeeOn Mon, Nov 5, 2018 at 9:55 PM Justin Pryzby <[email protected]> wrote:On Mon, Nov 05, 2018 at 03:04:29PM +0800, Richard Lee wrote:\n> Executed perf-record and perf-report:\n> -bash-4.2$ perf record -g -- psql -U sri sri <\n> /var/lib/pgsql/10/data/pg_log/1-b10/query.txt\n> < ... snipped ... >\n\nThat's showing perf output for the psql client. What you want is output for\nthe server process (essentially all the client does is move data between the\nuser to the server).\n\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\nJustin",
"msg_date": "Tue, 6 Nov 2018 19:40:50 +0800",
"msg_from": "Richard Lee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Query with high planning time compared to execution time"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've figured out how to solve the performance issues I've been encountering\nwith a particular query, but I'm interested in better understanding the\nintuition behind why the better query is so much more performant.\n\nThe query in question involves a NOT IN filter from a CTE:\n\nWITH temp as (\n SELECT temp_id\n FROM other_table\n WHERE ...\n)\nSELECT COUNT(*)\nFROM main_table m\nWHERE m.main_id NOT IN (SELECT temp_id from temp);\n\nThe query plan for (the non-masked version of) this query includes:\nFilter: (NOT (SubPlan 1))\n Rows Removed by Filter: 1\n SubPlan 1\n -> CTE Scan on temp (cost=0.00..1950.60 rows=97530 width=418)\n(actual time=0.018..4.170 rows=15697 loops=100731)\n\nMy understanding is that PostgreSQL is not able to make this into a hashed\nSubplan because the expected number of rows * width of rows is too large\nfor the work_mem setting, and so instead it has to do many repeated linear\npasses to find whether the various values of main_id are in the list of\ntemp_ids.\n\nThe resolution to this problem was discovered via\nhttps://explainextended.com/2009/09/16/not-in-vs-not-exists-vs-left-join-is-null-postgresql/,\nand involves rewriting as so:\n\nWITH temp as (\n SELECT temp_id\n FROM other_table\n WHERE ...\n)\nSELECT COUNT(*)\nFROM main_table m\nWHERE NOT EXISTS (SELECT temp_id from temp where temp_id = m.main_id);\n\nThe query plan for this query (which I believe is equivalent to what it\nwould be if I instead explicitly LEFT JOINed main_table to temp and used a\nWHERE to filter out NULL values of temp_id) does not involve a high number\nof loops like above:\n\n-> Merge Anti Join (cost=147305.45..149523.68 rows=192712 width=4)\n(actual time=5050.773..5622.266 rows=158850 loops=1)\n Merge Cond: (m.main_id = temp.temp_id)\n -> Sort (cost=115086.98..115637.10 rows=220050 width=19) (actual\ntime=1290.829..1655.066 rows=199226 loops=1)\n Sort Key: m.main_id\n Sort Method: external merge Disk: 5632kB\n -> Materialize (cost=32218.47..32764.37 rows=109180 width=418) (actual\ntime=3759.936..3787.724 rows=38268 loops=1)\n -> Sort (cost=32218.47..32491.42 rows=109180 width=418) (actual\ntime=3759.933..3771.117 rows=38268 loops=1)\n Sort Key: temp.temp_id\n Sort Method: quicksort Memory: 3160kB\n -> CTE Scan on temp (cost=0.00..2183.60 rows=109180\nwidth=418) (actual time=2316.745..3735.486 rows=38268 loops=1)\n\nInstead it sorts using disk, and uses a MERGE ANTI JOIN, which I understand\nto eliminate the need for multiple passes through the table temp.\n\nMy primary question is: why is this approach only possible (for data too\nlarge for memory) when using NOT EXISTS, and not when using NOT IN?\n\nI understand that there is a slight difference in the meaning of the two\nexpressions, in that NOT IN will produce NULL if there are any NULL values\nin the right hand side (in this case there are none, and the queries should\nreturn the same COUNT). But if anything, I would expect that to improve\nperformance of the NOT IN operation, since a single pass through that data\nshould reveal if there are any NULL values, at which point that information\ncould be used to short-circuit. 
So I am a bit baffled.\n\nThanks very much for your help!\n\nBest,\nLincoln\n\n--\nLincoln Swaine-Moore\n\n",
"msg_date": "Thu, 8 Nov 2018 14:35:32 -0500",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "NOT IN vs. NOT EXISTS performance"
},
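A self-contained sketch of the contrast described above, with hypothetical temp tables: with a small inner set NOT IN may still get a hashed SubPlan (fast), but it never becomes an anti-join, while NOT EXISTS typically is planned as one.

CREATE TEMP TABLE big   AS SELECT g AS id FROM generate_series(1, 100000) g;
CREATE TEMP TABLE small AS SELECT g AS id FROM generate_series(1, 1000) g;
ANALYZE big;
ANALYZE small;

-- SubPlan-based filter: hashed if it fits in work_mem, per-row rescans otherwise
EXPLAIN SELECT count(*) FROM big b
WHERE b.id NOT IN (SELECT s.id FROM small s);

-- anti-join form
EXPLAIN SELECT count(*) FROM big b
WHERE NOT EXISTS (SELECT 1 FROM small s WHERE s.id = b.id);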
{
"msg_contents": "On 9 November 2018 at 08:35, Lincoln Swaine-Moore\n<[email protected]> wrote:\n> My primary question is: why is this approach only possible (for data too\n> large for memory) when using NOT EXISTS, and not when using NOT IN?\n>\n> I understand that there is a slight difference in the meaning of the two\n> expressions, in that NOT IN will produce NULL if there are any NULL values\n> in the right hand side (in this case there are none, and the queries should\n> return the same COUNT). But if anything, I would expect that to improve\n> performance of the NOT IN operation, since a single pass through that data\n> should reveal if there are any NULL values, at which point that information\n> could be used to short-circuit. So I am a bit baffled.\n\nThe problem is that the planner makes the plan and would have to know\nbeforehand that no NULLs could exist on either side of the join. For\nmore simple cases it could make use of NOT NULL constaints, but more\ncomplex cases exist, such as:\n\nSELECT * FROM t1 LEFT JOIN t2 ON t1.x = t2.y WHERE t2.y NOT IN(SELECT\nz FROM t3);\n\nThere's a bit more reading about the complexity of this in [1]\n\n[1] https://www.postgresql.org/message-id/flat/CAMkU%3D1zPVbez_HWao781L8PzFk%2Bd1J8VaJuhyjUHaRifk6OcUA%40mail.gmail.com#7c6d3178c18103d8508f3ec5982b1b8e\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 9 Nov 2018 10:11:40 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN vs. NOT EXISTS performance"
},
{
"msg_contents": "On Thu, Nov 8, 2018 at 3:12 PM David Rowley\n<[email protected]> wrote:\n>\n> On 9 November 2018 at 08:35, Lincoln Swaine-Moore\n> <[email protected]> wrote:\n> > My primary question is: why is this approach only possible (for data too\n> > large for memory) when using NOT EXISTS, and not when using NOT IN?\n> >\n> > I understand that there is a slight difference in the meaning of the two\n> > expressions, in that NOT IN will produce NULL if there are any NULL values\n> > in the right hand side (in this case there are none, and the queries should\n> > return the same COUNT). But if anything, I would expect that to improve\n> > performance of the NOT IN operation, since a single pass through that data\n> > should reveal if there are any NULL values, at which point that information\n> > could be used to short-circuit. So I am a bit baffled.\n>\n> The problem is that the planner makes the plan and would have to know\n> beforehand that no NULLs could exist on either side of the join.\n\nYeah, the core issue is the SQL rules that define NOT IN behaves as:\npostgres=# select 1 not in (select 2);\n ?column?\n──────────\n t\n(1 row)\n\npostgres=# select 1 not in (select 2 union all select null);\n ?column?\n──────────\n\n(1 row)\n\nThere's a certain logic to it but it's a death sentence for performance.\n\nmerlin\n\n",
"msg_date": "Fri, 9 Nov 2018 07:45:56 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN vs. NOT EXISTS performance"
},
{
"msg_contents": "Thanks, both!\n\nThat's a very interesting thread. I was confident this was a subject that\nhad been discussed--just wasn't sure where--so thank you for forwarding.\n\nI guess the big-picture summary is that NOT IN's definition introduces\ncomplexity (the nature of which I now understand better) that is usually\nunwarranted by the question the querier is asking. So NOT EXISTS will\nalmost always be preferable when a subquery is involved, unless the\nbehavior around NULL values is specifically desired.\n\nOn Fri, Nov 9, 2018 at 8:45 AM Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Nov 8, 2018 at 3:12 PM David Rowley\n> <[email protected]> wrote:\n> >\n> > On 9 November 2018 at 08:35, Lincoln Swaine-Moore\n> > <[email protected]> wrote:\n> > > My primary question is: why is this approach only possible (for data\n> too\n> > > large for memory) when using NOT EXISTS, and not when using NOT IN?\n> > >\n> > > I understand that there is a slight difference in the meaning of the\n> two\n> > > expressions, in that NOT IN will produce NULL if there are any NULL\n> values\n> > > in the right hand side (in this case there are none, and the queries\n> should\n> > > return the same COUNT). But if anything, I would expect that to improve\n> > > performance of the NOT IN operation, since a single pass through that\n> data\n> > > should reveal if there are any NULL values, at which point that\n> information\n> > > could be used to short-circuit. So I am a bit baffled.\n> >\n> > The problem is that the planner makes the plan and would have to know\n> > beforehand that no NULLs could exist on either side of the join.\n>\n> Yeah, the core issue is the SQL rules that define NOT IN behaves as:\n> postgres=# select 1 not in (select 2);\n> ?column?\n> ──────────\n> t\n> (1 row)\n>\n> postgres=# select 1 not in (select 2 union all select null);\n> ?column?\n> ──────────\n>\n> (1 row)\n>\n> There's a certain logic to it but it's a death sentence for performance.\n>\n> merlin\n>\n\n\n-- \nLincoln Swaine-Moore\n\nThanks, both! That's a very interesting thread. I was confident this was a subject that had been discussed--just wasn't sure where--so thank you for forwarding.I guess the big-picture summary is that NOT IN's definition introduces complexity (the nature of which I now understand better) that is usually unwarranted by the question the querier is asking. So NOT EXISTS will almost always be preferable when a subquery is involved, unless the behavior around NULL values is specifically desired.On Fri, Nov 9, 2018 at 8:45 AM Merlin Moncure <[email protected]> wrote:On Thu, Nov 8, 2018 at 3:12 PM David Rowley\n<[email protected]> wrote:\n>\n> On 9 November 2018 at 08:35, Lincoln Swaine-Moore\n> <[email protected]> wrote:\n> > My primary question is: why is this approach only possible (for data too\n> > large for memory) when using NOT EXISTS, and not when using NOT IN?\n> >\n> > I understand that there is a slight difference in the meaning of the two\n> > expressions, in that NOT IN will produce NULL if there are any NULL values\n> > in the right hand side (in this case there are none, and the queries should\n> > return the same COUNT). But if anything, I would expect that to improve\n> > performance of the NOT IN operation, since a single pass through that data\n> > should reveal if there are any NULL values, at which point that information\n> > could be used to short-circuit. 
So I am a bit baffled.\n>\n> The problem is that the planner makes the plan and would have to know\n> beforehand that no NULLs could exist on either side of the join.\n\nYeah, the core issue is the SQL rules that define NOT IN behaves as:\npostgres=# select 1 not in (select 2);\n ?column?\n──────────\n t\n(1 row)\n\npostgres=# select 1 not in (select 2 union all select null);\n ?column?\n──────────\n\n(1 row)\n\nThere's a certain logic to it but it's a death sentence for performance.\n\nmerlin\n-- Lincoln Swaine-Moore",
"msg_date": "Fri, 9 Nov 2018 19:06:15 -0500",
"msg_from": "Lincoln Swaine-Moore <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN vs. NOT EXISTS performance"
}
] |
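To make the NULL behaviour summarised in that last message concrete, here is a tiny self-contained illustration; the table and column names are invented for the example and are not from the thread.

    CREATE TEMP TABLE parent (id int);
    CREATE TEMP TABLE child  (parent_id int);
    INSERT INTO parent VALUES (1), (2);
    INSERT INTO child  VALUES (1), (NULL);

    -- The NULL on the right-hand side makes the NOT IN predicate "unknown"
    -- for every outer row, so this returns no rows at all.
    SELECT id FROM parent
    WHERE id NOT IN (SELECT parent_id FROM child);

    -- NOT EXISTS ignores the NULL and can be planned as an anti-join;
    -- this returns id = 2, which is usually what was meant.
    SELECT id FROM parent p
    WHERE NOT EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id);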
[
{
"msg_contents": "Hi, I'm also experiencing the problem: dsa_allocate could not find 7 free\npages CONTEXT: parallel worker\n\nI'm running: PostgreSQL 10.5 (Ubuntu 10.5-1.pgdg16.04+1) on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0\n20160609, 64-bit\n\nquery plan: (select statement over parent table to many partitions):\nselect ...\nfrom fa\nwhere c_id in (<ID_LIST>) and\ndatetime >= '2018/01/01'\nand ((dims ? 'p' and dims ? 'mcp')\nor (datasource in (FA', 'GA')))\nand not datasource = 'm'\nGROUP BY datasource, dims ->'ct', dims ->'mcp', dims -> 'p', dims -> 'sp':\n\nFinalize GroupAggregate (cost=31514757.77..31519357.77 rows=40000 width=223)\n Group Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims ->\n'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text))\n -> Sort (cost=31514757.77..31515057.77 rows=120000 width=223)\n Sort Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims\n-> 'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text))\n -> Gather (cost=31491634.17..31504634.17 rows=120000 width=223)\n Workers Planned: 3\n -> Partial HashAggregate\n(cost=31490634.17..31491634.17 rows=40000 width=223)\n Group Key: fa.datasource, (fa.dims ->\n'ct'::text), (fa.dims -> 'mcp'::text), (fa.dims -> 'p'::text),\n(fa.dims -> 'sp'::text)\n -> Result (cost=0.00..31364713.39 rows=5596479 width=175)\n -> Append (cost=0.00..31252783.81\nrows=5596479 width=659)\n -> Parallel Seq Scan on fa\n(cost=0.00..0.00 rows=1 width=580)\n Filter: ((datetime >=\n'2018-01-01 00:00:00+01'::timestamp with time zone) AND\n((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ?\n'mcp'::text)) OR ((datasource)::text =\nANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('{<ID_LIST>}'::bigint[])))\n -> Parallel Bitmap Heap Scan on\nfa_10 (cost=1226.36..53641.49 rows=1 width=1290)\n Recheck Cond: (datetime >=\n'2018-01-01 00:00:00+01'::timestamp with time zone)\n Filter: (((datasource)::text <>\n'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR\n((datasource)::text = ANY ('<ID_LIST>'::bigint[])))\n -> Bitmap Index Scan on\nfa_10_rangestart (cost=0.00..1226.36 rows=32259 width=0)\n Index Cond: (datetime >=\n'2018-01-01 00:00:00+01'::timestamp with time zone)\n -> Parallel Seq Scan on fa_105\n(cost=0.00..11.99 rows=1 width=580)\n Filter: ((datetime >=\n'2018-01-01 00:00:00+01'::timestamp with time zone) AND\n((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ?\n'mcp'::text)) OR ((datasource)::text =\nANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>'::bigint[])))\n -> Parallel Seq Scan on fa_106\n(cost=0.00..11.99 rows=1 width=580)\n Filter: ((datetime >=\n'2018-01-01 00:00:00+01'::timestamp with time zone) AND\n((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ?\n'mcp'::text)) OR ((datasource)::text =\nANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>..........\n\n\n\n--\nregards,\nJakub Glapa\n\nHi, I'm also experiencing the problem: dsa_allocate could not find 7 free pages CONTEXT: parallel workerI'm running: PostgreSQL 10.5 (Ubuntu 10.5-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bitquery plan: (select statement over parent table to many partitions):select ... from fa where c_id in (<ID_LIST>) and datetime >= '2018/01/01' and ((dims ? 'p' and dims ? 
'mcp') or (datasource in (FA', 'GA'))) and not datasource = 'm' GROUP BY datasource, dims ->'ct', dims ->'mcp', dims -> 'p', dims -> 'sp':Finalize GroupAggregate (cost=31514757.77..31519357.77 rows=40000 width=223) Group Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims -> 'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text)) -> Sort (cost=31514757.77..31515057.77 rows=120000 width=223) Sort Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims -> 'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text)) -> Gather (cost=31491634.17..31504634.17 rows=120000 width=223) Workers Planned: 3 -> Partial HashAggregate (cost=31490634.17..31491634.17 rows=40000 width=223) Group Key: fa.datasource, (fa.dims -> 'ct'::text), (fa.dims -> 'mcp'::text), (fa.dims -> 'p'::text), (fa.dims -> 'sp'::text) -> Result (cost=0.00..31364713.39 rows=5596479 width=175) -> Append (cost=0.00..31252783.81 rows=5596479 width=659) -> Parallel Seq Scan on fa (cost=0.00..0.00 rows=1 width=580) Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('{<ID_LIST>}'::bigint[]))) -> Parallel Bitmap Heap Scan on fa_10 (cost=1226.36..53641.49 rows=1 width=1290) Recheck Cond: (datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) Filter: (((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('<ID_LIST>'::bigint[]))) -> Bitmap Index Scan on fa_10_rangestart (cost=0.00..1226.36 rows=32259 width=0) Index Cond: (datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) -> Parallel Seq Scan on fa_105 (cost=0.00..11.99 rows=1 width=580) Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>'::bigint[]))) -> Parallel Seq Scan on fa_106 (cost=0.00..11.99 rows=1 width=580) Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>..........--regards,Jakub Glapa",
"msg_date": "Tue, 13 Nov 2018 14:08:24 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "Looks like my email didn't match the right thread:\nhttps://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com\nAny chance to get some feedback on this?\n--\nregards,\nJakub Glapa\n\n\nOn Tue, Nov 13, 2018 at 2:08 PM Jakub Glapa <[email protected]> wrote:\n\n> Hi, I'm also experiencing the problem: dsa_allocate could not find 7 free\n> pages CONTEXT: parallel worker\n>\n> I'm running: PostgreSQL 10.5 (Ubuntu 10.5-1.pgdg16.04+1) on\n> x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0\n> 20160609, 64-bit\n>\n> query plan: (select statement over parent table to many partitions):\n> select ...\n> from fa\n> where c_id in (<ID_LIST>) and\n> datetime >= '2018/01/01'\n> and ((dims ? 'p' and dims ? 'mcp')\n> or (datasource in (FA', 'GA')))\n> and not datasource = 'm'\n> GROUP BY datasource, dims ->'ct', dims ->'mcp', dims -> 'p', dims -> 'sp':\n>\n> Finalize GroupAggregate (cost=31514757.77..31519357.77 rows=40000 width=223)\n> Group Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims -> 'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text))\n> -> Sort (cost=31514757.77..31515057.77 rows=120000 width=223)\n> Sort Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims -> 'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text))\n> -> Gather (cost=31491634.17..31504634.17 rows=120000 width=223)\n> Workers Planned: 3\n> -> Partial HashAggregate (cost=31490634.17..31491634.17 rows=40000 width=223)\n> Group Key: fa.datasource, (fa.dims -> 'ct'::text), (fa.dims -> 'mcp'::text), (fa.dims -> 'p'::text), (fa.dims -> 'sp'::text)\n> -> Result (cost=0.00..31364713.39 rows=5596479 width=175)\n> -> Append (cost=0.00..31252783.81 rows=5596479 width=659)\n> -> Parallel Seq Scan on fa (cost=0.00..0.00 rows=1 width=580)\n> Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text =\n> ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('{<ID_LIST>}'::bigint[])))\n> -> Parallel Bitmap Heap Scan on fa_10 (cost=1226.36..53641.49 rows=1 width=1290)\n> Recheck Cond: (datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone)\n> Filter: (((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('<ID_LIST>'::bigint[])))\n> -> Bitmap Index Scan on fa_10_rangestart (cost=0.00..1226.36 rows=32259 width=0)\n> Index Cond: (datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone)\n> -> Parallel Seq Scan on fa_105 (cost=0.00..11.99 rows=1 width=580)\n> Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text =\n> ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>'::bigint[])))\n> -> Parallel Seq Scan on fa_106 (cost=0.00..11.99 rows=1 width=580)\n> Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 
'mcp'::text)) OR ((datasource)::text =\n> ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>..........\n>\n>\n>\n> --\n> regards,\n> Jakub Glapa\n>\n\nLooks like my email didn't match the right thread: https://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.comAny chance to get some feedback on this?--regards,Jakub GlapaOn Tue, Nov 13, 2018 at 2:08 PM Jakub Glapa <[email protected]> wrote:Hi, I'm also experiencing the problem: dsa_allocate could not find 7 free pages CONTEXT: parallel workerI'm running: PostgreSQL 10.5 (Ubuntu 10.5-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bitquery plan: (select statement over parent table to many partitions):select ... from fa where c_id in (<ID_LIST>) and datetime >= '2018/01/01' and ((dims ? 'p' and dims ? 'mcp') or (datasource in (FA', 'GA'))) and not datasource = 'm' GROUP BY datasource, dims ->'ct', dims ->'mcp', dims -> 'p', dims -> 'sp':Finalize GroupAggregate (cost=31514757.77..31519357.77 rows=40000 width=223) Group Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims -> 'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text)) -> Sort (cost=31514757.77..31515057.77 rows=120000 width=223) Sort Key: fa.datasource, ((fa.dims -> 'ct'::text)), ((fa.dims -> 'mcp'::text)), ((fa.dims -> 'p'::text)), ((fa.dims -> 'sp'::text)) -> Gather (cost=31491634.17..31504634.17 rows=120000 width=223) Workers Planned: 3 -> Partial HashAggregate (cost=31490634.17..31491634.17 rows=40000 width=223) Group Key: fa.datasource, (fa.dims -> 'ct'::text), (fa.dims -> 'mcp'::text), (fa.dims -> 'p'::text), (fa.dims -> 'sp'::text) -> Result (cost=0.00..31364713.39 rows=5596479 width=175) -> Append (cost=0.00..31252783.81 rows=5596479 width=659) -> Parallel Seq Scan on fa (cost=0.00..0.00 rows=1 width=580) Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('{<ID_LIST>}'::bigint[]))) -> Parallel Bitmap Heap Scan on fa_10 (cost=1226.36..53641.49 rows=1 width=1290) Recheck Cond: (datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) Filter: (((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('<ID_LIST>'::bigint[]))) -> Bitmap Index Scan on fa_10_rangestart (cost=0.00..1226.36 rows=32259 width=0) Index Cond: (datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) -> Parallel Seq Scan on fa_105 (cost=0.00..11.99 rows=1 width=580) Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>'::bigint[]))) -> Parallel Seq Scan on fa_106 (cost=0.00..11.99 rows=1 width=580) Filter: ((datetime >= '2018-01-01 00:00:00+01'::timestamp with time zone) AND ((datasource)::text <> 'M'::text) AND (((dims ? 'p'::text) AND (dims ? 'mcp'::text)) OR ((datasource)::text = ANY ('{\"FA\",\"GA\"}'::text[]))) AND (c_id = ANY ('<ID_LIST>..........--regards,Jakub Glapa",
"msg_date": "Wed, 21 Nov 2018 15:26:42 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "On Wed, Nov 21, 2018 at 03:26:42PM +0100, Jakub Glapa wrote:\n> Looks like my email didn't match the right thread:\n> https://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com\n> Any chance to get some feedback on this?\n\nIn the related thread, it looks like Thomas backpatched a fix to v10, and so I\nguess this should be resolved in 10.6, which was released couple weeks ago.\nhttps://www.postgresql.org/message-id/CAEepm%3D0QxoUSkFqYbvmxi2eNvvU6BkqH6fTOu4oOzc1MRAT4Dw%40mail.gmail.com\n\nCould you upgrade and check ?\n\n38763d67784c6563d08dbea5c9f913fa174779b8 in master\n\n|commit ba20d392584cdecc2808fe936448d127f43f2c07\n|Author: Thomas Munro <[email protected]>\n|Date: Thu Sep 20 15:52:39 2018 +1200\n|\n| Fix segment_bins corruption in dsa.c.\n\nJustin\n\n",
"msg_date": "Thu, 22 Nov 2018 10:09:58 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "Hi Justin, I've upgrade to 10.6 but the error still shows up:\n\npsql db@host as user => select version();\n version\n\n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n PostgreSQL 10.6 (Ubuntu 10.6-1.pgdg16.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit\n(1 row)\n\nTime: 110.512 ms\n\npsql db@host as user => select <COLUMNS> from fa where client_id in\n(<IDS>) and datetime >= '2018/01/01' and ((dims ? 'p' and dimensions ?\n'mcp') or (datasource in ('FA', 'GA'))) and not datasource = 'M' GROUP BY\ndatasource, dims ->'ct', dimensions ->'mct', dims -> 'p', dims -> 'sp';\nERROR: XX000: dsa_allocate could not find 7 free pages\nCONTEXT: parallel worker\nLOCATION: dsa_allocate_extended, dsa.c:729\nTime: 131400.831 ms (02:11.401)\n\nthe above is execute with max_parallel_workers=8\nIf I set it to max_parallel_workers=0 I also get and my connection is being\nclosed (but the server is alive):\n\npsql db@host as user => set max_parallel_workers=0;\nSET\nTime: 89.542 ms\npsql db@host as user => SELECT <QUERY>;\nFATAL: XX000: dsa_allocate could not find 7 free pages\nLOCATION: dsa_allocate_extended, dsa.c:729\nSSL connection has been closed unexpectedly\nThe connection to the server was lost. Attempting reset: Succeeded.\nTime: 200390.466 ms (03:20.390)\n\n\n\n--\nregards,\nJakub Glapa\n\n\nOn Thu, Nov 22, 2018 at 5:10 PM Justin Pryzby <[email protected]> wrote:\n\n> On Wed, Nov 21, 2018 at 03:26:42PM +0100, Jakub Glapa wrote:\n> > Looks like my email didn't match the right thread:\n> >\n> https://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com\n> > Any chance to get some feedback on this?\n>\n> In the related thread, it looks like Thomas backpatched a fix to v10, and\n> so I\n> guess this should be resolved in 10.6, which was released couple weeks ago.\n>\n> https://www.postgresql.org/message-id/CAEepm%3D0QxoUSkFqYbvmxi2eNvvU6BkqH6fTOu4oOzc1MRAT4Dw%40mail.gmail.com\n>\n> Could you upgrade and check ?\n>\n> 38763d67784c6563d08dbea5c9f913fa174779b8 in master\n>\n> |commit ba20d392584cdecc2808fe936448d127f43f2c07\n> |Author: Thomas Munro <[email protected]>\n> |Date: Thu Sep 20 15:52:39 2018 +1200\n> |\n> | Fix segment_bins corruption in dsa.c.\n>\n> Justin\n>\n\nHi Justin, I've upgrade to 10.6 but the error still shows up:psql db@host as user => select version(); version ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── PostgreSQL 10.6 (Ubuntu 10.6-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit(1 row)Time: 110.512 mspsql db@host as user => select <COLUMNS> from fa where client_id in (<IDS>) and datetime >= '2018/01/01' and ((dims ? 'p' and dimensions ? 
'mcp') or (datasource in ('FA', 'GA'))) and not datasource = 'M' GROUP BY datasource, dims ->'ct', dimensions ->'mct', dims -> 'p', dims -> 'sp';ERROR: XX000: dsa_allocate could not find 7 free pagesCONTEXT: parallel workerLOCATION: dsa_allocate_extended, dsa.c:729Time: 131400.831 ms (02:11.401)the above is execute with max_parallel_workers=8If I set it to max_parallel_workers=0 I also get and my connection is being closed (but the server is alive):psql db@host as user => set max_parallel_workers=0;SETTime: 89.542 mspsql db@host as user => SELECT <QUERY>;FATAL: XX000: dsa_allocate could not find 7 free pagesLOCATION: dsa_allocate_extended, dsa.c:729SSL connection has been closed unexpectedlyThe connection to the server was lost. Attempting reset: Succeeded.Time: 200390.466 ms (03:20.390)--regards,Jakub GlapaOn Thu, Nov 22, 2018 at 5:10 PM Justin Pryzby <[email protected]> wrote:On Wed, Nov 21, 2018 at 03:26:42PM +0100, Jakub Glapa wrote:\n> Looks like my email didn't match the right thread:\n> https://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com\n> Any chance to get some feedback on this?\n\nIn the related thread, it looks like Thomas backpatched a fix to v10, and so I\nguess this should be resolved in 10.6, which was released couple weeks ago.\nhttps://www.postgresql.org/message-id/CAEepm%3D0QxoUSkFqYbvmxi2eNvvU6BkqH6fTOu4oOzc1MRAT4Dw%40mail.gmail.com\n\nCould you upgrade and check ?\n\n38763d67784c6563d08dbea5c9f913fa174779b8 in master\n\n|commit ba20d392584cdecc2808fe936448d127f43f2c07\n|Author: Thomas Munro <[email protected]>\n|Date: Thu Sep 20 15:52:39 2018 +1200\n|\n| Fix segment_bins corruption in dsa.c.\n\nJustin",
"msg_date": "Fri, 23 Nov 2018 15:31:41 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "On Fri, Nov 23, 2018 at 03:31:41PM +0100, Jakub Glapa wrote:\n> Hi Justin, I've upgrade to 10.6 but the error still shows up:\n> \n> If I set it to max_parallel_workers=0 I also get and my connection is being\n> closed (but the server is alive):\n> \n> psql db@host as user => set max_parallel_workers=0;\n\nCan you show the plan (explain without analyze) for the nonparallel case?\n\nAlso, it looks like the server crashed in that case (even if it restarted\nitself quickly). Can you confirm ?\n\nFor example: dmesg |tail might show \"postmaster[8582]: segfault [...]\" or\nsimilar. And other clients would've been disconnected. (For example, you'd\nget an error in another, previously-connected session the next time you run:\nSELECT 1).\n\nIn any case, could you try to find a minimal way to reproduce the problem ? I\nmean, is the dataset and query small and something you can publish, or can you\nreproduce with data generated from (for example) generate_series() ?\n\nThanks,\nJustin\n\n",
"msg_date": "Fri, 23 Nov 2018 10:10:28 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "So, the issue occurs only on production db an right now I cannot reproduce\nit.\nI had a look at dmesg and indeed I see something like:\n\n\n--\nregards,\npozdrawiam,\nJakub Glapa\n\n\nOn Fri, Nov 23, 2018 at 5:10 PM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Nov 23, 2018 at 03:31:41PM +0100, Jakub Glapa wrote:\n> > Hi Justin, I've upgrade to 10.6 but the error still shows up:\n> >\n> > If I set it to max_parallel_workers=0 I also get and my connection is\n> being\n> > closed (but the server is alive):\n> >\n> > psql db@host as user => set max_parallel_workers=0;\n>\n> Can you show the plan (explain without analyze) for the nonparallel case?\n>\n> Also, it looks like the server crashed in that case (even if it restarted\n> itself quickly). Can you confirm ?\n>\n> For example: dmesg |tail might show \"postmaster[8582]: segfault [...]\" or\n> similar. And other clients would've been disconnected. (For example,\n> you'd\n> get an error in another, previously-connected session the next time you\n> run:\n> SELECT 1).\n>\n> In any case, could you try to find a minimal way to reproduce the problem\n> ? I\n> mean, is the dataset and query small and something you can publish, or can\n> you\n> reproduce with data generated from (for example) generate_series() ?\n>\n> Thanks,\n> Justin\n>\n\nSo, the issue occurs only on production db an right now I cannot reproduce it.I had a look at dmesg and indeed I see something like:--regards,pozdrawiam,Jakub GlapaOn Fri, Nov 23, 2018 at 5:10 PM Justin Pryzby <[email protected]> wrote:On Fri, Nov 23, 2018 at 03:31:41PM +0100, Jakub Glapa wrote:\n> Hi Justin, I've upgrade to 10.6 but the error still shows up:\n> \n> If I set it to max_parallel_workers=0 I also get and my connection is being\n> closed (but the server is alive):\n> \n> psql db@host as user => set max_parallel_workers=0;\n\nCan you show the plan (explain without analyze) for the nonparallel case?\n\nAlso, it looks like the server crashed in that case (even if it restarted\nitself quickly). Can you confirm ?\n\nFor example: dmesg |tail might show \"postmaster[8582]: segfault [...]\" or\nsimilar. And other clients would've been disconnected. (For example, you'd\nget an error in another, previously-connected session the next time you run:\nSELECT 1).\n\nIn any case, could you try to find a minimal way to reproduce the problem ? I\nmean, is the dataset and query small and something you can publish, or can you\nreproduce with data generated from (for example) generate_series() ?\n\nThanks,\nJustin",
"msg_date": "Mon, 26 Nov 2018 16:26:45 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "sorry, the message was sent out to early.\n\nSo, the issue occurs only on production db an right now I cannot reproduce\nit.\nI had a look at dmesg and indeed I see something like:\n\npostgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030\nerror 4 in postgres[557833db7000+6d5000]\n\nand AFAIR other sessions I had opened at that time were indeed disconnected.\n\nWhen it comes to the execution plan for max_parallel_workers=0.\nThere is no real difference.\nI guess *max_parallel_workers *has no effect and\n*max_parallel_workers_per_gather\n*should have been used.\n Why it caused a server crash is unknown right now.\n\nI cannot really give a reproducible recipe.\nMy case is that I have a parent table with ~300 partitions.\nAnd I initiate a select on ~100 of them with select [...] from fa where\nclient_id(<IDS>) and [filters].\nI know this is not effective. Every partition has several indexes and this\nquery acquires a lot of locks... even for relations not used in the query.\nPG11 should have better partition pruning mechanism but I'm not there yet\nto upgrade.\nSome of the partitions have millions of rows.\n\nI'll keep observing maybe I'l find a pattern when this occurs.\n\n\n--\nregards,\npozdrawiam,\nJakub Glapa\n\n\nOn Mon, Nov 26, 2018 at 4:26 PM Jakub Glapa <[email protected]> wrote:\n\n> So, the issue occurs only on production db an right now I cannot reproduce\n> it.\n> I had a look at dmesg and indeed I see something like:\n>\n>\n> --\n> regards,\n> Jakub Glapa\n>\n>\n> On Fri, Nov 23, 2018 at 5:10 PM Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Fri, Nov 23, 2018 at 03:31:41PM +0100, Jakub Glapa wrote:\n>> > Hi Justin, I've upgrade to 10.6 but the error still shows up:\n>> >\n>> > If I set it to max_parallel_workers=0 I also get and my connection is\n>> being\n>> > closed (but the server is alive):\n>> >\n>> > psql db@host as user => set max_parallel_workers=0;\n>>\n>> Can you show the plan (explain without analyze) for the nonparallel case?\n>>\n>> Also, it looks like the server crashed in that case (even if it restarted\n>> itself quickly). Can you confirm ?\n>>\n>> For example: dmesg |tail might show \"postmaster[8582]: segfault [...]\" or\n>> similar. And other clients would've been disconnected. (For example,\n>> you'd\n>> get an error in another, previously-connected session the next time you\n>> run:\n>> SELECT 1).\n>>\n>> In any case, could you try to find a minimal way to reproduce the problem\n>> ? I\n>> mean, is the dataset and query small and something you can publish, or\n>> can you\n>> reproduce with data generated from (for example) generate_series() ?\n>>\n>> Thanks,\n>> Justin\n>>\n>\n\nsorry, the message was sent out to early.So, the issue occurs only on production db an right now I cannot reproduce it.I had a look at dmesg and indeed I see something like:postgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030 error 4 in postgres[557833db7000+6d5000]and AFAIR other sessions I had opened at that time were indeed disconnected.When it comes to the execution plan for max_parallel_workers=0. There is no real difference.I guess max_parallel_workers has no effect and max_parallel_workers_per_gather should have been used. Why it caused a server crash is unknown right now.I cannot really give a reproducible recipe. My case is that I have a parent table with ~300 partitions. And I initiate a select on ~100 of them with select [...] from fa where client_id(<IDS>) and [filters]. I know this is not effective. 
Every partition has several indexes and this query acquires a lot of locks... even for relations not used in the query. PG11 should have better partition pruning mechanism but I'm not there yet to upgrade.Some of the partitions have millions of rows.I'll keep observing maybe I'l find a pattern when this occurs.--regards,pozdrawiam,Jakub GlapaOn Mon, Nov 26, 2018 at 4:26 PM Jakub Glapa <[email protected]> wrote:So, the issue occurs only on production db an right now I cannot reproduce it.I had a look at dmesg and indeed I see something like:--regards,Jakub GlapaOn Fri, Nov 23, 2018 at 5:10 PM Justin Pryzby <[email protected]> wrote:On Fri, Nov 23, 2018 at 03:31:41PM +0100, Jakub Glapa wrote:\n> Hi Justin, I've upgrade to 10.6 but the error still shows up:\n> \n> If I set it to max_parallel_workers=0 I also get and my connection is being\n> closed (but the server is alive):\n> \n> psql db@host as user => set max_parallel_workers=0;\n\nCan you show the plan (explain without analyze) for the nonparallel case?\n\nAlso, it looks like the server crashed in that case (even if it restarted\nitself quickly). Can you confirm ?\n\nFor example: dmesg |tail might show \"postmaster[8582]: segfault [...]\" or\nsimilar. And other clients would've been disconnected. (For example, you'd\nget an error in another, previously-connected session the next time you run:\nSELECT 1).\n\nIn any case, could you try to find a minimal way to reproduce the problem ? I\nmean, is the dataset and query small and something you can publish, or can you\nreproduce with data generated from (for example) generate_series() ?\n\nThanks,\nJustin",
"msg_date": "Mon, 26 Nov 2018 16:38:35 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "Hi, thanks for following through.\n\nOn Mon, Nov 26, 2018 at 04:38:35PM +0100, Jakub Glapa wrote:\n> I had a look at dmesg and indeed I see something like:\n> \n> postgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030\n> error 4 in postgres[557833db7000+6d5000]\n\nThat's useful, I think \"at 0\" means a null pointer dereferenced.\n\nCan you check /var/log/messages (or ./syslog or similar) and verify the\ntimestamp matches the time of the last crash (and not an unrelated crash) ?\n\nThe logs might also indicate if the process dumped a core file anywhere. \n\nI don't know what distribution/OS you're using, but it might be good to install\nabrt (RHEL) or apport (ubuntu) or other mechanism to save coredumps, or to\nmanually configure /proc/sys/kernel/core_pattern.\n\nOn centos, I usually set:\n/etc/abrt/abrt-action-save-package-data.conf\nOpenGPGCheck = no\n\nAlso, it might be good to install debug symbols, in case you do find a core\ndump now or get one later.\n\nOn centos: yum install postgresql10-debuginfo or debuginfo-install postgresql10-server\nMake sure this exactly matches the debug symbols exactly match the server version.\n\nJustin\n\n",
"msg_date": "Mon, 26 Nov 2018 09:52:08 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "Justin thanks for the information!\nI'm running Ubuntu 16.04.\nI'll try to prepare for the next crash.\nCouldn't find anything this time.\n\n\n--\nregards,\nJakub Glapa\n\n\nOn Mon, Nov 26, 2018 at 4:52 PM Justin Pryzby <[email protected]> wrote:\n\n> Hi, thanks for following through.\n>\n> On Mon, Nov 26, 2018 at 04:38:35PM +0100, Jakub Glapa wrote:\n> > I had a look at dmesg and indeed I see something like:\n> >\n> > postgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030\n> > error 4 in postgres[557833db7000+6d5000]\n>\n> That's useful, I think \"at 0\" means a null pointer dereferenced.\n>\n> Can you check /var/log/messages (or ./syslog or similar) and verify the\n> timestamp matches the time of the last crash (and not an unrelated crash) ?\n>\n> The logs might also indicate if the process dumped a core file anywhere.\n>\n> I don't know what distribution/OS you're using, but it might be good to\n> install\n> abrt (RHEL) or apport (ubuntu) or other mechanism to save coredumps, or to\n> manually configure /proc/sys/kernel/core_pattern.\n>\n> On centos, I usually set:\n> /etc/abrt/abrt-action-save-package-data.conf\n> OpenGPGCheck = no\n>\n> Also, it might be good to install debug symbols, in case you do find a core\n> dump now or get one later.\n>\n> On centos: yum install postgresql10-debuginfo or debuginfo-install\n> postgresql10-server\n> Make sure this exactly matches the debug symbols exactly match the server\n> version.\n>\n> Justin\n>\n\nJustin thanks for the information! I'm running Ubuntu 16.04. I'll try to prepare for the next crash. Couldn't find anything this time.--regards,Jakub GlapaOn Mon, Nov 26, 2018 at 4:52 PM Justin Pryzby <[email protected]> wrote:Hi, thanks for following through.\n\nOn Mon, Nov 26, 2018 at 04:38:35PM +0100, Jakub Glapa wrote:\n> I had a look at dmesg and indeed I see something like:\n> \n> postgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030\n> error 4 in postgres[557833db7000+6d5000]\n\nThat's useful, I think \"at 0\" means a null pointer dereferenced.\n\nCan you check /var/log/messages (or ./syslog or similar) and verify the\ntimestamp matches the time of the last crash (and not an unrelated crash) ?\n\nThe logs might also indicate if the process dumped a core file anywhere. \n\nI don't know what distribution/OS you're using, but it might be good to install\nabrt (RHEL) or apport (ubuntu) or other mechanism to save coredumps, or to\nmanually configure /proc/sys/kernel/core_pattern.\n\nOn centos, I usually set:\n/etc/abrt/abrt-action-save-package-data.conf\nOpenGPGCheck = no\n\nAlso, it might be good to install debug symbols, in case you do find a core\ndump now or get one later.\n\nOn centos: yum install postgresql10-debuginfo or debuginfo-install postgresql10-server\nMake sure this exactly matches the debug symbols exactly match the server version.\n\nJustin",
"msg_date": "Mon, 26 Nov 2018 17:00:30 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "On 2018-Nov-26, Jakub Glapa wrote:\n\n> Justin thanks for the information!\n> I'm running Ubuntu 16.04.\n> I'll try to prepare for the next crash.\n> Couldn't find anything this time.\n\nAs I recall, the appport stuff in Ubuntu is terrible ... I've seen it\ntake 40 minutes to write the crash dump to disk, during which the\ndatabase was \"down\". I don't know why it is so slow (it's a rather\nsilly python script that apparently processes the core dump one byte at\na time, and you can imagine that with a few gigabytes of shared memory\nthat takes a while). Anyway my recommendation was to *remove* that\nstuff from the server and make sure the core file is saved by normal\nmeans.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 26 Nov 2018 15:45:09 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "On Tue, Nov 27, 2018 at 7:45 AM Alvaro Herrera <[email protected]> wrote:\n> On 2018-Nov-26, Jakub Glapa wrote:\n> > Justin thanks for the information!\n> > I'm running Ubuntu 16.04.\n> > I'll try to prepare for the next crash.\n> > Couldn't find anything this time.\n>\n> As I recall, the appport stuff in Ubuntu is terrible ... I've seen it\n> take 40 minutes to write the crash dump to disk, during which the\n> database was \"down\". I don't know why it is so slow (it's a rather\n> silly python script that apparently processes the core dump one byte at\n> a time, and you can imagine that with a few gigabytes of shared memory\n> that takes a while). Anyway my recommendation was to *remove* that\n> stuff from the server and make sure the core file is saved by normal\n> means.\n\nThanks for CC-ing me. I didn't see this thread earlier because I'm\nnot subscribed to -performance. Let's move it over to -hackers since\nit looks like it's going to be a debugging exercise. So, reading\nthrough the thread[1], I think there might be two independent problems\nhere:\n\n1. Jakub has a many-partition Parallel Bitmap Heap Scan query that\nsegfaults when run with max_parallel_workers = 0. That sounds\nsuspiciously like an instance of a class of bug we've run into before.\nWe planned a parallel query, but were unable to launch one due to lack\nof DSM slots or process slots, so we run the parallel plan in a kind\nof degraded non-parallel mode that needs to cope with various pointers\ninto shared memory being NULL. A back trace from a core file should\nhopefully make it very obvious what's going on.\n\n2. The same query when run in real parallel query mode occasionally\nreaches an error \"dsa_allocate could not find 7 free pages\", which\nshould not happen. This is on 10.6, so it has the commit \"Fix\nsegment_bins corruption in dsa.c.\".\n\nHmm. I will see if I can come up with a many-partition torture test\nreproducer for this.\n\n[1] https://www.postgresql.org/message-id/flat/CAJk1zg10iCNsxFvQ4pgKe1B0rdjNG9iELA7AzLXjXnQm5T%3DKzQ%40mail.gmail.com\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 27 Nov 2018 16:00:34 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
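For what it's worth, a rough sketch of the kind of many-partition parallel setup under discussion, built with generate_series() as suggested earlier in the thread; all names, sizes and settings below are invented rather than taken from the reporter's schema, and native LIST partitioning requires PostgreSQL 10 or later.

    CREATE TABLE fa_test (c_id bigint, datetime timestamptz, payload text)
        PARTITION BY LIST (c_id);

    -- create 300 single-value partitions
    DO $$
    BEGIN
        FOR i IN 1..300 LOOP
            EXECUTE format(
                'CREATE TABLE fa_test_%s PARTITION OF fa_test FOR VALUES IN (%s)',
                i, i);
        END LOOP;
    END $$;

    -- spread a million rows across the partitions
    INSERT INTO fa_test
    SELECT (random() * 299)::int + 1,
           now() - random() * interval '400 days',
           repeat('x', 100)
    FROM generate_series(1, 1000000);
    ANALYZE fa_test;

    SET max_parallel_workers_per_gather = 4;
    EXPLAIN ANALYZE
    SELECT c_id, count(*)
    FROM fa_test
    WHERE datetime >= now() - interval '90 days'
    GROUP BY c_id;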
{
"msg_contents": "On Tue, Nov 27, 2018 at 4:00 PM Thomas Munro\n<[email protected]> wrote:\n> Hmm. I will see if I can come up with a many-partition torture test\n> reproducer for this.\n\nNo luck. I suppose one theory that could link both failure modes\nwould a buffer overrun, where in the non-shared case it trashes a\npointer that is later dereferenced, and in the shared case it writes\npast the end of allocated 4KB pages and corrupts the intrusive btree\nthat lives in spare pages to track available space.\n\n--\nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 27 Nov 2018 21:02:29 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "Hi, just a small update.\nI've configured the OS for taking crash dumps on Ubuntu 16.04 with the\nfollowing (maybe somebody will find it helpful):\nI've added LimitCORE=infinity to /lib/systemd/system/[email protected]\nunder [Service] section\nI've reloaded the service config with sudo systemctl daemon-reload\nChanged the core pattern to: sudo echo\n/var/lib/postgresql/core.%p.sig%s.%ts | tee -a /proc/sys/kernel/core_pattern\nI had tested it with kill -ABRT pidofbackend and it behaved correctly. A\ncrash dump was written.\n\nIn the last days I've been monitoring no segfault occurred but the\ndas_allocation did.\nI'm starting to doubt if the segfault I've found in dmesg was actually\nrelated.\n\nI've grepped the postgres log for dsa_allocated:\nWhy do the messages occur sometimes as FATAL and sometimes as ERROR?\n\n2018-11-29 07:59:06 CET::@:[20584]: FATAL: dsa_allocate could not find 7\nfree pages\n2018-11-29 07:59:06 CET:127.0.0.1(40846):user@db:[19507]: ERROR:\ndsa_allocate could not find 7 free pages\n2018-11-30 09:04:13 CET::@:[27341]: FATAL: dsa_allocate could not find 13\nfree pages\n2018-11-30 09:04:13 CET:127.0.0.1(41782):user@db:[25417]: ERROR:\ndsa_allocate could not find 13 free pages\n2018-11-30 09:28:38 CET::@:[30215]: FATAL: dsa_allocate could not find 4\nfree pages\n2018-11-30 09:28:38 CET:127.0.0.1(45980):user@db:[29924]: ERROR:\ndsa_allocate could not find 4 free pages\n2018-11-30 16:37:16 CET::@:[14385]: FATAL: dsa_allocate could not find 7\nfree pages\n2018-11-30 16:37:16 CET::@:[14375]: FATAL: dsa_allocate could not find 7\nfree pages\n2018-11-30 16:37:16 CET:212.186.105.45(55004):user@db:[14386]: FATAL:\ndsa_allocate could not find 7 free pages\n2018-11-30 16:37:16 CET:212.186.105.45(54964):user@db:[14379]: ERROR:\ndsa_allocate could not find 7 free pages\n2018-11-30 16:37:16 CET:212.186.105.45(54916):user@db:[14370]: ERROR:\ndsa_allocate could not find 7 free pages\n2018-11-30 16:45:11 CET:212.186.105.45(55356):user@db:[14555]: FATAL:\ndsa_allocate could not find 7 free pages\n2018-11-30 16:49:13 CET::@:[15359]: FATAL: dsa_allocate could not find 7\nfree pages\n2018-11-30 16:49:13 CET::@:[15363]: FATAL: dsa_allocate could not find 7\nfree pages\n2018-11-30 16:49:13 CET:212.186.105.45(54964):user@db:[14379]: FATAL:\ndsa_allocate could not find 7 free pages\n2018-11-30 16:49:13 CET:212.186.105.45(54916):user@db:[14370]: ERROR:\ndsa_allocate could not find 7 free pages\n2018-11-30 16:49:13 CET:212.186.105.45(55842):user@db:[14815]: ERROR:\ndsa_allocate could not find 7 free pages\n2018-11-30 16:56:11 CET:212.186.105.45(57076):user@db:[15638]: FATAL:\ndsa_allocate could not find 7 free pages\n\n\nThere's quite a bit errors from today but I was launching the problematic\nquery in parallel from 2-3 sessions.\nSometimes it was breaking sometimes not.\nCouldn't find any pattern.\nThe workload on this db is not really constant, rather bursting.\n\n--\nregards,\nJakub Glapa\n\n\nOn Tue, Nov 27, 2018 at 9:03 AM Thomas Munro <[email protected]>\nwrote:\n\n> On Tue, Nov 27, 2018 at 4:00 PM Thomas Munro\n> <[email protected]> wrote:\n> > Hmm. I will see if I can come up with a many-partition torture test\n> > reproducer for this.\n>\n> No luck. 
I suppose one theory that could link both failure modes\n> would a buffer overrun, where in the non-shared case it trashes a\n> pointer that is later dereferenced, and in the shared case it writes\n> past the end of allocated 4KB pages and corrupts the intrusive btree\n> that lives in spare pages to track available space.\n>\n> --\n> Thomas Munro\n> http://www.enterprisedb.com\n>\n\nHi, just a small update. I've configured the OS for taking crash dumps on Ubuntu 16.04 with the following (maybe somebody will find it helpful):I've added LimitCORE=infinity to /lib/systemd/system/[email protected] under [Service] sectionI've reloaded the service config with sudo systemctl daemon-reloadChanged the core pattern to: sudo echo /var/lib/postgresql/core.%p.sig%s.%ts | tee -a /proc/sys/kernel/core_patternI had tested it with kill -ABRT pidofbackend and it behaved correctly. A crash dump was written.In the last days I've been monitoring no segfault occurred but the das_allocation did.I'm starting to doubt if the segfault I've found in dmesg was actually related.I've grepped the postgres log for dsa_allocated:Why do the messages occur sometimes as FATAL and sometimes as ERROR?2018-11-29 07:59:06 CET::@:[20584]: FATAL: dsa_allocate could not find 7 free pages2018-11-29 07:59:06 CET:127.0.0.1(40846):user@db:[19507]: ERROR: dsa_allocate could not find 7 free pages2018-11-30 09:04:13 CET::@:[27341]: FATAL: dsa_allocate could not find 13 free pages2018-11-30 09:04:13 CET:127.0.0.1(41782):user@db:[25417]: ERROR: dsa_allocate could not find 13 free pages2018-11-30 09:28:38 CET::@:[30215]: FATAL: dsa_allocate could not find 4 free pages2018-11-30 09:28:38 CET:127.0.0.1(45980):user@db:[29924]: ERROR: dsa_allocate could not find 4 free pages2018-11-30 16:37:16 CET::@:[14385]: FATAL: dsa_allocate could not find 7 free pages2018-11-30 16:37:16 CET::@:[14375]: FATAL: dsa_allocate could not find 7 free pages2018-11-30 16:37:16 CET:212.186.105.45(55004):user@db:[14386]: FATAL: dsa_allocate could not find 7 free pages2018-11-30 16:37:16 CET:212.186.105.45(54964):user@db:[14379]: ERROR: dsa_allocate could not find 7 free pages2018-11-30 16:37:16 CET:212.186.105.45(54916):user@db:[14370]: ERROR: dsa_allocate could not find 7 free pages2018-11-30 16:45:11 CET:212.186.105.45(55356):user@db:[14555]: FATAL: dsa_allocate could not find 7 free pages2018-11-30 16:49:13 CET::@:[15359]: FATAL: dsa_allocate could not find 7 free pages2018-11-30 16:49:13 CET::@:[15363]: FATAL: dsa_allocate could not find 7 free pages2018-11-30 16:49:13 CET:212.186.105.45(54964):user@db:[14379]: FATAL: dsa_allocate could not find 7 free pages2018-11-30 16:49:13 CET:212.186.105.45(54916):user@db:[14370]: ERROR: dsa_allocate could not find 7 free pages2018-11-30 16:49:13 CET:212.186.105.45(55842):user@db:[14815]: ERROR: dsa_allocate could not find 7 free pages2018-11-30 16:56:11 CET:212.186.105.45(57076):user@db:[15638]: FATAL: dsa_allocate could not find 7 free pagesThere's quite a bit errors from today but I was launching the problematic query in parallel from 2-3 sessions. Sometimes it was breaking sometimes not. Couldn't find any pattern. The workload on this db is not really constant, rather bursting.--regards,Jakub GlapaOn Tue, Nov 27, 2018 at 9:03 AM Thomas Munro <[email protected]> wrote:On Tue, Nov 27, 2018 at 4:00 PM Thomas Munro\n<[email protected]> wrote:\n> Hmm. I will see if I can come up with a many-partition torture test\n> reproducer for this.\n\nNo luck. 
I suppose one theory that could link both failure modes\nwould a buffer overrun, where in the non-shared case it trashes a\npointer that is later dereferenced, and in the shared case it writes\npast the end of allocated 4KB pages and corrupts the intrusive btree\nthat lives in spare pages to track available space.\n\n--\nThomas Munro\nhttp://www.enterprisedb.com",
"msg_date": "Fri, 30 Nov 2018 20:20:49 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "On Fri, Nov 30, 2018 at 08:20:49PM +0100, Jakub Glapa wrote:\n> In the last days I've been monitoring no segfault occurred but the\n> das_allocation did.\n> I'm starting to doubt if the segfault I've found in dmesg was actually\n> related.\n\nThe dmesg looks like a real crash, not just OOM. You can hopefully find the\ntimestamp of the segfaults in /var/log/syslog, and compare with postgres logs\nif they go back far enough. All the postgres processes except the parent\nwould've been restarted at that time.\n\n> I've grepped the postgres log for dsa_allocated:\n> Why do the messages occur sometimes as FATAL and sometimes as ERROR?\n\nI believe it may depend if it happens in a parallel worker or the leader.\n\nYou may get more log detail if you enable CSV logging (although unfortunately\nas I recall it doesn't indicate it's a parallel worker).\n\nYou could force it to dump core if you recompile postgres with an assert() (see\npatch below).\n\nYou could build an .deb by running dpkg-buildpackage -rfakeroot or similar (i\nhaven't done this in awhile), or you could compile, install, and launch\ndebugging binaries from your homedir (or similar)\n\nYou'd want to compile the same version (git checkout REL_10_6) and with the\nproper configure flags..perhaps starting with:\n./configure --with-libxml --with-libxslt --enable-debug --prefix=$HOME/src/postgresql.bin --enable-cassert && time make && make install\n\nBe careful if you have extensions installed that they still work.\n\nJustin\n\n--- a/src/backend/utils/mmgr/dsa.c\n+++ b/src/backend/utils/mmgr/dsa.c\n@@ -727,4 +727,7 @@ dsa_allocate_extended(dsa_area *area, size_t size, int flags)\n if (!FreePageManagerGet(segment_map->fpm, npages, &first_page))\n+ {\n elog(FATAL,\n \"dsa_allocate could not find %zu free pages\", npages);\n+ abort()\n+ }\n\n\n",
"msg_date": "Fri, 30 Nov 2018 14:46:47 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
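As an aside, the CSV logging suggested above can be switched on with stock settings; ALTER SYSTEM needs PostgreSQL 9.4 or later, and logging_collector only takes effect after a server restart.

    ALTER SYSTEM SET logging_collector = on;             -- needs a restart
    ALTER SYSTEM SET log_destination = 'stderr,csvlog';
    -- after the restart, confirm with:
    SHOW log_destination;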
{
"msg_contents": "On Sat, Dec 1, 2018 at 9:46 AM Justin Pryzby <[email protected]> wrote:\n> elog(FATAL,\n> \"dsa_allocate could not find %zu free pages\", npages);\n> + abort()\n\nIf anyone can reproduce this problem with a debugger, it'd be\ninteresting to see the output of dsa_dump(area), and\nFreePageManagerDump(segment_map->fpm). This error condition means\nthat get_best_segment() selected a segment from a segment bin that\nholds segments with a certain minimum number of contiguous free pages\n>= the requested number npages, but then FreePageManagerGet() found\nthat it didn't have npages of contiguous free memory after all when it\nconsulted the segment's btree of free space. Possible explanations\ninclude: the segment bin lists are somehow messed up, the FPM in the\nsegment was corrupted by someone scribbling on free pages (which hold\nthe btree), the btree was corrupted by an incorrect sequence of\nallocate/free calls (for example double frees, allocating from one\narea and freeing to another etc), freepage.c fails to track its\nlargest size correctly.\n\nThere is a macro FPM_EXTRA_ASSERTS that can be defined to double-check\nthe largest contiguous page tracking. I have also been wondering\nabout a debug mode that would mprotect(PROT_READ) free pages when they\naren't being modified to detect unexpected writes, which should work\non systems that have 4k pages.\n\nOne thing I noticed is that it is failing on a \"large\" allocation,\nwhere we go straight to the btree of 4k pages, but the equivalent code\nwhere we allocate a superblock for \"small\" allocations doesn't report\nthe same kind of FATAL this-can't-happen error, it just fails the\nallocation via the regular error path without explanation. I also\nspotted a path that doesn't respect the DSA_ALLOC_NO_OOM flag (you get\na null pointer instead of an error). I should fix those\ninconsistencies (draft patch attached), but those are incidental\nproblems AFAIK.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com",
"msg_date": "Mon, 3 Dec 2018 11:45:00 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "Hi,\n\nOn Mon, Nov 26, 2018 at 09:52:07AM -0600, Justin Pryzby wrote:\n> Hi, thanks for following through.\n> \n> On Mon, Nov 26, 2018 at 04:38:35PM +0100, Jakub Glapa wrote:\n> > I had a look at dmesg and indeed I see something like:\n> > \n> > postgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030\n> > error 4 in postgres[557833db7000+6d5000]\n> \n> That's useful, I think \"at 0\" means a null pointer dereferenced.\n\nThomas fixed several bugs in DSA, which will be in next release, postgres 10.8\nand 11.3.\n\nHowever that doesn't explain the segfault you saw, and I don't see anything\nwhich looks relevant changed since in 10.5.\n\nIf you still see that using the latest minor release (10.7), please try to\ncapture a core file and send a backtrace with a new thread on pgsql-hackers.\n\nThanks,\nJustin\n\n",
"msg_date": "Sun, 17 Feb 2019 16:21:29 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dsa_allocate() faliure"
},
{
"msg_contents": "Hi I just checked the dmesg.\nThe segfault I wrote about is the only one I see, dated Nov 24 last year.\nSince then no other segfaults happened although dsa_allocated failures\nhappen daily.\nI'll report if anything occurs.\nI have the core dumping setup in place.\n\n\n\n--\nregards,\npozdrawiam,\nJakub Glapa\n\n\nOn Sun, Feb 17, 2019 at 11:21 PM Justin Pryzby <[email protected]> wrote:\n\n> Hi,\n>\n> On Mon, Nov 26, 2018 at 09:52:07AM -0600, Justin Pryzby wrote:\n> > Hi, thanks for following through.\n> >\n> > On Mon, Nov 26, 2018 at 04:38:35PM +0100, Jakub Glapa wrote:\n> > > I had a look at dmesg and indeed I see something like:\n> > >\n> > > postgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030\n> > > error 4 in postgres[557833db7000+6d5000]\n> >\n> > That's useful, I think \"at 0\" means a null pointer dereferenced.\n>\n> Thomas fixed several bugs in DSA, which will be in next release, postgres\n> 10.8\n> and 11.3.\n>\n> However that doesn't explain the segfault you saw, and I don't see anything\n> which looks relevant changed since in 10.5.\n>\n> If you still see that using the latest minor release (10.7), please try to\n> capture a core file and send a backtrace with a new thread on\n> pgsql-hackers.\n>\n> Thanks,\n> Justin\n>\n\nHi I just checked the dmesg. The segfault I wrote about is the only one I see, dated Nov 24 last year. Since then no other segfaults happened although dsa_allocated failures happen daily.I'll report if anything occurs. I have the core dumping setup in place.--regards,pozdrawiam,Jakub GlapaOn Sun, Feb 17, 2019 at 11:21 PM Justin Pryzby <[email protected]> wrote:Hi,\n\nOn Mon, Nov 26, 2018 at 09:52:07AM -0600, Justin Pryzby wrote:\n> Hi, thanks for following through.\n> \n> On Mon, Nov 26, 2018 at 04:38:35PM +0100, Jakub Glapa wrote:\n> > I had a look at dmesg and indeed I see something like:\n> > \n> > postgres[30667]: segfault at 0 ip 0000557834264b16 sp 00007ffc2ce1e030\n> > error 4 in postgres[557833db7000+6d5000]\n> \n> That's useful, I think \"at 0\" means a null pointer dereferenced.\n\nThomas fixed several bugs in DSA, which will be in next release, postgres 10.8\nand 11.3.\n\nHowever that doesn't explain the segfault you saw, and I don't see anything\nwhich looks relevant changed since in 10.5.\n\nIf you still see that using the latest minor release (10.7), please try to\ncapture a core file and send a backtrace with a new thread on pgsql-hackers.\n\nThanks,\nJustin",
"msg_date": "Mon, 18 Feb 2019 10:11:10 +0100",
"msg_from": "Jakub Glapa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dsa_allocate() faliure"
}
] |
[
{
"msg_contents": "Hi,\nCan someone explain the logic behind it ? I know that vacuum full isnt\nsomething recommended but I found out that whenever I run vacuum full on my\ndatabase checkpoint occurs during that time every second ! well I know that\nVACUUM FULL duplicates the data into new data files and then it deletes the\nold data files. The writing the vacuum does, is it with the checkpoint\nprocess ?\n\nIs there any connection ?\n\n\nThanks.\n\nHi,Can someone explain the logic behind it ? I know that vacuum full isnt something recommended but I found out that whenever I run vacuum full on my database checkpoint occurs during that time every second ! well I know that VACUUM FULL duplicates the data into new data files and then it deletes the old data files. The writing the vacuum does, is it with the checkpoint process ? Is there any connection ?Thanks.",
"msg_date": "Thu, 15 Nov 2018 20:53:14 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "checkpoint occurs very often when vacuum full running"
},
{
"msg_contents": "Hi\n\nCheckpoint can be occurs due timeout (checkpoint_timeout) or due amount of WAL (max_wal_size).\nVacuum full does write all data through WAL and therefore may trigger checkpoint more frequently.\n\nregards, Sergei\n\n",
"msg_date": "Thu, 15 Nov 2018 22:10:42 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
},
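A quick way to check which of those two triggers is firing, using stock views and settings; note that max_wal_size only exists from 9.5 on, so on older releases (such as the 9.2 instance mentioned later in the thread) the WAL-volume knob is checkpoint_segments instead and postgresql.conf has to be edited by hand.

    -- current checkpoint-related settings
    SHOW checkpoint_timeout;
    SHOW max_wal_size;          -- on pre-9.5 releases: SHOW checkpoint_segments;

    -- checkpoints started by timeout vs. requested because of WAL volume
    SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

    -- log each checkpoint together with the reason it was triggered
    ALTER SYSTEM SET log_checkpoints = on;
    SELECT pg_reload_conf();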
{
"msg_contents": "First of all thank you for the quick answer. In my case checkpoint happened\nevery one second during the vacuum full so the checkpoint timeout isn't\nrelevant. My guess was that it writes the changes to the wals but I didn't\nfind anything about it in the documentation. Can you share a link that\nproves it ? I mean basicly the wals should contain the changes, and vacuum\nfull changes the location of the data and not actually the data.\n\nOn Thu, Nov 15, 2018, 9:10 PM Sergei Kornilov <[email protected] wrote:\n\n> Hi\n>\n> Checkpoint can be occurs due timeout (checkpoint_timeout) or due amount of\n> WAL (max_wal_size).\n> Vacuum full does write all data through WAL and therefore may trigger\n> checkpoint more frequently.\n>\n> regards, Sergei\n>\n\nFirst of all thank you for the quick answer. In my case checkpoint happened every one second during the vacuum full so the checkpoint timeout isn't relevant. My guess was that it writes the changes to the wals but I didn't find anything about it in the documentation. Can you share a link that proves it ? I mean basicly the wals should contain the changes, and vacuum full changes the location of the data and not actually the data.On Thu, Nov 15, 2018, 9:10 PM Sergei Kornilov <[email protected] wrote:Hi\n\nCheckpoint can be occurs due timeout (checkpoint_timeout) or due amount of WAL (max_wal_size).\nVacuum full does write all data through WAL and therefore may trigger checkpoint more frequently.\n\nregards, Sergei",
"msg_date": "Thu, 15 Nov 2018 21:29:49 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
},
{
"msg_contents": "Hi\n\n> I mean basicly the wals should contain the changes, and vacuum full changes the location of the data and not actually the data.\nRow location is data. For example, index lookup relies on TID (tuple id, hidden ctid column) - physical row address in datafile.\nPostgresql WAL - it is about physical changes in datafiles (block level), not logical. Just moving one row to another place without logical changes means: mark row deleted in old place, write to new place and update every index which contains this row.\nAnd vacuum full does not change location, it create copy in different datafile. Then it rebuild every index because TID was obviously changed. Then vacuum full drop old datafiles. Full size of new datafile and indexes should be written to WAL, because all of this is changes and must be reliable written (and then can be replayed on replicas).\n\n> but I didn't find anything about it in the documentation\nhmm, i can not found something exact in documentation about it.. It's my knowledge about postgresql internals.\nYou can read this article: https://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/ Its about WAL logic. All IO operations use pages, and difference between pages written to WAL.\nFor example, full_page_writes setting ( https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-FULL-PAGE-WRITES ) say about pages too.\n> writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint.\nIf you want change few bytes in page - the whole page (8kb typical) will be written to WAL during first change of this page after checkpoint.\n\nregards, Sergei\n\n",
"msg_date": "Thu, 15 Nov 2018 23:28:40 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
},
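As a rough sketch of how to confirm what Sergei describes, the checkpointer statistics distinguish timed checkpoints from WAL-volume-requested ones, and log_checkpoints records the reason for each checkpoint in the server log (ALTER SYSTEM needs 9.4 or later; the session is assumed to have superuser rights):

    -- If checkpoints_req climbs while VACUUM FULL runs, the checkpoints are
    -- driven by WAL volume (max_wal_size / checkpoint_segments), not by timeout.
    SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint, stats_reset
    FROM pg_stat_bgwriter;

    -- Log every checkpoint together with its trigger reason:
    ALTER SYSTEM SET log_checkpoints = on;
    SELECT pg_reload_conf();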
{
"msg_contents": "Mariel Cherkassky wrote:\n> First of all thank you for the quick answer. In my case checkpoint happened\n> every one second during the vacuum full so the checkpoint timeout isn't relevant.\n> My guess was that it writes the changes to the wals but I didn't find anything\n> about it in the documentation. Can you share a link that proves it ?\n> I mean basicly the wals should contain the changes, and vacuum full changes\n> the location of the data and not actually the data.\n\nVACUUM (FULL) completely rewrites all the tables and indexes, so the complete\ndatabase will go into the WAL (these data changes have to be replayed in case\nof a crash!). WAL contains the physical and not the logical changes, and the\nphysical data *are* modified.\n\nYou should let autovacuum do the job instead of running VACUUM (FULL), unless\nyour whole database is bloated beyond tolerance. That will cause less WAL\nactivity and also won't disrupt normal database operation.\n\nIf you really need that VACUUM (FULL), you can increase \"max_wal_size\" to\nget fewer checkpoints.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 15 Nov 2018 21:32:18 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
},
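A hedged sketch of the max_wal_size suggestion above; the 16GB value and the table name are placeholders only, and on 9.4 or older checkpoint_segments plays the same role. max_wal_size can be changed with a reload, no restart needed (9.5+):

    -- Temporarily allow more WAL between checkpoints while the rewrite runs.
    ALTER SYSTEM SET max_wal_size = '16GB';   -- example value, size it to the data being rewritten
    SELECT pg_reload_conf();

    VACUUM (FULL, VERBOSE) bloated_table;     -- placeholder table name

    ALTER SYSTEM RESET max_wal_size;
    SELECT pg_reload_conf();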
{
"msg_contents": "Hi,\n\nPlease don't cross post to multiple lists.\n\nOn Thu, Nov 15, 2018 at 08:53:14PM +0200, Mariel Cherkassky wrote:\n> Can someone explain the logic behind it ? I know that vacuum full isnt\n> something recommended but I found out that whenever I run vacuum full on my\n> database checkpoint occurs during that time every second ! well I know that\n> VACUUM FULL duplicates the data into new data files and then it deletes the\n> old data files. The writing the vacuum does, is it with the checkpoint\n> process ?\n\nIt's a good question. What version postgres are you using, and what is the\nsetting of wal_level ?\n\nOn Thu, Nov 15, 2018 at 11:28:40PM +0300, Sergei Kornilov wrote:\n> Row location is data. For example, index lookup relies on TID (tuple id, hidden ctid column) - physical row address in datafile.\n\nBut, since VAC FULL has an exclusive lock, and since it's atomic (it's either\ngoing to succeed and use the new table or interrupted or otherwise fail and\ncontinue using the old table data), I it doesn't need to write to WAL, except\nif needed for physical replication. Same as CREATE TABLE AS and similar. In\nmy test, setting wal_level=minimal seemed to avoid WAL writes from vac full.\n\nhttps://www.postgresql.org/docs/current/populate.html#POPULATE-PITR\n\nJustin\n\n",
"msg_date": "Thu, 15 Nov 2018 14:46:13 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
},
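The settings Justin asks about can be read straight from any session; a minimal check might look like this:

    SELECT version();
    SHOW wal_level;           -- 'minimal' allows the WAL-skipping optimization mentioned above
    SHOW checkpoint_timeout;
    SHOW max_wal_size;        -- checkpoint_segments on 9.4 and older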
{
"msg_contents": "I'm just trying to under the logic in some environments that I faced (some\nhas 9.6 version and wal level is replica and some has 9.2v and wal_level is\nset to archive. I'm not sure regarding your answer because I believe that\nthere is a connection between the VACUUM FULL and the checkpoints that I\nsaw during the vacuum full. Laurenz Albe from cybertec sent a good\nexplanation about it to the pgsql-admins list You should check it out.\n\nבתאריך יום ה׳, 15 בנוב׳ 2018 ב-22:46 מאת Justin Pryzby <\[email protected]>:\n\n> Hi,\n>\n> Please don't cross post to multiple lists.\n>\n> On Thu, Nov 15, 2018 at 08:53:14PM +0200, Mariel Cherkassky wrote:\n> > Can someone explain the logic behind it ? I know that vacuum full isnt\n> > something recommended but I found out that whenever I run vacuum full on\n> my\n> > database checkpoint occurs during that time every second ! well I know\n> that\n> > VACUUM FULL duplicates the data into new data files and then it deletes\n> the\n> > old data files. The writing the vacuum does, is it with the checkpoint\n> > process ?\n>\n> It's a good question. What version postgres are you using, and what is the\n> setting of wal_level ?\n>\n> On Thu, Nov 15, 2018 at 11:28:40PM +0300, Sergei Kornilov wrote:\n> > Row location is data. For example, index lookup relies on TID (tuple id,\n> hidden ctid column) - physical row address in datafile.\n>\n> But, since VAC FULL has an exclusive lock, and since it's atomic (it's\n> either\n> going to succeed and use the new table or interrupted or otherwise fail and\n> continue using the old table data), I it doesn't need to write to WAL,\n> except\n> if needed for physical replication. Same as CREATE TABLE AS and similar.\n> In\n> my test, setting wal_level=minimal seemed to avoid WAL writes from vac\n> full.\n>\n> https://www.postgresql.org/docs/current/populate.html#POPULATE-PITR\n>\n> Justin\n>\n\nI'm just trying to under the logic in some environments that I faced (some has 9.6 version and wal level is replica and some has 9.2v and wal_level is set to archive. I'm not sure regarding your answer because I believe that there is a connection between the VACUUM FULL and the checkpoints that I saw during the vacuum full. Laurenz Albe from cybertec sent a good explanation about it to the pgsql-admins list You should check it out. בתאריך יום ה׳, 15 בנוב׳ 2018 ב-22:46 מאת Justin Pryzby <[email protected]>:Hi,\n\nPlease don't cross post to multiple lists.\n\nOn Thu, Nov 15, 2018 at 08:53:14PM +0200, Mariel Cherkassky wrote:\n> Can someone explain the logic behind it ? I know that vacuum full isnt\n> something recommended but I found out that whenever I run vacuum full on my\n> database checkpoint occurs during that time every second ! well I know that\n> VACUUM FULL duplicates the data into new data files and then it deletes the\n> old data files. The writing the vacuum does, is it with the checkpoint\n> process ?\n\nIt's a good question. What version postgres are you using, and what is the\nsetting of wal_level ?\n\nOn Thu, Nov 15, 2018 at 11:28:40PM +0300, Sergei Kornilov wrote:\n> Row location is data. For example, index lookup relies on TID (tuple id, hidden ctid column) - physical row address in datafile.\n\nBut, since VAC FULL has an exclusive lock, and since it's atomic (it's either\ngoing to succeed and use the new table or interrupted or otherwise fail and\ncontinue using the old table data), I it doesn't need to write to WAL, except\nif needed for physical replication. Same as CREATE TABLE AS and similar. 
In\nmy test, setting wal_level=minimal seemed to avoid WAL writes from vac full.\n\nhttps://www.postgresql.org/docs/current/populate.html#POPULATE-PITR\n\nJustin",
"msg_date": "Sat, 17 Nov 2018 13:31:20 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
},
{
"msg_contents": "Hi Laurenz.\nthank you for the explanation but I still got a few questions about this\nsubject :\n1.By physical location of the data do you mean the location on disk of the\nobjects ? I mean i thought that the wal files containing the logical\nchanges , for example If I run an update then the wal file will contain the\nupdate in some format. Can you explain then what exactly it contains and\nwhat you meant by physical ?\n2.So If the vacuum full stopped in some brutal way (kill -9 or the disk run\nof space) does all the new data files that are created deleted or left on\nthe storage as orphan files ?\n\nThanks, Mariel.\n\nבתאריך יום ה׳, 15 בנוב׳ 2018 ב-22:32 מאת Laurenz Albe <\[email protected]>:\n\n> Mariel Cherkassky wrote:\n> > First of all thank you for the quick answer. In my case checkpoint\n> happened\n> > every one second during the vacuum full so the checkpoint timeout isn't\n> relevant.\n> > My guess was that it writes the changes to the wals but I didn't find\n> anything\n> > about it in the documentation. Can you share a link that proves it ?\n> > I mean basicly the wals should contain the changes, and vacuum full\n> changes\n> > the location of the data and not actually the data.\n>\n> VACUUM (FULL) completely rewrites all the tables and indexes, so the\n> complete\n> database will go into the WAL (these data changes have to be replayed in\n> case\n> of a crash!). WAL contains the physical and not the logical changes, and\n> the\n> physical data *are* modified.\n>\n> You should let autovacuum do the job instead of running VACUUM (FULL),\n> unless\n> your whole database is bloated beyond tolerance. That will cause less WAL\n> activity and also won't disrupt normal database operation.\n>\n> If you really need that VACUUM (FULL), you can increase \"max_wal_size\" to\n> get fewer checkpoints.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nHi Laurenz.thank you for the explanation but I still got a few questions about this subject : 1.By physical location of the data do you mean the location on disk of the objects ? I mean i thought that the wal files containing the logical changes , for example If I run an update then the wal file will contain the update in some format. Can you explain then what exactly it contains and what you meant by physical ?2.So If the vacuum full stopped in some brutal way (kill -9 or the disk run of space) does all the new data files that are created deleted or left on the storage as orphan files ?Thanks, Mariel.בתאריך יום ה׳, 15 בנוב׳ 2018 ב-22:32 מאת Laurenz Albe <[email protected]>:Mariel Cherkassky wrote:\n> First of all thank you for the quick answer. In my case checkpoint happened\n> every one second during the vacuum full so the checkpoint timeout isn't relevant.\n> My guess was that it writes the changes to the wals but I didn't find anything\n> about it in the documentation. Can you share a link that proves it ?\n> I mean basicly the wals should contain the changes, and vacuum full changes\n> the location of the data and not actually the data.\n\nVACUUM (FULL) completely rewrites all the tables and indexes, so the complete\ndatabase will go into the WAL (these data changes have to be replayed in case\nof a crash!). WAL contains the physical and not the logical changes, and the\nphysical data *are* modified.\n\nYou should let autovacuum do the job instead of running VACUUM (FULL), unless\nyour whole database is bloated beyond tolerance. 
That will cause less WAL\nactivity and also won't disrupt normal database operation.\n\nIf you really need that VACUUM (FULL), you can increase \"max_wal_size\" to\nget fewer checkpoints.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Sat, 17 Nov 2018 13:36:21 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> thank you for the explanation but I still got a few questions about this subject : \n> 1.By physical location of the data do you mean the location on disk of the objects?\n> I mean i thought that the wal files containing the logical changes , for example\n> If I run an update then the wal file will contain the update in some format.\n> Can you explain then what exactly it contains and what you meant by physical ?\n\nIt does *not* contain the SQL executed - how'd that work with functions like\n\"random\"?\n\nRather, the information is like \"replace the 42 bytes from offset 99 of block 12\nin file xy with these bytes: ...\".\n\n> 2.So If the vacuum full stopped in some brutal way (kill -9 or the disk run of space)\n> does all the new data files that are created deleted or left on the storage as orphan files ?\n\nIn an out of space scenario, the files would get removed.\n\nIf you kill -9 the backend they will be left behind.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Sat, 17 Nov 2018 20:28:10 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: checkpoint occurs very often when vacuum full running"
}
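A small self-contained illustration of the point that row location *is* data: the hidden ctid column is the physical address Sergei and Laurenz refer to, and VACUUM FULL moves every surviving row, which is why every index entry has to be rewritten and WAL-logged. The table and values below are invented for the example:

    CREATE TABLE t (id int PRIMARY KEY, payload text);
    INSERT INTO t SELECT g, repeat('x', 100) FROM generate_series(1, 1000) g;
    DELETE FROM t WHERE id % 2 = 0;

    SELECT ctid, id FROM t WHERE id = 999;   -- note the (block, offset) address
    VACUUM FULL t;
    SELECT ctid, id FROM t WHERE id = 999;   -- same logical row, new physical address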
] |
[
{
"msg_contents": "Hi list,\n\n\nFor your consideration I'm submitting you a research I made together with my colleague Wouter.\n\nWe compared the 2 databases for our specific use case on medical data stored using FHIR standard.\n\nThis is indeed a very restrictive use case, and moreover under read only circumstances. We are aware of that.\n\nWe might consider in a near future to enlarge the scope of the research including read and write operations, more use cases, using partitioning, and/or scaling Mongo.\n\nThis has to be considered as a starting point and we would like to share our results aiming in constructive feedbacks and perhaps fix mistakes or inaccuracies you might find.\n\nThis is the link:\n\nhttps://portavita.github.io/2018-10-31-blog_A_JSON_use_case_comparison_between_PostgreSQL_and_MongoDB/\n\nWe are open to any kind of feedback and we hope you enjoy the reading.\n\n\nRegards,\n\nFabio Pardi and Wouter van Teijlingen\n\n\n\n\n\n\nHi list,\n\n\n For your consideration I'm submitting you a research I made\n together with my colleague Wouter.\n\n We compared the 2 databases for our specific use case on medical\n data stored using FHIR standard.\n\n This is indeed a very restrictive use case, and moreover under\n read only circumstances. We are aware of that.\n\n We might consider in a near future to enlarge the scope of the\n research including read and write operations, more use cases,\n using partitioning, and/or scaling Mongo. \n\n This has to be considered as a starting point and we would like to\n share our results aiming in constructive feedbacks and perhaps fix\n mistakes or inaccuracies you might find. \n\n This is the link:\n\nhttps://portavita.github.io/2018-10-31-blog_A_JSON_use_case_comparison_between_PostgreSQL_and_MongoDB/\n\n We are open to any kind of feedback and we hope you enjoy the\n reading.\n\n\n Regards,\n\n Fabio Pardi and Wouter van Teijlingen",
"msg_date": "Mon, 19 Nov 2018 16:38:27 +0100",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "Greetings,\n\n* Fabio Pardi ([email protected]) wrote:\n> We are open to any kind of feedback and we hope you enjoy the reading.\n\nLooks like a lot of the difference being seen and the comments made\nabout one being faster than the other are because one system is\ncompressing *everything*, while PG (quite intentionally...) only\ncompresses the data sometimes- once it hits the TOAST limit. That\nlikely also contributes to why you're seeing the on-disk size\ndifferences that you are.\n\nOf course, if you want to see where PG will really shine, you'd stop\nthinking of data as just blobs of JSON and actually define individual\nfields in PG instead of just one 'jsonb' column, especially when you\nknow that field will always exist (which is obviously the case if you're\nbuilding an index on it, such as your MarriageDate) and then remove\nthose fields from the jsonb and just reconstruct the JSON when you\nquery. Doing that you'll get the size down dramatically.\n\nAnd that's without even going to that next-level stuff of actual\nnormalization where you pull out duplicate data from across the JSON\nand have just one instance of that data in another, smaller, table and\nuse a JOIN to bring it all back together. Even better is when you\nrealize that then you only have to update one row in this other table\nwhen something changes in that subset of data, unlike when you\nrepeatedly store that data in individual JSON entries all across the\nsystem and such a change requires rewriting every single JSON object in\nthe entire system...\n\nLastly, as with any performance benchmark, please include full details-\nall scripts used, all commands run, all data used, so that others can\nreproduce your results. I'm sure it'd be fun to take your json data and\ncreate actual tables out of it and see what it'd be like then.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 19 Nov 2018 12:26:09 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
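A hedged sketch of the approach Stephen describes; the table, column, and field names (patient_doc, marriage_date, marriageDate) are invented here, since the real FHIR structure is not shared in the thread. The always-present field becomes a real, indexed column and the full JSON is reassembled only on output (jsonb concatenation and jsonb_build_object need 9.5+):

    CREATE TABLE patient_doc (
        id            bigserial PRIMARY KEY,
        marriage_date date  NOT NULL,   -- extracted from the document at load time
        doc           jsonb NOT NULL    -- remaining document, stored without marriageDate
    );
    CREATE INDEX ON patient_doc (marriage_date);

    -- Reconstruct the full JSON only when a client asks for it:
    SELECT doc || jsonb_build_object('marriageDate', marriage_date) AS fhir_doc
    FROM patient_doc
    WHERE marriage_date BETWEEN DATE '1990-01-01' AND DATE '1999-12-31';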
{
"msg_contents": "Hi Stephen,\n\nthanks for your feedback.\n\nI agree with you the compression is playing a role in the comparison.\nProbably there is a toll to pay when the load is high and the CPU\nstressed from de/compressing data. If we will be able to bring our\nstudies that further, this is definitely something we would like to measure.\n\nI also agree with you that at the moment Postgres really shines on\nrelational data. To be honest, after seeing the outcome of our research,\nwe are actually considering to decouple some (or all) fields from their\nJSON structure. There will be a toll to be payed there too, since we are\nreceiving data in JSON format.\nAnd the toll will be in time spent to deliver such a solution, and\nindeed time spent by the engine in doing the conversion. It might not be\nthat convenient after all.\n\nAnyway, to bring data from JSON to a relational model is out of topic\nfor the current discussion, since we are actually questioning if\nPostgres is a good replacement for Mongo when handling JSON data.\n\nAs per sharing the dataset, as mentioned in the post we are handling\nmedical data. Even if the content is anonymized, we are not keen to\nshare the data structure too for security reasons.\nThat's a pity I know but i cannot do anything about it.\nThe queries we ran and the commands we used are mentioned in the blog\npost but if you see gaps, feel free to ask.\n\nregards,\n\nfabio pardi\n\n\n\nOn 11/19/18 6:26 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Fabio Pardi ([email protected]) wrote:\n>> We are open to any kind of feedback and we hope you enjoy the reading.\n> \n> Looks like a lot of the difference being seen and the comments made\n> about one being faster than the other are because one system is\n> compressing *everything*, while PG (quite intentionally...) only\n> compresses the data sometimes- once it hits the TOAST limit. That\n> likely also contributes to why you're seeing the on-disk size\n> differences that you are.\n> \n> Of course, if you want to see where PG will really shine, you'd stop\n> thinking of data as just blobs of JSON and actually define individual\n> fields in PG instead of just one 'jsonb' column, especially when you\n> know that field will always exist (which is obviously the case if you're\n> building an index on it, such as your MarriageDate) and then remove\n> those fields from the jsonb and just reconstruct the JSON when you\n> query. Doing that you'll get the size down dramatically.\n> \n> And that's without even going to that next-level stuff of actual\n> normalization where you pull out duplicate data from across the JSON\n> and have just one instance of that data in another, smaller, table and\n> use a JOIN to bring it all back together. Even better is when you\n> realize that then you only have to update one row in this other table\n> when something changes in that subset of data, unlike when you\n> repeatedly store that data in individual JSON entries all across the\n> system and such a change requires rewriting every single JSON object in\n> the entire system...\n> \n> Lastly, as with any performance benchmark, please include full details-\n> all scripts used, all commands run, all data used, so that others can\n> reproduce your results. I'm sure it'd be fun to take your json data and\n> create actual tables out of it and see what it'd be like then.\n> \n> Thanks!\n> \n> Stephen\n>",
"msg_date": "Tue, 20 Nov 2018 14:11:23 +0100",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "Greetings,\n\n* Fabio Pardi ([email protected]) wrote:\n> thanks for your feedback.\n\nWe prefer on these mailing lists to not top-post but instead to reply\ninline, as I'm doing here. This helps the conversation by eliminating\nunnecessary dialogue and being able to make comments regarding specific\npoints clearly.\n\n> I agree with you the compression is playing a role in the comparison.\n> Probably there is a toll to pay when the load is high and the CPU\n> stressed from de/compressing data. If we will be able to bring our\n> studies that further, this is definitely something we would like to measure.\n\nI was actually thinking of the compression as having more of an impact\nwith regard to the 'cold' cases because you're pulling fewer blocks when\nit's compressed. The decompression cost on CPU is typically much, much\nless than the cost to pull the data off of the storage medium. When\nthings are 'hot' and in cache then it might be interesting to question\nif the compression/decompression is worth the cost.\n\n> I also agree with you that at the moment Postgres really shines on\n> relational data. To be honest, after seeing the outcome of our research,\n> we are actually considering to decouple some (or all) fields from their\n> JSON structure. There will be a toll to be payed there too, since we are\n> receiving data in JSON format.\n\nPostgreSQL has tools to help with this, you might look into\n'json_to_record' and friends.\n\n> And the toll will be in time spent to deliver such a solution, and\n> indeed time spent by the engine in doing the conversion. It might not be\n> that convenient after all.\n\nOh, the kind of reduction you'd see in space from both an on-disk and\nin-memory footprint would almost certainly be worth the tiny amount of\nCPU overhead from this.\n\n> Anyway, to bring data from JSON to a relational model is out of topic\n> for the current discussion, since we are actually questioning if\n> Postgres is a good replacement for Mongo when handling JSON data.\n\nThis narrow viewpoint isn't really sensible though- what you should be\nthinking about is what's appropriate for your *data*. JSON is just a\ndata format, and while it's alright as a system inter-exchange format,\nit's rather terrible as a storage format.\n\n> As per sharing the dataset, as mentioned in the post we are handling\n> medical data. Even if the content is anonymized, we are not keen to\n> share the data structure too for security reasons.\n\nIf you really want people to take your analysis seriously, others must\nbe able to reproduce your results. 
I certainly appreciate that there\nare very good reasons that you can't share this actual data, but your\ntesting could be done with completely generated data which happens to be\nsimilar in structure to your data and have similar frequency of values.\n\nThe way to approach generating such a data set would be to aggregate up\nthe actual data to a point where the appropriate committee/board agree\nthat it can be shared publicly, and then you build a randomly generated\nset of data which aggregates to the same result and then use that for\ntesting.\n\n> That's a pity I know but i cannot do anything about it.\n> The queries we ran and the commands we used are mentioned in the blog\n> post but if you see gaps, feel free to ask.\n\nThere were a lot of gaps that I saw when I looked through the article-\nstarting with things like the actual CREATE TABLE command you used, and\nthe complete size/structure of the JSON object, but really what a paper\nlike this should include is a full script which creates all the tables,\nloads all the data, runs the analysis, calculates the results, etc.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 20 Nov 2018 08:34:20 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
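A minimal example of the json_to_record family mentioned above; the JSON shape is made up for illustration. jsonb_to_record (9.4+) turns one document into a typed row, which makes loading JSON input into relational columns fairly painless:

    SELECT r.*
    FROM jsonb_to_record(
           '{"id": 42, "marriageDate": "1995-06-01", "city": "Amsterdam"}'::jsonb
         ) AS r(id bigint, "marriageDate" date, city text);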
{
"msg_contents": "Hi again,\n\nOn 11/20/18 2:34 PM, Stephen Frost wrote:\n\n>> I agree with you the compression is playing a role in the comparison.\n>> Probably there is a toll to pay when the load is high and the CPU\n>> stressed from de/compressing data. If we will be able to bring our\n>> studies that further, this is definitely something we would like to measure.\n> \n> I was actually thinking of the compression as having more of an impact\n> with regard to the 'cold' cases because you're pulling fewer blocks when\n> it's compressed. The decompression cost on CPU is typically much, much\n> less than the cost to pull the data off of the storage medium. When\n> things are 'hot' and in cache then it might be interesting to question\n> if the compression/decompression is worth the cost.\n> \n\ntrue.\n\nWhen data is present in RAM, Postgres then is faster, because as you say\nthe compression will not actually give a benefit on retrieving data from\ndisk.\n\n\nIn my statement here above about the CPU I was speculating if the speed\nMongo gains thanks to the blocks compression, would act as a double\nedged sword under warm cache scenarios and heavy load.\n\n\n\n>> I also agree with you that at the moment Postgres really shines on\n>> relational data. To be honest, after seeing the outcome of our research,\n>> we are actually considering to decouple some (or all) fields from their\n>> JSON structure. There will be a toll to be payed there too, since we are\n>> receiving data in JSON format.\n> \n> PostgreSQL has tools to help with this, you might look into\n> 'json_to_record' and friends.\n> \n\nit might turn out useful to us if we normalize our data, thanks.\n\n>> Anyway, to bring data from JSON to a relational model is out of topic\n>> for the current discussion, since we are actually questioning if\n>> Postgres is a good replacement for Mongo when handling JSON data.\n> \n> This narrow viewpoint isn't really sensible though- what you should be\n> thinking about is what's appropriate for your *data*. JSON is just a\n> data format, and while it's alright as a system inter-exchange format,\n> it's rather terrible as a storage format.\n> \n\nI did not want to narrow the viewpoint. I'm exploring possibilities.\nSince Postgres supports JSON, it would have been nice to know how far\none can go in storing data without transforming it.\n\n\nWhen we started our research the only question was:\nIs it possible to replace Postgres with Mongo 1 to 1?\nAll other considerations came after, and as matter of fact, as told\nalready, we are actually considering to (maybe partially) transform data\nto a relational model.\n\nMaybe we did not look around enough but we did not find on internet all\nthe answers to our questions, therefore we initiated something ourselves.\n\n\n>> As per sharing the dataset, as mentioned in the post we are handling\n>> medical data. Even if the content is anonymized, we are not keen to\n>> share the data structure too for security reasons.\n> \n> If you really want people to take your analysis seriously, others must\n> be able to reproduce your results. 
I certainly appreciate that there\n> are very good reasons that you can't share this actual data, but your\n> testing could be done with completely generated data which happens to be\n> similar in structure to your data and have similar frequency of values.\n> \n> The way to approach generating such a data set would be to aggregate up\n> the actual data to a point where the appropriate committee/board agree\n> that it can be shared publicly, and then you build a randomly generated\n> set of data which aggregates to the same result and then use that for\n> testing.\n> \nProbably looking backward, I would generate data that is sharable with\neverybody to give the opportunity to play with it and involve people more.\n\nThe fact is that we started very small and we ended up with quite a\nbunch of information we felt like sharing.\nTime is tyrant and at the moment we cannot re-run everything with\nsharable data so we all have to live with it. It is not optimal and is\nnot perfectly academic but is still better than not sharing at all in my\nopinion.\n\n\nOne good thing is that while testing and learning I found a similar\ninvestigation which led to similar results (unfortunately also there you\ncan argue that is not sharing dataset and scripts and all the rest).\n\nIn the jsquery section of the blog post there is a link pointing to:\n\nhttps://github.com/postgrespro/jsquery/blob/master/README.md\n\nwhich in turn points to\n\nhttp://www.sai.msu.su/~megera/postgres/talks/pgconfeu-2014-jsquery.pdf\n\nAt page 18 there are some results which are close to what we obtained.\n\nI think those results are close to what we found even if the paper is\nfrom 2014 and a lot changed in the landscape.\n\nThis to say that i suspect that if we generate random JSON data, we will\nprobably draw the same conclusions.\n\n\n>> That's a pity I know but i cannot do anything about it.\n>> The queries we ran and the commands we used are mentioned in the blog\n>> post but if you see gaps, feel free to ask.\n> \n> There were a lot of gaps that I saw when I looked through the article-\n> starting with things like the actual CREATE TABLE command you used,\n\nyou are right, there is only the command i used to transform the table\nto jsonb.\n\nSmall detail, but I updated the post for clarity\n\n\n and\n> the complete size/structure of the JSON object, but really what a paper\n> like this should include is a full script which creates all the tables,\n> loads all the data, runs the analysis, calculates the results, etc.\n> \n\nQueries are shared, but without data, to share the rest is quite useless\nin my opinion.\n\n\nregards,\n\nfabio pardi",
"msg_date": "Tue, 20 Nov 2018 16:53:03 +0100",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "On Mon, Nov 19, 2018 at 11:26 AM Stephen Frost <[email protected]> wrote:\n> Looks like a lot of the difference being seen and the comments made\n> about one being faster than the other are because one system is\n> compressing *everything*, while PG (quite intentionally...) only\n> compresses the data sometimes- once it hits the TOAST limit. That\n> likely also contributes to why you're seeing the on-disk size\n> differences that you are.\n\nHm. It may be intentional, but is it ideal? Employing datum\ncompression in the 1kb-8kb range with a faster but less compressing\nalgorithm could give benefits.\n\nmerlin\n\n",
"msg_date": "Tue, 20 Nov 2018 10:41:42 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "Greetings,\n\n* Merlin Moncure ([email protected]) wrote:\n> On Mon, Nov 19, 2018 at 11:26 AM Stephen Frost <[email protected]> wrote:\n> > Looks like a lot of the difference being seen and the comments made\n> > about one being faster than the other are because one system is\n> > compressing *everything*, while PG (quite intentionally...) only\n> > compresses the data sometimes- once it hits the TOAST limit. That\n> > likely also contributes to why you're seeing the on-disk size\n> > differences that you are.\n> \n> Hm. It may be intentional, but is it ideal? Employing datum\n> compression in the 1kb-8kb range with a faster but less compressing\n> algorithm could give benefits.\n\nWell, pglz is actually pretty fast and not as good at compression as\nother things. I could certainly see an argument for allowing a column\nto always be (or at least attempted to be) compressed.\n\nThere's been a lot of discussion around supporting alternative\ncompression algorithms but making that happen is a pretty big task.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 20 Nov 2018 11:43:25 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
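A sketch of how to see where the compression discussed here actually applies; the relation names are placeholders. pg_relation_size covers only the main heap, so the difference against pg_total_relation_size is what sits in TOAST and indexes, and SET STORAGE changes the per-column TOAST strategy (compression still only kicks in once a row exceeds the roughly 2 kB TOAST threshold):

    SELECT pg_size_pretty(pg_relation_size('patient_doc'))       AS heap_only,
           pg_size_pretty(pg_total_relation_size('patient_doc')
                          - pg_relation_size('patient_doc'))     AS toast_plus_indexes;

    -- EXTENDED (the default) compresses and moves large values out of line,
    -- EXTERNAL stores them out of line uncompressed, MAIN prefers inline compression.
    ALTER TABLE patient_doc ALTER COLUMN doc SET STORAGE MAIN;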
{
"msg_contents": "On Tue, Nov 20, 2018 at 10:43 AM Stephen Frost <[email protected]> wrote:\n>\n> Greetings,\n>\n> * Merlin Moncure ([email protected]) wrote:\n> > On Mon, Nov 19, 2018 at 11:26 AM Stephen Frost <[email protected]> wrote:\n> > > Looks like a lot of the difference being seen and the comments made\n> > > about one being faster than the other are because one system is\n> > > compressing *everything*, while PG (quite intentionally...) only\n> > > compresses the data sometimes- once it hits the TOAST limit. That\n> > > likely also contributes to why you're seeing the on-disk size\n> > > differences that you are.\n> >\n> > Hm. It may be intentional, but is it ideal? Employing datum\n> > compression in the 1kb-8kb range with a faster but less compressing\n> > algorithm could give benefits.\n>\n> Well, pglz is actually pretty fast and not as good at compression as\n> other things. I could certainly see an argument for allowing a column\n> to always be (or at least attempted to be) compressed.\n>\n> There's been a lot of discussion around supporting alternative\n> compression algorithms but making that happen is a pretty big task.\n\nYeah; pglz is closer to zlib. There's much faster stuff out\nthere...Andres summed it up pretty well;\nhttps://www.postgresql.org/message-id/20130605150144.GD28067%40alap2.anarazel.de\n\nThere are also some interesting discussions on jsonb specific\ndiscussion approaches.\n\nmerlin\n\n",
"msg_date": "Tue, 20 Nov 2018 11:02:53 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "Greetings,\n\n* Merlin Moncure ([email protected]) wrote:\n> On Tue, Nov 20, 2018 at 10:43 AM Stephen Frost <[email protected]> wrote:\n> > * Merlin Moncure ([email protected]) wrote:\n> > > On Mon, Nov 19, 2018 at 11:26 AM Stephen Frost <[email protected]> wrote:\n> > > > Looks like a lot of the difference being seen and the comments made\n> > > > about one being faster than the other are because one system is\n> > > > compressing *everything*, while PG (quite intentionally...) only\n> > > > compresses the data sometimes- once it hits the TOAST limit. That\n> > > > likely also contributes to why you're seeing the on-disk size\n> > > > differences that you are.\n> > >\n> > > Hm. It may be intentional, but is it ideal? Employing datum\n> > > compression in the 1kb-8kb range with a faster but less compressing\n> > > algorithm could give benefits.\n> >\n> > Well, pglz is actually pretty fast and not as good at compression as\n> > other things. I could certainly see an argument for allowing a column\n> > to always be (or at least attempted to be) compressed.\n> >\n> > There's been a lot of discussion around supporting alternative\n> > compression algorithms but making that happen is a pretty big task.\n> \n> Yeah; pglz is closer to zlib. There's much faster stuff out\n> there...Andres summed it up pretty well;\n> https://www.postgresql.org/message-id/20130605150144.GD28067%40alap2.anarazel.de\n> \n> There are also some interesting discussions on jsonb specific\n> discussion approaches.\n\nOh yes, having a dictionary would be a great start to reducing the size\nof the jsonb data, though it could then become a contention point if\nthere's a lot of new values being inserted and such. Naturally there\nwould also be a cost to pulling that data back out as well but likely it\nwould be well worth the benefit of not having to store the field names\nrepeatedly.\n\nThen again, taken far enough, what you end up with are tables... :)\n\nThanks!\n\nStephen",
"msg_date": "Tue, 20 Nov 2018 12:28:30 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "On Tue, Nov 20, 2018 at 11:28 AM Stephen Frost <[email protected]> wrote:\n>\n> Greetings,\n>\n> * Merlin Moncure ([email protected]) wrote:\n> > On Tue, Nov 20, 2018 at 10:43 AM Stephen Frost <[email protected]> wrote:\n> > > * Merlin Moncure ([email protected]) wrote:\n> > > > On Mon, Nov 19, 2018 at 11:26 AM Stephen Frost <[email protected]> wrote:\n> > > > > Looks like a lot of the difference being seen and the comments made\n> > > > > about one being faster than the other are because one system is\n> > > > > compressing *everything*, while PG (quite intentionally...) only\n> > > > > compresses the data sometimes- once it hits the TOAST limit. That\n> > > > > likely also contributes to why you're seeing the on-disk size\n> > > > > differences that you are.\n> > > >\n> > > > Hm. It may be intentional, but is it ideal? Employing datum\n> > > > compression in the 1kb-8kb range with a faster but less compressing\n> > > > algorithm could give benefits.\n> > >\n> > > Well, pglz is actually pretty fast and not as good at compression as\n> > > other things. I could certainly see an argument for allowing a column\n> > > to always be (or at least attempted to be) compressed.\n> > >\n> > > There's been a lot of discussion around supporting alternative\n> > > compression algorithms but making that happen is a pretty big task.\n> >\n> > Yeah; pglz is closer to zlib. There's much faster stuff out\n> > there...Andres summed it up pretty well;\n> > https://www.postgresql.org/message-id/20130605150144.GD28067%40alap2.anarazel.de\n> >\n> > There are also some interesting discussions on jsonb specific\n> > discussion approaches.\n>\n> Oh yes, having a dictionary would be a great start to reducing the size\n> of the jsonb data, though it could then become a contention point if\n> there's a lot of new values being inserted and such. Naturally there\n> would also be a cost to pulling that data back out as well but likely it\n> would be well worth the benefit of not having to store the field names\n> repeatedly.\n\nYes, the biggest concern with a shared dictionary ought to be\nconcurrency type problems.\n\nmerlin\n\n",
"msg_date": "Tue, 20 Nov 2018 13:45:02 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "Greetings,\n\n* Merlin Moncure ([email protected]) wrote:\n> On Tue, Nov 20, 2018 at 11:28 AM Stephen Frost <[email protected]> wrote:\n> > Oh yes, having a dictionary would be a great start to reducing the size\n> > of the jsonb data, though it could then become a contention point if\n> > there's a lot of new values being inserted and such. Naturally there\n> > would also be a cost to pulling that data back out as well but likely it\n> > would be well worth the benefit of not having to store the field names\n> > repeatedly.\n> \n> Yes, the biggest concern with a shared dictionary ought to be\n> concurrency type problems.\n\nHmmm, I wonder if we could do something like have a dictionary per\npage.. Or perhaps based on some hash of the toast ID.. Not sure. :)\n\nThanks!\n\nStephen",
"msg_date": "Tue, 20 Nov 2018 14:46:59 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "Stephen Frost schrieb am 20.11.2018 um 18:28:\n> Oh yes, having a dictionary would be a great start to reducing the size\n> of the jsonb data, though it could then become a contention point if\n> there's a lot of new values being inserted and such. Naturally there\n> would also be a cost to pulling that data back out as well but likely it\n> would be well worth the benefit of not having to store the field names\n> repeatedly.\n\nThere is an extension for a dictionary based JSONB compression:\n\nhttps://github.com/postgrespro/zson\n\n\n",
"msg_date": "Wed, 21 Nov 2018 07:48:05 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "On Wed, Nov 21, 2018 at 9:48 AM Thomas Kellerer <[email protected]> wrote:\n>\n> Stephen Frost schrieb am 20.11.2018 um 18:28:\n> > Oh yes, having a dictionary would be a great start to reducing the size\n> > of the jsonb data, though it could then become a contention point if\n> > there's a lot of new values being inserted and such. Naturally there\n> > would also be a cost to pulling that data back out as well but likely it\n> > would be well worth the benefit of not having to store the field names\n> > repeatedly.\n>\n> There is an extension for a dictionary based JSONB compression:\n>\n> https://github.com/postgrespro/zson\n\nThat was a 'toy' experiment. We did several experiments on jsonb\ncompression and presented\nthe results, for example,\nhttp://www.sai.msu.su/~megera/postgres/talks/jsonb-pgconf.us-2017.pdf\n\nAlso, check this thread on custom compression\nhttps://www.postgresql.org/message-id/flat/CAF4Au4xop7FqhCKgabYWymUS0yUk9i%3DbonPnmVUBbpoKsFYnLA%40mail.gmail.com#d01913d3b939b472ea5b38912bf3cbe4\n\nNow, there is YCSB-JSON benchmark available and it is worth to run it\nfor postgres\nhttps://dzone.com/articles/ycsb-json-implementation-for-couchbase-and-mongodb\nWe are pretty busy, so you may contribute.\n\nFor better indexing we are working on parameters for opclasses and I\nreally wanted to have it\nfor PG12. http://www.sai.msu.su/~megera/postgres/talks/opclass_pgcon-2018.pdf\n\nMy recommendation for testing performance - always run concurrent\nqueries and distibution of\nqueries should be for most cases zipfian (we have added to PG11).\n>\n>\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Fri, 30 Nov 2018 00:14:29 +0300",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
},
{
"msg_contents": "On Tue, Nov 20, 2018 at 08:34:20AM -0500, Stephen Frost wrote:\n> > Anyway, to bring data from JSON to a relational model is out of topic\n> > for the current discussion, since we are actually questioning if\n> > Postgres is a good replacement for Mongo when handling JSON data.\n> \n> This narrow viewpoint isn't really sensible though- what you should be\n> thinking about is what's appropriate for your *data*. JSON is just a\n> data format, and while it's alright as a system inter-exchange format,\n> it's rather terrible as a storage format.\n\nI would add that *FHIR* is an inter-exchange format instead of a storage\nformat. FHIR spec evolves and its json format too. When implemented in\na relational format it allows to only change the serialization process\n(eg: json_tuple & co) instead of the data. In case FHIR is stored as a\njson, it makes the information frozen in its version and complicate to\nmake evolve.\n\n\n-- \nnicolas\n\n",
"msg_date": "Wed, 12 Dec 2018 23:37:00 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL VS MongoDB: a use case comparison"
}
] |
[
{
"msg_contents": "Hi,\nI'm trying to understand something that is weird on one of my environments.\nWhen I query pg_stat_all_tables I see that most of the tables dont have any\nvalue in the last_autovacuum/analyze column. In addition the columns\nautovacuum_count/analyze_count is set to 0. However, when checking the\nlogs, I see that on some of those tables autovacuum run. I think that there\nis something wrong with the database statistics collector. In addition, the\ncolumn n_dead_tup and n_live_tup are set and in some of the cases\nn_dead_tup is more then 20% of the table tuples. In addition, all tables\nhave default vacuum threshold.\n\nAny idea what else I can check ?\nThe problem isnt only that dead tuples arent deleted (I dont have long\nrunning transaction that might cause it) but the fact that the statistics\narent accurate/wrong.\n\nHi,I'm trying to understand something that is weird on one of my environments. When I query pg_stat_all_tables I see that most of the tables dont have any value in the last_autovacuum/analyze column. In addition the columns autovacuum_count/analyze_count is set to 0. However, when checking the logs, I see that on some of those tables autovacuum run. I think that there is something wrong with the database statistics collector. In addition, the column n_dead_tup and n_live_tup are set and in some of the cases n_dead_tup is more then 20% of the table tuples. In addition, all tables have default vacuum threshold.Any idea what else I can check ?The problem isnt only that dead tuples arent deleted (I dont have long running transaction that might cause it) but the fact that the statistics arent accurate/wrong.",
"msg_date": "Mon, 19 Nov 2018 19:31:35 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum is running but pg_stat_all_tables empty"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> I'm trying to understand something that is weird on one of my environments.\n> When I query pg_stat_all_tables I see that most of the tables dont have any\n> value in the last_autovacuum/analyze column. In addition the columns\n> autovacuum_count/analyze_count is set to 0. However, when checking the logs,\n> I see that on some of those tables autovacuum run. I think that there is\n> something wrong with the database statistics collector. In addition, the\n> column n_dead_tup and n_live_tup are set and in some of the cases n_dead_tup\n> is more then 20% of the table tuples. In addition, all tables have default\n> vacuum threshold.\n> \n> Any idea what else I can check ?\n> The problem isnt only that dead tuples arent deleted (I dont have long running\n> transaction that might cause it) but the fact that the statistics arent accurate/wrong.\n\nYou can use the \"pgstattuple\" extension to check that table for the actual\ndead tuple percentage to see if the statistics are accurate or not.\n\nTo see the statistic collector's UDP socket, run\n\n netstat -a|grep udp|grep ESTABLISHED\n\nCheck if it is there. If it is on IPv6, make sure that IPv6 is up, otherwise\nthat would explain why you have no accurate statistics.\n\nAre there any log messages about statistics collection?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 19 Nov 2018 19:52:43 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum is running but pg_stat_all_tables empty"
}
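A sketch of the two checks Laurenz suggests, with a placeholder table name; pgstattuple reads the table itself, so its dead_tuple_percent can be compared directly against what the statistics collector reports in pg_stat_all_tables:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- Actual dead-tuple percentage, independent of the statistics collector:
    SELECT dead_tuple_percent, free_percent
    FROM pgstattuple('some_big_table');

    -- What the collector believes about the same table:
    SELECT relname, n_live_tup, n_dead_tup,
           last_autovacuum, autovacuum_count, last_autoanalyze
    FROM pg_stat_all_tables
    WHERE relname = 'some_big_table';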
] |
[
{
"msg_contents": "Given a table, `github_repos`, with a multi-column unique index on `org_id`\nand `github_id` columns, is there any performance difference (or other\nissues to be aware of) between the two bulk upsert operations below? The\ndifference is that in the first query, the `org_id` and `github_id` columns\nare included in the UPDATE, whereas in the second query they are not. Since\nthe UPDATE runs ON CONFLICT, the updated values of `org_id` and `github_id`\nwill be the same as the old values, but those columns are included in the\nUPDATE because the underlying library I am using is designed that way. I'm\nwondering if its safe to use as-is or whether I should be explicitly\nexcluding those columns in the UPDATE.\n\nQuery #1:\n\n INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\")\n VALUES (1,1,'foo')\n ON CONFLICT (org_id, github_id)\n DO UPDATE SET\n\"org_id\"=EXCLUDED.\"org_id\",\"github_id\"=EXCLUDED.\"github_id\",\"name\"=EXCLUDED.\"name\"\n RETURNING \"id\"\n\nQuery #2:\n\n INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\")\n VALUES (1,1,'foo')\n ON CONFLICT (org_id, github_id)\n DO UPDATE SET \"name\"=EXCLUDED.\"name\"\n RETURNING \"id\"\n\n`github_repos` table:\n\n Column | Type | Collation | Nullable\n -------------------+-------------------+-----------+----------+\n id | bigint | | not null |\n org_id | bigint | | not null |\n github_id | bigint | | not null |\n name | character varying | | not null |\n\n Indexes:\n \"github_repos_pkey\" PRIMARY KEY, btree (id)\n \"unique_repos\" UNIQUE, btree (org_id, github_id)\n\nGiven a table, `github_repos`, with a multi-column unique index on `org_id` and `github_id` columns, is there any performance difference (or other issues to be aware of) between the two bulk upsert operations below? The difference is that in the first query, the `org_id` and `github_id` columns are included in the UPDATE, whereas in the second query they are not. Since the UPDATE runs ON CONFLICT, the updated values of `org_id` and `github_id` will be the same as the old values, but those columns are included in the UPDATE because the underlying library I am using is designed that way. I'm wondering if its safe to use as-is or whether I should be explicitly excluding those columns in the UPDATE.Query #1: INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\") VALUES (1,1,'foo') ON CONFLICT (org_id, github_id) DO UPDATE SET \"org_id\"=EXCLUDED.\"org_id\",\"github_id\"=EXCLUDED.\"github_id\",\"name\"=EXCLUDED.\"name\" RETURNING \"id\"Query #2: INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\") VALUES (1,1,'foo') ON CONFLICT (org_id, github_id) DO UPDATE SET \"name\"=EXCLUDED.\"name\" RETURNING \"id\"`github_repos` table: Column | Type | Collation | Nullable -------------------+-------------------+-----------+----------+ id | bigint | | not null | org_id | bigint | | not null | github_id | bigint | | not null | name | character varying | | not null | Indexes: \"github_repos_pkey\" PRIMARY KEY, btree (id) \"unique_repos\" UNIQUE, btree (org_id, github_id)",
"msg_date": "Thu, 22 Nov 2018 11:32:17 -0800",
"msg_from": "Abi Noda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance impact of updating target columns with unchanged values\n ON CONFLICT"
},
{
"msg_contents": "In other words, is Postgres smart enough to not actually write to disk any\ncolumns that haven’t changed value or update indexes based on those columns?\n\nOn Thu, Nov 22, 2018 at 11:32 AM Abi Noda <[email protected]> wrote:\n\n> Given a table, `github_repos`, with a multi-column unique index on\n> `org_id` and `github_id` columns, is there any performance difference (or\n> other issues to be aware of) between the two bulk upsert operations below?\n> The difference is that in the first query, the `org_id` and `github_id`\n> columns are included in the UPDATE, whereas in the second query they are\n> not. Since the UPDATE runs ON CONFLICT, the updated values of `org_id` and\n> `github_id` will be the same as the old values, but those columns are\n> included in the UPDATE because the underlying library I am using is\n> designed that way. I'm wondering if its safe to use as-is or whether I\n> should be explicitly excluding those columns in the UPDATE.\n>\n> Query #1:\n>\n> INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\")\n> VALUES (1,1,'foo')\n> ON CONFLICT (org_id, github_id)\n> DO UPDATE SET\n> \"org_id\"=EXCLUDED.\"org_id\",\"github_id\"=EXCLUDED.\"github_id\",\"name\"=EXCLUDED.\"name\"\n> RETURNING \"id\"\n>\n> Query #2:\n>\n> INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\")\n> VALUES (1,1,'foo')\n> ON CONFLICT (org_id, github_id)\n> DO UPDATE SET \"name\"=EXCLUDED.\"name\"\n> RETURNING \"id\"\n>\n> `github_repos` table:\n>\n> Column | Type | Collation | Nullable\n> -------------------+-------------------+-----------+----------+\n> id | bigint | | not null |\n> org_id | bigint | | not null |\n> github_id | bigint | | not null |\n> name | character varying | | not null |\n>\n> Indexes:\n> \"github_repos_pkey\" PRIMARY KEY, btree (id)\n> \"unique_repos\" UNIQUE, btree (org_id, github_id)\n>\n\nIn other words, is Postgres smart enough to not actually write to disk any columns that haven’t changed value or update indexes based on those columns?On Thu, Nov 22, 2018 at 11:32 AM Abi Noda <[email protected]> wrote:Given a table, `github_repos`, with a multi-column unique index on `org_id` and `github_id` columns, is there any performance difference (or other issues to be aware of) between the two bulk upsert operations below? The difference is that in the first query, the `org_id` and `github_id` columns are included in the UPDATE, whereas in the second query they are not. Since the UPDATE runs ON CONFLICT, the updated values of `org_id` and `github_id` will be the same as the old values, but those columns are included in the UPDATE because the underlying library I am using is designed that way. 
I'm wondering if its safe to use as-is or whether I should be explicitly excluding those columns in the UPDATE.Query #1: INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\") VALUES (1,1,'foo') ON CONFLICT (org_id, github_id) DO UPDATE SET \"org_id\"=EXCLUDED.\"org_id\",\"github_id\"=EXCLUDED.\"github_id\",\"name\"=EXCLUDED.\"name\" RETURNING \"id\"Query #2: INSERT INTO \"github_repos\" (\"org_id\",\"github_id\",\"name\") VALUES (1,1,'foo') ON CONFLICT (org_id, github_id) DO UPDATE SET \"name\"=EXCLUDED.\"name\" RETURNING \"id\"`github_repos` table: Column | Type | Collation | Nullable -------------------+-------------------+-----------+----------+ id | bigint | | not null | org_id | bigint | | not null | github_id | bigint | | not null | name | character varying | | not null | Indexes: \"github_repos_pkey\" PRIMARY KEY, btree (id) \"unique_repos\" UNIQUE, btree (org_id, github_id)",
"msg_date": "Thu, 22 Nov 2018 13:31:10 -0800",
"msg_from": "Abi Noda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance impact of updating target columns with unchanged\n values ON CONFLICT"
},
{
"msg_contents": "On Thu, Nov 22, 2018 at 01:31:10PM -0800, Abi Noda wrote:\n> In other words, is Postgres smart enough to not actually write to disk any\n> columns that haven’t changed value or update indexes based on those columns?\n\nYou're asking about what's referred to as Heap only tuples:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD\nhttps://wiki.postgresql.org/wiki/Index-only_scans#Interaction_with_HOT\n\nNote, if you're doing alot of updates, you should consider setting a lower the\ntable fillfactor, since HOT is only possible if the new tuple (row version) is\non the same page as the old tuple.\n\n|With HOT, a new tuple placed on the same page and with all indexed columns the\n|same as its parent row version does not get new index entries.\"\n\nAnd check pg_stat_user_tables to verify that's working as intended.\n\nJustin\n\n",
"msg_date": "Thu, 22 Nov 2018 16:40:51 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance impact of updating target columns with unchanged\n values ON CONFLICT"
},
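A hedged sketch of the two things Justin mentions, using the table from the thread; 80 is an arbitrary example fillfactor, and comparing n_tup_hot_upd with n_tup_upd in pg_stat_user_tables shows whether the upserts are in fact taking the HOT path:

    -- Leave free space on each page so an updated row can stay on the same page,
    -- which is a precondition for HOT (affects newly written pages only).
    ALTER TABLE github_repos SET (fillfactor = 80);

    -- Verify: n_tup_hot_upd should track n_tup_upd closely if HOT is working.
    SELECT relname, n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    WHERE relname = 'github_repos';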
{
"msg_contents": "Thanks Justin. Do you know if Postgres treats an UPDATE that sets the\nindexed columns set to the same previous values as a change? Or does it\nonly count it as \"changed\" if the values are different. This is ambiguous\nto me.\n\n*> HOT solves this problem for a restricted but useful special case where a\ntuple is repeatedly updated in ways that do not change its indexed columns.*\n\n*> With HOT, a new tuple placed on the same page and with all indexed\ncolumns the same as its parent row version does not get new index entries.*\n\n*> [HOT] will create a new physical heap tuple when inserting, and not a\nnew index tuple, if and only if the update did not affect indexed columns.*\n\n\n\nOn Thu, Nov 22, 2018 at 2:40 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Nov 22, 2018 at 01:31:10PM -0800, Abi Noda wrote:\n> > In other words, is Postgres smart enough to not actually write to disk\n> any\n> > columns that haven’t changed value or update indexes based on those\n> columns?\n>\n> You're asking about what's referred to as Heap only tuples:\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD\n> https://wiki.postgresql.org/wiki/Index-only_scans#Interaction_with_HOT\n>\n> Note, if you're doing alot of updates, you should consider setting a lower\n> the\n> table fillfactor, since HOT is only possible if the new tuple (row\n> version) is\n> on the same page as the old tuple.\n>\n> |With HOT, a new tuple placed on the same page and with all indexed\n> columns the\n> |same as its parent row version does not get new index entries.\"\n>\n> And check pg_stat_user_tables to verify that's working as intended.\n>\n> Justin\n>\n\nThanks Justin. Do you know if Postgres treats an UPDATE that sets the indexed columns set to the same previous values as a change? Or does it only count it as \"changed\" if the values are different. This is ambiguous to me.> HOT solves this problem for a restricted but useful special case where a tuple is repeatedly updated in ways that do not change its indexed columns.> With HOT, a new tuple placed on the same page and with all indexed columns the same as its parent row version does not get new index entries.> [HOT] will create a new physical heap tuple when inserting, and not a new index \ntuple, if and only if the update did not affect indexed columns.On Thu, Nov 22, 2018 at 2:40 PM Justin Pryzby <[email protected]> wrote:On Thu, Nov 22, 2018 at 01:31:10PM -0800, Abi Noda wrote:\n> In other words, is Postgres smart enough to not actually write to disk any\n> columns that haven’t changed value or update indexes based on those columns?\n\nYou're asking about what's referred to as Heap only tuples:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD\nhttps://wiki.postgresql.org/wiki/Index-only_scans#Interaction_with_HOT\n\nNote, if you're doing alot of updates, you should consider setting a lower the\ntable fillfactor, since HOT is only possible if the new tuple (row version) is\non the same page as the old tuple.\n\n|With HOT, a new tuple placed on the same page and with all indexed columns the\n|same as its parent row version does not get new index entries.\"\n\nAnd check pg_stat_user_tables to verify that's working as intended.\n\nJustin",
"msg_date": "Fri, 23 Nov 2018 19:44:37 -0800",
"msg_from": "Abi Noda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance impact of updating target columns with unchanged\n values ON CONFLICT"
},
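A minimal sketch of the fillfactor and pg_stat_user_tables checks Justin describes above, using a hypothetical table name my_table. Note that lowering fillfactor only affects pages written after the change; existing pages keep their layout until they are rewritten (e.g. by VACUUM FULL or CLUSTER).

  -- Leave free space on each heap page so updated row versions can stay on the same page (HOT).
  ALTER TABLE my_table SET (fillfactor = 70);

  -- Verify that updates are actually taking the HOT path.
  SELECT relname, n_tup_upd, n_tup_hot_upd,
         round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_update_pct
  FROM pg_stat_user_tables
  WHERE relname = 'my_table';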
{
"msg_contents": "I take that question back – someone helped me on StackExchange and\naddressed it:\n\n*> It appears that Postgres is smart enough to identify cases where indexed\ncolumns are not changed , and perform HOT updates; thus , there is no\ndifference between having or not having key columns in update statement\nfrom performance point of view. The only thing that matters it whether\nactual value changed. Surely, this behaviour is limited to B-Tree indexes. *\n\nhttps://dba.stackexchange.com/questions/223231/performance-impact-of-updating-target-columns-with-same-values-on-conflict\n\nOn Fri, Nov 23, 2018 at 7:44 PM Abi Noda <[email protected]> wrote:\n\n> Thanks Justin. Do you know if Postgres treats an UPDATE that sets the\n> indexed columns set to the same previous values as a change? Or does it\n> only count it as \"changed\" if the values are different. This is ambiguous\n> to me.\n>\n> *> HOT solves this problem for a restricted but useful special case where\n> a tuple is repeatedly updated in ways that do not change its indexed\n> columns.*\n>\n> *> With HOT, a new tuple placed on the same page and with all indexed\n> columns the same as its parent row version does not get new index entries.*\n>\n> *> [HOT] will create a new physical heap tuple when inserting, and not a\n> new index tuple, if and only if the update did not affect indexed columns.*\n>\n>\n>\n> On Thu, Nov 22, 2018 at 2:40 PM Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Thu, Nov 22, 2018 at 01:31:10PM -0800, Abi Noda wrote:\n>> > In other words, is Postgres smart enough to not actually write to disk\n>> any\n>> > columns that haven’t changed value or update indexes based on those\n>> columns?\n>>\n>> You're asking about what's referred to as Heap only tuples:\n>>\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD\n>> https://wiki.postgresql.org/wiki/Index-only_scans#Interaction_with_HOT\n>>\n>> Note, if you're doing alot of updates, you should consider setting a\n>> lower the\n>> table fillfactor, since HOT is only possible if the new tuple (row\n>> version) is\n>> on the same page as the old tuple.\n>>\n>> |With HOT, a new tuple placed on the same page and with all indexed\n>> columns the\n>> |same as its parent row version does not get new index entries.\"\n>>\n>> And check pg_stat_user_tables to verify that's working as intended.\n>>\n>> Justin\n>>\n>\n\nI take that question back – someone helped me on StackExchange and addressed it:> It appears that Postgres is smart enough to identify cases where indexed columns are not changed , and perform HOT updates; thus , there is no difference between having or not having key columns in update statement from performance point of view. The only thing that matters it whether actual value changed. Surely, this behaviour is limited to B-Tree indexes. https://dba.stackexchange.com/questions/223231/performance-impact-of-updating-target-columns-with-same-values-on-conflictOn Fri, Nov 23, 2018 at 7:44 PM Abi Noda <[email protected]> wrote:Thanks Justin. Do you know if Postgres treats an UPDATE that sets the indexed columns set to the same previous values as a change? Or does it only count it as \"changed\" if the values are different. 
This is ambiguous to me.> HOT solves this problem for a restricted but useful special case where a tuple is repeatedly updated in ways that do not change its indexed columns.> With HOT, a new tuple placed on the same page and with all indexed columns the same as its parent row version does not get new index entries.> [HOT] will create a new physical heap tuple when inserting, and not a new index \ntuple, if and only if the update did not affect indexed columns.On Thu, Nov 22, 2018 at 2:40 PM Justin Pryzby <[email protected]> wrote:On Thu, Nov 22, 2018 at 01:31:10PM -0800, Abi Noda wrote:\n> In other words, is Postgres smart enough to not actually write to disk any\n> columns that haven’t changed value or update indexes based on those columns?\n\nYou're asking about what's referred to as Heap only tuples:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD\nhttps://wiki.postgresql.org/wiki/Index-only_scans#Interaction_with_HOT\n\nNote, if you're doing alot of updates, you should consider setting a lower the\ntable fillfactor, since HOT is only possible if the new tuple (row version) is\non the same page as the old tuple.\n\n|With HOT, a new tuple placed on the same page and with all indexed columns the\n|same as its parent row version does not get new index entries.\"\n\nAnd check pg_stat_user_tables to verify that's working as intended.\n\nJustin",
"msg_date": "Fri, 23 Nov 2018 19:53:14 -0800",
"msg_from": "Abi Noda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance impact of updating target columns with unchanged\n values ON CONFLICT"
}
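A quick way to check the StackExchange conclusion on one's own instance: a sketch with a throwaway table (hot_check is invented for the test). The statistics counters are reported with a small delay, and HOT additionally requires free space on the page, which a freshly created one-row table has.

  CREATE TABLE hot_check (id bigint PRIMARY KEY, val text);
  INSERT INTO hot_check VALUES (1, 'a');
  -- The indexed column is written back with its existing value; only val really changes.
  UPDATE hot_check SET id = id, val = 'b' WHERE id = 1;
  -- If the update went the HOT route, n_tup_hot_upd advances together with n_tup_upd
  -- (the counters may take a moment to appear).
  SELECT n_tup_upd, n_tup_hot_upd FROM pg_stat_user_tables WHERE relname = 'hot_check';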
] |
[
{
"msg_contents": "Hi,\nI'm using postgres 9.6.\nI have a table with 100M+ records which consume on disk about 8GB. In\naddition I have an index on the id column of the table.\nWhen I run in psql : explain analyze select id from my_table order by id\nThe query returns output after 130 seconds which is great. The plan that is\nchosen is Index only scan.\nHowever when I run the query without the explain analyze it takes forever\nto run it(More then two hours).\nAll the statistics are accurate and work_mem set to 4MB. What there is so\nmuch difference between running the query with explain analyze and without ?\nIs there a possibility that it is related to fetching or something like\nthat ?\n\nThanks.\n\nHi,I'm using postgres 9.6.I have a table with 100M+ records which consume on disk about 8GB. In addition I have an index on the id column of the table.When I run in psql : explain analyze select id from my_table order by id The query returns output after 130 seconds which is great. The plan that is chosen is Index only scan.However when I run the query without the explain analyze it takes forever to run it(More then two hours). All the statistics are accurate and work_mem set to 4MB. What there is so much difference between running the query with explain analyze and without ?Is there a possibility that it is related to fetching or something like that ?Thanks.",
"msg_date": "Sun, 25 Nov 2018 15:08:33 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "explain analyze faster then query"
},
{
"msg_contents": "Cc: [email protected],\n\[email protected]\n\nPlease avoid simultaneously sending the same question to multiple lists.\n\nIt means that people can't see each others replies and everything that implies.\n\nOn Sun, Nov 25, 2018 at 03:08:33PM +0200, Mariel Cherkassky wrote:\n> However when I run the query without the explain analyze it takes forever\n> to run it(More then two hours).\n> Is there a possibility that it is related to fetching or something like\n> that ?\n\nIf it's a remote database, I expect that's why.\nMaybe you can test by running the query on the DB server.\nOr by running another variant of the query, such as:\n\nWITH x AS (QUERY GOES HERE) SELECT 1;\n\nwhich returns only one row but after having executed the query behind CTE, as\noptimization fence.\n\nJustin\n\n",
"msg_date": "Sun, 25 Nov 2018 07:30:02 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: explain analyze faster then query"
},
{
"msg_contents": "I run it from inside the machine on the local database.\nFor example :\n\ndb=# create table rule_test as select generate_series(1,100000000);\nSELECT 100000000\n\ndb=# explain analyze select generate_series from rule_test order by\ngenerate_series asc;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=17763711.32..18045791.04 rows=112831890 width=4) (actual\ntime=62677.752..100928.829 rows=100000000 loops=1)\n Sort Key: generate_series\n Sort Method: external merge Disk: 1367624kB\n -> Seq Scan on rule_test (cost=0.00..1570796.90 rows=112831890\nwidth=4) (actual time=0.019..36098.463 rows=100000000 loops=1)\n Planning time: 0.072 ms\n Execution time: 107025.113 ms\n(6 rows)\n\ndb=# create index on rule_test(generate_series);\nCREATE INDEX\ndb=# select generate_series from rule_test order by generate_series asc;\n\n\ndb=# explain analyze select generate_series from rule_test order by\ngenerate_series asc;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using rule_test_generate_series_idx on rule_test\n(cost=0.57..2490867.57 rows=100000000 width=4) (actual\ntime=0.103..63122.906 rows=100000000 loops=1)\n Heap Fetches: 100000000\n Planning time: 6.682 ms\n Execution time: 69265.311 ms\n(4 rows)\n\ndb=# select generate_series from rule_test order by generate_series asc;\nstuck for more then a hour\n\n\nבתאריך יום א׳, 25 בנוב׳ 2018 ב-15:30 מאת Justin Pryzby <\[email protected]>:\n\n> Cc: [email protected],\n> [email protected]\n>\n> Please avoid simultaneously sending the same question to multiple lists.\n>\n> It means that people can't see each others replies and everything that\n> implies.\n>\n> On Sun, Nov 25, 2018 at 03:08:33PM +0200, Mariel Cherkassky wrote:\n> > However when I run the query without the explain analyze it takes forever\n> > to run it(More then two hours).\n> > Is there a possibility that it is related to fetching or something like\n> > that ?\n>\n> If it's a remote database, I expect that's why.\n> Maybe you can test by running the query on the DB server.\n> Or by running another variant of the query, such as:\n>\n> WITH x AS (QUERY GOES HERE) SELECT 1;\n>\n> which returns only one row but after having executed the query behind CTE,\n> as\n> optimization fence.\n>\n> Justin\n>\n\nI run it from inside the machine on the local database.For example : db=# create table rule_test as select generate_series(1,100000000);SELECT 100000000db=# explain analyze select generate_series from rule_test order by generate_series asc; QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------- Sort (cost=17763711.32..18045791.04 rows=112831890 width=4) (actual time=62677.752..100928.829 rows=100000000 loops=1) Sort Key: generate_series Sort Method: external merge Disk: 1367624kB -> Seq Scan on rule_test (cost=0.00..1570796.90 rows=112831890 width=4) (actual time=0.019..36098.463 rows=100000000 loops=1) Planning time: 0.072 ms Execution time: 107025.113 ms(6 rows)db=# create index on rule_test(generate_series);CREATE INDEXdb=# select generate_series from rule_test order by generate_series asc;db=# explain analyze select generate_series from rule_test order by generate_series asc; QUERY 
PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Index Only Scan using rule_test_generate_series_idx on rule_test (cost=0.57..2490867.57 rows=100000000 width=4) (actual time=0.103..63122.906 rows=100000000 loops=1) Heap Fetches: 100000000 Planning time: 6.682 ms Execution time: 69265.311 ms(4 rows)db=# select generate_series from rule_test order by generate_series asc;stuck for more then a hourבתאריך יום א׳, 25 בנוב׳ 2018 ב-15:30 מאת Justin Pryzby <[email protected]>:Cc: [email protected],\n [email protected]\n\nPlease avoid simultaneously sending the same question to multiple lists.\n\nIt means that people can't see each others replies and everything that implies.\n\nOn Sun, Nov 25, 2018 at 03:08:33PM +0200, Mariel Cherkassky wrote:\n> However when I run the query without the explain analyze it takes forever\n> to run it(More then two hours).\n> Is there a possibility that it is related to fetching or something like\n> that ?\n\nIf it's a remote database, I expect that's why.\nMaybe you can test by running the query on the DB server.\nOr by running another variant of the query, such as:\n\nWITH x AS (QUERY GOES HERE) SELECT 1;\n\nwhich returns only one row but after having executed the query behind CTE, as\noptimization fence.\n\nJustin",
"msg_date": "Sun, 25 Nov 2018 15:37:46 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: explain analyze faster then query"
},
{
"msg_contents": "On Sun, Nov 25, 2018 at 03:37:46PM +0200, Mariel Cherkassky wrote:\n> I run it from inside the machine on the local database.\n> For example :\n> \n> db=# create table rule_test as select generate_series(1,100000000);\n> SELECT 100000000\n\n> db=# explain analyze select generate_series from rule_test order by\n> generate_series asc;\n\nSo it's returning 100M rows to the client, which nominally will require moving\n400MB.\n\nAnd pgsql is formatting the output.\n\nI did a test with 10M rows:\n\n[pryzbyj@database ~]$ command time -v psql postgres -c 'SELECT * FROM rule_test' |wc -c&\nCommand being timed: \"psql postgres -c SELECT * FROM rule_test\"\n User time (seconds): 11.52\n Percent of CPU this job got: 78%\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:17.25\n Maximum resident set size (kbytes): 396244\n...\n170000053\n\nExplain analyze takes 0.8sec, but returning query results uses 11sec CPU time\non the *client*, needed 400MB RAM (ints now being represented as strings\ninstead of machine types), and wrote 170MB to stdout, Also, if the output is\nbeing piped to less, the data is going to be buffered there, which means your\nquery is perhaps using 4GB RAM in psql + 4GB in less..\n\nIs the server swapping ? check \"si\" and \"so\" in output of \"vmstat -w 1\"\n\nJustin\n\n",
"msg_date": "Sun, 25 Nov 2018 08:12:25 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: explain analyze faster then query"
}
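To separate server execution time from client-side formatting and buffering, psql itself offers a couple of switches; a sketch against Mariel's rule_test example (vmstat, as Justin suggests, shows whether the client is pushing the machine into swap):

  \timing on
  -- Discard the formatted output so the terminal/pager is out of the picture.
  \o /dev/null
  SELECT generate_series FROM rule_test ORDER BY generate_series ASC;
  \o
  -- Fetch through a cursor in batches instead of buffering all 100M rows in psql's memory.
  \set FETCH_COUNT 10000
  SELECT generate_series FROM rule_test ORDER BY generate_series ASC;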
] |
[
{
"msg_contents": "*Postgres server version - 9.5.10*\n*RAM - 128 GB*\n*WorkMem 64 MB*\n\n*Problematic query with explain :*\n*Query 1 (original):*\nexplain analyse SELECT myTable1.ID FROM myTable1 LEFT JOIN myTable2 ON\nmyTable1.ID=myTable2.ID WHERE ((((myTable1.bool_val = true) AND\n(myTable1.small_intval IN (1,2,3))) AND ((*myTable2.bigint_val = 1*) AND\n(myTable1.bool_val = true))) AND (((myTable1.ID >= 1000000000000) AND\n(myTable1.ID <= 1999999999999)) )) ORDER BY 1 DESC , 1 NULLS FIRST LIMIT\n11;\n \nQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1.00..8077.43 rows=11 width=8) (actual time=6440.245..6440.245\nrows=0 loops=1)\n -> Nested Loop (cost=1.00..1268000.55 *rows=1727* width=8) (actual\ntime=6440.242..6440.242 rows=0 loops=1)\n -> Index Scan Backward using myTable2_fk1_idx on myTable2 \n(cost=0.43..1259961.54 rows=1756 width=8) (actual time=6440.241..6440.241\nrows=0 loops=1)\n Filter: (bigint_val = 1)\n Rows Removed by Filter: 12701925\n -> Index Scan using myTable1_fk2_idx on myTable1 (cost=0.56..4.57\nrows=1 width=8) (never executed)\n Index Cond: ((id = myTable2.id) AND (id >=\n'1000000000000'::bigint) AND (id <= '1999999999999'::bigint))\n Filter: (bool_val AND bool_val AND (small_intval = ANY\n('{1,2,3}'::integer[])))\n Planning time: 0.654 ms\n Execution time: 6440.353 ms\n(10 rows)\n\n*The columns myTable1.ID and myTable2.bigint_val = 1 both are indexed*\n\nThe table myTable2 contains *12701952* entries. Out of which only *86227* is\nnot null and *146* entries are distinct.\n\nThe above query returns 0 rows since 'myTable2.bigint_val = 1' criteria\nsatisfies nothing. It takes 6 seconds for execution as the planner chooses*\nmyTable1.ID column's index*. \n\n\nIf I use nulls last on the order by clause of the query then the planner\nchooses this plan since it doesn't use index for *DESC NULLS LAST*. 
And the\nquery executes in milliseconds.\n\n*Query 2 (order by modified to avoid index):*\nexplain analyse SELECT myTable1.ID FROM myTable1 LEFT JOIN myTable2 ON\nmyTable1.ID=myTable2.ID WHERE ((((myTable1.bool_val = true) AND\n(myTable1.small_intval IN (1,2,3))) AND ((myTable2.bigint_val = 1) AND\n(myTable1.bool_val = true))) AND (((myTable1.ID >= 1000000000000) AND\n(myTable1.ID <= 1999999999999)) )) ORDER BY 1 DESC *NULLS LAST*, 1 NULLS\nFIRST LIMIT 11;\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=11625.07..11625.10 rows=11 width=8) (actual time=0.028..0.028\nrows=0 loops=1)\n -> Sort (cost=11625.07..11629.39 rows=1727 width=8) (actual\ntime=0.028..0.028 rows=0 loops=1)\n Sort Key: myTable1.id DESC NULLS LAST\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.85..11586.56 *rows=1727 *width=8) (actual\ntime=0.024..0.024 rows=0 loops=1)\n -> Index Scan using bigint_val_idx_px on myTable2 \n(cost=0.29..3547.55 rows=1756 width=8) (actual time=0.024..0.024 rows=0\nloops=1)\n Index Cond: (bigint_val = 1)\n -> Index Scan using myTable1_fk2_idx on myTable1 \n(cost=0.56..4.57 rows=1 width=8) (never executed)\n Index Cond: ((id = myTable2.id) AND (id >=\n'1000000000000'::bigint) AND (id <= '1999999999999'::bigint))\n Filter: (bool_val AND bool_val AND (small_intval = ANY\n('{1,2,3}'::integer[])))\n Planning time: 0.547 ms\n Execution time: 0.110 ms\n\nThe reason why postgres chooses the 1st plan over the 2nd was due to it's\ncost. *plan 1 - 8077.43 and plan 2 - 11625.10* . But obviously plan 2 is\ncorrect. \n\nI tried running *vacuum analyse* table many times, tried changing the\n*statistics target of the column to 250 (since there are only 149 distinct\nvalues)*. But none worked out. 
The planner thinks that there are *1727* rows\nthat matches the condition *myTable2.bigint_val = 1* but there are none.\n\nAlso I tried changing the limit of the 1st query, increasing the limit\nincreases the cost of the 1st plan so if I use 16 as limit for the same 1st\nquery the planner chooses the 2nd plan.\n\n*Query 3 (same as 1st but limit increased to 16):*\n\nexplain analyse SELECT myTable1.ID FROM myTable1 LEFT JOIN myTable2 ON\nmyTable1.ID=myTable2.ID WHERE ((((myTable1.bool_val = true) AND\n(myTable1.small_intval IN (1,2,3))) AND ((myTable2.bigint_val = 1) AND\n(myTable1.bool_val = true))) AND (((myTable1.ID >= 1000000000000) AND\n(myTable1.ID <= 1999999999999)) )) ORDER BY 1 DESC , 1 NULLS FIRST *LIMIT\n16*;\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=11629.74..11629.78 rows=16 width=8) (actual time=0.043..0.043\nrows=0 loops=1)\n -> Sort (cost=11629.74..11634.05 rows=1727 width=8) (actual\ntime=0.042..0.042 rows=0 loops=1)\n Sort Key: myTable1.id DESC\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.85..11586.56 rows=1727 width=8) (actual\ntime=0.036..0.036 rows=0 loops=1)\n -> Index Scan using bigint_val_idx_px on myTable2 \n(cost=0.29..3547.55 rows=1756 width=8) (actual time=0.036..0.036 rows=0\nloops=1)\n Index Cond: (bigint_val = 1)\n -> Index Scan using myTable1_fk2_idx on myTable1 \n(cost=0.56..4.57 rows=1 width=8) (never executed)\n Index Cond: ((id = myTable2.id) AND (id >=\n'1000000000000'::bigint) AND (id <= '1999999999999'::bigint))\n Filter: (bool_val AND bool_val AND (small_intval = ANY\n('{1,2,3}'::integer[])))\n Planning time: 0.601 ms\n Execution time: 0.170 ms\n(12 rows)\n\nIs there any way to make postgres use the myTable2.bigint_val's index by\nchanging/optimizing parameters?\nI tried changing cost parameters too but since bot plan uses index scan it\ndoesn't affect much. Is there any \nway to set *how the cost works based on limit*?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Mon, 26 Nov 2018 03:11:18 -0700 (MST)",
"msg_from": "Viswanath <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer choosing the wrong plan"
},
{
"msg_contents": "On Mon, Nov 26, 2018 at 5:11 AM Viswanath <[email protected]> wrote:\n\n> *Postgres server version - 9.5.10*\n> *RAM - 128 GB*\n> *WorkMem 64 MB*\n>\n> *Problematic query with explain :*\n> *Query 1 (original):*\n> explain analyse SELECT myTable1.ID FROM myTable1 LEFT JOIN myTable2 ON\n> myTable1.ID=myTable2.ID WHERE ((((myTable1.bool_val = true) AND\n> (myTable1.small_intval IN (1,2,3))) AND ((*myTable2.bigint_val = 1*) AND\n> (myTable1.bool_val = true))) AND (((myTable1.ID >= 1000000000000) AND\n> (myTable1.ID <= 1999999999999)) )) ORDER BY 1 DESC , 1 NULLS FIRST LIMIT\n> 11;\n>\n\nThere is no point doing a LEFT JOIN when the NULL-extended rows get\nfiltered out later.\n\nAlso, ordering by the same column twice is peculiar, to say the least.\n\n\n> The table myTable2 contains *12701952* entries. Out of which only *86227*\n> is\n> not null and *146* entries are distinct.\n>\n\nI assume you mean the column myTable2.ID has that many not null and\ndistinct?\n\n\n>\n> The above query returns 0 rows since 'myTable2.bigint_val = 1' criteria\n> satisfies nothing. It takes 6 seconds for execution as the planner chooses*\n> myTable1.ID column's index*.\n\n\nMore importantly, it chooses the index on myTable2.ID. It does also use\nthe index on myTable1.ID, but that is secondary.\n\nThe ideal index for this query would probably be a composite index on\nmyTable2 (bigint_val, id DESC);\nThe planner would probably choose to use that index, even if the statistics\nare off.\n\nI tried running *vacuum analyse* table many times, tried changing the\n> *statistics target of the column to 250 (since there are only 149 distinct\n> values)*. But none worked out. The planner thinks that there are *1727*\n> rows\n> that matches the condition *myTable2.bigint_val = 1* but there are none.\n>\n\nIt would interesting if you can upgrade a copy of your server to v11 and\ntry it there. We made changes to ANALYZE in that version which were\nintended to improve this situation, and it would be nice to know if it\nactually did so for your case.\n\nAlso, increasing statistics target even beyond 250 might help. If every\none of the observed value is seen at least twice, it will trigger the\nsystem to assume that it has observed all distinct values that exist. But\nif any of the values are seen exactly once, that causes it to take a\ndifferent path (the one which got modified in v11).\n\nCheers,\n\nJeff\n\nOn Mon, Nov 26, 2018 at 5:11 AM Viswanath <[email protected]> wrote:*Postgres server version - 9.5.10*\n*RAM - 128 GB*\n*WorkMem 64 MB*\n\n*Problematic query with explain :*\n*Query 1 (original):*\nexplain analyse SELECT myTable1.ID FROM myTable1 LEFT JOIN myTable2 ON\nmyTable1.ID=myTable2.ID WHERE ((((myTable1.bool_val = true) AND\n(myTable1.small_intval IN (1,2,3))) AND ((*myTable2.bigint_val = 1*) AND\n(myTable1.bool_val = true))) AND (((myTable1.ID >= 1000000000000) AND\n(myTable1.ID <= 1999999999999)) )) ORDER BY 1 DESC , 1 NULLS FIRST LIMIT\n11;There is no point doing a LEFT JOIN when the NULL-extended rows get filtered out later.Also, ordering by the same column twice is peculiar, to say the least.\nThe table myTable2 contains *12701952* entries. Out of which only *86227* is\nnot null and *146* entries are distinct.I assume you mean the column myTable2.ID has that many not null and distinct? \n\nThe above query returns 0 rows since 'myTable2.bigint_val = 1' criteria\nsatisfies nothing. 
It takes 6 seconds for execution as the planner chooses*\nmyTable1.ID column's index*.More importantly, it chooses the index on myTable2.ID. It does also use the index on myTable1.ID, but that is secondary.The ideal index for this query would probably be a composite index on myTable2 (bigint_val, id DESC);The planner would probably choose to use that index, even if the statistics are off.\nI tried running *vacuum analyse* table many times, tried changing the\n*statistics target of the column to 250 (since there are only 149 distinct\nvalues)*. But none worked out. The planner thinks that there are *1727* rows\nthat matches the condition *myTable2.bigint_val = 1* but there are none.It would interesting if you can upgrade a copy of your server to v11 and try it there. We made changes to ANALYZE in that version which were intended to improve this situation, and it would be nice to know if it actually did so for your case. Also, increasing statistics target even beyond 250 might help. If every one of the observed value is seen at least twice, it will trigger the system to assume that it has observed all distinct values that exist. But if any of the values are seen exactly once, that causes it to take a different path (the one which got modified in v11).Cheers,Jeff",
"msg_date": "Mon, 26 Nov 2018 10:53:24 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer choosing the wrong plan"
},
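A sketch of the two suggestions above, using the (obfuscated) names from the original post: the composite index Jeff describes, and a higher per-column statistics target. The value 1000 is an arbitrary choice for illustration.

  -- Composite index so the planner can satisfy both the filter and ORDER BY id DESC from one index.
  CREATE INDEX ON myTable2 (bigint_val, id DESC);

  -- Raise the statistics target for the skewed column, then re-gather statistics.
  ALTER TABLE myTable2 ALTER COLUMN bigint_val SET STATISTICS 1000;
  ANALYZE myTable2;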
{
"msg_contents": "re: It would interesting if you can upgrade a copy of your server to v11 and\ntry it there. We made changes to ANALYZE in that version which were\nintended to improve this situation, and it would be nice to know if it\nactually did so for your case. \n\nJeff, can you describe the changes that were made to ANALYZE in v11, please?\n\nI've found that running ANALYZE on v10 on the Join Order Benchmark, using\nthe default statistics target of 100, produces quite unstable results, so\nI'd be interested to hear what has been improved in v11.\n\n /Jim\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 29 Dec 2018 05:17:33 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer choosing the wrong plan"
},
{
"msg_contents": "On Sat, Dec 29, 2018 at 7:17 AM Jim Finnerty <[email protected]> wrote:\n\n\n> Jeff, can you describe the changes that were made to ANALYZE in v11,\n> please?\n>\n> I've found that running ANALYZE on v10 on the Join Order Benchmark, using\n> the default statistics target of 100, produces quite unstable results, so\n> I'd be interested to hear what has been improved in v11.\n>\n\nThere are two paths the code can take. One if all values which were\nsampled at all were sampled at least twice, and another if the\nleast-sampled value was sampled exactly once. For some distributions (like\nexponential-ish or maybe power-law), it is basically a coin flip whether\nthe least-sampled value is seen once, or more than once. If you are seeing\ninstability, it is probably for this reason. That fundamental instability\nwas not addressed in v11.\n\nOnce you follow the \"something seen exactly once\" path, it has to decide\nhow many of the values get represented in the most-common-value list. That\nis where the change was. The old method said a value had to have an\nestimated prevalence at least 25% more than the average estimated\nprevalence to get accepted into the list. The problem is that if there\nwere a few dominant values, it wouldn't be possible for any others to be\n\"over-represented\" because those few dominant values dragged the average\nprevalence up so far nothing else could qualify. What it was changed to\nwas to include a value in the most-common-value list if its\noverrepresentation was statistically significant given the sample size.\nThe most significant change (from my perspective) is that\nover-representation is measured not against all values, but only against\nall values more rare in the sample then the one currently being considered\nfor inclusion into the MCV. The old method basically said \"all rare values\nare the same\", while the new method realizes that a rare value present\n10,000 times in a billion row table is much different than a rare value\npresent 10 time in a billion row table.\n\nIt is possible that this change will fix the instability for you, because\nit could cause the \"seen exactly once\" path to generate a MCV list which is\nclose enough in size to the \"seen at least twice\" path that you won't\nnotice the difference between them anymore. But, it is also possible they\nwill still be different enough in size that it will still appear unstable.\nIt depends on your distribution of values.\n\nCheers,\n\nJeff\n\nOn Sat, Dec 29, 2018 at 7:17 AM Jim Finnerty <[email protected]> wrote: Jeff, can you describe the changes that were made to ANALYZE in v11, please?\n\nI've found that running ANALYZE on v10 on the Join Order Benchmark, using\nthe default statistics target of 100, produces quite unstable results, so\nI'd be interested to hear what has been improved in v11.There are two paths the code can take. One if all values which were sampled at all were sampled at least twice, and another if the least-sampled value was sampled exactly once. For some distributions (like exponential-ish or maybe power-law), it is basically a coin flip whether the least-sampled value is seen once, or more than once. If you are seeing instability, it is probably for this reason. That fundamental instability was not addressed in v11.Once you follow the \"something seen exactly once\" path, it has to decide how many of the values get represented in the most-common-value list. That is where the change was. 
The old method said a value had to have an estimated prevalence at least 25% more than the average estimated prevalence to get accepted into the list. The problem is that if there were a few dominant values, it wouldn't be possible for any others to be \"over-represented\" because those few dominant values dragged the average prevalence up so far nothing else could qualify. What it was changed to was to include a value in the most-common-value list if its overrepresentation was statistically significant given the sample size. The most significant change (from my perspective) is that over-representation is measured not against all values, but only against all values more rare in the sample then the one currently being considered for inclusion into the MCV. The old method basically said \"all rare values are the same\", while the new method realizes that a rare value present 10,000 times in a billion row table is much different than a rare value present 10 time in a billion row table. It is possible that this change will fix the instability for you, because it could cause the \"seen exactly once\" path to generate a MCV list which is close enough in size to the \"seen at least twice\" path that you won't notice the difference between them anymore. But, it is also possible they will still be different enough in size that it will still appear unstable. It depends on your distribution of values. Cheers,Jeff",
"msg_date": "Sat, 29 Dec 2018 12:14:19 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer choosing the wrong plan"
}
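To see what ANALYZE actually stored for the column in question, and hence whether the MCV list differs across versions or statistics targets, the pg_stats view can be inspected directly; a sketch using the names from the earlier posts:

  SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
  WHERE tablename = 'mytable2' AND attname = 'bigint_val';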
] |
[
{
"msg_contents": "Hi,\nI checked pg_stat_all_tables and I see that the last_autovacuum is empty\nfor all the tables but in the database`s log I see that autovacuum was\nrunning and deleted records(not all of them but still deleted...).\nCan someone explain why the pg_stat_all_tables doesnt show the real data ?\n\nHi,I checked pg_stat_all_tables and I see that the last_autovacuum is empty for all the tables but in the database`s log I see that autovacuum was running and deleted records(not all of them but still deleted...).Can someone explain why the pg_stat_all_tables doesnt show the real data ?",
"msg_date": "Mon, 26 Nov 2018 17:12:02 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum run but last_autovacuum is empty"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> I checked pg_stat_all_tables and I see that the last_autovacuum is empty\n> for all the tables but in the database`s log I see that autovacuum was\n> running and deleted records(not all of them but still deleted...).\n> Can someone explain why the pg_stat_all_tables doesnt show the real data ?\n\nHm, are you sure the stats collector is working at all? Are other fields\nin the pg_stat data updating?\n\n(If not, you may have a problem with firewall rules blocking traffic\non the stats loopback port.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 26 Nov 2018 10:24:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum run but last_autovacuum is empty"
},
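A simple way to check whether the statistics collector is updating anything at all, per Tom's question: run a query such as the following twice, with some table activity in between, and compare the counters (all columns are standard in pg_stat_user_tables).

  SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_live_tup, n_dead_tup,
         last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
  ORDER BY greatest(last_autovacuum, last_autoanalyze) DESC NULLS LAST;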
{
"msg_contents": "I think that something is wrong because some tables has a value in\nlast_autoanalyze and some dont while most of them were analyzed. For\nexample :\n\n relname | n_live_tup | n_dead_tup |\nlast_autovacuum | last_autoanalyze\n-----------------------------+------------+------------+-------------------------------+-------------------------------\n tbl1 | 975 | 37 |\n | 2018-11-26 16:15:51.557115+02\n tbl2 | 4283798 | 739345 | 2018-11-26\n17:43:46.663709+02 | 2018-11-26 17:44:38.234908+02\n tbl3 | 3726015 | 596362 | 2018-11-26\n17:37:33.726438+02 | 2018-11-26 17:48:30.126623+02\n pg_language | 0 | 0 |\n |\n pg_toast_19018 | 0 | 0 |\n |\n pg_toast_13115 | 0 | 0 |\n |\n tbl4 | 0 | 0 |\n |\n\n\n log data :\n[root@pg_log]# cat postgresql-Sun.log | grep automatic | awk {'print $6\n$7,$10'} | grep db1 | sort -n | uniq -c\n 1 automaticanalyze \"db1.public.attributes\"\n 4 automaticanalyze \"db1.public.tbl3\"\n 3 automaticanalyze \"db1.public.tbl3\"\n 11 automaticanalyze \"db1.public.tbl2\"\n 1 automaticanalyze \"db1.public.tbl4\"\n 5 automaticanalyze \"db1.public.tbl1\"\n 1 automaticvacuum \"db1.pg_catalog.pg_statistic\":\n 5 automaticvacuum \"db1.pg_toast.pg_toast_19239\":\n 1 automaticvacuum \"db1.pg_toast.pg_toast_19420\":\n 2 automaticvacuum \"db1.pg_toast.pg_toast_2619\":\n 3 automaticvacuum \"db1.public.tbl3\":\n 9 automaticvacuum \"db1.public.tbl2\":\n 1 automaticvacuum \"db1.public.tbl4\":\n\n\nI just changed the name of the tables but all other values are accurate.\n\n\n Firewall rules should block all stats not just some of them right ?\nSomething is weird...\n\n\n\n\n\nבתאריך יום ב׳, 26 בנוב׳ 2018 ב-17:25 מאת Tom Lane <[email protected]\n>:\n\n> Mariel Cherkassky <[email protected]> writes:\n> > I checked pg_stat_all_tables and I see that the last_autovacuum is empty\n> > for all the tables but in the database`s log I see that autovacuum was\n> > running and deleted records(not all of them but still deleted...).\n> > Can someone explain why the pg_stat_all_tables doesnt show the real data\n> ?\n>\n> Hm, are you sure the stats collector is working at all? Are other fields\n> in the pg_stat data updating?\n>\n> (If not, you may have a problem with firewall rules blocking traffic\n> on the stats loopback port.)\n>\n> regards, tom lane\n>\n\nI think that something is wrong because some tables has a value in last_autoanalyze and some dont while most of them were analyzed. 
For example : relname | n_live_tup | n_dead_tup | last_autovacuum | last_autoanalyze-----------------------------+------------+------------+-------------------------------+------------------------------- tbl1 | 975 | 37 | | 2018-11-26 16:15:51.557115+02 tbl2 | 4283798 | 739345 | 2018-11-26 17:43:46.663709+02 | 2018-11-26 17:44:38.234908+02 tbl3 | 3726015 | 596362 | 2018-11-26 17:37:33.726438+02 | 2018-11-26 17:48:30.126623+02 pg_language | 0 | 0 | | pg_toast_19018 | 0 | 0 | | pg_toast_13115 | 0 | 0 | | tbl4 | 0 | 0 | | log data : [root@pg_log]# cat postgresql-Sun.log | grep automatic | awk {'print $6 $7,$10'} | grep db1 | sort -n | uniq -c 1 automaticanalyze \"db1.public.attributes\" 4 automaticanalyze \"db1.public.tbl3\" 3 automaticanalyze \"db1.public.tbl3\" 11 automaticanalyze \"db1.public.tbl2\" 1 automaticanalyze \"db1.public.tbl4\" 5 automaticanalyze \"db1.public.tbl1\" 1 automaticvacuum \"db1.pg_catalog.pg_statistic\": 5 automaticvacuum \"db1.pg_toast.pg_toast_19239\": 1 automaticvacuum \"db1.pg_toast.pg_toast_19420\": 2 automaticvacuum \"db1.pg_toast.pg_toast_2619\": 3 automaticvacuum \"db1.public.tbl3\": 9 automaticvacuum \"db1.public.tbl2\": 1 automaticvacuum \"db1.public.tbl4\":I just changed the name of the tables but all other values are accurate. Firewall rules should block all stats not just some of them right ? Something is weird...בתאריך יום ב׳, 26 בנוב׳ 2018 ב-17:25 מאת Tom Lane <[email protected]>:Mariel Cherkassky <[email protected]> writes:\n> I checked pg_stat_all_tables and I see that the last_autovacuum is empty\n> for all the tables but in the database`s log I see that autovacuum was\n> running and deleted records(not all of them but still deleted...).\n> Can someone explain why the pg_stat_all_tables doesnt show the real data ?\n\nHm, are you sure the stats collector is working at all? Are other fields\nin the pg_stat data updating?\n\n(If not, you may have a problem with firewall rules blocking traffic\non the stats loopback port.)\n\n regards, tom lane",
"msg_date": "Mon, 26 Nov 2018 17:51:22 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum run but last_autovacuum is empty"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> I think that something is wrong because some tables has a value in\n> last_autoanalyze and some dont while most of them were analyzed.\n\nIt's possible that autovacuum abandoned some vacuum attempts without\ngetting to the end because of lock conflicts.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 26 Nov 2018 11:10:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum run but last_autovacuum is empty"
}
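Two sketches for spotting autovacuum workers that start but get interrupted, as Tom suggests. The wait_event columns exist in 9.6 and later, and turning on per-run logging needs a configuration reload (and superuser for ALTER SYSTEM).

  -- Autovacuum workers currently running and what they are waiting on.
  SELECT pid, wait_event_type, wait_event, query
  FROM pg_stat_activity
  WHERE query LIKE 'autovacuum:%';

  -- Log every autovacuum/autoanalyze run, including ones that are canceled before finishing.
  ALTER SYSTEM SET log_autovacuum_min_duration = 0;
  SELECT pg_reload_conf();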
] |
[
{
"msg_contents": "Hi,\n\nI'm running performance tests for my application at version 11.1 and\nencountered\nqueries with high planning time compared to the same planning, running at\nversions 10.5 and 11.0.\n\n-- Day and store where the highest price variation of a given product\noccurred in a given period\nexplain analyze select l_variacao.fecha, l_variacao.loccd as \"Almacen\",\nl_variacao.pant as \"Precio anterior\", l_variacao.patual as \"Precio atual\",\nmax_variacao.var_max as \"Variación máxima (Agua)\"\nfrom (select p.fecha, p.loccd, p.plusalesprice patual, da.plusalesprice\npant, abs(p.plusalesprice - da.plusalesprice) as var\n from precio p, (select p.fecha, p.plusalesprice, p.loccd\nfrom precio p\nwhere p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da\n\n where p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2\n and p.loccd = da.loccd and p.fecha = da.fecha + 1) l_variacao,\n (select max(abs(p.plusalesprice - da.plusalesprice)) as var_max\n from precio p, (select p.fecha, p.plusalesprice, p.loccd\nfrom precio p\n where p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2)\nda\n where p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2\n and p.loccd = da.loccd and p.fecha = da.fecha + 1) max_variacao\nwhere max_variacao.var_max = l_variacao.var;\n\nFormatted explain: https://explain.depesz.com/s/mUkP\n\nAnd below are the times generated by EXPLAIN ANALYZE:\n\n10.5\nPlanning time: 126.080 ms\nExecution time: 2.306 ms\n\n11.0\nPlanning Time: 7.238 ms\nPlanning Time: 2.638 ms\n\n11.5\nPlanning Time: 15138.533 ms\nExecution Time: 2.310 ms\n\nAll 3 EXPLAIN show exactly the same plan, but version 11.1 is consuming\nabout 15s more to\nperform the planning.\n\nBelow are some additional OS information:\nCPU: 16\nRAM: 128GB\nDisk: SSD\nOS: CentOS Linux release 7.5.1804\n\nIs there any configuration I have to do in 11.1 to achieve the same\nplanning performance\nas in previous versions?\n\nRegards,\n\nSanyo Capobiango\n\nHi,I'm running performance tests for my application at version 11.1 and encounteredqueries with high planning time compared to the same planning, running at versions 10.5 and 11.0. 
-- Day and store where the highest price variation of a given product occurred in a given periodexplain analyze select l_variacao.fecha, l_variacao.loccd as \"Almacen\", l_variacao.pant as \"Precio anterior\", l_variacao.patual as \"Precio atual\", max_variacao.var_max as \"Variación máxima (Agua)\" from (select p.fecha, p.loccd, p.plusalesprice patual, da.plusalesprice pant, abs(p.plusalesprice - da.plusalesprice) as var from precio p, (select p.fecha, p.plusalesprice, p.loccd from precio p where p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da where p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha + 1) l_variacao, (select max(abs(p.plusalesprice - da.plusalesprice)) as var_max from precio p, (select p.fecha, p.plusalesprice, p.loccd from precio p where p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da where p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha + 1) max_variacao where max_variacao.var_max = l_variacao.var;Formatted explain: https://explain.depesz.com/s/mUkPAnd below are the times generated by EXPLAIN ANALYZE:10.5Planning time: 126.080 msExecution time: 2.306 ms11.0Planning Time: 7.238 msPlanning Time: 2.638 ms11.5Planning Time: 15138.533 msExecution Time: 2.310 msAll 3 EXPLAIN show exactly the same plan, but version 11.1 is consuming about 15s more toperform the planning. Below are some additional OS information:CPU: 16RAM: 128GBDisk: SSDOS: CentOS Linux release 7.5.1804Is there any configuration I have to do in 11.1 to achieve the same planning performance as in previous versions?Regards,Sanyo Capobiango",
"msg_date": "Tue, 27 Nov 2018 12:16:41 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query with high planning time at version 11.1 compared versions 10.5\n and 11.0"
},
{
"msg_contents": "Sanyo Moura <[email protected]> writes:\n> And below are the times generated by EXPLAIN ANALYZE:\n\n> 10.5\n> Planning time: 126.080 ms\n> Execution time: 2.306 ms\n\n> 11.0\n> Planning Time: 7.238 ms\n> Planning Time: 2.638 ms\n\n> 11.5 (I assume you mean 11.1 here)\n> Planning Time: 15138.533 ms\n> Execution Time: 2.310 ms\n\nThere were no changes between 11.0 and 11.1 that look like they'd affect\nplanning time. Nor does it seem particularly credible that planning time\nwould have dropped by a factor of 15 between 10.x and 11.x, especially\nnot given that the resulting plan didn't change. I think you've got some\nexternal factor causing long planning times --- maybe something taking an\nexclusive lock on one of the tables, or on pg_statistic?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 27 Nov 2018 10:00:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
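If an external lock really were the cause, it should be visible while the slow planning is happening; a sketch (pg_blocking_pids() exists in 9.6 and later):

  -- Ungranted locks at the moment of the slow EXPLAIN.
  SELECT locktype, relation::regclass, mode, granted, pid
  FROM pg_locks
  WHERE NOT granted;

  -- Sessions that are being blocked, and by whom.
  SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
  FROM pg_stat_activity
  WHERE cardinality(pg_blocking_pids(pid)) > 0;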
{
"msg_contents": "Hello Tom,\n\nBoth versions 10.5 and 11.1 are running on the same test server.\nWhat I did was migrate the database from 10.5 to 11.1 via pg_upgrade. After\nsuccessful execution, I performed \"vacuumdb --all --analyze-in-stages\".\n\nThanks,\n\nSanyo Capobiango\n\nEm ter, 27 de nov de 2018 às 13:00, Tom Lane <[email protected]> escreveu:\n\n> Sanyo Moura <[email protected]> writes:\n> > And below are the times generated by EXPLAIN ANALYZE:\n>\n> > 10.5\n> > Planning time: 126.080 ms\n> > Execution time: 2.306 ms\n>\n> > 11.0\n> > Planning Time: 7.238 ms\n> > Planning Time: 2.638 ms\n>\n> > 11.5 (I assume you mean 11.1 here)\n> > Planning Time: 15138.533 ms\n> > Execution Time: 2.310 ms\n>\n> There were no changes between 11.0 and 11.1 that look like they'd affect\n> planning time. Nor does it seem particularly credible that planning time\n> would have dropped by a factor of 15 between 10.x and 11.x, especially\n> not given that the resulting plan didn't change. I think you've got some\n> external factor causing long planning times --- maybe something taking an\n> exclusive lock on one of the tables, or on pg_statistic?\n>\n> regards, tom lane\n>\n\nHello Tom,Both versions 10.5 and 11.1 are running on the same test server.What I did was migrate the database from 10.5 to 11.1 via pg_upgrade. After successful execution, I performed \"vacuumdb --all --analyze-in-stages\".Thanks,Sanyo CapobiangoEm ter, 27 de nov de 2018 às 13:00, Tom Lane <[email protected]> escreveu:Sanyo Moura <[email protected]> writes:\n> And below are the times generated by EXPLAIN ANALYZE:\n\n> 10.5\n> Planning time: 126.080 ms\n> Execution time: 2.306 ms\n\n> 11.0\n> Planning Time: 7.238 ms\n> Planning Time: 2.638 ms\n\n> 11.5 (I assume you mean 11.1 here)\n> Planning Time: 15138.533 ms\n> Execution Time: 2.310 ms\n\nThere were no changes between 11.0 and 11.1 that look like they'd affect\nplanning time. Nor does it seem particularly credible that planning time\nwould have dropped by a factor of 15 between 10.x and 11.x, especially\nnot given that the resulting plan didn't change. I think you've got some\nexternal factor causing long planning times --- maybe something taking an\nexclusive lock on one of the tables, or on pg_statistic?\n\n regards, tom lane",
"msg_date": "Tue, 27 Nov 2018 13:11:49 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:\n\n> Hi,\n>\n> I'm running performance tests for my application at version 11.1 and\n> encountered\n> queries with high planning time compared to the same planning, running at\n> versions 10.5 and 11.0.\n>\n\nCan you reproduce the regression if the tables are empty? If so, can you\nshare the create script that creates the tables?\n\nCheers,\n\nJeff\n\n>\n\nOn Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:Hi,I'm running performance tests for my application at version 11.1 and encounteredqueries with high planning time compared to the same planning, running at versions 10.5 and 11.0.Can you reproduce the regression if the tables are empty? If so, can you share the create script that creates the tables?Cheers,Jeff",
"msg_date": "Tue, 27 Nov 2018 14:34:53 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Hello Jeff,\n\nMy table (PRICE) is partitioned and contains 730 partitions. Each partition\ncontains 1 day of data.\nI performed the same test now with restriction (WHERE) in only 1 day (1\npartition), but doing SELECT in the virtual table PRICE.\nI got the same delay in planning.\nHowever, when I changed my query to use the partition directly, the plan\nran instantaneously.\nI believe the problem should be in some internal code related to scanning\nthe partitions for the planning.\nDoes it make sense?\n\nThanks,\n\nSanyo Capobiango\n\nEm ter, 27 de nov de 2018 às 17:35, Jeff Janes <[email protected]>\nescreveu:\n\n>\n>\n> On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> I'm running performance tests for my application at version 11.1 and\n>> encountered\n>> queries with high planning time compared to the same planning, running at\n>> versions 10.5 and 11.0.\n>>\n>\n> Can you reproduce the regression if the tables are empty? If so, can you\n> share the create script that creates the tables?\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\nHello Jeff,My table (PRICE) is partitioned and contains 730 partitions. Each partition contains 1 day of data. I performed the same test now with restriction (WHERE) in only 1 day (1 partition), but doing SELECT in the virtual table PRICE. I got the same delay in planning. However, when I changed my query to use the partition directly, the plan ran instantaneously. I believe the problem should be in some internal code related to scanning the partitions for the planning.Does it make sense?Thanks,Sanyo CapobiangoEm ter, 27 de nov de 2018 às 17:35, Jeff Janes <[email protected]> escreveu:On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:Hi,I'm running performance tests for my application at version 11.1 and encounteredqueries with high planning time compared to the same planning, running at versions 10.5 and 11.0.Can you reproduce the regression if the tables are empty? If so, can you share the create script that creates the tables?Cheers,Jeff",
"msg_date": "Tue, 27 Nov 2018 18:20:21 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
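One way to confirm that the time goes into planning over the partitioned parent rather than into execution is to compare plain EXPLAIN (which does not run the query) against the parent and against a single partition; a sketch using the names from this thread (the SUMMARY option, available in 10 and 11, prints the planning time without ANALYZE):

  \timing on
  EXPLAIN (SUMMARY) SELECT * FROM precio
    WHERE fecha BETWEEN '2017-03-01' AND '2017-03-02' AND pluid = 2;
  EXPLAIN (SUMMARY) SELECT * FROM precio_20170301
    WHERE pluid = 2;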
{
"msg_contents": "Hello again Jeff,\n\nBelow is the script that creates one partition table:\n\nCREATE TABLE public.precio_20170301 PARTITION OF public.precio\n(\n CONSTRAINT precio_20170301_pkey PRIMARY KEY (fecha, pluid, loccd),\n CONSTRAINT precio_20170301_almacen_fk FOREIGN KEY (loccd)\n REFERENCES public.almacen (loccd) MATCH SIMPLE\n ON UPDATE NO ACTION\n ON DELETE NO ACTION,\n CONSTRAINT precio_20170301_producto_fk FOREIGN KEY (pluid)\n REFERENCES public.producto (pluid) MATCH SIMPLE\n ON UPDATE NO ACTION\n ON DELETE NO ACTION\n)\n FOR VALUES FROM ('2017-03-01') TO ('2017-03-02')\nTABLESPACE pg_default;\n\nI reproduce same test in a empty partition and got same result (15s) at\nplanning\ntime when I used the virtual table (PRECIO), and an instantly EXPLAIN when\nI used\nthe partition directly.\n\nRegards,\n\nSanyo Capobiango\n\nEm ter, 27 de nov de 2018 às 17:35, Jeff Janes <[email protected]>\nescreveu:\n\n>\n>\n> On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> I'm running performance tests for my application at version 11.1 and\n>> encountered\n>> queries with high planning time compared to the same planning, running at\n>> versions 10.5 and 11.0.\n>>\n>\n> Can you reproduce the regression if the tables are empty? If so, can you\n> share the create script that creates the tables?\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\nHello again Jeff,Below is the script that creates one partition table:CREATE TABLE public.precio_20170301 PARTITION OF public.precio( CONSTRAINT precio_20170301_pkey PRIMARY KEY (fecha, pluid, loccd), CONSTRAINT precio_20170301_almacen_fk FOREIGN KEY (loccd) REFERENCES public.almacen (loccd) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT precio_20170301_producto_fk FOREIGN KEY (pluid) REFERENCES public.producto (pluid) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION) FOR VALUES FROM ('2017-03-01') TO ('2017-03-02')TABLESPACE pg_default;I reproduce same test in a empty partition and got same result (15s) at planningtime when I used the virtual table (PRECIO), and an instantly EXPLAIN when I usedthe partition directly. Regards,Sanyo CapobiangoEm ter, 27 de nov de 2018 às 17:35, Jeff Janes <[email protected]> escreveu:On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:Hi,I'm running performance tests for my application at version 11.1 and encounteredqueries with high planning time compared to the same planning, running at versions 10.5 and 11.0.Can you reproduce the regression if the tables are empty? If so, can you share the create script that creates the tables?Cheers,Jeff",
"msg_date": "Tue, 27 Nov 2018 18:30:04 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Tue, Nov 27, 2018 at 06:30:04PM -0200, Sanyo Moura wrote:\n>>> I'm running performance tests for my application at version 11.1 and\n>>> encountered\n>>> queries with high planning time compared to the same planning, running at\n>>> versions 10.5 and 11.0.\n> \n> Below is the script that creates one partition table:\n\nWould you send the CREATE TABLE or \\d for precio, produto, and almacen ?\n\nAre the 2 referenced tables also empty or can you reproduce the problem if they\nare (like in a separate database) ?\n\nDo you still have an instance running 10.5 ? Or did you find the planning time\nin logs (like autoexplain) ?\n\nAre any of your catalog tables bloated or indexes fragmented ?\nI assume catalog tables and their indices should all be much smaller than\nshared_buffers.\n\nSELECT relpages, relname FROM pg_class WHERE relnamespace='pg_catalog'::regnamespace ORDER BY 1 DESC LIMIT 9;\n\nCan you compare pg_settings between the servers ? Either from a live server or\nhistoric postgresql.conf or from memory if need be.\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nJustin\n\n",
"msg_date": "Tue, 27 Nov 2018 15:10:22 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
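The pg_settings comparison Justin asks for can be pulled from each server with roughly the query from the wiki page he links, which lists only the settings that differ from their defaults:

  SELECT name, current_setting(name) AS setting, source
  FROM pg_settings
  WHERE source NOT IN ('default', 'override')
  ORDER BY name;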
{
"msg_contents": "On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:\n>>> I'm running performance tests for my application at version 11.1 and\n>>> encountered queries with high planning time compared to the same planning,\n>>> running at versions 10.5 and 11.0.\n\nI was able to reproduce this behavior.\n\nFor my version of the query:\n\nOn PG10.6\n| Result (cost=0.00..0.00 rows=0 width=24)\n| One-Time Filter: false\n|Time: 408.335 ms\n\nOn PG11.1\n| Result (cost=0.00..0.00 rows=0 width=24)\n| One-Time Filter: false\n|Time: 37487.364 ms (00:37.487)\n\nPerf shows me:\n 47.83% postmaster postgres [.] bms_overlap\n 45.30% postmaster postgres [.] add_child_rel_equivalences\n 1.26% postmaster postgres [.] generate_join_implied_equalities_for_ecs\n\nCREATE TABLE producto (pluid int unique);\nCREATE TABLE almacen (loccd int unique);\nCREATE TABLE precio(fecha timestamp, pluid int, loccd int, plusalesprice int) PARTITION BY RANGE (fecha);\nSELECT 'CREATE TABLE public.precio_'||i||' PARTITION OF public.precio (PRIMARY KEY (fecha, pluid, loccd), CONSTRAINT precio_20170301_almacen_fk FOREIGN KEY (loccd) REFERENCES public.almacen (loccd) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT precio_20170301_producto_fk FOREIGN KEY (pluid) REFERENCES public.producto (pluid) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION) FOR VALUES FROM ('''||a||''')TO('''||b||''') TABLESPACE pg_default' FROM (SELECT '1990-01-01'::timestamp+(i||'days')::interval a, '1990-01-02'::timestamp+(i||'days')::interval b, i FROM generate_series(1,999) i)x;\n\n\\gexec\n\\timing\nexplain SELECT l_variacao.fecha, l_variacao.loccd , l_variacao.pant , l_variacao.patual , max_variacao.var_max FROM (SELECT p.fecha, p.loccd, p.plusalesprice patual, da.plusalesprice pant, abs(p.plusalesprice - da.plusalesprice) as var from precio p, (SELECT p.fecha, p.plusalesprice, p.loccd from precio p WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha) l_variacao, (SELECT max(abs(p.plusalesprice - da.plusalesprice)) as var_max from precio p, (SELECT p.fecha, p.plusalesprice, p.loccd from precio p WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha) max_variacao WHERE max_variacao.var_max = l_variacao.var;\n\nSince I don't know the original table definitions, I removed two \"+1\" from the\ngiven sql to avoid: \"ERROR: operator does not exist: timestamp without time zone + integer\"\n\nJustin\n\n",
"msg_date": "Tue, 27 Nov 2018 18:44:02 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Thanks a lot Justin,\n\nAt this moment I can not help you with what you asked for, but tomorrow\nmorning I will send other information.\nI believe Postgres 11.1 is somehow taking a lot of planning time when\nanalyzing which partitions are needed in execution.\n\nSanyo\n\nEm ter, 27 de nov de 2018 às 22:44, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:\n> >>> I'm running performance tests for my application at version 11.1 and\n> >>> encountered queries with high planning time compared to the same\n> planning,\n> >>> running at versions 10.5 and 11.0.\n>\n> I was able to reproduce this behavior.\n>\n> For my version of the query:\n>\n> On PG10.6\n> | Result (cost=0.00..0.00 rows=0 width=24)\n> | One-Time Filter: false\n> |Time: 408.335 ms\n>\n> On PG11.1\n> | Result (cost=0.00..0.00 rows=0 width=24)\n> | One-Time Filter: false\n> |Time: 37487.364 ms (00:37.487)\n>\n> Perf shows me:\n> 47.83% postmaster postgres [.] bms_overlap\n> 45.30% postmaster postgres [.] add_child_rel_equivalences\n> 1.26% postmaster postgres [.]\n> generate_join_implied_equalities_for_ecs\n>\n> CREATE TABLE producto (pluid int unique);\n> CREATE TABLE almacen (loccd int unique);\n> CREATE TABLE precio(fecha timestamp, pluid int, loccd int, plusalesprice\n> int) PARTITION BY RANGE (fecha);\n> SELECT 'CREATE TABLE public.precio_'||i||' PARTITION OF public.precio\n> (PRIMARY KEY (fecha, pluid, loccd), CONSTRAINT precio_20170301_almacen_fk\n> FOREIGN KEY (loccd) REFERENCES public.almacen (loccd) MATCH SIMPLE ON\n> UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT\n> precio_20170301_producto_fk FOREIGN KEY (pluid) REFERENCES public.producto\n> (pluid) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION) FOR VALUES\n> FROM ('''||a||''')TO('''||b||''') TABLESPACE pg_default' FROM (SELECT\n> '1990-01-01'::timestamp+(i||'days')::interval a,\n> '1990-01-02'::timestamp+(i||'days')::interval b, i FROM\n> generate_series(1,999) i)x;\n>\n> \\gexec\n> \\timing\n> explain SELECT l_variacao.fecha, l_variacao.loccd , l_variacao.pant ,\n> l_variacao.patual , max_variacao.var_max FROM (SELECT p.fecha, p.loccd,\n> p.plusalesprice patual, da.plusalesprice pant, abs(p.plusalesprice -\n> da.plusalesprice) as var from precio p, (SELECT p.fecha, p.plusalesprice,\n> p.loccd from precio p WHERE p.fecha between '2017-03-01' and '2017-03-02'\n> and p.pluid = 2) da WHERE p.fecha between '2017-03-01' and '2017-03-02' and\n> p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha) l_variacao,\n> (SELECT max(abs(p.plusalesprice - da.plusalesprice)) as var_max from precio\n> p, (SELECT p.fecha, p.plusalesprice, p.loccd from precio p WHERE p.fecha\n> between '2017-03-01' and '2017-03-02' and p.pluid = 2) da WHERE p.fecha\n> between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd =\n> da.loccd and p.fecha = da.fecha) max_variacao WHERE max_variacao.var_max =\n> l_variacao.var;\n>\n> Since I don't know the original table definitions, I removed two \"+1\" from\n> the\n> given sql to avoid: \"ERROR: operator does not exist: timestamp without\n> time zone + integer\"\n>\n> Justin\n>\n\nThanks a lot Justin,At this moment I can not help you with what you asked for, but tomorrow morning I will send other information.I believe Postgres 11.1 is somehow taking a lot of planning time when analyzing which partitions are needed in execution.SanyoEm ter, 27 de nov de 2018 às 22:44, Justin Pryzby <[email protected]> escreveu:On Tue, Nov 27, 2018 at 9:17 AM Sanyo 
Moura <[email protected]> wrote:\n>>> I'm running performance tests for my application at version 11.1 and\n>>> encountered queries with high planning time compared to the same planning,\n>>> running at versions 10.5 and 11.0.\n\nI was able to reproduce this behavior.\n\nFor my version of the query:\n\nOn PG10.6\n| Result (cost=0.00..0.00 rows=0 width=24)\n| One-Time Filter: false\n|Time: 408.335 ms\n\nOn PG11.1\n| Result (cost=0.00..0.00 rows=0 width=24)\n| One-Time Filter: false\n|Time: 37487.364 ms (00:37.487)\n\nPerf shows me:\n 47.83% postmaster postgres [.] bms_overlap\n 45.30% postmaster postgres [.] add_child_rel_equivalences\n 1.26% postmaster postgres [.] generate_join_implied_equalities_for_ecs\n\nCREATE TABLE producto (pluid int unique);\nCREATE TABLE almacen (loccd int unique);\nCREATE TABLE precio(fecha timestamp, pluid int, loccd int, plusalesprice int) PARTITION BY RANGE (fecha);\nSELECT 'CREATE TABLE public.precio_'||i||' PARTITION OF public.precio (PRIMARY KEY (fecha, pluid, loccd), CONSTRAINT precio_20170301_almacen_fk FOREIGN KEY (loccd) REFERENCES public.almacen (loccd) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT precio_20170301_producto_fk FOREIGN KEY (pluid) REFERENCES public.producto (pluid) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION) FOR VALUES FROM ('''||a||''')TO('''||b||''') TABLESPACE pg_default' FROM (SELECT '1990-01-01'::timestamp+(i||'days')::interval a, '1990-01-02'::timestamp+(i||'days')::interval b, i FROM generate_series(1,999) i)x;\n\n\\gexec\n\\timing\nexplain SELECT l_variacao.fecha, l_variacao.loccd , l_variacao.pant , l_variacao.patual , max_variacao.var_max FROM (SELECT p.fecha, p.loccd, p.plusalesprice patual, da.plusalesprice pant, abs(p.plusalesprice - da.plusalesprice) as var from precio p, (SELECT p.fecha, p.plusalesprice, p.loccd from precio p WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha) l_variacao, (SELECT max(abs(p.plusalesprice - da.plusalesprice)) as var_max from precio p, (SELECT p.fecha, p.plusalesprice, p.loccd from precio p WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da WHERE p.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha) max_variacao WHERE max_variacao.var_max = l_variacao.var;\n\nSince I don't know the original table definitions, I removed two \"+1\" from the\ngiven sql to avoid: \"ERROR: operator does not exist: timestamp without time zone + integer\"\n\nJustin",
"msg_date": "Tue, 27 Nov 2018 23:00:39 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Tue, Nov 27, 2018 at 06:44:02PM -0600, Justin Pryzby wrote:\n> On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:\n> >>> I'm running performance tests for my application at version 11.1 and\n> >>> encountered queries with high planning time compared to the same planning,\n> >>> running at versions 10.5 and 11.0.\n> \n> I was able to reproduce this behavior.\n\nI take that back, in part..\n\nMy query (with One-Time Filter: false) has high planning time under 11.0, also:\n\n| Result (cost=0.00..0.00 rows=0 width=24)\n| One-Time Filter: false\n|Time: 48335.098 ms (00:48.335)\n\nJustin\n\n",
"msg_date": "Tue, 27 Nov 2018 19:27:13 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "I currently have version 11.1 and 10.6 running on the same linux server. In\nboth Postgres the \"Price\" table has 730 partitions.\nHowever, in the test I did in version 11.0, \"Precio\" is partitioned into\nonly 21 partitions. So it really is a problem introduced in version 11, and\nit has to do with a large number of partitions in a table.\n\nSanyo\n\nEm ter, 27 de nov de 2018 às 23:27, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Tue, Nov 27, 2018 at 06:44:02PM -0600, Justin Pryzby wrote:\n> > On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]>\n> wrote:\n> > >>> I'm running performance tests for my application at version 11.1 and\n> > >>> encountered queries with high planning time compared to the same\n> planning,\n> > >>> running at versions 10.5 and 11.0.\n> >\n> > I was able to reproduce this behavior.\n>\n> I take that back, in part..\n>\n> My query (with One-Time Filter: false) has high planning time under 11.0,\n> also:\n>\n> | Result (cost=0.00..0.00 rows=0 width=24)\n> | One-Time Filter: false\n> |Time: 48335.098 ms (00:48.335)\n>\n> Justin\n>\n\nI currently have version 11.1 and 10.6 running on the same linux server. In both Postgres the \"Price\" table has 730 partitions.However, in the test I did in version 11.0, \"Precio\" is partitioned into only 21 partitions. So it really is a problem introduced in version 11, and it has to do with a large number of partitions in a table.SanyoEm ter, 27 de nov de 2018 às 23:27, Justin Pryzby <[email protected]> escreveu:On Tue, Nov 27, 2018 at 06:44:02PM -0600, Justin Pryzby wrote:\n> On Tue, Nov 27, 2018 at 9:17 AM Sanyo Moura <[email protected]> wrote:\n> >>> I'm running performance tests for my application at version 11.1 and\n> >>> encountered queries with high planning time compared to the same planning,\n> >>> running at versions 10.5 and 11.0.\n> \n> I was able to reproduce this behavior.\n\nI take that back, in part..\n\nMy query (with One-Time Filter: false) has high planning time under 11.0, also:\n\n| Result (cost=0.00..0.00 rows=0 width=24)\n| One-Time Filter: false\n|Time: 48335.098 ms (00:48.335)\n\nJustin",
"msg_date": "Tue, 27 Nov 2018 23:36:09 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n> However, in the test I did in version 11.0, \"Precio\" is partitioned into\n> only 21 partitions. So it really is a problem introduced in version 11, and\n> it has to do with a large number of partitions in a table.\n\nThanks for confirming. My test works fine without FK CONSTRAINTs; as you said,\nit's an issue of unreasonably high overhead of many partitions.\n\nSELECT 'CREATE TABLE public.precio_'||i||' PARTITION OF public.precio (PRIMARY KEY (fecha, pluid, loccd) ) FOR VALUES FROM ('''||a||''')TO('''||b||''') ' FROM (SELECT '1990-01-01'::timestamp+(i||'days')::interval a, '1990-01-02'::timestamp+(i||'days')::interval b, i FROM generate_series(1,999) i)x;\n\nThis issue was discussed here:\nhttps://www.postgresql.org/message-id/flat/94dd7a4b-5e50-0712-911d-2278e055c622%40dalibo.com\n\nWhich culminated in this commit.\n\n|commit 7d872c91a3f9d49b56117557cdbb0c3d4c620687\n|Author: Alvaro Herrera <[email protected]>\n|Date: Tue Jun 26 10:35:26 2018 -0400\n|\n| Allow direct lookups of AppendRelInfo by child relid\n\nI tried with PG11.1 the test given here:\nhttps://www.postgresql.org/message-id/CAKJS1f8qkcwr2DULd%2B04rBmubHkKzp4abuFykgoPUsVM-4-38g%40mail.gmail.com\nwith 999 partitions: Planning Time: 50.142 ms\nwith 9999 partitions: Planning Time: 239.284 ms\n..close enough to what was reported.\n\nSo it seems there's something about your query which isn't handled as intended.\n\nAdding relevant parties to Cc - find current thread here:\nhttps://www.postgresql.org/message-id/flat/CAO698qZnrxoZu7MEtfiJmpmUtz3AVYFVnwzR%2BpqjF%3DrmKBTgpw%40mail.gmail.com\n\nJustin\n\n",
"msg_date": "Tue, 27 Nov 2018 21:01:29 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
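As a quick sanity check on the reproduction setup above (a generic catalog query, not something posted in the thread), the number of partitions actually attached to "precio" can be read from pg_inherits; the generate_series(1,999) script should yield 999 of them:

SELECT count(*) AS partitions
FROM pg_inherits
WHERE inhparent = 'public.precio'::regclass;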
{
"msg_contents": "On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]> wrote:\n> 11.0\n> Planning Time: 7.238 ms\n> Planning Time: 2.638 ms\n>\n> 11.5\n> Planning Time: 15138.533 ms\n> Execution Time: 2.310 ms\n\nDoes it still take that long after running ANALYZE on the partitioned table?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 28 Nov 2018 17:03:15 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
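A minimal sketch of the check David suggests, assuming the "precio" setup from earlier in the thread. EXPLAIN's SUMMARY option (available since PostgreSQL 10) reports planning time without executing the query, so the effect of ANALYZE on planning alone can be compared before and after; the join below is only a stand-in for the thread's full query:

ANALYZE precio;          -- refresh statistics for the partitioned table, per David's suggestion
\timing on
EXPLAIN (SUMMARY ON)     -- SUMMARY adds planning time to plain EXPLAIN output
SELECT p.fecha, p.loccd
FROM precio p, precio q  -- stand-in self-join; the original l_variacao/max_variacao query would go here
WHERE p.fecha = q.fecha AND p.loccd = q.loccd AND p.pluid = 2;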
{
"msg_contents": "On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n> Does it still take that long after running ANALYZE on the partitioned table?\n\nYes ; I've just reproduced the problem with a variation on Sanyo's query,\nretrofitted onto the empty \"partbench\" table you used for testing in July:\nhttps://www.postgresql.org/message-id/CAKJS1f8qkcwr2DULd%2B04rBmubHkKzp4abuFykgoPUsVM-4-38g%40mail.gmail.com\n\nNote, Sanyo's original query appears to be a poor-man's window function,\njoining two subqueries on a.value=max(b.value).\n\nI reduced issue to this:\n\n|postgres=# ANALYZE partbench;\n|postgres=# explain SELECT * FROM (SELECT a.i2-b.i2 n FROM partbench a, (SELECT i2 FROM partbench)b)b, (SELECT max(partbench.i3) m FROM partbench, (SELECT i3 FROM partbench)y )y WHERE m=n;\n|Time: 31555.582 ms (00:31.556)\n\nJustin\n\n",
"msg_date": "Tue, 27 Nov 2018 22:17:50 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
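The "partbench" definition itself is not quoted in this thread (it comes from the July thread linked above), so the following is only an assumed stand-in matching the columns referenced in the query (i2, i3) and the roughly 1k-partition scale, for anyone who wants to replay the test:

CREATE TABLE partbench (i1 int, i2 int, i3 int) PARTITION BY RANGE (i1);  -- column list assumed, not the original definition
SELECT format('CREATE TABLE partbench_%s PARTITION OF partbench FOR VALUES FROM (%s) TO (%s)', i, i, i + 1)
FROM generate_series(0, 998) i
\gexec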
{
"msg_contents": "On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n> On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]> wrote:\n> > 11.0\n> > Planning Time: 7.238 ms\n> > Planning Time: 2.638 ms\n> >\n> > 11.5\n> > Planning Time: 15138.533 ms\n> > Execution Time: 2.310 ms\n> \n> Does it still take that long after running ANALYZE on the partitioned table?\n\nNote, I'm sure 11.5 was meant to say 11.1.\n\nAlso note this earlier message indicates that \"high partitions\" tests were with\njust 10.6 and 11.1, and that times under 11.0 weren't a useful datapoint:\n\nOn Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n> However, in the test I did in version 11.0, \"Precio\" is partitioned into\n> only 21 partitions.\n\nI reduced the query a bit further:\n\n|postgres=# explain SELECT m-n FROM (SELECT a.i2-b.i2 n FROM partbench a, partbench b WHERE a.i2=b.i2) x, (SELECT max(partbench.i2) m FROM partbench)y WHERE m=n;\n|Time: 35182.536 ms (00:35.183)\n\nI should have said, that's with only 1k partitions, not 10k as you used in\nJune.\n\nI also tried doing what the query seems to be aiming for by using a window\nfunction, but that also experiences 30+ sec planning time:\n\n|explain SELECT rank() OVER(ORDER BY var) AS k FROM (SELECT p.plusalesprice-q.plusalesprice as var from precio p, precio q ) l_variacao\n|Time: 34173.401 ms (00:34.173)\n\nJustin\n\n",
"msg_date": "Wed, 28 Nov 2018 18:40:19 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Em qua, 28 de nov de 2018 às 22:40, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n> > On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]> wrote:\n> > > 11.0\n> > > Planning Time: 7.238 ms\n> > > Planning Time: 2.638 ms\n> > >\n> > > 11.5\n> > > Planning Time: 15138.533 ms\n> > > Execution Time: 2.310 ms\n> >\n> > Does it still take that long after running ANALYZE on the partitioned\n> table?\n>\n> Note, I'm sure 11.5 was meant to say 11.1.\n>\n\nYeah, 11.1, sorry for mistake.\n\n\n>\n> Also note this earlier message indicates that \"high partitions\" tests were\n> with\n> just 10.6 and 11.1, and that times under 11.0 weren't a useful datapoint:\n>\n\nThat's true, at 11.0 version I had tested with only 21 partitions because\nby this time I didn't have\nrealized that it was an issue with a huge number of partitions.\nIn both versions 10.6 and 11.1 I have tested with 730 partitions each\n(2 years of data partitioned by day).\n\nSanyo\n\n\n>\n> On Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n> > However, in the test I did in version 11.0, \"Precio\" is partitioned into\n> > only 21 partitions.\n>\n> I reduced the query a bit further:\n>\n> |postgres=# explain SELECT m-n FROM (SELECT a.i2-b.i2 n FROM partbench a,\n> partbench b WHERE a.i2=b.i2) x, (SELECT max(partbench.i2) m FROM\n> partbench)y WHERE m=n;\n> |Time: 35182.536 ms (00:35.183)\n>\n> I should have said, that's with only 1k partitions, not 10k as you used in\n> June.\n>\n> I also tried doing what the query seems to be aiming for by using a window\n> function, but that also experiences 30+ sec planning time:\n>\n> |explain SELECT rank() OVER(ORDER BY var) AS k FROM (SELECT\n> p.plusalesprice-q.plusalesprice as var from precio p, precio q ) l_variacao\n> |Time: 34173.401 ms (00:34.173)\n>\n> Justin\n>\n\nEm qua, 28 de nov de 2018 às 22:40, Justin Pryzby <[email protected]> escreveu:On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n> On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]> wrote:\n> > 11.0\n> > Planning Time: 7.238 ms\n> > Planning Time: 2.638 ms\n> >\n> > 11.5\n> > Planning Time: 15138.533 ms\n> > Execution Time: 2.310 ms\n> \n> Does it still take that long after running ANALYZE on the partitioned table?\n\nNote, I'm sure 11.5 was meant to say 11.1.Yeah, 11.1, sorry for mistake. 
\n\nAlso note this earlier message indicates that \"high partitions\" tests were with\njust 10.6 and 11.1, and that times under 11.0 weren't a useful datapoint:That's true, at 11.0 version I had tested with only 21 partitions because by this time I didn't haverealized that it was an issue with a huge number of partitions.In both versions 10.6 and 11.1 I have tested with 730 partitions each (2 years of data partitioned by day).Sanyo \n\nOn Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n> However, in the test I did in version 11.0, \"Precio\" is partitioned into\n> only 21 partitions.\n\nI reduced the query a bit further:\n\n|postgres=# explain SELECT m-n FROM (SELECT a.i2-b.i2 n FROM partbench a, partbench b WHERE a.i2=b.i2) x, (SELECT max(partbench.i2) m FROM partbench)y WHERE m=n;\n|Time: 35182.536 ms (00:35.183)\n\nI should have said, that's with only 1k partitions, not 10k as you used in\nJune.\n\nI also tried doing what the query seems to be aiming for by using a window\nfunction, but that also experiences 30+ sec planning time:\n\n|explain SELECT rank() OVER(ORDER BY var) AS k FROM (SELECT p.plusalesprice-q.plusalesprice as var from precio p, precio q ) l_variacao\n|Time: 34173.401 ms (00:34.173)\n\nJustin",
"msg_date": "Wed, 28 Nov 2018 23:01:36 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Hello again,\n\nAt the moment, I've got a palliative solution that has significantly\nreduced my planning time.\nWhat I did was nest the partitions by creating sub partitions.\nThat way, my 730 partitions (2 years of data) were partitioned first in 2\nyears,\n and each partitioned year in 12 months.\nIn turn, each month received the partitions per corresponding day.\nThat way, the planner needs to go through far fewer partitions to execute\nthe plan.\n\nMy planning time has dramatically reduced from 15s to 150ms.\n\nRegards,\n\nSanyo Moura\n\nEm qua, 28 de nov de 2018 às 23:01, Sanyo Moura <[email protected]>\nescreveu:\n\n> Em qua, 28 de nov de 2018 às 22:40, Justin Pryzby <[email protected]>\n> escreveu:\n>\n>> On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n>> > On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]>\n>> wrote:\n>> > > 11.0\n>> > > Planning Time: 7.238 ms\n>> > > Planning Time: 2.638 ms\n>> > >\n>> > > 11.5\n>> > > Planning Time: 15138.533 ms\n>> > > Execution Time: 2.310 ms\n>> >\n>> > Does it still take that long after running ANALYZE on the partitioned\n>> table?\n>>\n>> Note, I'm sure 11.5 was meant to say 11.1.\n>>\n>\n> Yeah, 11.1, sorry for mistake.\n>\n>\n>>\n>> Also note this earlier message indicates that \"high partitions\" tests\n>> were with\n>> just 10.6 and 11.1, and that times under 11.0 weren't a useful datapoint:\n>>\n>\n> That's true, at 11.0 version I had tested with only 21 partitions because\n> by this time I didn't have\n> realized that it was an issue with a huge number of partitions.\n> In both versions 10.6 and 11.1 I have tested with 730 partitions each\n> (2 years of data partitioned by day).\n>\n> Sanyo\n>\n>\n>>\n>> On Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n>> > However, in the test I did in version 11.0, \"Precio\" is partitioned into\n>> > only 21 partitions.\n>>\n>> I reduced the query a bit further:\n>>\n>> |postgres=# explain SELECT m-n FROM (SELECT a.i2-b.i2 n FROM partbench a,\n>> partbench b WHERE a.i2=b.i2) x, (SELECT max(partbench.i2) m FROM\n>> partbench)y WHERE m=n;\n>> |Time: 35182.536 ms (00:35.183)\n>>\n>> I should have said, that's with only 1k partitions, not 10k as you used in\n>> June.\n>>\n>> I also tried doing what the query seems to be aiming for by using a window\n>> function, but that also experiences 30+ sec planning time:\n>>\n>> |explain SELECT rank() OVER(ORDER BY var) AS k FROM (SELECT\n>> p.plusalesprice-q.plusalesprice as var from precio p, precio q ) l_variacao\n>> |Time: 34173.401 ms (00:34.173)\n>>\n>> Justin\n>>\n>\n\nHello again,At the moment, I've got a palliative solution that has significantly reduced my planning time.What I did was nest the partitions by creating sub partitions. That way, my 730 partitions (2 years of data) were partitioned first in 2 years, and each partitioned year in 12 months. 
In turn, each month received the partitions per corresponding day.That way, the planner needs to go through far fewer partitions to execute the plan.My planning time has dramatically reduced from 15s to 150ms.Regards,Sanyo MouraEm qua, 28 de nov de 2018 às 23:01, Sanyo Moura <[email protected]> escreveu:Em qua, 28 de nov de 2018 às 22:40, Justin Pryzby <[email protected]> escreveu:On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n> On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]> wrote:\n> > 11.0\n> > Planning Time: 7.238 ms\n> > Planning Time: 2.638 ms\n> >\n> > 11.5\n> > Planning Time: 15138.533 ms\n> > Execution Time: 2.310 ms\n> \n> Does it still take that long after running ANALYZE on the partitioned table?\n\nNote, I'm sure 11.5 was meant to say 11.1.Yeah, 11.1, sorry for mistake. \n\nAlso note this earlier message indicates that \"high partitions\" tests were with\njust 10.6 and 11.1, and that times under 11.0 weren't a useful datapoint:That's true, at 11.0 version I had tested with only 21 partitions because by this time I didn't haverealized that it was an issue with a huge number of partitions.In both versions 10.6 and 11.1 I have tested with 730 partitions each (2 years of data partitioned by day).Sanyo \n\nOn Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n> However, in the test I did in version 11.0, \"Precio\" is partitioned into\n> only 21 partitions.\n\nI reduced the query a bit further:\n\n|postgres=# explain SELECT m-n FROM (SELECT a.i2-b.i2 n FROM partbench a, partbench b WHERE a.i2=b.i2) x, (SELECT max(partbench.i2) m FROM partbench)y WHERE m=n;\n|Time: 35182.536 ms (00:35.183)\n\nI should have said, that's with only 1k partitions, not 10k as you used in\nJune.\n\nI also tried doing what the query seems to be aiming for by using a window\nfunction, but that also experiences 30+ sec planning time:\n\n|explain SELECT rank() OVER(ORDER BY var) AS k FROM (SELECT p.plusalesprice-q.plusalesprice as var from precio p, precio q ) l_variacao\n|Time: 34173.401 ms (00:34.173)\n\nJustin",
"msg_date": "Fri, 30 Nov 2018 12:37:26 -0200",
"msg_from": "Sanyo Moura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
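A rough sketch of the nested layout Sanyo describes, assuming the "precio" table from earlier in the thread; the partition names and ranges here are made up, and only one branch of the year, month, day tree is shown:

-- year level, itself range-partitioned on fecha
CREATE TABLE precio_2017 PARTITION OF precio
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01')
    PARTITION BY RANGE (fecha);

-- month level, again range-partitioned on fecha
CREATE TABLE precio_2017_03 PARTITION OF precio_2017
    FOR VALUES FROM ('2017-03-01') TO ('2017-04-01')
    PARTITION BY RANGE (fecha);

-- day level: the leaf partition that actually stores rows
CREATE TABLE precio_2017_03_01 PARTITION OF precio_2017_03
    FOR VALUES FROM ('2017-03-01') TO ('2017-03-02');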
{
"msg_contents": "pá 30. 11. 2018 v 15:37 odesílatel Sanyo Moura <[email protected]>\nnapsal:\n\n> Hello again,\n>\n> At the moment, I've got a palliative solution that has significantly\n> reduced my planning time.\n> What I did was nest the partitions by creating sub partitions.\n> That way, my 730 partitions (2 years of data) were partitioned first in 2\n> years,\n> and each partitioned year in 12 months.\n> In turn, each month received the partitions per corresponding day.\n> That way, the planner needs to go through far fewer partitions to execute\n> the plan.\n>\n> My planning time has dramatically reduced from 15s to 150ms.\n>\n\ngood to know it.\n\nRegards\n\nPavel\n\n\n> Regards,\n>\n> Sanyo Moura\n>\n> Em qua, 28 de nov de 2018 às 23:01, Sanyo Moura <[email protected]>\n> escreveu:\n>\n>> Em qua, 28 de nov de 2018 às 22:40, Justin Pryzby <[email protected]>\n>> escreveu:\n>>\n>>> On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n>>> > On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]>\n>>> wrote:\n>>> > > 11.0\n>>> > > Planning Time: 7.238 ms\n>>> > > Planning Time: 2.638 ms\n>>> > >\n>>> > > 11.5\n>>> > > Planning Time: 15138.533 ms\n>>> > > Execution Time: 2.310 ms\n>>> >\n>>> > Does it still take that long after running ANALYZE on the partitioned\n>>> table?\n>>>\n>>> Note, I'm sure 11.5 was meant to say 11.1.\n>>>\n>>\n>> Yeah, 11.1, sorry for mistake.\n>>\n>>\n>>>\n>>> Also note this earlier message indicates that \"high partitions\" tests\n>>> were with\n>>> just 10.6 and 11.1, and that times under 11.0 weren't a useful datapoint:\n>>>\n>>\n>> That's true, at 11.0 version I had tested with only 21 partitions because\n>> by this time I didn't have\n>> realized that it was an issue with a huge number of partitions.\n>> In both versions 10.6 and 11.1 I have tested with 730 partitions each\n>> (2 years of data partitioned by day).\n>>\n>> Sanyo\n>>\n>>\n>>>\n>>> On Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n>>> > However, in the test I did in version 11.0, \"Precio\" is partitioned\n>>> into\n>>> > only 21 partitions.\n>>>\n>>> I reduced the query a bit further:\n>>>\n>>> |postgres=# explain SELECT m-n FROM (SELECT a.i2-b.i2 n FROM partbench\n>>> a, partbench b WHERE a.i2=b.i2) x, (SELECT max(partbench.i2) m FROM\n>>> partbench)y WHERE m=n;\n>>> |Time: 35182.536 ms (00:35.183)\n>>>\n>>> I should have said, that's with only 1k partitions, not 10k as you used\n>>> in\n>>> June.\n>>>\n>>> I also tried doing what the query seems to be aiming for by using a\n>>> window\n>>> function, but that also experiences 30+ sec planning time:\n>>>\n>>> |explain SELECT rank() OVER(ORDER BY var) AS k FROM (SELECT\n>>> p.plusalesprice-q.plusalesprice as var from precio p, precio q ) l_variacao\n>>> |Time: 34173.401 ms (00:34.173)\n>>>\n>>> Justin\n>>>\n>>\n\npá 30. 11. 2018 v 15:37 odesílatel Sanyo Moura <[email protected]> napsal:Hello again,At the moment, I've got a palliative solution that has significantly reduced my planning time.What I did was nest the partitions by creating sub partitions. That way, my 730 partitions (2 years of data) were partitioned first in 2 years, and each partitioned year in 12 months. 
In turn, each month received the partitions per corresponding day.That way, the planner needs to go through far fewer partitions to execute the plan.My planning time has dramatically reduced from 15s to 150ms.good to know it.RegardsPavelRegards,Sanyo MouraEm qua, 28 de nov de 2018 às 23:01, Sanyo Moura <[email protected]> escreveu:Em qua, 28 de nov de 2018 às 22:40, Justin Pryzby <[email protected]> escreveu:On Wed, Nov 28, 2018 at 05:03:15PM +1300, David Rowley wrote:\n> On Wed, 28 Nov 2018 at 03:16, Sanyo Moura <[email protected]> wrote:\n> > 11.0\n> > Planning Time: 7.238 ms\n> > Planning Time: 2.638 ms\n> >\n> > 11.5\n> > Planning Time: 15138.533 ms\n> > Execution Time: 2.310 ms\n> \n> Does it still take that long after running ANALYZE on the partitioned table?\n\nNote, I'm sure 11.5 was meant to say 11.1.Yeah, 11.1, sorry for mistake. \n\nAlso note this earlier message indicates that \"high partitions\" tests were with\njust 10.6 and 11.1, and that times under 11.0 weren't a useful datapoint:That's true, at 11.0 version I had tested with only 21 partitions because by this time I didn't haverealized that it was an issue with a huge number of partitions.In both versions 10.6 and 11.1 I have tested with 730 partitions each (2 years of data partitioned by day).Sanyo \n\nOn Tue, Nov 27, 2018 at 11:36:09PM -0200, Sanyo Moura wrote:\n> However, in the test I did in version 11.0, \"Precio\" is partitioned into\n> only 21 partitions.\n\nI reduced the query a bit further:\n\n|postgres=# explain SELECT m-n FROM (SELECT a.i2-b.i2 n FROM partbench a, partbench b WHERE a.i2=b.i2) x, (SELECT max(partbench.i2) m FROM partbench)y WHERE m=n;\n|Time: 35182.536 ms (00:35.183)\n\nI should have said, that's with only 1k partitions, not 10k as you used in\nJune.\n\nI also tried doing what the query seems to be aiming for by using a window\nfunction, but that also experiences 30+ sec planning time:\n\n|explain SELECT rank() OVER(ORDER BY var) AS k FROM (SELECT p.plusalesprice-q.plusalesprice as var from precio p, precio q ) l_variacao\n|Time: 34173.401 ms (00:34.173)\n\nJustin",
"msg_date": "Fri, 30 Nov 2018 15:54:12 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "So, the slowness in this test seems to come from\nadd_child_rel_equivalences() and bms_overlap() therein, according to\nperf (mine and Justin's) ... apparently we end up with a lot of\nequivalence class members. I added a debugging block to spit out the\nnumber of ECs as well as the number of members in each (after creating\ntable \"precio\" and about a thousand partitions), and I got progressively\nslower lines the last of which says\nWARNING: 4 classes: 2000, 1999, 1999, 999001, \n\nso for some reason we produced quadratic number of EC members, and we\nbms_overlap all that stuff over and over a number of times.\n\nThis code seems to come from partitionwise join.\n\nNow, the query is a bit silly; it puts table \"precio\" four times in the\nrange table. (thanks http://sqlformat.darold.net/)\n\nCREATE TABLE precio(fecha timestamp, pluid int, loccd int, plusalesprice int) PARTITION BY RANGE (fecha); \nSELECT format('CREATE TABLE public.precio_%s PARTITION OF public.precio (PRIMARY KEY (fecha, pluid, loccd) ) FOR VALUES FROM (''%s'')TO(''%s'')', i, a, b) FROM (SELECT '1990-01-01'::timestam p+(i||'days')::interval a, '1990-01-02'::timestamp+(i||'days')::interval b, i FROM generate_series(1,999) i)x \\gexec\n\nEXPLAIN SELECT\n l_variacao.fecha,\n l_variacao.loccd,\n l_variacao.pant,\n l_variacao.patual,\n max_variacao.var_max\nFROM (\n SELECT\n p.fecha,\n p.loccd,\n p.plusalesprice patual,\n da.plusalesprice pant,\n a bs (p.plusalesprice - da.plusalesprice) AS var\n FROM\n precio p,\n (\n SELECT\n p.fecha,\n p.plusalesprice,\n p.loccd\n FROM\n precio p\n WHERE\n p.fecha BETWEEN '2017-03-01' AND '2017-03-02'\n AND p.pluid = 2) da\n WHERE\n p.fecha BETWEEN '2017-03-01' AND '2017-03-02'\n AND p.pluid = 2\n AND p.loccd = da.loccd\n AND p.fecha = da.fecha) l_variacao, (\n SELECT\n max(abs(p.plusalesprice - da.plusalesprice)) AS var_max\n FROM\n precio p, (\n SELECT\n p.fecha, p.plusalesprice, p.loccd\n FROM\n precio p\n WHERE\n p.fecha BETWEEN '2017-03-01' AND '2017-03-02'\n AND p.pluid = 2) da\n WHERE\n p.fecha BETWEEN '2017-03-01'\n AND '2017-03-02'\n AND p.pluid = 2\n AND p.loccd = da.loccd\n AND p.fecha = da.fecha) max_variacao\nWHERE\n max_variacao.var_max = l_variacao.var;\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 4 Dec 2018 18:43:31 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On 2018-Dec-04, Alvaro Herrera wrote:\n\n> CREATE TABLE precio(fecha timestamp, pluid int, loccd int, plusalesprice int) PARTITION BY RANGE (fecha); \n> SELECT format('CREATE TABLE public.precio_%s PARTITION OF public.precio (PRIMARY KEY (fecha, pluid, loccd) ) FOR VALUES FROM (''%s'')TO(''%s'')', i, a, b) FROM (SELECT '1990-01-01'::timestam p+(i||'days')::interval a, '1990-01-02'::timestamp+(i||'days')::interval b, i FROM generate_series(1,999) i)x \\gexec\n\nActually, the primary keys are not needed; it's just as slow without\nthem.\n\nI noticed another interesting thing, which is that if I modify the query\nto actually reference some partition that I do have (as opposed to the\nabove, which just takes 30s to prune everything) the plan is mighty\ncurious ... if only because in one of the Append nodes, partitions have\nnot been pruned as they should.\n\nSo, at least two bugs here,\n1. the equivalence-class related slowness,\n2. the lack of pruning\n\n QUERY PLAN \n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Hash Join (cost=1159.13..25423.65 rows=1 width=24)\n Hash Cond: (abs((p.plusalesprice - p_875.plusalesprice)) = (max(abs((p_877.plusalesprice - p_879.plusalesprice)))))\n -> Nested Loop (cost=1000.00..25264.52 rows=1 width=20)\n Join Filter: ((p.loccd = p_875.loccd) AND (p.fecha = p_875.fecha))\n -> Gather (cost=1000.00..25154.38 rows=875 width=16)\n Workers Planned: 2\n -> Parallel Append (cost=0.00..24066.88 rows=875 width=16)\n -> Parallel Seq Scan on precio_125 p (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_126 p_1 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_127 p_2 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_128 p_3 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_129 p_4 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_130 p_5 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_131 p_6 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_132 p_7 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_133 p_8 (cost=0.00..27.50 rows=1 
width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_134 p_9 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_135 p_10 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_136 p_11 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_137 p_12 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_138 p_13 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_139 p_14 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_140 p_15 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_141 p_16 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_142 p_17 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_143 p_18 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_144 p_19 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_145 p_20 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_146 p_21 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_147 p_22 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_148 p_23 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 
00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_149 p_24 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_150 p_25 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_151 p_26 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_152 p_27 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_153 p_28 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_154 p_29 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_155 p_30 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_156 p_31 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_157 p_32 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_158 p_33 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_159 p_34 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_160 p_35 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_161 p_36 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_162 p_37 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_163 p_38 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= 
'1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_164 p_39 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_165 p_40 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_166 p_41 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_167 p_42 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_168 p_43 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_169 p_44 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_170 p_45 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_171 p_46 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_172 p_47 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_173 p_48 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_174 p_49 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_175 p_50 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_176 p_51 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_177 p_52 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_178 p_53 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND 
(pluid = 2))\n -> Parallel Seq Scan on precio_179 p_54 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_180 p_55 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_181 p_56 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_182 p_57 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_183 p_58 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_184 p_59 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_185 p_60 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_186 p_61 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_187 p_62 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_188 p_63 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_189 p_64 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_190 p_65 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_191 p_66 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_192 p_67 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_193 p_68 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_194 p_69 
(cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_195 p_70 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_196 p_71 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_197 p_72 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_198 p_73 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_199 p_74 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_200 p_75 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_201 p_76 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_202 p_77 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_203 p_78 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_204 p_79 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_205 p_80 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_206 p_81 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_207 p_82 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_208 p_83 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_209 p_84 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= 
'1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_210 p_85 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_211 p_86 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_212 p_87 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_213 p_88 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_214 p_89 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_215 p_90 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_216 p_91 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_217 p_92 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_218 p_93 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_219 p_94 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_220 p_95 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_221 p_96 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_222 p_97 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_223 p_98 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_224 p_99 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND 
(fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))
              ->  Parallel Seq Scan on precio_225 p_100  (cost=0.00..27.50 rows=1 width=16)
                    Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))
              ->  Parallel Seq Scan on precio_226 p_101  (cost=0.00..27.50 rows=1 width=16)
                    Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))
              ->  Parallel Seq Scan on precio_227 p_102  (cost=0.00..27.50 rows=1 width=16)
                    Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))
              [... identical Parallel Seq Scan and Filter entries repeat for precio_228 p_103 through precio_542 p_417, all (cost=0.00..27.50 rows=1 width=16) with the same fecha/pluid filter ...]
              ->  Parallel Seq Scan on precio_543 p_418  (cost=0.00..27.50 rows=1 width=16)
                    Filter: ((fecha >= '1990-05-06
00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_544 p_419 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_545 p_420 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_546 p_421 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_547 p_422 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_548 p_423 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_549 p_424 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_550 p_425 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_551 p_426 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_552 p_427 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_553 p_428 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_554 p_429 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_555 p_430 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_556 p_431 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_557 p_432 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_558 p_433 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND 
(fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_559 p_434 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_560 p_435 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_561 p_436 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_562 p_437 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_563 p_438 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_564 p_439 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_565 p_440 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_566 p_441 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_567 p_442 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_568 p_443 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_569 p_444 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_570 p_445 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_571 p_446 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_572 p_447 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_573 p_448 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp 
without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_574 p_449 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_575 p_450 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_576 p_451 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_577 p_452 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_578 p_453 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_579 p_454 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_580 p_455 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_581 p_456 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_582 p_457 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_583 p_458 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_584 p_459 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_585 p_460 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_586 p_461 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_587 p_462 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_588 p_463 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> 
Parallel Seq Scan on precio_589 p_464 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_590 p_465 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_591 p_466 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_592 p_467 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_593 p_468 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_594 p_469 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_595 p_470 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_596 p_471 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_597 p_472 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_598 p_473 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_599 p_474 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_600 p_475 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_601 p_476 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_602 p_477 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_603 p_478 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_604 p_479 
(cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_605 p_480 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_606 p_481 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_607 p_482 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_608 p_483 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_609 p_484 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_610 p_485 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_611 p_486 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_612 p_487 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_613 p_488 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_614 p_489 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_615 p_490 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_616 p_491 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_617 p_492 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_618 p_493 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_619 p_494 (cost=0.00..27.50 rows=1 width=16)\n Filter: 
((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_620 p_495 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_621 p_496 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_622 p_497 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_623 p_498 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_624 p_499 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_625 p_500 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_626 p_501 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_627 p_502 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_628 p_503 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_629 p_504 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_630 p_505 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_631 p_506 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_632 p_507 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_633 p_508 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_634 p_509 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp 
without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_635 p_510 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_636 p_511 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_637 p_512 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_638 p_513 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_639 p_514 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_640 p_515 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_641 p_516 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_642 p_517 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_643 p_518 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_644 p_519 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_645 p_520 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_646 p_521 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_647 p_522 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_648 p_523 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_649 p_524 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 
00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_650 p_525 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_651 p_526 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_652 p_527 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_653 p_528 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_654 p_529 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_655 p_530 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_656 p_531 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_657 p_532 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_658 p_533 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_659 p_534 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_660 p_535 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_661 p_536 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_662 p_537 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_663 p_538 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_664 p_539 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND 
(pluid = 2))\n -> Parallel Seq Scan on precio_665 p_540 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_666 p_541 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_667 p_542 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_668 p_543 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_669 p_544 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_670 p_545 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_671 p_546 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_672 p_547 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_673 p_548 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_674 p_549 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_675 p_550 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_676 p_551 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_677 p_552 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_678 p_553 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_679 p_554 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on 
precio_680 p_555 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_681 p_556 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_682 p_557 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_683 p_558 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_684 p_559 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_685 p_560 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_686 p_561 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_687 p_562 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_688 p_563 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_689 p_564 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_690 p_565 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_691 p_566 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_692 p_567 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_693 p_568 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_694 p_569 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_695 p_570 (cost=0.00..27.50 rows=1 
width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_696 p_571 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_697 p_572 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_698 p_573 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_699 p_574 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_700 p_575 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_701 p_576 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_702 p_577 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_703 p_578 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_704 p_579 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_705 p_580 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_706 p_581 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_707 p_582 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_708 p_583 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_709 p_584 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_710 p_585 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 
00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_711 p_586 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_712 p_587 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_713 p_588 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_714 p_589 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_715 p_590 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_716 p_591 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_717 p_592 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_718 p_593 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_719 p_594 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_720 p_595 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_721 p_596 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_722 p_597 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_723 p_598 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_724 p_599 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_725 p_600 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND 
(fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_726 p_601 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_727 p_602 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_728 p_603 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_729 p_604 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_730 p_605 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_731 p_606 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_732 p_607 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_733 p_608 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_734 p_609 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_735 p_610 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_736 p_611 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_737 p_612 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_738 p_613 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_739 p_614 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Parallel Seq Scan on precio_740 p_615 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp 
without time zone) AND (pluid = 2))\n [ Parallel Seq Scan on precio_741 to precio_997, each (cost=0.00..27.50 rows=1 width=16) with the same Filter ]\n -> Parallel Seq Scan on precio_998 p_873 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND 
(pluid = 2))\n -> Parallel Seq Scan on precio_999 p_874 (cost=0.00..27.50 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Materialize (cost=0.00..79.52 rows=2 width=16)\n -> Append (cost=0.00..79.51 rows=2 width=16)\n -> Seq Scan on precio_125 p_875 (cost=0.00..39.75 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Seq Scan on precio_126 p_876 (cost=0.00..39.75 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Hash (cost=159.12..159.12 rows=1 width=4)\n -> Aggregate (cost=159.10..159.11 rows=1 width=4)\n -> Nested Loop (cost=0.00..159.10 rows=1 width=8)\n Join Filter: ((p_877.loccd = p_879.loccd) AND (p_877.fecha = p_879.fecha))\n -> Append (cost=0.00..79.51 rows=2 width=16)\n -> Seq Scan on precio_125 p_877 (cost=0.00..39.75 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Seq Scan on precio_126 p_878 (cost=0.00..39.75 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Materialize (cost=0.00..79.52 rows=2 width=16)\n -> Append (cost=0.00..79.51 rows=2 width=16)\n -> Seq Scan on precio_125 p_879 (cost=0.00..39.75 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n -> Seq Scan on precio_126 p_880 (cost=0.00..39.75 rows=1 width=16)\n Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n(1778 filas)\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 4 Dec 2018 18:55:47 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
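A note for readers following the plan above: the outer Parallel Append visits roughly 875 precio_NNN partitions because its filter spans fecha BETWEEN '1990-05-06' AND '1999-05-07', while the two inner Appends are constrained to '1990-05-06'..'1990-05-07' and therefore keep only precio_125 and precio_126. A quick, hypothetical way to see which partitions survive pruning for a given range, assuming the daily range-partitioned precio table described elsewhere in this thread, is to run EXPLAIN on the range predicate alone:

-- Editorial sketch, not part of the original message; assumes the precio
-- setup quoted later in the thread.  COSTS OFF keeps the output short, and
-- only partitions that survive plan-time pruning are listed.
EXPLAIN (COSTS OFF)
SELECT * FROM precio
WHERE fecha BETWEEN '1990-05-06' AND '1999-05-07' AND pluid = 2;  -- should list ~875 partitions

EXPLAIN (COSTS OFF)
SELECT * FROM precio
WHERE fecha BETWEEN '1990-05-06' AND '1990-05-07' AND pluid = 2;  -- should list only precio_125 and precio_126
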
{
"msg_contents": "Hi,\n\nOn 2018/12/05 6:55, Alvaro Herrera wrote:\n> On 2018-Dec-04, Alvaro Herrera wrote:\n> \n>> CREATE TABLE precio(fecha timestamp, pluid int, loccd int, plusalesprice int) PARTITION BY RANGE (fecha); \n>> SELECT format('CREATE TABLE public.precio_%s PARTITION OF public.precio (PRIMARY KEY (fecha, pluid, loccd) ) FOR VALUES FROM (''%s'')TO(''%s'')', i, a, b) FROM (SELECT '1990-01-01'::timestam p+(i||'days')::interval a, '1990-01-02'::timestamp+(i||'days')::interval b, i FROM generate_series(1,999) i)x \\gexec\n> \n> Actually, the primary keys are not needed; it's just as slow without\n> them.\n\nI ran the original unmodified query at [1] (the one that produces an empty\nplan due to all children being pruned) against the server built with\npatches I posted on the \"speeding up planning with partitions\" [2] thread\nand it finished in a jiffy.\n\nexplain SELECT l_variacao.fecha, l_variacao.loccd , l_variacao.pant ,\nl_variacao.patual , max_variacao.var_max FROM (SELECT p.fecha, p.loccd,\np.plusalesprice patual, da.plusalesprice pant, abs(p.plusalesprice -\nda.plusalesprice) as var from precio p, (SELECT p.fecha, p.plusalesprice,\np.loccd from precio p WHERE p.fecha between '2017-03-01' and '2017-03-02'\nand p.pluid = 2) da WHERE p.fecha between '2017-03-01' and '2017-03-02'\nand p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha) l_variacao,\n(SELECT max(abs(p.plusalesprice - da.plusalesprice)) as var_max from\nprecio p, (SELECT p.fecha, p.plusalesprice, p.loccd from precio p WHERE\np.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da WHERE\np.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd\n= da.loccd and p.fecha = da.fecha) max_variacao WHERE max_variacao.var_max\n= l_variacao.var;\nQUERY PLAN\n───────────────────────────────────────────\n Result (cost=0.00..0.00 rows=0 width=24)\n One-Time Filter: false\n(2 rows)\n\nTime: 50.792 ms\n\nThat's because one of the things changed by one of the patches is that\nchild EC members are added only for the non-dummy children. In this case,\nsince all the children are pruned, there should be zero child EC members,\nwhich is what would happen in PG 10 too. The partitionwise join related\nchanges in PG 11 moved the add_child_rel_equivalences call in\nset_append_rel_size such that child EC members would be added even before\nchecking if the child rel is dummy, but for a reason named in the comment\nabove the call:\n\n ... Even if this child is\n * deemed dummy, it may fall on nullable side in a child-join, which\n * in turn may participate in a MergeAppend, where we will need the\n * EquivalenceClass data structures.\n\nHowever, I think we can skip adding the dummy child EC members here and\ninstead make it a responsibility of partitionwise join code in joinrels.c\nto add the needed EC members. Attached a patch to show what I mean, which\npasses the tests and gives this planning time:\n\n QUERY PLAN\n───────────────────────────────────────────────────────────────────\n Result (cost=0.00..0.00 rows=0 width=24) (actual rows=0 loops=1)\n One-Time Filter: false\n Planning Time: 512.788 ms\n Execution Time: 0.162 ms\n\nwhich is not as low as with the patches at [2] for obvious reasons, but as\nlow as we can hope to get with PG 11. 
Sadly, planning time is less with\nPG 10.6:\n\n QUERY PLAN\n───────────────────────────────────────────────────────────────────\n Result (cost=0.00..0.00 rows=0 width=24) (actual rows=0 loops=1)\n One-Time Filter: false\n Planning time: 254.533 ms\n Execution time: 0.080 ms\n(4 rows)\n\nBut I haven't looked closely at what else in PG 11 makes the planning time\ntwice that of 10.\n\n> I noticed another interesting thing, which is that if I modify the query\n> to actually reference some partition that I do have (as opposed to the\n> above, which just takes 30s to prune everything) the plan is mighty\n> curious ... if only because in one of the Append nodes, partitions have\n> not been pruned as they should.\n>\n> So, at least two bugs here,\n> 1. the equivalence-class related slowness,\n> 2. the lack of pruning\n\nI haven't reproduced 2 yet. Can you share the modified query?\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/20181128004402.GC30707%40telsasoft.com",
"msg_date": "Thu, 6 Dec 2018 11:14:06 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
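A minimal sketch (not part of the thread) of how the planning-time difference for the all-pruned case can be observed from psql, assuming Alvaro's precio setup quoted above; the 2017 date range lies outside every partition bound, so every child is pruned at plan time:

\timing on

-- SUMMARY makes plain EXPLAIN print "Planning Time" without executing the query.
EXPLAIN (SUMMARY)
SELECT *
FROM precio
WHERE pluid = 2
  AND fecha BETWEEN '2017-03-01' AND '2017-03-02';

-- Expected plan shape when everything is pruned:
--   Result  (cost=0.00..0.00 rows=0 width=...)
--     One-Time Filter: false
-- The interesting number is the reported planning time, which is what
-- regressed between 10.5/11.0 and 11.1.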
{
"msg_contents": "On 2018/12/06 11:14, Amit Langote wrote:\n> I ran the original unmodified query at [1] (the one that produces an empty\n> plan due to all children being pruned) against the server built with\n> patches I posted on the \"speeding up planning with partitions\" [2] thread\n> and it finished in a jiffy.\n\nForgot to add the link for [2]: https://commitfest.postgresql.org/21/1778/\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 6 Dec 2018 11:19:24 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Hi,\n\nOn 2018/12/05 6:55, Alvaro Herrera wrote:\n> I noticed another interesting thing, which is that if I modify the query\n> to actually reference some partition that I do have (as opposed to the\n> above, which just takes 30s to prune everything) the plan is mighty\n> curious ... if only because in one of the Append nodes, partitions have\n> not been pruned as they should.\n> \n> So, at least two bugs here,\n> 1. the equivalence-class related slowness,\n> 2. the lack of pruning\n> \n> QUERY PLAN \n> ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> Hash Join (cost=1159.13..25423.65 rows=1 width=24)\n> Hash Cond: (abs((p.plusalesprice - p_875.plusalesprice)) = (max(abs((p_877.plusalesprice - p_879.plusalesprice)))))\n> -> Nested Loop (cost=1000.00..25264.52 rows=1 width=20)\n> Join Filter: ((p.loccd = p_875.loccd) AND (p.fecha = p_875.fecha))\n> -> Gather (cost=1000.00..25154.38 rows=875 width=16)\n> Workers Planned: 2\n> -> Parallel Append (cost=0.00..24066.88 rows=875 width=16)\n> -> Parallel Seq Scan on precio_125 p (cost=0.00..27.50 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\n[ Parallel SeqScan on precio_126 to precio_998 ]\n\n> -> Parallel Seq Scan on precio_999 p_874 (cost=0.00..27.50 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\nAs you can see from the \"Filter: \" property above, the baserestrictinfo of\nthis Append's parent relation is:\n\nBETWEEN '1990-05-06' AND '1999-05-07'\n\nwhich selects partitions for all days from '1990-05-06' (precio_125) up to\n'1992-09-26' (precio_999).\n\n> -> Materialize (cost=0.00..79.52 rows=2 width=16)\n> -> Append (cost=0.00..79.51 rows=2 width=16)\n> -> Seq Scan on precio_125 p_875 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Seq Scan on precio_126 p_876 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\nWhereas for this Append, it is BETWEEN '1990-05-06' AND '1990-05-07'.\n\n> -> Hash (cost=159.12..159.12 rows=1 width=4)\n> -> Aggregate (cost=159.10..159.11 rows=1 width=4)\n> -> Nested Loop (cost=0.00..159.10 rows=1 width=8)\n> Join Filter: ((p_877.loccd = p_879.loccd) AND (p_877.fecha = p_879.fecha))\n> -> Append (cost=0.00..79.51 rows=2 width=16)\n> -> Seq Scan on precio_125 p_877 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Seq Scan on precio_126 p_878 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Materialize (cost=0.00..79.52 rows=2 width=16)\n> -> Append (cost=0.00..79.51 rows=2 width=16)\n> -> Seq Scan on precio_125 p_879 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time 
zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Seq Scan on precio_126 p_880 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\nAnd also for these two Appends.\n\nSo, I don't think there's anything funny going on with pruning here, maybe\njust a typo in the query (1999 looks very much like 1990 to miss the typo\nmaybe.) I fixed the query to change '1999-05-07' to '1990-05-07' of the\nfirst Append's parent relation and I get the following planning time with\nthe patch I posted above with 2 partitions selected under each Append as\nexpected.\n\n Planning Time: 536.947 ms\n Execution Time: 1.304 ms\n(31 rows)\n\nEven without changing 1999 to 1990, the planning time with the patch is:\n\n Planning Time: 4669.685 ms\n Execution Time: 110.506 ms\n(1777 rows)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 6 Dec 2018 13:50:39 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Hi,\n\n(Re-sending after adding -hackers, sorry for the noise to those who would\nreceive this twice)\n\nOn 2018/12/05 6:55, Alvaro Herrera wrote:\n> I noticed another interesting thing, which is that if I modify the query\n> to actually reference some partition that I do have (as opposed to the\n> above, which just takes 30s to prune everything) the plan is mighty\n> curious ... if only because in one of the Append nodes, partitions have\n> not been pruned as they should.\n> \n> So, at least two bugs here,\n> 1. the equivalence-class related slowness,\n> 2. the lack of pruning\n> \n> QUERY PLAN \n> ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> Hash Join (cost=1159.13..25423.65 rows=1 width=24)\n> Hash Cond: (abs((p.plusalesprice - p_875.plusalesprice)) = (max(abs((p_877.plusalesprice - p_879.plusalesprice)))))\n> -> Nested Loop (cost=1000.00..25264.52 rows=1 width=20)\n> Join Filter: ((p.loccd = p_875.loccd) AND (p.fecha = p_875.fecha))\n> -> Gather (cost=1000.00..25154.38 rows=875 width=16)\n> Workers Planned: 2\n> -> Parallel Append (cost=0.00..24066.88 rows=875 width=16)\n> -> Parallel Seq Scan on precio_125 p (cost=0.00..27.50 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\n[ Parallel SeqScan on precio_126 to precio_998 ]\n\n> -> Parallel Seq Scan on precio_999 p_874 (cost=0.00..27.50 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\nAs you can see from the \"Filter: \" property above, the baserestrictinfo of\nthis Append's parent relation is:\n\nBETWEEN '1990-05-06' AND '1999-05-07'\n\nwhich selects partitions for all days from '1990-05-06' (precio_125) up to\n'1992-09-26' (precio_999).\n\n> -> Materialize (cost=0.00..79.52 rows=2 width=16)\n> -> Append (cost=0.00..79.51 rows=2 width=16)\n> -> Seq Scan on precio_125 p_875 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Seq Scan on precio_126 p_876 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\nWhereas for this Append, it is BETWEEN '1990-05-06' AND '1990-05-07'.\n\n> -> Hash (cost=159.12..159.12 rows=1 width=4)\n> -> Aggregate (cost=159.10..159.11 rows=1 width=4)\n> -> Nested Loop (cost=0.00..159.10 rows=1 width=8)\n> Join Filter: ((p_877.loccd = p_879.loccd) AND (p_877.fecha = p_879.fecha))\n> -> Append (cost=0.00..79.51 rows=2 width=16)\n> -> Seq Scan on precio_125 p_877 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Seq Scan on precio_126 p_878 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Materialize (cost=0.00..79.52 rows=2 width=16)\n> -> Append (cost=0.00..79.51 rows=2 width=16)\n> -> Seq Scan on precio_125 p_879 
(cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> -> Seq Scan on precio_126 p_880 (cost=0.00..39.75 rows=1 width=16)\n> Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1990-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n\nAnd also for these two Appends.\n\nSo, I don't think there's anything funny going on with pruning here, maybe\njust a typo in the query (1999 looks very much like 1990 to miss the typo\nmaybe.) I fixed the query to change '1999-05-07' to '1990-05-07' of the\nfirst Append's parent relation and I get the following planning time with\nthe patch I posted above with 2 partitions selected under each Append as\nexpected.\n\n Planning Time: 536.947 ms\n Execution Time: 1.304 ms\n(31 rows)\n\nEven without changing 1999 to 1990, the planning time with the patch is:\n\n Planning Time: 4669.685 ms\n Execution Time: 110.506 ms\n(1777 rows)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 6 Dec 2018 14:00:22 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
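For readers following the partition arithmetic above, the contrast can be seen with two plain EXPLAINs; this is a hedged sketch assuming the same one-partition-per-day precio layout, not something posted in the thread:

-- Typo'd upper bound: every daily partition from precio_125 through
-- precio_999 survives plan-time pruning.
EXPLAIN (COSTS OFF)
SELECT * FROM precio
WHERE pluid = 2
  AND fecha BETWEEN '1990-05-06' AND '1999-05-07';

-- Intended upper bound: only precio_125 and precio_126 remain.
EXPLAIN (COSTS OFF)
SELECT * FROM precio
WHERE pluid = 2
  AND fecha BETWEEN '1990-05-06' AND '1990-05-07';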
{
"msg_contents": "On 2018-Dec-06, Amit Langote wrote:\n\nHi\n\n> [ Parallel SeqScan on precio_126 to precio_998 ]\n> \n> > -> Parallel Seq Scan on precio_999 p_874 (cost=0.00..27.50 rows=1 width=16)\n> > Filter: ((fecha >= '1990-05-06 00:00:00'::timestamp without time zone) AND (fecha <= '1999-05-07 00:00:00'::timestamp without time zone) AND (pluid = 2))\n> \n> As you can see from the \"Filter: \" property above, the baserestrictinfo of\n> this Append's parent relation is:\n> \n> BETWEEN '1990-05-06' AND '1999-05-07'\n> \n> which selects partitions for all days from '1990-05-06' (precio_125) up to\n> '1992-09-26' (precio_999).\n\nLooking at my .psql_history, you're right -- I typoed 1990 as 1999 in\none of the clauses. Thanks, mystery solved :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 6 Dec 2018 04:55:40 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On 2018-Dec-06, Amit Langote wrote:\n\n> The partitionwise join related\n> changes in PG 11 moved the add_child_rel_equivalences call in\n> set_append_rel_size such that child EC members would be added even before\n> checking if the child rel is dummy, but for a reason named in the comment\n> above the call:\n> \n> ... Even if this child is\n> * deemed dummy, it may fall on nullable side in a child-join, which\n> * in turn may participate in a MergeAppend, where we will need the\n> * EquivalenceClass data structures.\n> \n> However, I think we can skip adding the dummy child EC members here and\n> instead make it a responsibility of partitionwise join code in joinrels.c\n> to add the needed EC members. Attached a patch to show what I mean, which\n> passes the tests and gives this planning time:\n\nRobert, Ashutosh, any comments on this? I'm unfamiliar with the\npartitionwise join code.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 6 Dec 2018 04:57:26 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Thu, Dec 6, 2018 at 1:27 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2018-Dec-06, Amit Langote wrote:\n>\n> > The partitionwise join related\n> > changes in PG 11 moved the add_child_rel_equivalences call in\n> > set_append_rel_size such that child EC members would be added even before\n> > checking if the child rel is dummy, but for a reason named in the comment\n> > above the call:\n> >\n> > ... Even if this child is\n> > * deemed dummy, it may fall on nullable side in a child-join, which\n> > * in turn may participate in a MergeAppend, where we will need the\n> > * EquivalenceClass data structures.\n> >\n> > However, I think we can skip adding the dummy child EC members here and\n> > instead make it a responsibility of partitionwise join code in joinrels.c\n> > to add the needed EC members. Attached a patch to show what I mean,\n> which\n> > passes the tests and gives this planning time:\n>\n> Robert, Ashutosh, any comments on this? I'm unfamiliar with the\n> partitionwise join code.\n>\n\nAs the comment says it has to do with the equivalence classes being used\nduring merge append. EC's are used to create pathkeys used for sorting.\nCreating a sort node which has column on the nullable side of an OUTER join\nwill fail if it doesn't find corresponding equivalence class. You may not\nnotice this if both the partitions being joined are pruned for some reason.\nAmit's idea to make partition-wise join code do this may work, but will add\na similar overhead esp. in N-way partition-wise join once those equivalence\nclasses are added.\n\n--\nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Dec 6, 2018 at 1:27 PM Alvaro Herrera <[email protected]> wrote:On 2018-Dec-06, Amit Langote wrote:\n\n> The partitionwise join related\n> changes in PG 11 moved the add_child_rel_equivalences call in\n> set_append_rel_size such that child EC members would be added even before\n> checking if the child rel is dummy, but for a reason named in the comment\n> above the call:\n> \n> ... Even if this child is\n> * deemed dummy, it may fall on nullable side in a child-join, which\n> * in turn may participate in a MergeAppend, where we will need the\n> * EquivalenceClass data structures.\n> \n> However, I think we can skip adding the dummy child EC members here and\n> instead make it a responsibility of partitionwise join code in joinrels.c\n> to add the needed EC members. Attached a patch to show what I mean, which\n> passes the tests and gives this planning time:\n\nRobert, Ashutosh, any comments on this? I'm unfamiliar with the\npartitionwise join code.As the comment says it has to do with the equivalence classes being used during merge append. EC's are used to create pathkeys used for sorting. Creating a sort node which has column on the nullable side of an OUTER join will fail if it doesn't find corresponding equivalence class. You may not notice this if both the partitions being joined are pruned for some reason. Amit's idea to make partition-wise join code do this may work, but will add a similar overhead esp. in N-way partition-wise join once those equivalence classes are added.--Best Wishes,Ashutosh Bapat",
"msg_date": "Fri, 7 Dec 2018 11:13:45 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
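A self-contained sketch of the query shape Ashutosh describes, using hypothetical tables t1/t2 that are not part of the thread: an outer join between identically partitioned tables ordered by a column on the nullable side, which is the situation where per-child sorted paths (and hence child EquivalenceClass members) come into play:

SET enable_partitionwise_join = on;

CREATE TABLE t1 (a int, b text) PARTITION BY RANGE (a);
CREATE TABLE t1_p1 PARTITION OF t1 FOR VALUES FROM (0) TO (100);
CREATE TABLE t1_p2 PARTITION OF t1 FOR VALUES FROM (100) TO (200);

CREATE TABLE t2 (a int, c text) PARTITION BY RANGE (a);
CREATE TABLE t2_p1 PARTITION OF t2 FOR VALUES FROM (0) TO (100);
CREATE TABLE t2_p2 PARTITION OF t2 FOR VALUES FROM (100) TO (200);

-- t2.c is on the nullable side of the LEFT JOIN; producing the per-partition
-- join results in sorted order (e.g. under a MergeAppend) needs
-- EquivalenceClass members for the child columns, which is why they were
-- being added even for children that later turn out to be dummy.
EXPLAIN
SELECT t1.a, t2.c
FROM t1 LEFT JOIN t2 USING (a)
ORDER BY t2.c;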
{
"msg_contents": "On Fri, Dec 7, 2018 at 11:13 AM Ashutosh Bapat <[email protected]>\nwrote:\n\n>\n>\n>\n>>\n>>\n>> Robert, Ashutosh, any comments on this? I'm unfamiliar with the\n>> partitionwise join code.\n>>\n>\n> As the comment says it has to do with the equivalence classes being used\n> during merge append. EC's are used to create pathkeys used for sorting.\n> Creating a sort node which has column on the nullable side of an OUTER join\n> will fail if it doesn't find corresponding equivalence class. You may not\n> notice this if both the partitions being joined are pruned for some reason.\n> Amit's idea to make partition-wise join code do this may work, but will add\n> a similar overhead esp. in N-way partition-wise join once those equivalence\n> classes are added.\n>\n>\n>\nI looked at the patch. The problem there is that for a given relation, we\nwill add child ec member multiple times, as many times as the number of\njoins it participates in. We need to avoid that to keep ec_member list\nlength in check.\n\n--\nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Dec 7, 2018 at 11:13 AM Ashutosh Bapat <[email protected]> wrote:\n\nRobert, Ashutosh, any comments on this? I'm unfamiliar with the\npartitionwise join code.As the comment says it has to do with the equivalence classes being used during merge append. EC's are used to create pathkeys used for sorting. Creating a sort node which has column on the nullable side of an OUTER join will fail if it doesn't find corresponding equivalence class. You may not notice this if both the partitions being joined are pruned for some reason. Amit's idea to make partition-wise join code do this may work, but will add a similar overhead esp. in N-way partition-wise join once those equivalence classes are added.I looked at the patch. The problem there is that for a given relation, we will add child ec member multiple times, as many times as the number of joins it participates in. We need to avoid that to keep ec_member list length in check.--Best Wishes,Ashutosh Bapat",
"msg_date": "Fri, 7 Dec 2018 16:44:14 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Fujita-san,\n\n(sorry about the repeated email, but my previous attempt failed due to\ntrying to send to the -hackers and -performance lists at the same time, so\ntrying again after removing -performance)\n\nOn 2019/01/08 20:07, Etsuro Fujita wrote:\n> (2018/12/07 20:14), Ashutosh Bapat wrote:\n>> On Fri, Dec 7, 2018 at 11:13 AM Ashutosh Bapat\n>> <[email protected] <mailto:[email protected]>> wrote:\n> \n>> Robert, Ashutosh, any comments on this? I'm unfamiliar with the\n>> partitionwise join code.\n> \n>> As the comment says it has to do with the equivalence classes being\n>> used during merge append. EC's are used to create pathkeys used for\n>> sorting. Creating a sort node which has column on the nullable side\n>> of an OUTER join will fail if it doesn't find corresponding\n>> equivalence class. You may not notice this if both the partitions\n>> being joined are pruned for some reason. Amit's idea to make\n>> partition-wise join code do this may work, but will add a similar\n>> overhead esp. in N-way partition-wise join once those equivalence\n>> classes are added.\n> \n>> I looked at the patch. The problem there is that for a given relation,\n>> we will add child ec member multiple times, as many times as the number\n>> of joins it participates in. We need to avoid that to keep ec_member\n>> list length in check.\n> \n> Amit-san, are you still working on this, perhaps as part of the\n> speeding-up-planning-with-partitions patch [1]?\n\nI had tried to continue working on it after PGConf.ASIA last month, but\ngot distracted by something else.\n\nSo, while the patch at [1] can take care of this issue as I also mentioned\nupthread, I was trying to come up with a solution that can be back-patched\nto PG 11. The patch I posted above is one such solution and as Ashutosh\npoints out it's perhaps not the best, because it can result in potentially\ncreating many copies of the same child EC member if we do it in joinrel.c,\nas the patch proposes. I will try to respond to the concerns he raised in\nthe next week if possible.\n\nThanks,\nAmit\n\n\n\n\n\n",
"msg_date": "Wed, 9 Jan 2019 09:30:08 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Amit-san,\n\n(2019/01/09 9:30), Amit Langote wrote:\n> (sorry about the repeated email, but my previous attempt failed due to\n> trying to send to the -hackers and -performance lists at the same time, so\n> trying again after removing -performance)\n\nThanks! (Actually, I also failed to send my post to those lists...)\n\n> On 2019/01/08 20:07, Etsuro Fujita wrote:\n>> (2018/12/07 20:14), Ashutosh Bapat wrote:\n>>> On Fri, Dec 7, 2018 at 11:13 AM Ashutosh Bapat\n>>> <[email protected]<mailto:[email protected]>> wrote:\n>>\n>>> Robert, Ashutosh, any comments on this? I'm unfamiliar with the\n>>> partitionwise join code.\n>>\n>>> As the comment says it has to do with the equivalence classes being\n>>> used during merge append. EC's are used to create pathkeys used for\n>>> sorting. Creating a sort node which has column on the nullable side\n>>> of an OUTER join will fail if it doesn't find corresponding\n>>> equivalence class. You may not notice this if both the partitions\n>>> being joined are pruned for some reason. Amit's idea to make\n>>> partition-wise join code do this may work, but will add a similar\n>>> overhead esp. in N-way partition-wise join once those equivalence\n>>> classes are added.\n>>\n>>> I looked at the patch. The problem there is that for a given relation,\n>>> we will add child ec member multiple times, as many times as the number\n>>> of joins it participates in. We need to avoid that to keep ec_member\n>>> list length in check.\n>>\n>> Amit-san, are you still working on this, perhaps as part of the\n>> speeding-up-planning-with-partitions patch [1]?\n>\n> I had tried to continue working on it after PGConf.ASIA last month, but\n> got distracted by something else.\n>\n> So, while the patch at [1] can take care of this issue as I also mentioned\n> upthread, I was trying to come up with a solution that can be back-patched\n> to PG 11. The patch I posted above is one such solution and as Ashutosh\n> points out it's perhaps not the best, because it can result in potentially\n> creating many copies of the same child EC member if we do it in joinrel.c,\n> as the patch proposes. I will try to respond to the concerns he raised in\n> the next week if possible.\n\nThanks for working on this!\n\nI like your patch in general. I think one way to address Ashutosh's \nconcerns would be to use the consider_partitionwise_join flag: \noriginally, that was introduced for partitioned relations to show that \nthey can be partitionwise-joined, but I think that flag could also be \nused for non-partitioned relations to show that they have been set up \nproperly for partitionwise-joins, and I think by checking that flag we \ncould avoid creating those copies for child dummy rels in \ntry_partitionwise_join. Please find attached an updated version of the \npatch. I modified your version so that building tlists for child dummy \nrels are also postponed until after they actually participate in \npartitionwise-joins, to avoid that possibly-useless work as well. I \nhaven't done any performance tests yet though.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 09 Jan 2019 20:20:50 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Fujita-san,\n\nOn 2019/01/09 20:20, Etsuro Fujita wrote:\n> (2019/01/09 9:30), Amit Langote wrote:\n>> So, while the patch at [1] can take care of this issue as I also mentioned\n>> upthread, I was trying to come up with a solution that can be back-patched\n>> to PG 11. The patch I posted above is one such solution and as Ashutosh\n>> points out it's perhaps not the best, because it can result in potentially\n>> creating many copies of the same child EC member if we do it in joinrel.c,\n>> as the patch proposes. I will try to respond to the concerns he raised in\n>> the next week if possible.\n> \n> Thanks for working on this!\n> \n> I like your patch in general. I think one way to address Ashutosh's\n> concerns would be to use the consider_partitionwise_join flag: originally,\n> that was introduced for partitioned relations to show that they can be\n> partitionwise-joined, but I think that flag could also be used for\n> non-partitioned relations to show that they have been set up properly for\n> partitionwise-joins, and I think by checking that flag we could avoid\n> creating those copies for child dummy rels in try_partitionwise_join.\n\nAh, that's an interesting idea.\n\nIf I understand the original design of it correctly,\nconsider_partitionwise_join being true for a given relation (simple or\njoin) means that its RelOptInfo contains properties to consider it to be\njoined with another relation (simple or join) using partitionwise join\nmechanism. Partitionwise join will occur between the pair if the other\nrelation also has relevant properties (hence its\nconsider_partitionwise_join set to true) and properties on the two sides\nmatch.\n\nThat's a loaded meaning and abusing it to mean something else can be\nchallenged, but we can live with that if properly documented. Speaking of\nwhich:\n\n /* used by partitionwise joins: */\n bool consider_partitionwise_join; /* consider partitionwise join\n * paths? (if partitioned\nrel) */\n\nMaybe, mention here how it will be abused in back-branches for\nnon-partitioned relations?\n \n> Please find attached an updated version of the patch. I modified your\n> version so that building tlists for child dummy rels are also postponed\n> until after they actually participate in partitionwise-joins, to avoid\n> that possibly-useless work as well. I haven't done any performance tests\n> yet though.\n\nThanks for updating the patch. I tested your patch (test setup described\nbelow) and it has almost the same performance as my previous version:\n552ms (vs. 41159ms on HEAD vs. 
253ms on PG 10) for the query also\nmentioned below.\n\nThanks,\nAmit\n\n[1] Test setup\n\n-- create tables\nCREATE TABLE precio(fecha timestamp, pluid int, loccd int, plusalesprice\nint) PARTITION BY RANGE (fecha);\n\nSELECT format('CREATE TABLE public.precio_%s PARTITION OF public.precio\n(PRIMARY KEY (fecha, pluid, loccd) ) FOR VALUES FROM (''%s'')TO(''%s'')',\ni, a, b) FROM (SELECT '1990-01-01'::timestamp +(i||'days')::interval a,\n'1990-01-02'::timestamp+(i||'days')::interval b, i FROM\ngenerate_series(1,999) i)x;\n\n\\gexec\n\n-- query\nSELECT l_variacao.fecha, l_variacao.loccd , l_variacao.pant ,\nl_variacao.patual , max_variacao.var_max FROM (SELECT p.fecha, p.loccd,\np.plusalesprice patual, da.plusalesprice pant, abs(p.plusalesprice -\nda.plusalesprice) as var from precio p, (SELECT p.fecha, p.plusalesprice,\np.loccd from precio p WHERE p.fecha between '2017-03-01' and '2017-03-02'\nand p.pluid = 2) da WHERE p.fecha between '2017-03-01' and '2017-03-02'\nand p.pluid = 2 and p.loccd = da.loccd and p.fecha = da.fecha) l_variacao,\n(SELECT max(abs(p.plusalesprice - da.plusalesprice)) as var_max from\nprecio p, (SELECT p.fecha, p.plusalesprice, p.loccd from precio p WHERE\np.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2) da WHERE\np.fecha between '2017-03-01' and '2017-03-02' and p.pluid = 2 and p.loccd\n= da.loccd and p.fecha = da.fecha) max_variacao WHERE max_variacao.var_max\n= l_variacao.var;\n\n\n",
"msg_date": "Thu, 10 Jan 2019 10:41:56 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Amit-san,\n\n(2019/01/10 10:41), Amit Langote wrote:\n> On 2019/01/09 20:20, Etsuro Fujita wrote:\n>> I like your patch in general. I think one way to address Ashutosh's\n>> concerns would be to use the consider_partitionwise_join flag: originally,\n>> that was introduced for partitioned relations to show that they can be\n>> partitionwise-joined, but I think that flag could also be used for\n>> non-partitioned relations to show that they have been set up properly for\n>> partitionwise-joins, and I think by checking that flag we could avoid\n>> creating those copies for child dummy rels in try_partitionwise_join.\n>\n> Ah, that's an interesting idea.\n>\n> If I understand the original design of it correctly,\n> consider_partitionwise_join being true for a given relation (simple or\n> join) means that its RelOptInfo contains properties to consider it to be\n> joined with another relation (simple or join) using partitionwise join\n> mechanism. Partitionwise join will occur between the pair if the other\n> relation also has relevant properties (hence its\n> consider_partitionwise_join set to true) and properties on the two sides\n> match.\n\nActually, the flag being true just means that the tlist for a given \npartitioned relation (simple or join) doesn't contain any whole-row \nVars. And if two given partitioned relations having the flag being true \nhave additional properties to be joined using the PWJ technique, then we \ntry to do PWJ for those partitioned relations. (The name of the flag \nisn't good? If so, that would be my fault because I named that flag.)\n\n> That's a loaded meaning and abusing it to mean something else can be\n> challenged, but we can live with that if properly documented. Speaking of\n> which:\n>\n> /* used by partitionwise joins: */\n> bool consider_partitionwise_join; /* consider partitionwise join\n> * paths? (if partitioned\n> rel) */\n>\n> Maybe, mention here how it will be abused in back-branches for\n> non-partitioned relations?\n\nWill do.\n\n>> Please find attached an updated version of the patch. I modified your\n>> version so that building tlists for child dummy rels are also postponed\n>> until after they actually participate in partitionwise-joins, to avoid\n>> that possibly-useless work as well. I haven't done any performance tests\n>> yet though.\n>\n> Thanks for updating the patch. I tested your patch (test setup described\n> below) and it has almost the same performance as my previous version:\n> 552ms (vs. 41159ms on HEAD vs. 253ms on PG 10) for the query also\n> mentioned below.\n\nThanks for that testing!\n\nI also tested the patch with your script:\n\n253.559 ms (vs. 85776.515 ms on HEAD vs. 206.066 ms on PG 10)\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 10 Jan 2019 15:07:01 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 7:12 AM Amit Langote <[email protected]>\nwrote:\n\n> Fujita-san,\n>\n> On 2019/01/09 20:20, Etsuro Fujita wrote:\n> > (2019/01/09 9:30), Amit Langote wrote:\n> >> So, while the patch at [1] can take care of this issue as I also\n> mentioned\n> >> upthread, I was trying to come up with a solution that can be\n> back-patched\n> >> to PG 11. The patch I posted above is one such solution and as Ashutosh\n> >> points out it's perhaps not the best, because it can result in\n> potentially\n> >> creating many copies of the same child EC member if we do it in\n> joinrel.c,\n> >> as the patch proposes. I will try to respond to the concerns he raised\n> in\n> >> the next week if possible.\n> >\n> > Thanks for working on this!\n> >\n> > I like your patch in general. I think one way to address Ashutosh's\n> > concerns would be to use the consider_partitionwise_join flag:\n> originally,\n> > that was introduced for partitioned relations to show that they can be\n> > partitionwise-joined, but I think that flag could also be used for\n> > non-partitioned relations to show that they have been set up properly for\n> > partitionwise-joins, and I think by checking that flag we could avoid\n> > creating those copies for child dummy rels in try_partitionwise_join.\n>\n> Ah, that's an interesting idea.\n>\n> If I understand the original design of it correctly,\n> consider_partitionwise_join being true for a given relation (simple or\n> join) means that its RelOptInfo contains properties to consider it to be\n> joined with another relation (simple or join) using partitionwise join\n> mechanism. Partitionwise join will occur between the pair if the other\n> relation also has relevant properties (hence its\n> consider_partitionwise_join set to true) and properties on the two sides\n> match.\n>\n>\nThough this will solve a problem for performance when partition-wise join\nis not possible, we still have the same problem when partition-wise join is\npossible. And that problem really happens because our inheritance mechanism\nrequires expression translation from parent to child everywhere. That\nconsumes memory, eats CPU cycles and generally downgrades performance of\npartition related query planning. I think a better way would be to avoid\nthese translations and use Parent var to represent a Var of the child being\ndealt with. That will be a massive churn on inheritance based planner code,\nbut it will improve planning time for queries involving thousands of\npartitions.\n\n--\nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Jan 10, 2019 at 7:12 AM Amit Langote <[email protected]> wrote:Fujita-san,\n\nOn 2019/01/09 20:20, Etsuro Fujita wrote:\n> (2019/01/09 9:30), Amit Langote wrote:\n>> So, while the patch at [1] can take care of this issue as I also mentioned\n>> upthread, I was trying to come up with a solution that can be back-patched\n>> to PG 11. The patch I posted above is one such solution and as Ashutosh\n>> points out it's perhaps not the best, because it can result in potentially\n>> creating many copies of the same child EC member if we do it in joinrel.c,\n>> as the patch proposes. I will try to respond to the concerns he raised in\n>> the next week if possible.\n> \n> Thanks for working on this!\n> \n> I like your patch in general. 
I think one way to address Ashutosh's\n> concerns would be to use the consider_partitionwise_join flag: originally,\n> that was introduced for partitioned relations to show that they can be\n> partitionwise-joined, but I think that flag could also be used for\n> non-partitioned relations to show that they have been set up properly for\n> partitionwise-joins, and I think by checking that flag we could avoid\n> creating those copies for child dummy rels in try_partitionwise_join.\n\nAh, that's an interesting idea.\n\nIf I understand the original design of it correctly,\nconsider_partitionwise_join being true for a given relation (simple or\njoin) means that its RelOptInfo contains properties to consider it to be\njoined with another relation (simple or join) using partitionwise join\nmechanism. Partitionwise join will occur between the pair if the other\nrelation also has relevant properties (hence its\nconsider_partitionwise_join set to true) and properties on the two sides\nmatch.\nThough this will solve a problem for performance when partition-wise join is not possible, we still have the same problem when partition-wise join is possible. And that problem really happens because our inheritance mechanism requires expression translation from parent to child everywhere. That consumes memory, eats CPU cycles and generally downgrades performance of partition related query planning. I think a better way would be to avoid these translations and use Parent var to represent a Var of the child being dealt with. That will be a massive churn on inheritance based planner code, but it will improve planning time for queries involving thousands of partitions.--Best Wishes,Ashutosh Bapat",
"msg_date": "Thu, 10 Jan 2019 15:19:14 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 6:49 PM Ashutosh Bapat\n<[email protected]> wrote:\n> Though this will solve a problem for performance when partition-wise join is not possible, we still have the same problem when partition-wise join is possible. And that problem really happens because our inheritance mechanism requires expression translation from parent to child everywhere. That consumes memory, eats CPU cycles and generally downgrades performance of partition related query planning. I think a better way would be to avoid these translations and use Parent var to represent a Var of the child being dealt with. That will be a massive churn on inheritance based planner code, but it will improve planning time for queries involving thousands of partitions.\n\nYeah, it would be nice going forward to overhaul inheritance planning\nsuch that parent-to-child Var translation is not needed, especially\nwhere no pruning can occur or many partitions remain even after\npruning.\n\nThanks,\nAmit\n\n",
"msg_date": "Thu, 10 Jan 2019 21:23:40 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "(2019/01/10 21:23), Amit Langote wrote:\n> On Thu, Jan 10, 2019 at 6:49 PM Ashutosh Bapat\n> <[email protected]> wrote:\n>> Though this will solve a problem for performance when partition-wise join is not possible, we still have the same problem when partition-wise join is possible. And that problem really happens because our inheritance mechanism requires expression translation from parent to child everywhere. That consumes memory, eats CPU cycles and generally downgrades performance of partition related query planning. I think a better way would be to avoid these translations and use Parent var to represent a Var of the child being dealt with. That will be a massive churn on inheritance based planner code, but it will improve planning time for queries involving thousands of partitions.\n>\n> Yeah, it would be nice going forward to overhaul inheritance planning\n> such that parent-to-child Var translation is not needed, especially\n> where no pruning can occur or many partitions remain even after\n> pruning.\n\nI agree on that point, but I think that's an improvement for a future \nrelease rather than a fix for the issue reported on this thread.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 11 Jan 2019 11:21:01 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Fujita-san,\n\nOn 2019/01/10 15:07, Etsuro Fujita wrote:\n> Amit-san,\n> \n> (2019/01/10 10:41), Amit Langote wrote:\n>> On 2019/01/09 20:20, Etsuro Fujita wrote:\n>>> I like your patch in general. I think one way to address Ashutosh's\n>>> concerns would be to use the consider_partitionwise_join flag: originally,\n>>> that was introduced for partitioned relations to show that they can be\n>>> partitionwise-joined, but I think that flag could also be used for\n>>> non-partitioned relations to show that they have been set up properly for\n>>> partitionwise-joins, and I think by checking that flag we could avoid\n>>> creating those copies for child dummy rels in try_partitionwise_join.\n>>\n>> Ah, that's an interesting idea.\n>>\n>> If I understand the original design of it correctly,\n>> consider_partitionwise_join being true for a given relation (simple or\n>> join) means that its RelOptInfo contains properties to consider it to be\n>> joined with another relation (simple or join) using partitionwise join\n>> mechanism. Partitionwise join will occur between the pair if the other\n>> relation also has relevant properties (hence its\n>> consider_partitionwise_join set to true) and properties on the two sides\n>> match.\n> \n> Actually, the flag being true just means that the tlist for a given\n> partitioned relation (simple or join) doesn't contain any whole-row Vars. \n> And if two given partitioned relations having the flag being true have\n> additional properties to be joined using the PWJ technique, then we try to\n> do PWJ for those partitioned relations.\n\nI see. Thanks for the explanation.\n\n> (The name of the flag isn't\n> good? If so, that would be my fault because I named that flag.)\n\nIf it's really just to store the fact that the relation's targetlist\ncontains expressions that partitionwise join currently cannot handle, then\nsetting it like this in set_append_rel_size seems a bit misleading:\n\n if (enable_partitionwise_join &&\n rel->reloptkind == RELOPT_BASEREL &&\n rte->relkind == RELKIND_PARTITIONED_TABLE &&\n rel->attr_needed[InvalidAttrNumber - rel->min_attr] == NULL)\n rel->consider_partitionwise_join = true;\n\nSorry, I wasn't around to comment on the patch which got committed in\n7cfdc77023a, but checking the value of enable_partitionwise_join and other\nthings in set_append_rel_size() to set the value of\nconsider_partitionwise_join seems a bit odd to me. Perhaps,\nconsider_partitionwise_join should be initially set to true for a relation\n(actually, to rel->part_scheme != NULL) and only set it to false if the\nrelation's targetlist is found to contain unsupported expressions. That\nway, it becomes easier to think what it means imho. I think\nenable_partitionwise_join should only be checked in relnode.c or\njoinrels.c. I've attached a patch to show what I mean. Can you please\ntake a look?\n\nIf you think that this patch is a good idea, then you'll need to\nexplicitly set consider_partitionwise_join to false for a dummy partition\nrel in set_append_rel_size(), because the assumption of your patch that\nsuch partition's rel's consider_partitionwise_join would be false (as\ninitialized with the current code) would be broken by my patch. 
But that\nmight be a good thing to do anyway as it will document the special case\nusage of consider_partitionwise_join variable more explicitly, assuming\nyou'll be adding a comment describing why it's being set to false explicitly.\n\n>> That's a loaded meaning and abusing it to mean something else can be\n>> challenged, but we can live with that if properly documented. Speaking of\n>> which:\n>>\n>> /* used by partitionwise joins: */\n>> bool consider_partitionwise_join; /* consider\n>> partitionwise join\n>> * paths? (if partitioned\n>> rel) */\n>>\n>> Maybe, mention here how it will be abused in back-branches for\n>> non-partitioned relations?\n> \n> Will do.\n\nThank you.\n\n>>> Please find attached an updated version of the patch. I modified your\n>>> version so that building tlists for child dummy rels are also postponed\n>>> until after they actually participate in partitionwise-joins, to avoid\n>>> that possibly-useless work as well. I haven't done any performance tests\n>>> yet though.\n>>\n>> Thanks for updating the patch. I tested your patch (test setup described\n>> below) and it has almost the same performance as my previous version:\n>> 552ms (vs. 41159ms on HEAD vs. 253ms on PG 10) for the query also\n>> mentioned below.\n> \n> Thanks for that testing!\n> \n> I also tested the patch with your script:\n> \n> 253.559 ms (vs. 85776.515 ms on HEAD vs. 206.066 ms on PG 10)\n\nOh, PG 11 doesn't appear as bad compared to PG 10 with your numbers as it\ndid on my machine. That's good anyway.\n\nThanks,\nAmit",
"msg_date": "Fri, 11 Jan 2019 13:46:09 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On 2019/01/11 11:21, Etsuro Fujita wrote:\n> (2019/01/10 21:23), Amit Langote wrote:\n>> On Thu, Jan 10, 2019 at 6:49 PM Ashutosh Bapat\n>> <[email protected]> wrote:\n>>> Though this will solve a problem for performance when partition-wise\n>>> join is not possible, we still have the same problem when\n>>> partition-wise join is possible. And that problem really happens\n>>> because our inheritance mechanism requires expression translation from\n>>> parent to child everywhere. That consumes memory, eats CPU cycles and\n>>> generally downgrades performance of partition related query planning. I\n>>> think a better way would be to avoid these translations and use Parent\n>>> var to represent a Var of the child being dealt with. That will be a\n>>> massive churn on inheritance based planner code, but it will improve\n>>> planning time for queries involving thousands of partitions.\n>>\n>> Yeah, it would be nice going forward to overhaul inheritance planning\n>> such that parent-to-child Var translation is not needed, especially\n>> where no pruning can occur or many partitions remain even after\n>> pruning.\n> \n> I agree on that point, but I think that's an improvement for a future\n> release rather than a fix for the issue reported on this thread.\n\nAgreed. Improving planning performance for large number of partitions\neven in the absence of pruning is a good goal to pursue for future\nversions, as is being discussed in some other threads [1].\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D1FB60AE5%40G01JPEXMBYT05\n\n\n",
"msg_date": "Fri, 11 Jan 2019 13:49:33 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "(2019/01/11 13:46), Amit Langote wrote:\n> On 2019/01/10 15:07, Etsuro Fujita wrote:\n>> (The name of the flag isn't\n>> good? If so, that would be my fault because I named that flag.)\n>\n> If it's really just to store the fact that the relation's targetlist\n> contains expressions that partitionwise join currently cannot handle, then\n> setting it like this in set_append_rel_size seems a bit misleading:\n>\n> if (enable_partitionwise_join&&\n> rel->reloptkind == RELOPT_BASEREL&&\n> rte->relkind == RELKIND_PARTITIONED_TABLE&&\n> rel->attr_needed[InvalidAttrNumber - rel->min_attr] == NULL)\n> rel->consider_partitionwise_join = true;\n>\n> Sorry, I wasn't around to comment on the patch which got committed in\n> 7cfdc77023a, but checking the value of enable_partitionwise_join and other\n> things in set_append_rel_size() to set the value of\n> consider_partitionwise_join seems a bit odd to me. Perhaps,\n> consider_partitionwise_join should be initially set to true for a relation\n> (actually, to rel->part_scheme != NULL) and only set it to false if the\n> relation's targetlist is found to contain unsupported expressions.\n\nOne thing I intended in that commit was to set the flag to false for \npartitioned tables contained in inheritance trees where the top parent \nis a UNION ALL subquery, because we don't consider PWJ for those tables. \n Actually we wouldn't need to care about that, because we don't do PWJ \nfor those tables regardless of what the flag is set, but I think that \nwould make the code a bit cleaner. However, what you proposed here \nas-is would not keep that behavior, because rel->part_scheme is created \nfor those tables as well (even though there would be no need IIUC).\n\n> That\n> way, it becomes easier to think what it means imho.\n\nMay be or may not be.\n\n> I think\n> enable_partitionwise_join should only be checked in relnode.c or\n> joinrels.c.\n\nSorry, I don't understand this.\n\n> I've attached a patch to show what I mean. Can you please\n> take a look?\n\nThanks for the patch! Maybe I'm missing something, but I don't have a \nstrong opinion about that change. I'd rather think to modify \nbuild_simple_rel so that it doesn't create rel->part_scheme if \nunnecessary (ie, partitioned tables contained in inheritance trees where \nthe top parent is a UNION ALL subquery).\n\n> If you think that this patch is a good idea, then you'll need to\n> explicitly set consider_partitionwise_join to false for a dummy partition\n> rel in set_append_rel_size(), because the assumption of your patch that\n> such partition's rel's consider_partitionwise_join would be false (as\n> initialized with the current code) would be broken by my patch. But that\n> might be a good thing to do anyway as it will document the special case\n> usage of consider_partitionwise_join variable more explicitly, assuming\n> you'll be adding a comment describing why it's being set to false explicitly.\n\nI'm not sure we need this as part of a fix for the issue reported on \nthis thread. I don't object to what you proposed here, but that would \nbe rather an improvement, so I think we should leave that for another patch.\n\n>>>> Please find attached an updated version of the patch. I modified your\n>>>> version so that building tlists for child dummy rels are also postponed\n>>>> until after they actually participate in partitionwise-joins, to avoid\n>>>> that possibly-useless work as well. I haven't done any performance tests\n>>>> yet though.\n>>>\n>>> Thanks for updating the patch. 
I tested your patch (test setup described\n>>> below) and it has almost the same performance as my previous version:\n>>> 552ms (vs. 41159ms on HEAD vs. 253ms on PG 10) for the query also\n>>> mentioned below.\n\n>> I also tested the patch with your script:\n>>\n>> 253.559 ms (vs. 85776.515 ms on HEAD vs. 206.066 ms on PG 10)\n>\n> Oh, PG 11 doesn't appear as bad compared to PG 10 with your numbers as it\n> did on my machine. That's good anyway.\n\nYeah, that's a good result!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 11 Jan 2019 20:04:40 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
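As a rough illustration of the UNION ALL case being discussed (partitioned tables whose inheritance-tree top parent is a UNION ALL subquery), a query of the following hedged shape applies; precio_hist is hypothetical and only the overall shape matters:

-- The children of the UNION ALL appendrel are the partitioned tables
-- themselves; partitionwise join is not considered for them, though
-- partition pruning of each branch can still happen.
SELECT u.fecha, u.loccd, u.plusalesprice
FROM (
    SELECT fecha, loccd, plusalesprice, pluid FROM precio
    UNION ALL
    SELECT fecha, loccd, plusalesprice, pluid FROM precio_hist  -- hypothetical
) u
WHERE u.pluid = 2
  AND u.fecha BETWEEN '1990-05-06' AND '1990-05-07';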
{
"msg_contents": "(2019/01/11 13:49), Amit Langote wrote:\n> On 2019/01/11 11:21, Etsuro Fujita wrote:\n>> (2019/01/10 21:23), Amit Langote wrote:\n>>> On Thu, Jan 10, 2019 at 6:49 PM Ashutosh Bapat\n>>> <[email protected]> wrote:\n>>>> Though this will solve a problem for performance when partition-wise\n>>>> join is not possible, we still have the same problem when\n>>>> partition-wise join is possible. And that problem really happens\n>>>> because our inheritance mechanism requires expression translation from\n>>>> parent to child everywhere. That consumes memory, eats CPU cycles and\n>>>> generally downgrades performance of partition related query planning. I\n>>>> think a better way would be to avoid these translations and use Parent\n>>>> var to represent a Var of the child being dealt with. That will be a\n>>>> massive churn on inheritance based planner code, but it will improve\n>>>> planning time for queries involving thousands of partitions.\n>>>\n>>> Yeah, it would be nice going forward to overhaul inheritance planning\n>>> such that parent-to-child Var translation is not needed, especially\n>>> where no pruning can occur or many partitions remain even after\n>>> pruning.\n>>\n>> I agree on that point, but I think that's an improvement for a future\n>> release rather than a fix for the issue reported on this thread.\n>\n> Agreed.\n\nCool!\n\n> Improving planning performance for large number of partitions\n> even in the absence of pruning is a good goal to pursue for future\n> versions, as is being discussed in some other threads [1].\n\nYeah, we have a lot of challenges there. Thanks for sharing the info!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 11 Jan 2019 20:10:44 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "(2019/01/11 13:46), Amit Langote wrote:\n> On 2019/01/10 15:07, Etsuro Fujita wrote:\n>> (2019/01/10 10:41), Amit Langote wrote:\n>>> That's a loaded meaning and abusing it to mean something else can be\n>>> challenged, but we can live with that if properly documented. Speaking of\n>>> which:\n>>>\n>>> /* used by partitionwise joins: */\n>>> bool consider_partitionwise_join; /* consider\n>>> partitionwise join\n>>> * paths? (if partitioned\n>>> rel) */\n>>>\n>>> Maybe, mention here how it will be abused in back-branches for\n>>> non-partitioned relations?\n>>\n>> Will do.\n>\n> Thank you.\n\nI know we don't yet reach a consensus on what to do in details to \naddress this issue, but for the above, how about adding comments like \nthis to set_append_rel_size(), instead of the header file:\n\n /*\n * If we consider partitionwise joins with the parent rel, do \nthe same\n * for partitioned child rels.\n *\n * Note: here we abuse the consider_partitionwise_join flag for \nchild\n * rels that are not partitioned, to tell try_partitionwise_join()\n * that their targetlists and EC entries have been generated.\n */\n if (rel->consider_partitionwise_join)\n childrel->consider_partitionwise_join = true;\n\nISTM that that would be more clearer than the header file.\n\nUpdated patch attached, which also updated other comments a little bit.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 11 Jan 2019 21:50:19 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Fujita-san,\n\nOn 2019/01/11 20:04, Etsuro Fujita wrote:\n> (2019/01/11 13:46), Amit Langote wrote:\n>> On 2019/01/10 15:07, Etsuro Fujita wrote:\n>>> (The name of the flag isn't\n>>> good? If so, that would be my fault because I named that flag.)\n>>\n>> If it's really just to store the fact that the relation's targetlist\n>> contains expressions that partitionwise join currently cannot handle, then\n>> setting it like this in set_append_rel_size seems a bit misleading:\n>>\n>> if (enable_partitionwise_join&&\n>> rel->reloptkind == RELOPT_BASEREL&&\n>> rte->relkind == RELKIND_PARTITIONED_TABLE&&\n>> rel->attr_needed[InvalidAttrNumber - rel->min_attr] == NULL)\n>> rel->consider_partitionwise_join = true;\n>>\n>> Sorry, I wasn't around to comment on the patch which got committed in\n>> 7cfdc77023a, but checking the value of enable_partitionwise_join and other\n>> things in set_append_rel_size() to set the value of\n>> consider_partitionwise_join seems a bit odd to me. Perhaps,\n>> consider_partitionwise_join should be initially set to true for a relation\n>> (actually, to rel->part_scheme != NULL) and only set it to false if the\n>> relation's targetlist is found to contain unsupported expressions.\n> \n> One thing I intended in that commit was to set the flag to false for\n> partitioned tables contained in inheritance trees where the top parent is\n> a UNION ALL subquery, because we don't consider PWJ for those tables.\n> Actually we wouldn't need to care about that, because we don't do PWJ for\n> those tables regardless of what the flag is set, but I think that would\n> make the code a bit cleaner.\n\nYeah, we wouldn't do partitionwise join between partitioned tables that\nare under UNION ALL.\n\n> However, what you proposed here as-is would\n> not keep that behavior, because rel->part_scheme is created for those\n> tables as well\n\nIt'd be easy to prevent set consider_partitionwise_join to false in that\ncase as:\n\n+ rel->consider_partitionwise_join = (rel->part_scheme != NULL &&\n+ (parent == NULL ||\n+ parent->rtekind != RTE_SUBQUERY));\n\n\n> (even though there would be no need IIUC).\n\nPartition pruning uses part_scheme and pruning can occur even if a\npartitioned table is under UNION ALL, so it *is* needed in that case.\n\n>> I think\n>> enable_partitionwise_join should only be checked in relnode.c or\n>> joinrels.c.\n> \n> Sorry, I don't understand this.\n\nWhat I was trying to say is that we should check the GUC close to where\npartitionwise join is actually implemented even though there is no such\nhard and fast rule. Or maybe I'm just a bit uncomfortable with setting\nconsider_partitionwise_join based on the GUC.\n\n>> I've attached a patch to show what I mean. Can you please\n>> take a look?\n> \n> Thanks for the patch! Maybe I'm missing something, but I don't have a\n> strong opinion about that change. I'd rather think to modify\n> build_simple_rel so that it doesn't create rel->part_scheme if unnecessary\n> (ie, partitioned tables contained in inheritance trees where the top\n> parent is a UNION ALL subquery).\n\nAs I said above, partition pruning can occur even if a partitioned table\nhappens to be under UNION ALL. 
However, we *can* avoid creating\npart_scheme and setting other partitioning properties if *all* of\nenable_partition_pruning, enable_partitionwise_join, and\nenable_partitionwise_aggregate are turned off.\n\n>> If you think that this patch is a good idea, then you'll need to\n>> explicitly set consider_partitionwise_join to false for a dummy partition\n>> rel in set_append_rel_size(), because the assumption of your patch that\n>> such partition's rel's consider_partitionwise_join would be false (as\n>> initialized with the current code) would be broken by my patch. But that\n>> might be a good thing to do anyway as it will document the special case\n>> usage of consider_partitionwise_join variable more explicitly, assuming\n>> you'll be adding a comment describing why it's being set to false\n>> explicitly.\n> \n> I'm not sure we need this as part of a fix for the issue reported on this\n> thread. I don't object to what you proposed here, but that would be\n> rather an improvement, so I think we should leave that for another patch.\n\nSure, no problem with committing it separately if at all. Thanks for\nconsidering.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 15 Jan 2019 10:46:34 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On 2019/01/15 10:46, Amit Langote wrote:\n> On 2019/01/11 20:04, Etsuro Fujita wrote:\n>> One thing I intended in that commit was to set the flag to false for\n>> partitioned tables contained in inheritance trees where the top parent is\n>> a UNION ALL subquery, because we don't consider PWJ for those tables.\n>> Actually we wouldn't need to care about that, because we don't do PWJ for\n>> those tables regardless of what the flag is set, but I think that would\n>> make the code a bit cleaner.\n> \n> Yeah, we wouldn't do partitionwise join between partitioned tables that\n> are under UNION ALL.\n> \n>> However, what you proposed here as-is would\n>> not keep that behavior, because rel->part_scheme is created for those\n>> tables as well\n> \n> It'd be easy to prevent set consider_partitionwise_join to false in that\n> case as:\n\nOops, I meant to say:\n\nIt'd be easy to prevent setting consider_partitionwise_join in that case as:\n\n> \n> + rel->consider_partitionwise_join = (rel->part_scheme != NULL &&\n> + (parent == NULL ||\n> + parent->rtekind != RTE_SUBQUERY));\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 15 Jan 2019 10:51:54 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
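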
{
"msg_contents": "Fujita-san,\n\nOn 2019/01/11 21:50, Etsuro Fujita wrote:\n>>> (2019/01/10 10:41), Amit Langote wrote:\n>>>> That's a loaded meaning and abusing it to mean something else can be\n>>>> challenged, but we can live with that if properly documented. \n>>>> Speaking of\n>>>> which:\n>>>>\n>>>> /* used by partitionwise joins: */\n>>>> bool consider_partitionwise_join; /* consider\n>>>> partitionwise join\n>>>> * paths? (if\n>>>> partitioned\n>>>> rel) */\n>>>>\n>>>> Maybe, mention here how it will be abused in back-branches for\n>>>> non-partitioned relations?\n>>>\n>>> Will do.\n>>\n>> Thank you.\n> \n> I know we don't yet reach a consensus on what to do in details to address\n> this issue, but for the above, how about adding comments like this to\n> set_append_rel_size(), instead of the header file:\n> \n> /*\n> * If we consider partitionwise joins with the parent rel, do the\n> same\n> * for partitioned child rels.\n> *\n> * Note: here we abuse the consider_partitionwise_join flag for child\n> * rels that are not partitioned, to tell try_partitionwise_join()\n> * that their targetlists and EC entries have been generated.\n> */\n> if (rel->consider_partitionwise_join)\n> childrel->consider_partitionwise_join = true;\n> \n> ISTM that that would be more clearer than the header file.\n\nThanks for updating the patch. I tend to agree that it might be better to\nadd such details here than in the header as it's better to keep the latter\nmore stable.\n\nAbout the comment you added, I think we could clarify the note further as:\n\nNote: here we abuse the consider_partitionwise_join flag by setting it\n*even* for child rels that are not partitioned. In that case, we set it\nto tell try_partitionwise_join() that it doesn't need to generate their\ntargetlists and EC entries as they have already been generated here, as\nopposed to the dummy child rels for which the flag is left set to false so\nthat it will generate them.\n\nMaybe it's a bit wordy, but it helps get the intention across more clearly.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 15 Jan 2019 11:42:18 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "I think, there's something better possible. Two partitioned relations won't\nuse partition-wise join, if their partition schemes do not match.\nPartitioned relations with same partitioning scheme share PartitionScheme\npointer. PartitionScheme structure should get an extra counter, maintaining\na count of number of partitioned relations sharing that structure. When\nthis counter is 1, that relation is certainly not going to participate in\nPWJ and thus need not have all the structure required by PWJ set up. If we\nuse this counter coupled with enable_partitionwise_join flag, we can get\nrid of consider_partitionwise_join flag altogether, I think.\n\nOn Tue, Jan 15, 2019 at 8:12 AM Amit Langote <[email protected]>\nwrote:\n\n> Fujita-san,\n>\n> On 2019/01/11 21:50, Etsuro Fujita wrote:\n> >>> (2019/01/10 10:41), Amit Langote wrote:\n> >>>> That's a loaded meaning and abusing it to mean something else can be\n> >>>> challenged, but we can live with that if properly documented.\n> >>>> Speaking of\n> >>>> which:\n> >>>>\n> >>>> /* used by partitionwise joins: */\n> >>>> bool consider_partitionwise_join; /* consider\n> >>>> partitionwise join\n> >>>> * paths? (if\n> >>>> partitioned\n> >>>> rel) */\n> >>>>\n> >>>> Maybe, mention here how it will be abused in back-branches for\n> >>>> non-partitioned relations?\n> >>>\n> >>> Will do.\n> >>\n> >> Thank you.\n> >\n> > I know we don't yet reach a consensus on what to do in details to address\n> > this issue, but for the above, how about adding comments like this to\n> > set_append_rel_size(), instead of the header file:\n> >\n> > /*\n> > * If we consider partitionwise joins with the parent rel, do the\n> > same\n> > * for partitioned child rels.\n> > *\n> > * Note: here we abuse the consider_partitionwise_join flag for\n> child\n> > * rels that are not partitioned, to tell\n> try_partitionwise_join()\n> > * that their targetlists and EC entries have been generated.\n> > */\n> > if (rel->consider_partitionwise_join)\n> > childrel->consider_partitionwise_join = true;\n> >\n> > ISTM that that would be more clearer than the header file.\n>\n> Thanks for updating the patch. I tend to agree that it might be better to\n> add such details here than in the header as it's better to keep the latter\n> more stable.\n>\n> About the comment you added, I think we could clarify the note further as:\n>\n> Note: here we abuse the consider_partitionwise_join flag by setting it\n> *even* for child rels that are not partitioned. In that case, we set it\n> to tell try_partitionwise_join() that it doesn't need to generate their\n> targetlists and EC entries as they have already been generated here, as\n> opposed to the dummy child rels for which the flag is left set to false so\n> that it will generate them.\n>\n> Maybe it's a bit wordy, but it helps get the intention across more clearly.\n>\n> Thanks,\n> Amit\n>\n>\n\n-- \n--\nBest Wishes,\nAshutosh Bapat\n\nI think, there's something better possible. Two partitioned relations won't use partition-wise join, if their partition schemes do not match. Partitioned relations with same partitioning scheme share PartitionScheme pointer. PartitionScheme structure should get an extra counter, maintaining a count of number of partitioned relations sharing that structure. When this counter is 1, that relation is certainly not going to participate in PWJ and thus need not have all the structure required by PWJ set up. 
If we use this counter coupled with enable_partitionwise_join flag, we can get rid of consider_partitionwise_join flag altogether, I think.On Tue, Jan 15, 2019 at 8:12 AM Amit Langote <[email protected]> wrote:Fujita-san,\n\nOn 2019/01/11 21:50, Etsuro Fujita wrote:\n>>> (2019/01/10 10:41), Amit Langote wrote:\n>>>> That's a loaded meaning and abusing it to mean something else can be\n>>>> challenged, but we can live with that if properly documented. \n>>>> Speaking of\n>>>> which:\n>>>>\n>>>> /* used by partitionwise joins: */\n>>>> bool consider_partitionwise_join; /* consider\n>>>> partitionwise join\n>>>> * paths? (if\n>>>> partitioned\n>>>> rel) */\n>>>>\n>>>> Maybe, mention here how it will be abused in back-branches for\n>>>> non-partitioned relations?\n>>>\n>>> Will do.\n>>\n>> Thank you.\n> \n> I know we don't yet reach a consensus on what to do in details to address\n> this issue, but for the above, how about adding comments like this to\n> set_append_rel_size(), instead of the header file:\n> \n> /*\n> * If we consider partitionwise joins with the parent rel, do the\n> same\n> * for partitioned child rels.\n> *\n> * Note: here we abuse the consider_partitionwise_join flag for child\n> * rels that are not partitioned, to tell try_partitionwise_join()\n> * that their targetlists and EC entries have been generated.\n> */\n> if (rel->consider_partitionwise_join)\n> childrel->consider_partitionwise_join = true;\n> \n> ISTM that that would be more clearer than the header file.\n\nThanks for updating the patch. I tend to agree that it might be better to\nadd such details here than in the header as it's better to keep the latter\nmore stable.\n\nAbout the comment you added, I think we could clarify the note further as:\n\nNote: here we abuse the consider_partitionwise_join flag by setting it\n*even* for child rels that are not partitioned. In that case, we set it\nto tell try_partitionwise_join() that it doesn't need to generate their\ntargetlists and EC entries as they have already been generated here, as\nopposed to the dummy child rels for which the flag is left set to false so\nthat it will generate them.\n\nMaybe it's a bit wordy, but it helps get the intention across more clearly.\n\nThanks,\nAmit\n\n-- --Best Wishes,Ashutosh Bapat",
"msg_date": "Tue, 15 Jan 2019 09:59:23 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Amit-san,\n\n(2019/01/15 10:46), Amit Langote wrote:\n> On 2019/01/11 20:04, Etsuro Fujita wrote:\n>> (2019/01/11 13:46), Amit Langote wrote:\n\n>> However, what you proposed here as-is would\n>> not keep that behavior, because rel->part_scheme is created for those\n>> tables as well\n\n>> (even though there would be no need IIUC).\n>\n> Partition pruning uses part_scheme and pruning can occur even if a\n> partitioned table is under UNION ALL, so it *is* needed in that case.\n\nAh, you are right. Thanks for pointing that out!\n\n>>> I think\n>>> enable_partitionwise_join should only be checked in relnode.c or\n>>> joinrels.c.\n>>\n>> Sorry, I don't understand this.\n>\n> What I was trying to say is that we should check the GUC close to where\n> partitionwise join is actually implemented even though there is no such\n> hard and fast rule. Or maybe I'm just a bit uncomfortable with setting\n> consider_partitionwise_join based on the GUC.\n\nI didn't think so. Consider the consider_parallel flag. I think the \nway of setting it deviates from that rule already; it is set essentially \nbased on a GUC and is set in set_base_rel_sizes() (ie, before \nimplementing parallel paths). When adding the \nconsider_partitionwise_join flag, I thought it would be a good idea to \nset consider_partitionwise_join in a similar way to consider_parallel, \nkeeping build_simple_rel() simple.\n\n>>> I've attached a patch to show what I mean. Can you please\n>>> take a look?\n>>\n>> Thanks for the patch! Maybe I'm missing something, but I don't have a\n>> strong opinion about that change. I'd rather think to modify\n>> build_simple_rel so that it doesn't create rel->part_scheme if unnecessary\n>> (ie, partitioned tables contained in inheritance trees where the top\n>> parent is a UNION ALL subquery).\n>\n> As I said above, partition pruning can occur even if a partitioned table\n> happens to be under UNION ALL. However, we *can* avoid creating\n> part_scheme and setting other partitioning properties if *all* of\n> enable_partition_pruning, enable_partitionwise_join, and\n> enable_partitionwise_aggregate are turned off.\n\nYeah, I think so.\n\n>>> If you think that this patch is a good idea, then you'll need to\n>>> explicitly set consider_partitionwise_join to false for a dummy partition\n>>> rel in set_append_rel_size(), because the assumption of your patch that\n>>> such partition's rel's consider_partitionwise_join would be false (as\n>>> initialized with the current code) would be broken by my patch. But that\n>>> might be a good thing to do anyway as it will document the special case\n>>> usage of consider_partitionwise_join variable more explicitly, assuming\n>>> you'll be adding a comment describing why it's being set to false\n>>> explicitly.\n>>\n>> I'm not sure we need this as part of a fix for the issue reported on this\n>> thread. I don't object to what you proposed here, but that would be\n>> rather an improvement, so I think we should leave that for another patch.\n>\n> Sure, no problem with committing it separately if at all.\n\nOK\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 16 Jan 2019 14:41:47 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "(2019/01/15 11:42), Amit Langote wrote:\n> On 2019/01/11 21:50, Etsuro Fujita wrote:\n>>>> (2019/01/10 10:41), Amit Langote wrote:\n>>>>> That's a loaded meaning and abusing it to mean something else can be\n>>>>> challenged, but we can live with that if properly documented.\n>>>>> Speaking of\n>>>>> which:\n>>>>>\n>>>>> /* used by partitionwise joins: */\n>>>>> bool consider_partitionwise_join; /* consider\n>>>>> partitionwise join\n>>>>> * paths? (if\n>>>>> partitioned\n>>>>> rel) */\n>>>>>\n>>>>> Maybe, mention here how it will be abused in back-branches for\n>>>>> non-partitioned relations?\n\n>> I know we don't yet reach a consensus on what to do in details to address\n>> this issue, but for the above, how about adding comments like this to\n>> set_append_rel_size(), instead of the header file:\n>>\n>> /*\n>> * If we consider partitionwise joins with the parent rel, do the\n>> same\n>> * for partitioned child rels.\n>> *\n>> * Note: here we abuse the consider_partitionwise_join flag for child\n>> * rels that are not partitioned, to tell try_partitionwise_join()\n>> * that their targetlists and EC entries have been generated.\n>> */\n>> if (rel->consider_partitionwise_join)\n>> childrel->consider_partitionwise_join = true;\n>>\n>> ISTM that that would be more clearer than the header file.\n>\n> Thanks for updating the patch. I tend to agree that it might be better to\n> add such details here than in the header as it's better to keep the latter\n> more stable.\n>\n> About the comment you added, I think we could clarify the note further as:\n>\n> Note: here we abuse the consider_partitionwise_join flag by setting it\n> *even* for child rels that are not partitioned. In that case, we set it\n> to tell try_partitionwise_join() that it doesn't need to generate their\n> targetlists and EC entries as they have already been generated here, as\n> opposed to the dummy child rels for which the flag is left set to false so\n> that it will generate them.\n>\n> Maybe it's a bit wordy, but it helps get the intention across more clearly.\n\nI think that is well-worded, so +1 from me.\n\nThanks again!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 16 Jan 2019 14:45:16 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "Hi Ashutosh,\n\n(2019/01/15 13:29), Ashutosh Bapat wrote:\n> I think, there's something better possible. Two partitioned relations\n> won't use partition-wise join, if their partition schemes do not match.\n> Partitioned relations with same partitioning scheme share\n> PartitionScheme pointer. PartitionScheme structure should get an extra\n> counter, maintaining a count of number of partitioned relations sharing\n> that structure. When this counter is 1, that relation is certainly not\n> going to participate in PWJ and thus need not have all the structure\n> required by PWJ set up. If we use this counter coupled with\n> enable_partitionwise_join flag, we can get rid of\n> consider_partitionwise_join flag altogether, I think.\n\nInteresting!\n\nThat flag was introduced to disable PWJs when whole-row Vars are \ninvolved, as you know, so I think we need to first eliminate that \nlimitation, to remove that flag.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 16 Jan 2019 14:52:15 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
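The whole-row-Var limitation mentioned above can also be observed from plain SQL. A minimal sketch follows, reusing the hypothetical pwj_demo table from the earlier aside; the expected plan shapes are my reading of the PG 11 behaviour described in this thread, not output copied from it.

    SET enable_partitionwise_join = on;

    -- Plain column references: per-partition joins under an Append are expected.
    EXPLAIN (COSTS OFF)
    SELECT a.id, b.payload FROM pwj_demo a JOIN pwj_demo b USING (id);

    -- Whole-row references: consider_partitionwise_join is cleared for the rels,
    -- so a single join above the two Appends is expected instead.
    EXPLAIN (COSTS OFF)
    SELECT a, b FROM pwj_demo a JOIN pwj_demo b USING (id);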
{
"msg_contents": "(2019/01/16 15:21), Ashutosh Bapat wrote:\n> On Wed, Jan 16, 2019 at 11:22 AM Etsuro Fujita\n> <[email protected] <mailto:[email protected]>> wrote:\n> (2019/01/15 13:29), Ashutosh Bapat wrote:\n> > I think, there's something better possible. Two partitioned relations\n> > won't use partition-wise join, if their partition schemes do not\n> match.\n> > Partitioned relations with same partitioning scheme share\n> > PartitionScheme pointer. PartitionScheme structure should get an\n> extra\n> > counter, maintaining a count of number of partitioned relations\n> sharing\n> > that structure. When this counter is 1, that relation is\n> certainly not\n> > going to participate in PWJ and thus need not have all the structure\n> > required by PWJ set up. If we use this counter coupled with\n> > enable_partitionwise_join flag, we can get rid of\n> > consider_partitionwise_join flag altogether, I think.\n>\n> Interesting!\n>\n> That flag was introduced to disable PWJs when whole-row Vars are\n> involved, as you know, so I think we need to first eliminate that\n> limitation, to remove that flag.\n>\n> For that we don't need a separate flag. Do we? AFAIR, somewhere under\n> try_partitionwise_join() we check whether PWJ is possible between two\n> relations. That involves a bunch of checks like checking whether the\n> relations have same bounds. Those checks should be enhanced to\n> incorporate existence of whole-var, I think.\n\nYeah, that check is actually done in build_joinrel_partition_info(), \nwhich is called from build_join_rel() and build_child_join_rel() (only \nthe latter is called from try_partitionwise_join()).\n\nThat flag is used in build_joinrel_partition_info() for that check, but \nas you mentioned, I think it would be possible to remove that flag, \nprobably by checking the WRV existence from the outer_rel/inner_rel's \nreltarget, instead of that flag. But I'm not sure we can do that \nefficiently without complicating the existing code including the \noriginal PWJ one. That flag doesn't make that code complicated. I \nthought it would be better to not complicate that code, because \ndisabling such PWJs would be something temporary until we support them.\n\nAnyway, I think this would be a separate issue from the original one we \ndiscussed on this thread.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 16 Jan 2019 20:21:32 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "(2019/01/16 14:45), Etsuro Fujita wrote:\n> (2019/01/15 11:42), Amit Langote wrote:\n>> On 2019/01/11 21:50, Etsuro Fujita wrote:\n>>>>> (2019/01/10 10:41), Amit Langote wrote:\n>>>>>> That's a loaded meaning and abusing it to mean something else can be\n>>>>>> challenged, but we can live with that if properly documented.\n>>>>>> Speaking of\n>>>>>> which:\n>>>>>>\n>>>>>> /* used by partitionwise joins: */\n>>>>>> bool consider_partitionwise_join; /* consider\n>>>>>> partitionwise join\n>>>>>> * paths? (if\n>>>>>> partitioned\n>>>>>> rel) */\n>>>>>>\n>>>>>> Maybe, mention here how it will be abused in back-branches for\n>>>>>> non-partitioned relations?\n>\n>>> I know we don't yet reach a consensus on what to do in details to\n>>> address\n>>> this issue, but for the above, how about adding comments like this to\n>>> set_append_rel_size(), instead of the header file:\n>>>\n>>> /*\n>>> * If we consider partitionwise joins with the parent rel, do the\n>>> same\n>>> * for partitioned child rels.\n>>> *\n>>> * Note: here we abuse the consider_partitionwise_join flag for child\n>>> * rels that are not partitioned, to tell try_partitionwise_join()\n>>> * that their targetlists and EC entries have been generated.\n>>> */\n>>> if (rel->consider_partitionwise_join)\n>>> childrel->consider_partitionwise_join = true;\n>>>\n>>> ISTM that that would be more clearer than the header file.\n>>\n>> Thanks for updating the patch. I tend to agree that it might be better to\n>> add such details here than in the header as it's better to keep the\n>> latter\n>> more stable.\n>>\n>> About the comment you added, I think we could clarify the note further\n>> as:\n>>\n>> Note: here we abuse the consider_partitionwise_join flag by setting it\n>> *even* for child rels that are not partitioned. In that case, we set it\n>> to tell try_partitionwise_join() that it doesn't need to generate their\n>> targetlists and EC entries as they have already been generated here, as\n>> opposed to the dummy child rels for which the flag is left set to\n>> false so\n>> that it will generate them.\n>>\n>> Maybe it's a bit wordy, but it helps get the intention across more\n>> clearly.\n>\n> I think that is well-worded, so +1 from me.\n\nI updated the patch as such and rebased it to the latest HEAD. I also \nadded the commit message. Attached is an updated patch. Does that make \nsense? If there are no objections, I'll push that patch early next week.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 18 Jan 2019 21:55:28 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 9:58 PM Etsuro Fujita\n<[email protected]> wrote:\n> I updated the patch as such and rebased it to the latest HEAD. I also\n> added the commit message. Attached is an updated patch. Does that make\n> sense? If there are no objections, I'll push that patch early next week.\n\nThank you. Looks good to me.\n\nRegards,\nAmit\n\n",
"msg_date": "Sat, 19 Jan 2019 21:17:34 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "(2019/01/19 21:17), Amit Langote wrote:\n> On Fri, Jan 18, 2019 at 9:58 PM Etsuro Fujita\n> <[email protected]> wrote:\n>> I updated the patch as such and rebased it to the latest HEAD. I also\n>> added the commit message. Attached is an updated patch. Does that make\n>> sense? If there are no objections, I'll push that patch early next week.\n>\n> Thank you. Looks good to me.\n\nCool. Pushed after tweaking the commit message based on the feedback \nfrom Justin offlist and a self-review.\n\nThanks everyone who worked on this!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 21 Jan 2019 17:17:53 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "On 2019/01/21 17:17, Etsuro Fujita wrote:\n> (2019/01/19 21:17), Amit Langote wrote:\n>> On Fri, Jan 18, 2019 at 9:58 PM Etsuro Fujita\n>> <[email protected]> wrote:\n>>> I updated the patch as such and rebased it to the latest HEAD. I also\n>>> added the commit message. Attached is an updated patch. Does that make\n>>> sense? If there are no objections, I'll push that patch early next week.\n>>\n>> Thank you. Looks good to me.\n> \n> Cool. Pushed after tweaking the commit message based on the feedback from\n> Justin offlist and a self-review.\n\nThank you.\n\nRegards,\nAmit\n\n\n\n\n",
"msg_date": "Mon, 21 Jan 2019 17:21:46 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
},
{
"msg_contents": "(2019/01/21 17:21), Amit Langote wrote:\n> On 2019/01/21 17:17, Etsuro Fujita wrote:\n>> (2019/01/19 21:17), Amit Langote wrote:\n>>> On Fri, Jan 18, 2019 at 9:58 PM Etsuro Fujita\n>>> <[email protected]> wrote:\n>>>> I updated the patch as such and rebased it to the latest HEAD. I also\n>>>> added the commit message. Attached is an updated patch. Does that make\n>>>> sense? If there are no objections, I'll push that patch early next week.\n>>>\n>>> Thank you. Looks good to me.\n>>\n>> Cool. Pushed after tweaking the commit message based on the feedback from\n>> Justin offlist and a self-review.\n>\n> Thank you.\n\nOne thing to add: I forgot back-patching :(, so I did that a little \nwhile ago.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 21 Jan 2019 17:56:45 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with high planning time at version 11.1 compared versions\n 10.5 and 11.0"
}
] |
[
{
"msg_contents": "Hello all,\r\n\r\nWe recently moved our production database systems from a 9.4 running on a self-managed EC2 instance to 9.6.10 on Amazon’s AWS (same RAM, CPU). After the move, we’re finding that certain queries that we run against a GIN full-text index have some occasionally very slow executions and I’m struggling to figure out what to do about it. I would be very grateful for any ideas!\r\n\r\nThanks,\r\nScott\r\n\r\nThe setup we have is a 32-core, 244 GB RAM primary with a same-sized read replica. The queries are running off the replica, but performance is roughly the same between the master and the replica.\r\n\r\nHere’s a query that’s performing badly:\r\n\r\n\r\nSELECT ls.location AS locationId FROM location_search ls WHERE ls.client = 83 AND search_field_tsvector @@ to_tsquery('9000:* &Smith''s:* &Mill:*') AND ls.favorite = TRUE LIMIT 1\r\n\r\nThe mean time for this query (and others like it) is about 900ms, but the std deviation is over 1000ms and the max is 11000ms.\r\n\r\nThe explain looks like this:\r\n\r\nLimit (cost=1516.25..1520.52 rows=1 width=223) (actual time=4506.482..4506.482 rows=0 loops=1)\r\n Buffers: shared hit=9073\r\n -> Bitmap Heap Scan on location_search ls (cost=1516.25..1520.52 rows=1 width=223) (actual time=4506.480..4506.480 rows=0 loops=1)\r\n Recheck Cond: (search_field_tsvector @@ to_tsquery('9000:* &Smith''s:* &Mill:*'::text))\r\n Filter: (favorite AND (client = 83))\r\n Rows Removed by Filter: 8\r\n Heap Blocks: exact=12\r\n Buffers: shared hit=9073\r\n -> Bitmap Index Scan on location_search_tsvector_idx (cost=0.00..1516.25 rows=1 width=0) (actual time=4506.450..4506.450 rows=12 loops=1)\r\n Index Cond: (search_field_tsvector @@ to_tsquery('9000:* &Smith''s:* &Mill:*'::text))\r\n Buffers: shared hit=9061\r\nPlanning time: 0.240 ms\r\nExecution time: 4509.995 ms\r\n\r\nThe table has about 30 million rows in it. 
The table and index definition are:\r\n\r\nCREATE TABLE public.location_search\r\n(\r\n id bigint NOT NULL DEFAULT nextval('location_search_id_seq'::regclass),\r\n person_location bigint,\r\n person bigint,\r\n client_location bigint,\r\n client bigint,\r\n location bigint,\r\n org_unit_id bigint,\r\n latitude numeric(10,7),\r\n longitude numeric(10,7),\r\n geofence numeric(10,7),\r\n address_line_one text COLLATE pg_catalog.\"default\",\r\n address_line_two text COLLATE pg_catalog.\"default\",\r\n city text COLLATE pg_catalog.\"default\",\r\n state text COLLATE pg_catalog.\"default\",\r\n postal_code text COLLATE pg_catalog.\"default\",\r\n country text COLLATE pg_catalog.\"default\",\r\n full_address text COLLATE pg_catalog.\"default\",\r\n is_google_verified boolean,\r\n address_source text COLLATE pg_catalog.\"default\",\r\n active boolean,\r\n name character varying(255) COLLATE pg_catalog.\"default\",\r\n external_client_location_id character varying(500) COLLATE pg_catalog.\"default\",\r\n custom_field_values hstore,\r\n location_tags hstore,\r\n legacy_location_id bigint,\r\n favorite boolean,\r\n search_field_tsvector tsvector\r\n)\r\nWITH (\r\n OIDS = FALSE\r\n)\r\nTABLESPACE pg_default;\r\n\r\nCREATE INDEX location_search_tsvector_idx\r\n ON public.location_search USING gin\r\n (search_field_tsvector)\r\n TABLESPACE pg_default;\r\n\r\nRight now the output of pgstatginindex is this:\r\nversion pending_pages pending_tuples\r\n2 214 9983\r\n\r\nLastly, here are some of the relevant config entries:\r\n\r\nautovacuum on\r\nautovacuum_analyze_scale_factor 0\r\nautovacuum_analyze_threshold 50\r\nautovacuum_freeze_max_age 400000000\r\nautovacuum_max_workers 3\r\nautovacuum_multixact_freeze_max_age 400000000\r\nautovacuum_naptime 30s\r\nautovacuum_vacuum_cost_delay 20ms\r\nautovacuum_vacuum_cost_limit -1\r\nautovacuum_vacuum_scale_factor 0\r\nautovacuum_vacuum_threshold 50\r\ncpu_index_tuple_cost 0\r\ncpu_operator_cost 0\r\ncpu_tuple_cost 0\r\ncursor_tuple_fraction 0\r\neffective_cache_size 125777784kB\r\neffective_io_concurrency 1\r\ngin_fuzzy_search_limit 0\r\ngin_pending_list_limit 4MB\r\nmaintenance_work_mem 4027MB\r\nseq_page_cost 1\r\nshared_buffers 62888888kB\r\nwork_mem 200000kB\r\n\r\n\r\nSCOTT RANKIN\r\nVP, Technology\r\nMotus, LLC\r\nTwo Financial Center, 60 South Street, Boston, MA 02111\r\n617.467.1900 (O) | [email protected]<mailto:[email protected]>\r\n\r\nFollow us on LinkedIn<https://www.linkedin.com/company/motus-llc/> | Visit us at motus.com<http://www.motus.com/>\r\n\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. 
To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n",
"msg_date": "Wed, 28 Nov 2018 19:08:53 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Bitmap Index Scan"
},
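One thing the pgstatginindex numbers above (214 pending pages, roughly 10k pending tuples) suggest checking is the GIN fast-update pending list, which every bitmap index scan has to walk in addition to the main index and which can make otherwise identical scans intermittently slow. The commands below are a sketch of how to rule that out; they have to run on the primary rather than the read replica, and the trade-off of fastupdate = off (slower inserts and updates) is worth testing before adopting it.

    -- Merge the pending list into the main GIN structure immediately:
    SELECT gin_clean_pending_list('location_search_tsvector_idx'::regclass);

    -- Or keep the pending list from growing at all, at the cost of slower writes:
    ALTER INDEX location_search_tsvector_idx SET (fastupdate = off);

    -- A plain VACUUM also merges the pending list as a side effect:
    VACUUM location_search;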
{
"msg_contents": "On Wed, Nov 28, 2018 at 07:08:53PM +0000, Scott Rankin wrote:\n> We recently moved our production database systems from a 9.4 running on a self-managed EC2 instance to 9.6.10 on Amazon’s AWS (same RAM, CPU). After the move, we’re finding that certain queries that we run against a GIN full-text index have some occasionally very slow executions and I’m struggling to figure out what to do about it. I would be very grateful for any ideas!\n> \n> The setup we have is a 32-core, 244 GB RAM primary with a same-sized read replica. The queries are running off the replica, but performance is roughly the same between the master and the replica.\n> \n> Here’s a query that’s performing badly:\n\nCan you compare or show the explain(analyze,buffers) for a fast query instance\nvs slow query instance ? Is it slower due to index access or heap? Due to\ncache misses ?\n\nAlso, you have big ram - have you tried disabling KSM or THP ?\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\nJustin\n\n",
"msg_date": "Wed, 28 Nov 2018 13:17:53 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Bitmap Index Scan"
},
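For the fast-versus-slow comparison asked for here, one generic way to make cache misses visible in the plans (not something prescribed in the thread, and on RDS enabling the setting may require rds_superuser or a parameter-group change) is to turn on per-node I/O timing before capturing them:

    SET track_io_timing = on;   -- adds "I/O Timings" to the BUFFERS output
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ls.location AS locationId
    FROM location_search ls
    WHERE ls.client = 83
      AND search_field_tsvector @@ to_tsquery('9000:* &Smith''s:* &Mill:*')
      AND ls.favorite = TRUE
    LIMIT 1;
    -- "shared read=" lines are pages that were not in shared_buffers;
    -- "shared hit=" pages were already cached.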
{
"msg_contents": "\r\nOn 11/28/18, 2:18 PM, \"Justin Pryzby\" <[email protected]> wrote:\r\n\r\n On Wed, Nov 28, 2018 at 07:08:53PM +0000, Scott Rankin wrote:\r\n > We recently moved our production database systems from a 9.4 running on a self-managed EC2 instance to 9.6.10 on Amazon’s AWS (same RAM, CPU). After the move, we’re finding that certain queries that we run against a GIN full-text index have some occasionally very slow executions and I’m struggling to figure out what to do about it. I would be very grateful for any ideas!\r\n >\r\n > The setup we have is a 32-core, 244 GB RAM primary with a same-sized read replica. The queries are running off the replica, but performance is roughly the same between the master and the replica.\r\n >\r\n > Here’s a query that’s performing badly:\r\n\r\n Can you compare or show the explain(analyze,buffers) for a fast query instance\r\n vs slow query instance ? Is it slower due to index access or heap? Due to\r\n cache misses ?\r\n\r\nIf I reduce the number of search terms in , I get this:\r\n\r\nSELECT ls.location AS locationId FROM location_search ls WHERE ls.client = 83 AND search_field_tsvector @@ to_tsquery('9000:*'::text) AND ls.favorite = TRUE LIMIT 100\r\n\r\nLimit (cost=13203.99..13627.40 rows=100 width=8) (actual time=66.568..66.759 rows=100 loops=1)\r\n Buffers: shared hit=1975\r\n -> Bitmap Heap Scan on location_search ls (cost=13203.99..13923.79 rows=170 width=8) (actual time=66.568..66.729 rows=100 loops=1)\r\n Recheck Cond: ((search_field_tsvector @@ to_tsquery('9000:*'::text)) AND (client = 83))\r\n Filter: favorite\r\n Heap Blocks: exact=86\r\n Buffers: shared hit=1975\r\n -> BitmapAnd (cost=13203.99..13203.99 rows=170 width=0) (actual time=66.471..66.472 rows=0 loops=1)\r\n Buffers: shared hit=1889\r\n -> Bitmap Index Scan on location_search_tsvector_idx (cost=0.00..2235.02 rows=11570 width=0) (actual time=20.603..20.604 rows=29155 loops=1)\r\n Index Cond: (search_field_tsvector @@ to_tsquery('9000:*'::text))\r\n Buffers: shared hit=546\r\n -> Bitmap Index Scan on location_search_client_idx (cost=0.00..10968.63 rows=442676 width=0) (actual time=40.682..40.682 rows=482415 loops=1)\r\n Index Cond: (client = 83)\r\n Buffers: shared hit=1343\r\nPlanning time: 0.181 ms\r\nExecution time: 66.806 ms\r\n\r\nI see almost no IO reads, and pg_stat_statements shows no cache misses.\r\n\r\n Also, you have big ram - have you tried disabling KSM or THP ?\r\n https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\r\n\r\nSince this is Amazon RDS, we don't have any control over or access to the underlying OS. I know that huge_page support is on for these instances. I would hope that Amazon's already done that...\r\n\r\n Justin\r\n\r\n\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. 
If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n",
"msg_date": "Wed, 28 Nov 2018 19:31:29 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Bitmap Index Scan"
},
{
"msg_contents": "Upon further analysis, this is - unsurprisingly - taking place when we have multiple prefixed search terms in a ts_query going against a tsvector index.\r\n\r\nWe have roughly 30 million rows in the table, and the search column is basically a concatenation of a location's name (think \"Walmart #123456\") and its street address.\r\n\r\nWe use these searches mostly for autocompleting of a location search. So the search for that record above might be \"Walmart 123\", which we change to be to_tsquery('walmart:* &123:*'). We prefix both terms to correct for misspellings or lazy typing.\r\n\r\nIs it unrealistic to think that we could have sub-1000ms searches against that size of a table?\r\n\r\nOn 11/28/18, 2:18 PM, \"Justin Pryzby\" <[email protected]> wrote:\r\n\r\n On Wed, Nov 28, 2018 at 07:08:53PM +0000, Scott Rankin wrote:\r\n > We recently moved our production database systems from a 9.4 running on a self-managed EC2 instance to 9.6.10 on Amazon’s AWS (same RAM, CPU). After the move, we’re finding that certain queries that we run against a GIN full-text index have some occasionally very slow executions and I’m struggling to figure out what to do about it. I would be very grateful for any ideas!\r\n >\r\n > The setup we have is a 32-core, 244 GB RAM primary with a same-sized read replica. The queries are running off the replica, but performance is roughly the same between the master and the replica.\r\n >\r\n > Here’s a query that’s performing badly:\r\n\r\n Can you compare or show the explain(analyze,buffers) for a fast query instance\r\n vs slow query instance ? Is it slower due to index access or heap? Due to\r\n cache misses ?\r\n\r\n Also, you have big ram - have you tried disabling KSM or THP ?\r\n https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\r\n\r\n Justin\r\n\r\n\r\n\r\nThis email message contains information that Motus, LLC considers confidential and/or proprietary, or may later designate as confidential and proprietary. It is intended only for use of the individual or entity named above and should not be forwarded to any other persons or entities without the express consent of Motus, LLC, nor should it be used for any purpose other than in the course of any potential or actual business relationship with Motus, LLC. If the reader of this message is not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify sender immediately and destroy the original message.\r\n\r\nInternal Revenue Service regulations require that certain types of written advice include a disclaimer. To the extent the preceding message contains advice relating to a Federal tax issue, unless expressly stated otherwise the advice is not intended or written to be used, and it cannot be used by the recipient or any other taxpayer, for the purpose of avoiding Federal tax penalties, and was not written to support the promotion or marketing of any transaction or matter discussed herein.\r\n",
"msg_date": "Mon, 3 Dec 2018 18:41:54 +0000",
"msg_from": "Scott Rankin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Bitmap Index Scan"
},
{
"msg_contents": "On Mon, 2018-12-03 at 18:41 +0000, Scott Rankin wrote:\n> Upon further analysis, this is - unsurprisingly - taking place when we have multiple prefixed search terms in a ts_query going against a tsvector index.\n> \n> We have roughly 30 million rows in the table, and the search column is basically a concatenation of a location's name (think \"Walmart #123456\") and its street address.\n> \n> We use these searches mostly for autocompleting of a location search. So the search for that record above might be \"Walmart 123\", which we change to be to_tsquery('walmart:* &123:*'). We prefix both terms to correct for misspellings or lazy typing.\n> \n> Is it unrealistic to think that we could have sub-1000ms searches against that size of a table?\n> \n\nWe've found trigram indexes to be much faster and more useful for these\ntypes of searches than full-text.\n\nhttps://www.postgresql.org/docs/10/pgtrgm.html\n\nMight be worth a try, if you haven't tested them before.\nOn Mon, 2018-12-03 at 18:41 +0000, Scott Rankin wrote:Upon further analysis, this is - unsurprisingly - taking place when we have multiple prefixed search terms in a ts_query going against a tsvector index.\n\nWe have roughly 30 million rows in the table, and the search column is basically a concatenation of a location's name (think \"Walmart #123456\") and its street address.\n\nWe use these searches mostly for autocompleting of a location search. So the search for that record above might be \"Walmart 123\", which we change to be to_tsquery('walmart:* &123:*'). We prefix both terms to correct for misspellings or lazy typing.\n\nIs it unrealistic to think that we could have sub-1000ms searches against that size of a table?\n\nWe've found trigram indexes to be much faster and more useful for these types of searches than full-text.https://www.postgresql.org/docs/10/pgtrgm.htmlMight be worth a try, if you haven't tested them before.",
"msg_date": "Mon, 03 Dec 2018 10:51:29 -0800",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Bitmap Index Scan"
}
] |
[
{
"msg_contents": "All;\n\n\nMy apologies if this is off topic.\n\n\nOur company is moving to Aurora, In the past I would take care not to \nallow postgresql to over-commit memory beyond the actual memory on the \nserver, which meant I would add the buffer pool + (work_mem * \nmax_connections) + (maintenance_work_mem * autovacuum threads)\n\n\nHowever as I look at the aroura defaults they are all off the charts, \nfor example, based on the calculations in the config (amazon doesn't \nmake it easy, some settings are in pages, some are in kb, some are who \nknows what) I see the following settings as default in our aroura config:\n\n\nThe instance size is db.r4.xlarge\n\n\nthis instance size is listed as having 30.5GB of ram\n\n\nHere's the default settings:\n\n\nshared_buffers: {DBInstanceClassMemory/10922}\n\nwhich equates to 24GB\n\n\nwork_mem: 64000 (kb)\n\nwhich equates to 65.5MB\n\n\nmaintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)\n\nwhich equates to 4.2GB\n\n\nmax_connections: LEAST({DBInstanceClassMemory/9531392},5000)\n\nwhich equates to 3,380\n\n\nAccording to my math (If I got it right) in a worst case scenario,\n\nif we maxed out max_connections, work_mem and maintenance_work_mem limits\n\nthe db would request 247GB of memory\n\n\nAdditionally amazon has set effective_cache_size =\n{DBInstanceClassMemory/10922}\n\nwhich equates to about 2.9MB (which given the other outlandish setting \nmay be the only appropriate setting in the system)\n\n\n\nWhat the hell is amazon doing here? Am I missing the boat on tuning \npostgresql memory? Is amazon simply counting on the bet that users will \nnever fully utilize an instance?\n\n\nThanks in advance\n\n\n\n\n",
"msg_date": "Sat, 8 Dec 2018 12:00:33 -0700",
"msg_from": "Square Bob <[email protected]>",
"msg_from_op": true,
"msg_subject": "amazon aroura config - seriously overcommited defaults? (May be Off\n Topic)"
},
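The worst-case arithmetic in the post can be reproduced directly from pg_settings on any instance. The query below is a rough sketch that mirrors that math (units are converted by hand: shared_buffers is in 8 kB pages, work_mem and maintenance_work_mem in kB); if anything it understates the risk, since a single query can allocate work_mem several times.

    SELECT pg_size_pretty(
             (SELECT setting::bigint * 8192 FROM pg_settings WHERE name = 'shared_buffers')
           + (SELECT setting::bigint * 1024 FROM pg_settings WHERE name = 'work_mem')
           * (SELECT setting::bigint        FROM pg_settings WHERE name = 'max_connections')
           + (SELECT setting::bigint * 1024 FROM pg_settings WHERE name = 'maintenance_work_mem')
           * (SELECT setting::bigint        FROM pg_settings WHERE name = 'autovacuum_max_workers')
           ) AS theoretical_worst_case;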
{
"msg_contents": "This question is probably more of a fit for the performance list, sorry \nfor the cross post\n\n\n\n-------- Forwarded Message --------\nSubject: \tamazon aroura config - seriously overcommited defaults? (May \nbe Off Topic)\nDate: \tSat, 8 Dec 2018 12:00:33 -0700\nFrom: \tSquare Bob <[email protected]>\nTo: \[email protected]\n\n\n\nAll;\n\n\nMy apologies if this is off topic.\n\n\nOur company is moving to Aurora, In the past I would take care not to \nallow postgresql to over-commit memory beyond the actual memory on the \nserver, which meant I would add the buffer pool + (work_mem * \nmax_connections) + (maintenance_work_mem * autovacuum threads)\n\n\nHowever as I look at the aroura defaults they are all off the charts, \nfor example, based on the calculations in the config (amazon doesn't \nmake it easy, some settings are in pages, some are in kb, some are who \nknows what) I see the following settings as default in our aroura config:\n\n\nThe instance size is db.r4.xlarge\n\n\nthis instance size is listed as having 30.5GB of ram\n\n\nHere's the default settings:\n\n\nshared_buffers: {DBInstanceClassMemory/10922}\n\nwhich equates to 24GB\n\n\nwork_mem: 64000 (kb)\n\nwhich equates to 65.5MB\n\n\nmaintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)\n\nwhich equates to 4.2GB\n\n\nmax_connections: LEAST({DBInstanceClassMemory/9531392},5000)\n\nwhich equates to 3,380\n\n\nAccording to my math (If I got it right) in a worst case scenario,\n\nif we maxed out max_connections, work_mem and maintenance_work_mem limits\n\nthe db would request 247GB of memory\n\n\nAdditionally amazon has set effective_cache_size =\n{DBInstanceClassMemory/10922}\n\nwhich equates to about 2.9MB (which given the other outlandish setting \nmay be the only appropriate setting in the system)\n\n\n\nWhat the hell is amazon doing here? Am I missing the boat on tuning \npostgresql memory? 
Is amazon simply counting on the bet that users will \nnever fully utilize an instance?\n\n\nThanks in advance\n\n\n\n\n",
"msg_date": "Sat, 8 Dec 2018 12:03:27 -0700",
"msg_from": "Square Bob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
{
"msg_contents": "so 8. 12. 2018 v 20:04 odesílatel Square Bob <[email protected]> napsal:\n\n> All;\n>\n>\n> My apologies if this is off topic.\n>\n>\n> Our company is moving to Aurora, In the past I would take care not to\n> allow postgresql to over-commit memory beyond the actual memory on the\n> server, which meant I would add the buffer pool + (work_mem *\n> max_connections) + (maintenance_work_mem * autovacuum threads)\n>\n>\n> However as I look at the aroura defaults they are all off the charts,\n> for example, based on the calculations in the config (amazon doesn't\n> make it easy, some settings are in pages, some are in kb, some are who\n> knows what) I see the following settings as default in our aroura config:\n>\n>\n> The instance size is db.r4.xlarge\n>\n>\n> this instance size is listed as having 30.5GB of ram\n>\n>\n> Here's the default settings:\n>\n>\n> shared_buffers: {DBInstanceClassMemory/10922}\n>\n> which equates to 24GB\n>\n>\n> work_mem: 64000 (kb)\n>\n> which equates to 65.5MB\n>\n>\n> maintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)\n>\n> which equates to 4.2GB\n>\n>\n> max_connections: LEAST({DBInstanceClassMemory/9531392},5000)\n>\n> which equates to 3,380\n>\n>\n> According to my math (If I got it right) in a worst case scenario,\n>\n> if we maxed out max_connections, work_mem and maintenance_work_mem limits\n>\n> the db would request 247GB of memory\n>\n>\n> Additionally amazon has set effective_cache_size =\n> {DBInstanceClassMemory/10922}\n>\n> which equates to about 2.9MB (which given the other outlandish setting\n> may be the only appropriate setting in the system)\n>\n>\n>\n> What the hell is amazon doing here? Am I missing the boat on tuning\n> postgresql memory? Is amazon simply counting on the bet that users will\n> never fully utilize an instance?\n>\n>\nnobody knows what patches are used there. Max connections over 1000 are\nnot good idea for native Postgres. But maybe there are some patches - or\njust mostly idle connections are expected.\n\nRegards\n\nPavel\n\n\n\n> Thanks in advance\n>\n>\n>\n>\n>\n\nso 8. 12. 
2018 v 20:04 odesílatel Square Bob <[email protected]> napsal:All;\n\n\nMy apologies if this is off topic.\n\n\nOur company is moving to Aurora, In the past I would take care not to \nallow postgresql to over-commit memory beyond the actual memory on the \nserver, which meant I would add the buffer pool + (work_mem * \nmax_connections) + (maintenance_work_mem * autovacuum threads)\n\n\nHowever as I look at the aroura defaults they are all off the charts, \nfor example, based on the calculations in the config (amazon doesn't \nmake it easy, some settings are in pages, some are in kb, some are who \nknows what) I see the following settings as default in our aroura config:\n\n\nThe instance size is db.r4.xlarge\n\n\nthis instance size is listed as having 30.5GB of ram\n\n\nHere's the default settings:\n\n\nshared_buffers: {DBInstanceClassMemory/10922}\n\nwhich equates to 24GB\n\n\nwork_mem: 64000 (kb)\n\nwhich equates to 65.5MB\n\n\nmaintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)\n\nwhich equates to 4.2GB\n\n\nmax_connections: LEAST({DBInstanceClassMemory/9531392},5000)\n\nwhich equates to 3,380\n\n\nAccording to my math (If I got it right) in a worst case scenario,\n\nif we maxed out max_connections, work_mem and maintenance_work_mem limits\n\nthe db would request 247GB of memory\n\n\nAdditionally amazon has set effective_cache_size =\n{DBInstanceClassMemory/10922}\n\nwhich equates to about 2.9MB (which given the other outlandish setting \nmay be the only appropriate setting in the system)\n\n\n\nWhat the hell is amazon doing here? Am I missing the boat on tuning \npostgresql memory? Is amazon simply counting on the bet that users will \nnever fully utilize an instance?\nnobody knows what patches are used there. Max connections over 1000 are not good idea for native Postgres. But maybe there are some patches - or just mostly idle connections are expected.RegardsPavel\n\nThanks in advance",
"msg_date": "Sat, 8 Dec 2018 20:21:20 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
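Whether a multi-thousand connection ceiling is survivable depends, as noted above, on how many of those sessions are doing work at the same moment. On an existing system the standard pg_stat_activity view gives a quick read on that; a minimal sketch:

-- How many backends are actually active versus sitting idle?
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;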
{
"msg_contents": "Aurora doesn’t use a typical file system so the RAM usually reserved for the OS file system bufffer cache is instead used for shared_buffers. \n\nWe run multiple Aurora/ PG instances and they work quite well. There are limitations in superuser private, so be aware of that, but generally speaking Aurora/PG works well. \n\nBob Lunney\n\nSent from my PDP11\n\n> On Dec 8, 2018, at 2:03 PM, Square Bob <[email protected]> wrote:\n> \n> This question is probably more of a fit for the performance list, sorry for the cross post\n> \n> \n> \n> -------- Forwarded Message --------\n> Subject:\tamazon aroura config - seriously overcommited defaults? (May be Off Topic)\n> Date:\tSat, 8 Dec 2018 12:00:33 -0700\n> From:\tSquare Bob <[email protected]>\n> To:\[email protected]\n> \n> \n> All;\n> \n> \n> My apologies if this is off topic.\n> \n> \n> Our company is moving to Aurora, In the past I would take care not to allow postgresql to over-commit memory beyond the actual memory on the server, which meant I would add the buffer pool + (work_mem * max_connections) + (maintenance_work_mem * autovacuum threads)\n> \n> \n> However as I look at the aroura defaults they are all off the charts, for example, based on the calculations in the config (amazon doesn't make it easy, some settings are in pages, some are in kb, some are who knows what) I see the following settings as default in our aroura config:\n> \n> \n> The instance size is db.r4.xlarge\n> \n> \n> this instance size is listed as having 30.5GB of ram\n> \n> \n> Here's the default settings:\n> \n> \n> shared_buffers: {DBInstanceClassMemory/10922}\n> \n> which equates to 24GB\n> \n> \n> work_mem: 64000 (kb)\n> \n> which equates to 65.5MB\n> \n> \n> maintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)\n> \n> which equates to 4.2GB\n> \n> \n> max_connections: LEAST({DBInstanceClassMemory/9531392},5000)\n> \n> which equates to 3,380\n> \n> \n> According to my math (If I got it right) in a worst case scenario,\n> \n> if we maxed out max_connections, work_mem and maintenance_work_mem limits\n> \n> the db would request 247GB of memory\n> \n> \n> Additionally amazon has set effective_cache_size =\n> {DBInstanceClassMemory/10922}\n> \n> which equates to about 2.9MB (which given the other outlandish setting may be the only appropriate setting in the system)\n> \n> \n> \n> What the hell is amazon doing here? Am I missing the boat on tuning postgresql memory? Is amazon simply counting on the bet that users will never fully utilize an instance?\n> \n> \n> Thanks in advance\n> \n> \n> \n\nAurora doesn’t use a typical file system so the RAM usually reserved for the OS file system bufffer cache is instead used for shared_buffers. We run multiple Aurora/ PG instances and they work quite well. There are limitations in superuser private, so be aware of that, but generally speaking Aurora/PG works well. 
Bob LunneySent from my PDP11On Dec 8, 2018, at 2:03 PM, Square Bob <[email protected]> wrote:\n\nThis question is probably more of a fit for the performance list,\n sorry for the cross post\n\n\n\n -------- Forwarded Message --------\n \n\n\nSubject:\n \namazon aroura config - seriously overcommited defaults?\n (May be Off Topic)\n\n\nDate: \nSat, 8 Dec 2018 12:00:33 -0700\n\n\nFrom: \nSquare Bob <[email protected]>\n\n\nTo: \[email protected]\n\n\n\n\n\n All;\n\n\n My apologies if this is off topic.\n\n\n Our company is moving to Aurora, In the past I would take care not\n to allow postgresql to over-commit memory beyond the actual memory\n on the server, which meant I would add the buffer pool + (work_mem\n * max_connections) + (maintenance_work_mem * autovacuum threads)\n\n\n However as I look at the aroura defaults they are all off the\n charts, for example, based on the calculations in the config\n (amazon doesn't make it easy, some settings are in pages, some are\n in kb, some are who knows what) I see the following settings as\n default in our aroura config:\n\n\n The instance size is db.r4.xlarge\n\n\n this instance size is listed as having 30.5GB of ram\n\n\n Here's the default settings:\n\n\n shared_buffers: {DBInstanceClassMemory/10922}\n\n which equates to 24GB\n\n\n work_mem: 64000 (kb)\n\n which equates to 65.5MB\n\n\n maintenance_work_mem:\n GREATEST({DBInstanceClassMemory/63963136*1024},65536)\n\n which equates to 4.2GB\n\n\n max_connections: LEAST({DBInstanceClassMemory/9531392},5000)\n\n which equates to 3,380\n\n\n According to my math (If I got it right) in a worst case\n scenario,\n\n if we maxed out max_connections, work_mem and maintenance_work_mem\n limits\n\n the db would request 247GB of memory\n\n\n Additionally amazon has set effective_cache_size =\n {DBInstanceClassMemory/10922}\n\n which equates to about 2.9MB (which given the other outlandish\n setting may be the only appropriate setting in the system)\n\n\n\n What the hell is amazon doing here? Am I missing the boat on\n tuning postgresql memory? Is amazon simply counting on the bet\n that users will never fully utilize an instance?\n\n\n Thanks in advance",
"msg_date": "Sat, 8 Dec 2018 14:48:24 -0500",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
{
"msg_contents": "On 12/8/18 11:00, Square Bob wrote:\n> My apologies if this is off topic.\n\nThe AWS Aurora PostgreSQL forums are also a great place to post\nquestions like this\n\nhttps://forums.aws.amazon.com/forum.jspa?forumID=227\n\n> Our company is moving to Aurora, In the past I would take care not to\n> allow postgresql to over-commit memory beyond the actual memory on the\n> server, which meant I would add the buffer pool + (work_mem *\n> max_connections) + (maintenance_work_mem * autovacuum threads)\n> \n> However as I look at the aroura defaults they are all off the charts,\n> for example, based on the calculations in the config (amazon doesn't\n> make it easy, some settings are in pages, some are in kb, some are who\n> knows what) I see the following settings as default in our aroura config:\n> \n> The instance size is db.r4.xlarge\n> this instance size is listed as having 30.5GB of ram\n> \n> Here's the default settings:\n> \n> shared_buffers: {DBInstanceClassMemory/10922}\n> which equates to 24GB\n\nOn RDS PostgreSQL, the default is 25% of your server memory. This seems\nto be pretty widely accepted as a good starting point on PostgreSQL. But\nremember that in open source PostgreSQL on linux, all I/O goes through\nthe filesystem and kernel buffer cache so in reality any available\nmemory on the box is used for cache.\n\nUnlike normal PostgreSQL, Aurora does not do I/O through the linux\nbuffer cache. If the default was left at 25% then this would result in\nvery surprising performance for most people. On other databases where\ndirect I/O is the normal pattern, 75% of memory on the box is often\ncited as a good starting point for OLTP systems. This default used on\nAurora.\n\n> work_mem: 64000 (kb)\n> which equates to 65.5MB\n\nAt present, this has been left at the community default for both RDS and\nAurora PostgreSQL.\n\n> maintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)\n> which equates to 4.2GB\n\nThis formula will set maint_work_mem to 1.639% of the memory on the\nsystem. It should be 511MB on an instance with 30.5GB of memory.\n\n> max_connections: LEAST({DBInstanceClassMemory/9531392},5000)\n> which equates to 3,380\n\nOn both RDS PostgreSQL and Aurora, max_connections is set to a value\nthat's conservatively high. While the default setting here won't stop\nyou, an r4.xlarge has only two physical CPUs and it's probably not a\ngood idea to run with 3000 connections.\n\nConnection management is a common challenge with databases of all\nflavors. The right number is incredibly workload dependent and I'm not\nsure whether it's possible to have a truly meaningful default limit as a\nformula of the server type.\n\n> According to my math (If I got it right) in a worst case scenario,\n> if we maxed out max_connections, work_mem and maintenance_work_mem limits\n> the db would request 247GB of memory\n\nIt's not quite this straightforward.\n\nFirst of all, work_mem is per plan node and it's only a guidance for\nwhere things should spill to disk. It doesn't completely prevent runaway\nmemory usage by queries. Many queries don't need much work_mem at all,\nand many other queries use more memory than work_mem.\n\nSecondly, IIRC, autovacuum actually has a hard-coded artificial 1GB\nlimit regardless of your maint_work_mem. 
However operations like index\ncreation can in fact use all of maint_work_mem.\n\n> Additionally amazon has set effective_cache_size =\n> {DBInstanceClassMemory/10922}\n> \n> which equates to about 2.9MB (which given the other outlandish setting\n> may be the only appropriate setting in the system)\n\nThat's actually the same as shared_buffers - 75% of the memory on the\nserver. And remember this is a planner/costing parameter; it has\nnothing to do with allocating actual memory.\n\n> What the hell is amazon doing here? Am I missing the boat on tuning\n> postgresql memory? Is amazon simply counting on the bet that users will\n> never fully utilize an instance?\n\nMemory management is hard. Nevermind PostgreSQL - it's hard to even get\na clear picture of what happens in the Linux kernel with regard to\nmemory. Think about these two questions: (1) Is memory pressure slowing\nme down? (2) Is memory pressure causing any risk or danger to the\nsystem? I've heard of issues even with the new MemAvailable value that\nwas added to /proc/meminfo - it seems difficult to get an accurate\npicture. While over-subscription might sound bad, you probably don't\nwant to just disable swap completely either. There are usually pages\nsitting in memory that are completely unnecessary.\n\nI'm not going to claim the RDS defaults are perfect - in fact I'd love\nto hear ideas about how they could be improved. [Hopefully without\nstarting any religious wars...] But I hope I've shown here that they\naren't as completely crazy as they first appeared? :)\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n",
"msg_date": "Sat, 8 Dec 2018 12:06:23 -0800",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
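The percentages in the reply above can be checked by evaluating the parameter-group formulas by hand. Below is a worked example for the 30.5 GiB db.r4.xlarge quoted in this thread, under the assumption (mine, but it reproduces the numbers cited above) that DBInstanceClassMemory is in bytes, shared_buffers and effective_cache_size are interpreted in 8 kB pages, and maintenance_work_mem in kB:

WITH m AS (SELECT (30.5 * 1024^3)::bigint AS bytes)  -- db.r4.xlarge, 30.5 GiB
SELECT pg_size_pretty((bytes / 10922) * 8192)                          AS shared_buffers,        -- about 75% of memory
       pg_size_pretty(greatest(bytes / 63963136 * 1024, 65536) * 1024) AS maintenance_work_mem,  -- about 511 MB
       least(bytes / 9531392, 5000)                                    AS max_connections,       -- about 3,400
       pg_size_pretty((bytes / 10922) * 8192)                          AS effective_cache_size   -- same 75%
FROM m;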
{
"msg_contents": "On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:\n> On RDS PostgreSQL, the default is 25% of your server memory. This seems\n> to be pretty widely accepted as a good starting point on PostgreSQL.\n\nFWIW, I think it's widely cited, but also bad advice. 25% for a OLTP\nworkload on a 1TB machine with a database size above 25% is a terrible\nidea.\n\n",
"msg_date": "Sat, 8 Dec 2018 15:12:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
{
"msg_contents": "\n\n> On Dec 8, 2018, at 3:12 PM, Andres Freund <[email protected]> wrote:\n> \n> On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:\n>> On RDS PostgreSQL, the default is 25% of your server memory. This seems\n>> to be pretty widely accepted as a good starting point on PostgreSQL.\n> \n> FWIW, I think it's widely cited, but also bad advice. 25% for a OLTP\n> workload on a 1TB machine with a database size above 25% is a terrible\n> idea.\n> \n\nSorry, could you please expand “database size above 25%”? 25% of what?\n\nrjs\n\n\n",
"msg_date": "Sat, 8 Dec 2018 15:23:19 -0800",
"msg_from": "Rob Sargent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
{
"msg_contents": "On 2018-12-08 15:23:19 -0800, Rob Sargent wrote:\n> \n> \n> > On Dec 8, 2018, at 3:12 PM, Andres Freund <[email protected]> wrote:\n> > \n> > On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:\n> >> On RDS PostgreSQL, the default is 25% of your server memory. This seems\n> >> to be pretty widely accepted as a good starting point on PostgreSQL.\n> > \n> > FWIW, I think it's widely cited, but also bad advice. 25% for a OLTP\n> > workload on a 1TB machine with a database size above 25% is a terrible\n> > idea.\n> > \n> \n> Sorry, could you please expand “database size above 25%”? 25% of what?\n\nMemory available to postgres (i.e. 100% of the server's memory on a\nserver dedicated to postgres, less if it's shared duty).\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Sat, 8 Dec 2018 15:38:18 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
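One way to tell which side of that line a given system falls on is the block cache hit ratio: if the hot working set fits in shared_buffers the ratio stays very high under load, and if it does not there will be sustained physical reads. A rough check against the standard statistics view (the numbers only mean something over a representative workload window):

-- Cache hit ratio per database; low values under a steady OLTP load suggest
-- the working set is larger than shared_buffers.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database
ORDER BY blks_read DESC;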
{
"msg_contents": "\nOn 12/8/18 6:38 PM, Andres Freund wrote:\n> On 2018-12-08 15:23:19 -0800, Rob Sargent wrote:\n>>\n>>> On Dec 8, 2018, at 3:12 PM, Andres Freund <[email protected]> wrote:\n>>>\n>>> On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:\n>>>> On RDS PostgreSQL, the default is 25% of your server memory. This seems\n>>>> to be pretty widely accepted as a good starting point on PostgreSQL.\n>>> FWIW, I think it's widely cited, but also bad advice. 25% for a OLTP\n>>> workload on a 1TB machine with a database size above 25% is a terrible\n>>> idea.\n>>>\n>> Sorry, could you please expand “database size above 25%”? 25% of what?\n> Memory available to postgres (i.e. 100% of the server's memory on a\n> server dedicated to postgres, less if it's shared duty).\n>\n\n\nI think the best advice these days is that you need to triangulate to \nfind the best setting for shared_buffers. It's very workload dependent, \nand there isn't even a semi-reliable rule of thumb.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 9 Dec 2018 07:51:33 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
},
{
"msg_contents": "\nOn 12/9/18 5:51 AM, Andrew Dunstan wrote:\n>\n> On 12/8/18 6:38 PM, Andres Freund wrote:\n>> On 2018-12-08 15:23:19 -0800, Rob Sargent wrote:\n>>>\n>>>> On Dec 8, 2018, at 3:12 PM, Andres Freund <[email protected]> wrote:\n>>>>\n>>>> On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:\n>>>>> On RDS PostgreSQL, the default is 25% of your server memory. This \n>>>>> seems\n>>>>> to be pretty widely accepted as a good starting point on PostgreSQL.\n>>>> FWIW, I think it's widely cited, but also bad advice. 25% for a OLTP\n>>>> workload on a 1TB machine with a database size above 25% is a terrible\n>>>> idea.\n>>>>\n>>> Sorry, could you please expand “database size above 25%”? 25% of what?\n>> Memory available to postgres (i.e. 100% of the server's memory on a\n>> server dedicated to postgres, less if it's shared duty).\n>>\n>\n>\n> I think the best advice these days is that you need to triangulate to \n> find the best setting for shared_buffers. It's very workload \n> dependent, and there isn't even a semi-reliable rule of thumb.\n\nAny advice, approaches to triangulating shared_buffers you can share \nwould be most helpful\n\n\n\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n\n",
"msg_date": "Sun, 9 Dec 2018 08:20:51 -0700",
"msg_from": "Square Bob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: amazon aroura config - seriously overcommited defaults? (May be\n Off Topic)"
}
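As a concrete starting point for that triangulation, the pg_buffercache contrib extension shows what is actually resident in shared_buffers and how hot it is; watching this alongside hit ratios and latency while varying shared_buffers is one practical approach. A sketch, assuming the default 8 kB block size and that the extension can be installed (a managed service may not allow it):

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Which relations occupy shared_buffers right now, and how often are
-- those buffers being reused?
SELECT c.relname,
       pg_size_pretty(count(*) * 8192) AS buffered,
       round(avg(b.usagecount), 2)     AS avg_usagecount
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY count(*) DESC
LIMIT 20;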
] |
[
{
"msg_contents": "Hi,\nI'm trying to understand why my database consume so much space. I checked\nthe space it consume on disk :\n\n[root@ base]# du -sh * | sort -n\n1.1T 17312\n5.2G pgsql_tmp\n6.3M 1\n6.3M 12865\n6.4M 12870\n119G 17313\n\nmyBIGdb=# select t1.oid,t1.datname AS\ndb_name,pg_size_pretty(pg_database_size(t1.datname)) as db_size from\npg_database t1 order by pg_database_size(t1.datname) desc\nmyBIGdb-# ;\n oid | db_name | db_size\n-------+----------------+---------\n 17312 | myBIGdb | 1054 GB\n 17313| mySmallDB | 118 GB\n 12870 | postgres | 6525 kB\n 1 | template1 | 6417 kB\n 12865 | template0 | 6409 kB\n(5 rows)\n\nHowever, when checking the sizes of my biggest tables (included with\nindexes and toasts) :\nselect a.oid as oid a.relname as table_name,pg_relation_size(a.oid,\n'main')/1024/1024 as main_MB,\n pg_relation_size(a.oid, 'fsm')/1024/1024 as fsm_MB,\n pg_relation_size(a.oid, 'vm')/1024/1024 as vm_MB,\n pg_relation_size(a.oid, 'init')/1024/1024 as init_MB,\n pg_table_size(a.oid)/1024/1024 AS relation_size_mb,\n pg_indexes_size(a.oid)/1024/1024 as indexes_MB,\n pg_total_relation_size(a.oid)/1024/1024 as total_size_MB\n from pg_class a where relkind in ('r','t') order by\nrelation_size_mb desc,total_size_MB desc limit 10;\n\noid | table_name | main_mb | fsm_mb | vm_mb | init_mb |\nrelation_size_mb | indexes_mb | total_size_mb\n------+-----------------------------+---------+--------+-------+---------+------------------+------------+---------------\n*17610 *| table_1 | 1 | 0 | 0 | 0\n| 115306 | 0 | 115306\n17614 | *pg_toast_17610 *| 114025 | 28 | 0 | 0\n| 114053 | 1250 | 115304\n*17315 *| table_2 | 166 | 0 | 0 | 0\n| 2414 | 18 | 2432\n17321 | *pg_toast_17315 *| 2222 | 0 | 0 | 0\n| 2223 | 24 | 2247\n*17540* | table_3 | 1016 | 0 | 0 | 0\n| 1368 | 1606 | 2975\n17634 | table_4 | 628 | 0 | 0 | 0 |\n 677 | 261 | 938\n17402 | table_5 | 623 | 0 | 0 | 0 |\n 623 | 419 | 1043\n17648 | table_5 | 393 | 0 | 0 | 0 |\n 393 | 341 | 735\n17548 | *pg_toast_17540 *| 347 | 0 | 0 | 0\n| 347 | 4 | 351\n17835 | table 6 | 109 | 0 | 0 | 0 |\n 109 | 71 | 181\n\nAs you can see , the sum of the biggest tables is under 200G. In addition,\nI know that on that database there were some vacuum full operations that\nfailed. So is there an option of orphans files in case vacuum full failed ?\nIn addition, what else would you recommend to check to understand why the\ndatabase consume so much space ?\n\nThanks .\n\nHi,I'm trying to understand why my database consume so much space. 
I checked the space it consume on disk : [root@ base]# du -sh * | sort -n1.1T 173125.2G pgsql_tmp6.3M 16.3M 128656.4M 12870119G 17313myBIGdb=# select t1.oid,t1.datname AS db_name,pg_size_pretty(pg_database_size(t1.datname)) as db_size from pg_database t1 order by pg_database_size(t1.datname) descmyBIGdb-# ; oid | db_name | db_size-------+----------------+--------- 17312 | myBIGdb | 1054 GB 17313| mySmallDB | 118 GB 12870 | postgres | 6525 kB 1 | template1 | 6417 kB 12865 | template0 | 6409 kB(5 rows)However, when checking the sizes of my biggest tables (included with indexes and toasts) : select a.oid as oid a.relname as table_name,pg_relation_size(a.oid, 'main')/1024/1024 as main_MB, pg_relation_size(a.oid, 'fsm')/1024/1024 as fsm_MB, pg_relation_size(a.oid, 'vm')/1024/1024 as vm_MB, pg_relation_size(a.oid, 'init')/1024/1024 as init_MB, pg_table_size(a.oid)/1024/1024 AS relation_size_mb, pg_indexes_size(a.oid)/1024/1024 as indexes_MB, pg_total_relation_size(a.oid)/1024/1024 as total_size_MB from pg_class a where relkind in ('r','t') order by relation_size_mb desc,total_size_MB desc limit 10;oid | table_name | main_mb | fsm_mb | vm_mb | init_mb | relation_size_mb | indexes_mb | total_size_mb------+-----------------------------+---------+--------+-------+---------+------------------+------------+---------------17610 | table_1 | 1 | 0 | 0 | 0 | 115306 | 0 | 11530617614 | pg_toast_17610 | 114025 | 28 | 0 | 0 | 114053 | 1250 | 11530417315 | table_2 | 166 | 0 | 0 | 0 | 2414 | 18 | 243217321 | pg_toast_17315 | 2222 | 0 | 0 | 0 | 2223 | 24 | 224717540 | table_3 | 1016 | 0 | 0 | 0 | 1368 | 1606 | 297517634 | table_4 | 628 | 0 | 0 | 0 | 677 | 261 | 93817402 | table_5 | 623 | 0 | 0 | 0 | 623 | 419 | 104317648 | table_5 | 393 | 0 | 0 | 0 | 393 | 341 | 73517548 | pg_toast_17540 | 347 | 0 | 0 | 0 | 347 | 4 | 35117835 | table 6 | 109 | 0 | 0 | 0 | 109 | 71 | 181As you can see , the sum of the biggest tables is under 200G. In addition, I know that on that database there were some vacuum full operations that failed. So is there an option of orphans files in case vacuum full failed ? In addition, what else would you recommend to check to understand why the database consume so much space ?Thanks .",
"msg_date": "Sun, 9 Dec 2018 17:18:55 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database size 1T but unclear why"
},
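A quick cross-check before digging into the filesystem is to sum every table together with its indexes and TOAST data and compare that against pg_database_size(); a large gap is what points at files that no longer belong to any live relation, such as leftovers from an interrupted VACUUM FULL. A minimal sketch, run while connected to the big database:

SELECT pg_size_pretty(sum(pg_total_relation_size(oid)))     AS tables_indexes_toast,
       pg_size_pretty(pg_database_size(current_database())) AS whole_database
FROM pg_class
WHERE relkind IN ('r', 'm');  -- tables and materialized views; their indexes
                              -- and TOAST are counted by pg_total_relation_size()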
{
"msg_contents": "On Sun, Dec 9, 2018 at 10:19 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hi,\n> I'm trying to understand why my database consume so much space. I checked\n> the space it consume on disk :\n>\n>\nHave you tried running pg_repack? (It is an extension.)\n\nOn Sun, Dec 9, 2018 at 10:19 AM Mariel Cherkassky <[email protected]> wrote:Hi,I'm trying to understand why my database consume so much space. I checked the space it consume on disk : Have you tried running pg_repack? (It is an extension.)",
"msg_date": "Sun, 9 Dec 2018 10:42:40 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size 1T but unclear why"
},
{
"msg_contents": "On Sun, Dec 09, 2018 at 05:18:55PM +0200, Mariel Cherkassky wrote:\n> I'm trying to understand why my database consume so much space. I checked\n> the space it consume on disk :\n\nThis seems to be essentially the same question you asked last month, so should\neither continue the existing thread or link to it. I went to the effort to\nlook it up:\nhttps://www.postgresql.org/message-id/flat/CA%2Bt6e1mtdVct%2BCn%3Dqs%3Dq%3DLLL_yKSssO6dxiZk%2Bb16xq4ccvWvw%40mail.gmail.com\n\n> [root@ base]# du -sh * | sort -n\n> 1.1T 17312\n> 5.2G pgsql_tmp\n> 6.3M 1\n> 6.3M 12865\n> 6.4M 12870\n> 119G 17313\n\ndu -h shouldn't be passed to sort -n. \nTo get useful, sorted output, use du -m.\n\n> However, when checking the sizes of my biggest tables (included with\n> indexes and toasts) :\n> select a.oid as oid a.relname as table_name,pg_relation_size(a.oid,\n> 'main')/1024/1024 as main_MB,\n> pg_relation_size(a.oid, 'fsm')/1024/1024 as fsm_MB,\n> pg_relation_size(a.oid, 'vm')/1024/1024 as vm_MB,\n> pg_relation_size(a.oid, 'init')/1024/1024 as init_MB,\n> pg_table_size(a.oid)/1024/1024 AS relation_size_mb,\n> pg_indexes_size(a.oid)/1024/1024 as indexes_MB,\n> pg_total_relation_size(a.oid)/1024/1024 as total_size_MB\n> from pg_class a where relkind in ('r','t') order by\n> relation_size_mb desc,total_size_MB desc limit 10;\n\nWhy condition on relkind ? It's possible an index or materialized view is huge.\nOther \"kind\"s may be tiny...but no reason not to check. Why not sort by\npg_total_relation_size() ? That would show a bloated index, but I think your\ncurrent query could miss it, if it wasn't also in the top 10 largest tables.\n\n> So is there an option of orphans files in case vacuum full failed ?\n\nAndrew answered here:\nhttps://www.postgresql.org/message-id/87pnvl2gki.fsf%40news-spur.riddles.org.uk\n\n> In addition, what else would you recommend to check to understand why the\n> database consume so much space ?\n\nYou can run: du --max=3 -mx ..../base/17312 |sort -nr |head\nAnd: find ..../base/17312 -printf '%s %p\\n' |sort -nr |head\n\nThat works for anything, not just postgres.\n\nAs andrew suggested, you should look for files which have no associated\nfilenode. You should use pg_relation_filenode(pg_class.oid), or maybe \npg_filenode_relation(tablespace oid, filenode oid)\nhttps://www.postgresql.org/docs/current/functions-admin.html\n\nJustin\n\n",
"msg_date": "Sun, 9 Dec 2018 10:01:08 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size 1T but unclear why"
},
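The orphan-file check referred to above can be done from SQL on a self-managed instance. This is only a sketch under several assumptions (superuser access, relations in the database's default tablespace, the PostgreSQL 10 signatures of pg_ls_dir, pg_stat_file and pg_filenode_relation); files belonging to an in-progress CREATE TABLE or another session's temporary relations can show up as false positives, so treat hits as candidates rather than certainties:

-- List files in the current database's directory whose filenode does not
-- map back to any relation. Segment (".1", ".2") and fork ("_fsm", "_vm")
-- suffixes are stripped before the lookup; tablespace 0 means the default.
SELECT f.filename,
       (pg_stat_file(d.path || '/' || f.filename)).size AS bytes
FROM (SELECT 'base/' || oid::text AS path
        FROM pg_database
       WHERE datname = current_database()) AS d,
     pg_ls_dir(d.path) AS f(filename)
WHERE f.filename ~ '^[0-9]+'
  AND pg_filenode_relation(0, regexp_replace(f.filename, '[_.].*$', '')::oid) IS NULL
ORDER BY bytes DESC;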
{
"msg_contents": "On Sun, Dec 09, 2018 at 10:01:08AM -0600, Justin Pryzby wrote:\n> On Sun, Dec 09, 2018 at 05:18:55PM +0200, Mariel Cherkassky wrote:\n> > I'm trying to understand why my database consume so much space. I checked\n> > the space it consume on disk :\n\nTo find single relations which are using more than 100GB,\nyou could also run:\n|find ..../base/17312 -name '*.[0-9]??'\n\n(technically that should be a regex and not a shell glob but seems to work well\nenough).\n\nJustin\n\n",
"msg_date": "Tue, 11 Dec 2018 12:55:31 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size 1T but unclear why"
}
] |
[
{
"msg_contents": "Hi,\nI have a very strong machine with 64GB of ram and 19 cpu but it seems that\nwhen I'm running the next benchmark test with pg_bench the database is\ncrashing :\n\ncreatedb -U postgres bench\npgbench -i -s 50 -U postgres -d bench\npgbench -U postgres -d bench -c 10 -t 10000\n\noutput :\n\nclient 8 receiving\nFATAL: terminating connection due to administrator command\nclient 8 sending UPDATE pgbench_accounts SET abalance = abalance + -1542\nWHERE aid = 1142155;\nclient 8 could not send UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid;\ninvalid socket: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 50\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 89241/100000\nlatency average = 27.944 ms\ntps = 357.864437 (including connections establishing)\ntps = 357.871594 (excluding connections establishing)\n\n\nit crashes after some time and not immediately.\n\noutput from logs :\n\n2018-12-10 19:11:12 IST 505 LOG: automatic analyze of table\n\"bench.public.pgbench_branches\" system usage: CPU 0.00s/0.00u sec elapsed\n0.01 sec\n2018-12-10 19:11:12 IST 505 LOG: automatic analyze of table\n\"bench.public.pgbench_history\" system usage: CPU 0.00s/0.05u sec elapsed\n0.10 sec\n2018-12-10 19:11:14 IST bench 25045 LOG: duration: 1451.819 ms\nstatement: UPDATE pgbench_branches SET bbalance = bbalance + -4059 WHERE\nbid = 14;\n2018-12-10 19:11:40 IST bench 25049 LOG: duration: 1039.710 ms\nstatement: UPDATE pgbench_tellers SET tbalance = tbalance + 3596 WHERE tid\n= 403;\n2018-12-10 19:11:56 IST 23647 LOG: received fast shutdown request\n2018-12-10 19:11:56 IST 23647 LOG: aborting any active transactions\n2018-12-10 19:11:56 IST bench 25051 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST bench 25049 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST sadas 27765 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST bench 25050 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST 23654 LOG: autovacuum launcher shutting down\n2018-12-10 19:11:56 IST sadas 24821 FATAL: terminating connection due\nto administrator command\n2018-12-10 19:11:56 IST hadr 24814 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST bench 25047 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST bench 25048 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST hadr 24065 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST postgres 24818 FATAL: terminating connection due\nto administrator command\n2018-12-10 19:11:56 IST postgres 24819 FATAL: terminating connection due\nto administrator command\n2018-12-10 19:11:56 IST hadr 24812 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST postgres 24817 FATAL: terminating connection due\nto administrator command\n2018-12-10 19:11:56 IST bench 25046 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST hadr 24813 FATAL: terminating connection due to\nadministrator command\n2018-12-10 19:11:56 IST postgres 24816 FATAL: terminating connection due\nto administrator 
command\n2018-12-10 19:11:56 IST 23651 LOG: shutting down\n2018-12-10 19:11:56 IST 23651 LOG: checkpoint starting: shutdown\nimmediate\n2018-12-10 19:11:59 IST 23651 LOG: checkpoint complete: wrote 69557\nbuffers (4.1%); 0 transaction log file(s) added, 0 removed, 0 recycled;\nwrite=2.800 s, sync=0.045 s, total=2.877 s; sync files=23, longest=0.045 s,\naverage=0.001 s; distance=573364 kB, estimate=573364 kB\n2018-12-10 19:11:59 IST 23647 LOG: database system is shut down\n2018-12-10 19:12:11 IST 2641 LOG: database system was shut down at\n2018-12-10 19:11:59 IST\n2018-12-10 19:12:11 IST 2641 LOG: MultiXact member wraparound\nprotections are now enabled\n2018-12-10 19:12:11 IST 2638 LOG: database system is ready to accept\nconnections\n2018-12-10 19:12:11 IST 2645 LOG: autovacuum launcher started\n2018-12-10 19:12:17 IST 2692 LOG: automatic vacuum of table\n\"bench.public.pgbench_tellers\": index scans: 0\n pages: 0 removed, 13 remain, 0 skipped due to pins, 0 skipped frozen\n\nsome conf parameters :\nlisten_addresses = '*'\nmaintenance_work_mem = 128MB\nwork_mem = 53MB\nshared_buffers = 13411MB\neffective_cache_size = 32278MB\nmax_wal_size = 1440MB\nwal_buffers = 16MB\ncheckpoint_completion_target = 0.9\nstandard_conforming_strings = off\nmax_locks_per_transaction = 5000\nmax_connections = 1200\ncheckpoint_timeout = 30min\nrandom_page_cost = 2.0\n\n\nany idea what can cause it ?\n\nHi,I have a very strong machine with 64GB of ram and 19 cpu but it seems that when I'm running the next benchmark test with pg_bench the database is crashing : createdb -U postgres benchpgbench -i -s 50 -U postgres -d benchpgbench -U postgres -d bench -c 10 -t 10000output : client 8 receivingFATAL: terminating connection due to administrator commandclient 8 sending UPDATE pgbench_accounts SET abalance = abalance + -1542 WHERE aid = 1142155;client 8 could not send UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;invalid socket: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.transaction type: <builtin: TPC-B (sort of)>scaling factor: 50query mode: simplenumber of clients: 10number of threads: 1number of transactions per client: 10000number of transactions actually processed: 89241/100000latency average = 27.944 mstps = 357.864437 (including connections establishing)tps = 357.871594 (excluding connections establishing)it crashes after some time and not immediately.output from logs :2018-12-10 19:11:12 IST 505 LOG: automatic analyze of table \"bench.public.pgbench_branches\" system usage: CPU 0.00s/0.00u sec elapsed 0.01 sec2018-12-10 19:11:12 IST 505 LOG: automatic analyze of table \"bench.public.pgbench_history\" system usage: CPU 0.00s/0.05u sec elapsed 0.10 sec2018-12-10 19:11:14 IST bench 25045 LOG: duration: 1451.819 ms statement: UPDATE pgbench_branches SET bbalance = bbalance + -4059 WHERE bid = 14;2018-12-10 19:11:40 IST bench 25049 LOG: duration: 1039.710 ms statement: UPDATE pgbench_tellers SET tbalance = tbalance + 3596 WHERE tid = 403;2018-12-10 19:11:56 IST 23647 LOG: received fast shutdown request2018-12-10 19:11:56 IST 23647 LOG: aborting any active transactions2018-12-10 19:11:56 IST bench 25051 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST bench 25049 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST sadas 27765 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST bench 25050 FATAL: 
terminating connection due to administrator command2018-12-10 19:11:56 IST 23654 LOG: autovacuum launcher shutting down2018-12-10 19:11:56 IST \n\nsadas 24821 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST hadr 24814 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST bench 25047 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST bench 25048 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST hadr 24065 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST postgres 24818 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST postgres 24819 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST hadr 24812 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST postgres 24817 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST bench 25046 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST hadr 24813 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST postgres 24816 FATAL: terminating connection due to administrator command2018-12-10 19:11:56 IST 23651 LOG: shutting down2018-12-10 19:11:56 IST 23651 LOG: checkpoint starting: shutdown immediate2018-12-10 19:11:59 IST 23651 LOG: checkpoint complete: wrote 69557 buffers (4.1%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=2.800 s, sync=0.045 s, total=2.877 s; sync files=23, longest=0.045 s, average=0.001 s; distance=573364 kB, estimate=573364 kB2018-12-10 19:11:59 IST 23647 LOG: database system is shut down2018-12-10 19:12:11 IST 2641 LOG: database system was shut down at 2018-12-10 19:11:59 IST2018-12-10 19:12:11 IST 2641 LOG: MultiXact member wraparound protections are now enabled2018-12-10 19:12:11 IST 2638 LOG: database system is ready to accept connections2018-12-10 19:12:11 IST 2645 LOG: autovacuum launcher started2018-12-10 19:12:17 IST 2692 LOG: automatic vacuum of table \"bench.public.pgbench_tellers\": index scans: 0 pages: 0 removed, 13 remain, 0 skipped due to pins, 0 skipped frozensome conf parameters : listen_addresses = '*'maintenance_work_mem = 128MBwork_mem = 53MBshared_buffers = 13411MBeffective_cache_size = 32278MBmax_wal_size = 1440MBwal_buffers = 16MBcheckpoint_completion_target = 0.9standard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 1200checkpoint_timeout = 30minrandom_page_cost = 2.0any idea what can cause it ?",
"msg_date": "Mon, 10 Dec 2018 19:18:44 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "database crash during pgbench run"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> 2018-12-10 19:11:56 IST 23647 LOG: received fast shutdown request\n\n> any idea what can cause it ?\n\nSomething sent SIGINT to the postmaster.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 10 Dec 2018 12:44:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database crash during pgbench run"
},
{
"msg_contents": "> > 2018-12-10 19:11:56 IST 23647 LOG: received fast shutdown request\n> > any idea what can cause it ?\n>\n>Something sent SIGINT to the postmaster.\n\nMy money is on the OoM (Out of Memory) killer. The standard PDGD install on CentOS should disable that, but I'm not sure what OS you're on or how PostgreSQL was installed.\n\nGreg Clough.\n\n\n\n\n\n\n\n\n________________________________\n\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\n\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.\n\n",
"msg_date": "Tue, 11 Dec 2018 14:35:20 +0000",
"msg_from": "Greg Clough <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: database crash during pgbench run"
},
{
"msg_contents": "Greg Clough <[email protected]> writes:\n>>> 2018-12-10 19:11:56 IST 23647 LOG: received fast shutdown request\n>>> any idea what can cause it ?\n\n>> Something sent SIGINT to the postmaster.\n\n> My money is on the OoM (Out of Memory) killer.\n\nThat usually uses SIGKILL. If I had to guess, I'd wonder whether the\npostmaster was manually started, and if so whether it was properly\ndissociated from the user's terminal (with nohup or the like).\nIf it wasn't, then a control-C typed at the terminal would SIGINT the\npostmaster as well as whatever it was meant to terminate.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 11 Dec 2018 11:01:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database crash during pgbench run"
},
{
"msg_contents": "On Tue, Dec 11, 2018 at 10:01 AM Tom Lane <[email protected]> wrote:\n>\n> Greg Clough <[email protected]> writes:\n> >>> 2018-12-10 19:11:56 IST 23647 LOG: received fast shutdown request\n> >>> any idea what can cause it ?\n>\n> >> Something sent SIGINT to the postmaster.\n>\n> > My money is on the OoM (Out of Memory) killer.\n>\n> That usually uses SIGKILL. If I had to guess, I'd wonder whether the\n> postmaster was manually started, and if so whether it was properly\n> dissociated from the user's terminal (with nohup or the like).\n> If it wasn't, then a control-C typed at the terminal would SIGINT the\n> postmaster as well as whatever it was meant to terminate.\n\nYeah. To add to this, pgbench runs are extremely unlikely to cause\nthe kind of memory consumption issues that would trigger an OOM. This\nis definitely not a database crash, just some kind of administrative\nproblem. Some things that might be helpful to help figure this out:\n*) What o/s\n*) how was the database installed\n*) how exactly did the database start\n*) are we looking at something exotic here (cloud managed postgres,\nexotic storage, etc)\n\nmerlin\n\n",
"msg_date": "Fri, 21 Dec 2018 07:40:25 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database crash during pgbench run"
}
] |
[
{
"msg_contents": "Hey,\nI installed a new postgres 9.6 on both of my machines. I'm trying to\nmeasure the differences between the performances in each machine but it\nseems that the results arent accurate.\nI did 2 tests :\n\n1)In the first test the scale was set to 100 :\npgbench -i -s 100 -U postgres -d bench -h machine_name\npgbench -U postgres -d bench -h machine_name -j 2 -c 16 -T 300\nRUN TPS - machine1 TPS-machine2\n1 697 555\n2 732 861\n3 784 842\n\n\n2)In this test the scale was set to 10000 :\npgbench -i -s 10000 -U postgres -d bench -h machine_name\npgbench -U postgres -d bench --progress=30 -h machine_name -j 2 -c 16 -T\n300\nRUN TPS-MACHINE1 TPS-MACHINE2\n1 103 60\n2 63 66\n3 74 83\n4 56 61\n5 75 53\n6 73 60\n7 62 53\n\nIn both cases after the initalization I restarted the database and cleared\nthe cashe(echo 1 > /proc/sys/vm/drop_caches) one time. During all the runs\nI didnt shutdown the machine.\n\nNow, I was hopping the the tps will be almost the same in each machine for\nall the runs. In other words, I wanted to see that the tps in machine1\nduring all the tps are almost the same but I see that the values arent\naccurate.\n\nAny idea what might cause the differences in every run ?\n\nHey,I installed a new postgres 9.6 on both of my machines. I'm trying to measure the differences between the performances in each machine but it seems that the results arent accurate.I did 2 tests : 1)In the first test the scale was set to 100 :pgbench -i -s 100 -U postgres -d bench -h machine_namepgbench -U postgres -d bench -h machine_name -j 2 -c 16 -T 300\n\n\nRUN\n TPS - machine1\nTPS-machine2\n\n\n1\n697\n555\n\n\n2\n732\n861\n\n\n3\n784\n842\n\n\n2)In this test the scale was set to 10000 : pgbench -i -s 10000 -U postgres -d bench -h machine_namepgbench -U postgres -d bench --progress=30 -h machine_name -j 2 -c 16 -T 300 \n\n\nRUN\nTPS-MACHINE1 \nTPS-MACHINE2\n\n\n1\n103\n60\n\n\n2\n63\n66\n\n\n3\n74\n83\n\n\n4\n56\n61\n\n\n5\n75\n53\n\n\n6\n73\n60\n\n\n7\n62\n53\n\nIn both cases after the initalization I restarted the database and cleared the cashe(echo 1 > /proc/sys/vm/drop_caches) one time. During all the runs I didnt shutdown the machine.Now, I was hopping the the tps will be almost the same in each machine for all the runs. In other words, I wanted to see that the tps in machine1 during all the tps are almost the same but I see that the values arent accurate.Any idea what might cause the differences in every run ?",
"msg_date": "Wed, 12 Dec 2018 14:53:35 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench results arent accurate"
},
{
"msg_contents": "> I installed a new postgres 9.6 on both of my machines.\r\n\r\nWhere is your storage? Is it local, or on a SAN? A SAN will definitely have a cache, so possibly there is another layer of cache that you’re not accounting for.\r\n\r\nGreg Clough.\r\n\r\n________________________________\r\n\r\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\r\n\r\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.\r\n\n\n\n\n\n\n\n\n\n> I installed a new postgres 9.6 on both of my machines.\n \nWhere is your storage? Is it local, or on a SAN? A SAN will definitely have a cache, so possibly there is another layer of cache that you’re not accounting for.\n \nGreg Clough.\n\n\n\n\r\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and\r\n other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\n\r\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.",
"msg_date": "Thu, 13 Dec 2018 13:05:18 +0000",
"msg_from": "Greg Clough <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pgbench results arent accurate"
},
{
"msg_contents": "If you have not amended any Postgres config parameters, then you'll get \ncheckpoints approx every 5 min or so. Thus using a Pgbench run time of \n5min is going sometimes miss/sometimes hit a checkpoint in progress - \nwhich will hugely impact test results.\n\nI tend to do Pgbench runs of about 2x checkpoint_timeout - (i.e 10 min \nfor default configurations). Also for increased repeatability, I do a \nmanually triggered checkpoint immediately before each run.\n\nregards\n\nMark\n\nOn 13/12/18 1:53 AM, Mariel Cherkassky wrote:\n> Hey,\n> I installed a new postgres 9.6 on both of my machines. I'm trying to \n> measure the differences between the performances in each machine but \n> it seems that the results arent accurate.\n> I did 2 tests :\n>\n> 1)In the first test the scale was set to 100 :\n> pgbench -i -s 100 -U postgres -d bench -h machine_name\n> pgbench -U postgres -d bench -h machine_name -j 2 -c 16 -T 300\n> RUN \t TPS - machine1 \tTPS-machine2\n> 1 \t697 \t555\n> 2 \t732 \t861\n> 3 \t784 \t842\n>\n> \t\n> \t\n>\n>\n> 2)In this test the scale was set to 10000 :\n> pgbench -i -s 10000 -U postgres -d bench -h machine_name\n> pgbench -U postgres -d bench --progress=30 -h machine_name -j 2 -c 16 \n> -T 300\n> RUN \tTPS-MACHINE1 \tTPS-MACHINE2\n> 1 \t103 \t60\n> 2 \t63 \t66\n> 3 \t74 \t83\n> 4 \t56 \t61\n> 5 \t75 \t53\n> 6 \t73 \t60\n> 7 \t62 \t53\n>\n>\n> In both cases after the initalization I restarted the database and \n> cleared the cashe(echo 1 > /proc/sys/vm/drop_caches) one time. During \n> all the runs I didnt shutdown the machine.\n>\n> Now, I was hopping the the tps will be almost the same in each machine \n> for all the runs. In other words, I wanted to see that the tps in \n> machine1 during all the tps are almost the same but I see that the \n> values arent accurate.\n>\n> Any idea what might cause the differences in every run ?\n\n",
"msg_date": "Sat, 15 Dec 2018 12:46:20 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results arent accurate"
},
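Both suggestions above are easy to instrument with plain SQL against the standard pg_stat_bgwriter view: force a checkpoint immediately before each run, snapshot the checkpoint counters, and take a second snapshot afterwards to see whether a timed or requested checkpoint landed inside the measurement window. A minimal sketch:

-- Immediately before each pgbench run:
CHECKPOINT;
SELECT now() AS taken_at,
       checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint
FROM pg_stat_bgwriter;

-- ... run pgbench ...
-- Repeat the SELECT afterwards and diff the counters; any increase means a
-- checkpoint ran during the test and the TPS figures reflect it.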
{
"msg_contents": "As Greg suggested, update you all that each vm has its own dedicated esx.\nEvery esx has it`s own local disks.\nI run it one time on two different servers that has the same hardware and\nsame postgresql db (version and conf). The results :\npgbench -i -s 6 pgbench -p 5432 -U postgres\n pgbench -c 16 -j 4 -T 5 -U postgres pgbench\nMACHINE 1\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 6\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 4\nduration: 5 s\nnumber of transactions actually processed: 669\nlatency average = 122.633 ms\ntps = 130.470828 (including connections establishing)\ntps = 130.620286 (excluding connections establishing)\n\nMACHINE 2\n\npgbench -c 16 -j 4 -T 600 -U postgres -p 5433 pgbench\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 6\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 4\nduration: 600 s\nnumber of transactions actually processed: 2393723\nlatency average = 4.011 ms\ntps = 3989.437514 (including connections establishing)\ntps = 3989.473036 (excluding connections establishing)\n\nany idea what can cause such a difference ? Both of the machines have\n20core and 65GB of ram.\n\nבתאריך יום ה׳, 13 בדצמ׳ 2018 ב-15:54 מאת Mariel Cherkassky <\[email protected]>:\n\n> Ok, I'll do that. Thanks .\n>\n> בתאריך יום ה׳, 13 בדצמ׳ 2018 ב-15:54 מאת Greg Clough <\n> [email protected]>:\n>\n>> Hmmm... sounds like you’ve got most of it covered. It may be a good idea\n>> to send that last message back to the list, as maybe others will have\n>> better ideas.\n>>\n>>\n>>\n>> Greg.\n>>\n>>\n>>\n>> *From:* Mariel Cherkassky <[email protected]>\n>> *Sent:* Thursday, December 13, 2018 1:45 PM\n>> *To:* Greg Clough <[email protected]>\n>> *Subject:* Re: pgbench results arent accurate\n>>\n>>\n>>\n>> Both of the machines are the only vms in a dedicated esx for each one.\n>> Each esx has local disks.\n>>\n>>\n>>\n>> On Thu, Dec 13, 2018, 3:05 PM Greg Clough <[email protected]\n>> wrote:\n>>\n>> > I installed a new postgres 9.6 on both of my machines.\n>>\n>>\n>>\n>> Where is your storage? Is it local, or on a SAN? A SAN will definitely\n>> have a cache, so possibly there is another layer of cache that you’re not\n>> accounting for.\n>>\n>>\n>>\n>> Greg Clough.\n>>\n>>\n>> ------------------------------\n>>\n>>\n>> This e-mail, including accompanying communications and attachments, is\n>> strictly confidential and only for the intended recipient. Any retention,\n>> use or disclosure not expressly authorised by IHSMarkit is prohibited. This\n>> email is subject to all waivers and other terms at the following link:\n>> https://ihsmarkit.com/Legal/EmailDisclaimer.html\n>>\n>> Please visit www.ihsmarkit.com/about/contact-us.html for contact\n>> information on our offices worldwide.\n>>\n>>\n>> ------------------------------\n>>\n>> This e-mail, including accompanying communications and attachments, is\n>> strictly confidential and only for the intended recipient. Any retention,\n>> use or disclosure not expressly authorised by IHSMarkit is prohibited. This\n>> email is subject to all waivers and other terms at the following link:\n>> https://ihsmarkit.com/Legal/EmailDisclaimer.html\n>>\n>> Please visit www.ihsmarkit.com/about/contact-us.html for contact\n>> information on our offices worldwide.\n>>\n>\n\nAs Greg suggested, update you all that each vm has its own dedicated esx. 
Every esx has it`s own local disks.I run it one time on two different servers that has the same hardware and same postgresql db (version and conf). The results : pgbench -i -s 6 pgbench -p 5432 -U postgres pgbench -c 16 -j 4 -T 5 -U postgres pgbenchMACHINE 1starting vacuum...end.transaction type: <builtin: TPC-B (sort of)>scaling factor: 6query mode: simplenumber of clients: 16number of threads: 4duration: 5 snumber of transactions actually processed: 669latency average = 122.633 mstps = 130.470828 (including connections establishing)tps = 130.620286 (excluding connections establishing)MACHINE 2pgbench -c 16 -j 4 -T 600 -U postgres -p 5433 pgbenchstarting vacuum...end.transaction type: <builtin: TPC-B (sort of)>scaling factor: 6query mode: simplenumber of clients: 16number of threads: 4duration: 600 snumber of transactions actually processed: 2393723latency average = 4.011 mstps = 3989.437514 (including connections establishing)tps = 3989.473036 (excluding connections establishing)any idea what can cause such a difference ? Both of the machines have 20core and 65GB of ram.בתאריך יום ה׳, 13 בדצמ׳ 2018 ב-15:54 מאת Mariel Cherkassky <[email protected]>:Ok, I'll do that. Thanks .בתאריך יום ה׳, 13 בדצמ׳ 2018 ב-15:54 מאת Greg Clough <[email protected]>:\n\n\nHmmm... sounds like you’ve got most of it covered. It may be a good idea to send that last message back to the list, as maybe others will have better ideas.\n \nGreg. \n \nFrom: Mariel Cherkassky <[email protected]>\n\nSent: Thursday, December 13, 2018 1:45 PM\nTo: Greg Clough <[email protected]>\nSubject: Re: pgbench results arent accurate\n \n\nBoth of the machines are the only vms in a dedicated esx for each one. Each esx has local disks.\n\n \n\n\nOn Thu, Dec 13, 2018, 3:05 PM Greg Clough <[email protected] wrote:\n\n\n\n\n> I installed a new postgres 9.6 on both of my machines.\n \nWhere is your storage? Is it local, or on a SAN? A SAN will definitely have a cache, so possibly there is another layer of cache that you’re not accounting\n for.\n \nGreg Clough.\n\n \n\n\n\n\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and\n other terms at the following link: \nhttps://ihsmarkit.com/Legal/EmailDisclaimer.html\n\nPlease visit \nwww.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.\n\n\n\n\n\n\n\nThis e-mail, including accompanying communications and attachments, is strictly confidential and only for the intended recipient. Any retention, use or disclosure not expressly authorised by IHSMarkit is prohibited. This email is subject to all waivers and\n other terms at the following link: https://ihsmarkit.com/Legal/EmailDisclaimer.html\n\nPlease visit www.ihsmarkit.com/about/contact-us.html for contact information on our offices worldwide.",
"msg_date": "Sun, 16 Dec 2018 13:58:45 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench results arent accurate"
},
{
"msg_contents": "Hi, I can see two issues making you get variable results:\n\n1/ Number of clients > scale factor\n\nUsing -c16 and -s 6 means you are largely benchmarking lock contention \nfor a row in the branches table (it has 6 rows in your case). So \nrandomness in *which* rows each client tries to lock will make for \nunwanted variation.\n\n\n2/ Short run times\n\nThat 1st run is 5s duration. This will be massively influenced by the \nabove point about randomness for locking a branches row.\n\n\nI'd recommend:\n\n- always run at least -T600\n\n- use -s of at least 1.5x your largest -c setting (I usually use -s 100 \nfor testing 1-32 clients).\n\nregards\n\nMark\n\nOn 17/12/18 12:58 AM, Mariel Cherkassky wrote:\n> As Greg suggested, update you all that each vm has its own dedicated \n> esx. Every esx has it`s own local disks.\n> I run it one time on two different servers that has the same hardware \n> and same postgresql db (version and conf). The results :\n> pgbench -i -s 6 pgbench -p 5432 -U postgres\n> pgbench -c 16 -j 4 -T 5 -U postgres pgbench\n> MACHINE 1\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 6\n> query mode: simple\n> number of clients: 16\n> number of threads: 4\n> duration: 5 s\n> number of transactions actually processed: 669\n> latency average = 122.633 ms\n> tps = 130.470828 (including connections establishing)\n> tps = 130.620286 (excluding connections establishing)\n>\n> MACHINE 2\n>\n> pgbench -c 16 -j 4 -T 600 -U postgres -p 5433 pgbench\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 6\n> query mode: simple\n> number of clients: 16\n> number of threads: 4\n> duration: 600 s\n> number of transactions actually processed: 2393723\n> latency average = 4.011 ms\n> tps = 3989.437514 (including connections establishing)\n> tps = 3989.473036 (excluding connections establishing)\n>\n> any idea what can cause such a difference ? Both of the machines have \n> 20core and 65GB of ram.\n>\n> בתאריך יום ה׳, 13 בדצמ׳ 2018 ב-15:54 מאת Mariel Cherkassky \n> <[email protected] <mailto:[email protected]>>:\n>\n> Ok, I'll do that. Thanks .\n>\n> בתאריך יום ה׳, 13 בדצמ׳ 2018 ב-15:54 מאת Greg Clough\n> <[email protected] <mailto:[email protected]>>:\n>\n> Hmmm... sounds like you’ve got most of it covered. It may be\n> a good idea to send that last message back to the list, as\n> maybe others will have better ideas.\n>\n> Greg.\n>\n> *From:* Mariel Cherkassky <[email protected]\n> <mailto:[email protected]>>\n> *Sent:* Thursday, December 13, 2018 1:45 PM\n> *To:* Greg Clough <[email protected]\n> <mailto:[email protected]>>\n> *Subject:* Re: pgbench results arent accurate\n>\n> Both of the machines are the only vms in a dedicated esx for\n> each one. Each esx has local disks.\n>\n> On Thu, Dec 13, 2018, 3:05 PM Greg Clough\n> <[email protected] <mailto:[email protected]>\n> wrote:\n>\n> > I installed a new postgres 9.6 on both of my machines.\n>\n> Where is your storage? Is it local, or on a SAN? A SAN\n> will definitely have a cache, so possibly there is another\n> layer of cache that you’re not accounting for.\n>\n> Greg Clough.\n>\n> ------------------------------------------------------------------------\n>\n>\n> This e-mail, including accompanying communications and\n> attachments, is strictly confidential and only for the\n> intended recipient. Any retention, use or disclosure not\n> expressly authorised by IHSMarkit is prohibited. 
This\n> email is subject to all waivers and other terms at the\n> following link:\n> https://ihsmarkit.com/Legal/EmailDisclaimer.html\n>\n> Please visit www.ihsmarkit.com/about/contact-us.html\n> <http://www.ihsmarkit.com/about/contact-us.html> for\n> contact information on our offices worldwide.\n>\n>\n> ------------------------------------------------------------------------\n>\n> This e-mail, including accompanying communications and\n> attachments, is strictly confidential and only for the\n> intended recipient. Any retention, use or disclosure not\n> expressly authorised by IHSMarkit is prohibited. This email is\n> subject to all waivers and other terms at the following link:\n> https://ihsmarkit.com/Legal/EmailDisclaimer.html\n>\n> Please visit www.ihsmarkit.com/about/contact-us.html\n> <http://www.ihsmarkit.com/about/contact-us.html> for contact\n> information on our offices worldwide.\n>\n\n",
"msg_date": "Tue, 18 Dec 2018 16:10:50 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results arent accurate"
},
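To actually watch the row-lock contention Mark describes while a small-scale pgbench run is in progress, a query along these lines can be run from a second session (pg_stat_activity has the wait_event columns from 9.6 on); this is only an illustrative sketch, not something taken from the thread:

-- Count sessions currently waiting on heavyweight locks (e.g. the
-- pgbench_branches row that every TPC-B transaction updates).
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;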
{
"msg_contents": "On Wed, Dec 12, 2018 at 6:54 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hey,\n> I installed a new postgres 9.6 on both of my machines. I'm trying to\n> measure the differences between the performances in each machine but it\n> seems that the results arent accurate.\n> I did 2 tests :\n>\n\nBetter phrased, I'd say the results aren't _stable_ -- 'inaccurate'\nsuggests that pgbench is giving erroneous results; you've provided no\nevidence of that.\n\nStorage performance can seem random; there are numerous complex processes\nand caching that are involved between the software layer and the storage.\nSome are within the database, some are within the underlying operating\nsystem, and some are within the storage itself. Spinning media is also\nnotoriously capricious, various hard to control for factors (such as where\nthe data precisely exists on the platter) can influence data seek and fetch\ntimes.\n\nI think we can look ahead to a not too distant future where storage\nperformance will be less important with regards to typical database\nperformance than it is today. Clever people that are willing and able to\nbuy appropriate hardware already live in this world essentially, but the\nenterprise storage industry seems strongly inclined to postpone this day of\nreckoning as long as possible for obviously selfish reasons.\n\n\nmerlin\n\n>\n\nOn Wed, Dec 12, 2018 at 6:54 AM Mariel Cherkassky <[email protected]> wrote:Hey,I installed a new postgres 9.6 on both of my machines. I'm trying to measure the differences between the performances in each machine but it seems that the results arent accurate.I did 2 tests : Better phrased, I'd say the results aren't _stable_ -- 'inaccurate' suggests that pgbench is giving erroneous results; you've provided no evidence of that.Storage performance can seem random; there are numerous complex processes and caching that are involved between the software layer and the storage. Some are within the database, some are within the underlying operating system, and some are within the storage itself. Spinning media is also notoriously capricious, various hard to control for factors (such as where the data precisely exists on the platter) can influence data seek and fetch times.I think we can look ahead to a not too distant future where storage performance will be less important with regards to typical database performance than it is today. Clever people that are willing and able to buy appropriate hardware already live in this world essentially, but the enterprise storage industry seems strongly inclined to postpone this day of reckoning as long as possible for obviously selfish reasons.merlin",
"msg_date": "Thu, 20 Dec 2018 10:46:42 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results arent accurate"
}
] |
[
{
"msg_contents": "I want to clean a large log table by chunks. I write such a query:\n\ndelete from categorization.log\nwhere ctid in (\n select ctid from categorization.log\n where timestamp < now() - interval '2 month'\n limit 1000\n)\n\nBut I am getting the following weird plan:\n\n[Plan 1]\nDelete on log (cost=74988058.17..77101421.77 rows=211334860 width=36)\n -> Merge Semi Join (cost=74988058.17..77101421.77 rows=211334860\nwidth=36)\n Merge Cond: (log.ctid = \"ANY_subquery\".ctid)\n -> Sort (cost=74987967.33..76044641.63 rows=422669720 width=6)\n Sort Key: log.ctid\n -> Seq Scan on log (cost=0.00..8651368.20 rows=422669720\nwidth=6)\n -> Sort (cost=90.83..93.33 rows=1000 width=36)\n Sort Key: \"ANY_subquery\".ctid\n -> Subquery Scan on \"ANY_subquery\" (cost=0.00..41.00\nrows=1000 width=36)\n -> Limit (cost=0.00..31.00 rows=1000 width=6)\n -> Seq Scan on log log_1\n(cost=0.00..11821391.10 rows=381284367 width=6)\n Filter: (\"timestamp\" < (now() - '2\nmons'::interval))\n\nAnd it takes infinity to complete (with any number in LIMIT from 1 to 1000).\n\nHowever if I extract CTIDs manually:\n\nselect array_agg(ctid) from (\n select ctid from s.log\n where timestamp < now() - interval '2 month'\n limit 5\n) v\n\nand substitute the result inside the DELETE query, it does basic TID scan\nand completes in just milliseconds:\n\nexplain\ndelete from s.log\nwhere ctid =\nany('{\"(3020560,1)\",\"(3020560,2)\",\"(3020560,3)\",\"(3020560,4)\",\"(3020560,5)\"}'::tid[])\n\n[Plan 2]\nDelete on log (cost=0.01..20.06 rows=5 width=6)\n -> Tid Scan on log (cost=0.01..20.06 rows=5 width=6)\n TID Cond: (ctid = ANY\n('{\"(3020560,1)\",\"(3020560,2)\",\"(3020560,3)\",\"(3020560,4)\",\"(3020560,5)\"}'::tid[]))\n\nIn case the table's definition helps:\n\nCREATE TABLE s.log\n(\n article_id bigint NOT NULL,\n topic_id integer NOT NULL,\n weight double precision NOT NULL,\n cat_system character varying(50) NOT NULL,\n lang character varying(5) NOT NULL,\n is_final boolean NOT NULL,\n comment character varying(50),\n \"timestamp\" timestamp without time zone DEFAULT now()\n)\n\nNumber of rows ~ 423M\nn_live_tup = 422426725\nlast_vacuum = 2018-10-22\nPostgres version(): PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4,\n64-bit\n\nWhy does this query want to use Seq Scan and Sort on a 423M rows table?\nHow to fix this (reduce it to Plan 2)?\n\n--\nVlad\n\nI want to clean a large log table by chunks. 
I write such a query:delete from categorization.logwhere ctid in ( select ctid from categorization.log where timestamp < now() - interval '2 month' limit 1000)But I am getting the following weird plan:[Plan 1]Delete on log (cost=74988058.17..77101421.77 rows=211334860 width=36) -> Merge Semi Join (cost=74988058.17..77101421.77 rows=211334860 width=36) Merge Cond: (log.ctid = \"ANY_subquery\".ctid) -> Sort (cost=74987967.33..76044641.63 rows=422669720 width=6) Sort Key: log.ctid -> Seq Scan on log (cost=0.00..8651368.20 rows=422669720 width=6) -> Sort (cost=90.83..93.33 rows=1000 width=36) Sort Key: \"ANY_subquery\".ctid -> Subquery Scan on \"ANY_subquery\" (cost=0.00..41.00 rows=1000 width=36) -> Limit (cost=0.00..31.00 rows=1000 width=6) -> Seq Scan on log log_1 (cost=0.00..11821391.10 rows=381284367 width=6) Filter: (\"timestamp\" < (now() - '2 mons'::interval))And it takes infinity to complete (with any number in LIMIT from 1 to 1000).However if I extract CTIDs manually:select array_agg(ctid) from ( select ctid from s.log where timestamp < now() - interval '2 month' limit 5) vand substitute the result inside the DELETE query, it does basic TID scan and completes in just milliseconds:explaindelete from s.logwhere ctid = any('{\"(3020560,1)\",\"(3020560,2)\",\"(3020560,3)\",\"(3020560,4)\",\"(3020560,5)\"}'::tid[])[Plan 2]Delete on log (cost=0.01..20.06 rows=5 width=6) -> Tid Scan on log (cost=0.01..20.06 rows=5 width=6) TID Cond: (ctid = ANY ('{\"(3020560,1)\",\"(3020560,2)\",\"(3020560,3)\",\"(3020560,4)\",\"(3020560,5)\"}'::tid[]))In case the table's definition helps:CREATE TABLE s.log( article_id bigint NOT NULL, topic_id integer NOT NULL, weight double precision NOT NULL, cat_system character varying(50) NOT NULL, lang character varying(5) NOT NULL, is_final boolean NOT NULL, comment character varying(50), \"timestamp\" timestamp without time zone DEFAULT now())Number of rows ~ 423Mn_live_tup = 422426725last_vacuum = 2018-10-22Postgres version(): PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bitWhy does this query want to use Seq Scan and Sort on a 423M rows table?How to fix this (reduce it to Plan 2)?--Vlad",
"msg_date": "Mon, 17 Dec 2018 17:16:08 -0800",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why Postgres doesn't use TID scan?"
},
{
"msg_contents": "Vladimir Ryabtsev <[email protected]> writes:\n> I want to clean a large log table by chunks. I write such a query:\n> delete from categorization.log\n> where ctid in (\n> select ctid from categorization.log\n> where timestamp < now() - interval '2 month'\n> limit 1000\n> )\n\n> Why does this query want to use Seq Scan and Sort on a 423M rows table?\n\nThere's no support for using ctid as a join key in this way; specifically,\nnodeTidscan.c doesn't have support for being a parameterized inner scan,\nnor does tidpath.c have code to generate such a plan. The header comments\nfor the latter say\n\n * There is currently no special support for joins involving CTID; in\n * particular nothing corresponding to best_inner_indexscan(). Since it's\n * not very useful to store TIDs of one table in another table, there\n * doesn't seem to be enough use-case to justify adding a lot of code\n * for that.\n\nQueries like yours are kinda sorta counterexamples to that, but pretty\nmuch all the ones I've seen seem like crude hacks (and this one is not\nan exception). Writing a bunch of code to support them feels like\nsolving the wrong problem. Admittedly, it's not clear to me what the\nright problem to solve instead would be.\n\n(It's possible that I'm overestimating the amount of new code that would\nbe needed to implement this, however. indxpath.c is pretty huge, but\nthat's mostly because there are so many cases to consider. There'd only\nbe one interesting case for an inner TID scan. Also, this comment is\nancient, predating the current approach with parameterized paths ---\nin fact best_inner_indexscan doesn't exist as such anymore. So maybe\nthat old judgment that it'd take a lot of added code is wrong.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 17 Dec 2018 20:40:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
{
"msg_contents": "I can't believe it.\nI see some recommendations in Internet to do like this (e.g.\nhttps://stackoverflow.com/questions/5170546/how-do-i-delete-a-fixed-number-of-rows-with-sorting-in-postgresql\n).\nDid it really work in 2011? Are you saying they broke it? It's a shame...\n\nAnyway I think the problem is pretty clear: I want to eventually clear the\ntable based on the predicate but I don't want to lock it for a long time.\nThe table does not have a primary key.\nWhat should be a proper solution?\n\n--\nVlad\n\nпн, 17 дек. 2018 г. в 17:40, Tom Lane <[email protected]>:\n\n> Vladimir Ryabtsev <[email protected]> writes:\n> > I want to clean a large log table by chunks. I write such a query:\n> > delete from categorization.log\n> > where ctid in (\n> > select ctid from categorization.log\n> > where timestamp < now() - interval '2 month'\n> > limit 1000\n> > )\n>\n> > Why does this query want to use Seq Scan and Sort on a 423M rows table?\n>\n> There's no support for using ctid as a join key in this way; specifically,\n> nodeTidscan.c doesn't have support for being a parameterized inner scan,\n> nor does tidpath.c have code to generate such a plan. The header comments\n> for the latter say\n>\n> * There is currently no special support for joins involving CTID; in\n> * particular nothing corresponding to best_inner_indexscan(). Since it's\n> * not very useful to store TIDs of one table in another table, there\n> * doesn't seem to be enough use-case to justify adding a lot of code\n> * for that.\n>\n> Queries like yours are kinda sorta counterexamples to that, but pretty\n> much all the ones I've seen seem like crude hacks (and this one is not\n> an exception). Writing a bunch of code to support them feels like\n> solving the wrong problem. Admittedly, it's not clear to me what the\n> right problem to solve instead would be.\n>\n> (It's possible that I'm overestimating the amount of new code that would\n> be needed to implement this, however. indxpath.c is pretty huge, but\n> that's mostly because there are so many cases to consider. There'd only\n> be one interesting case for an inner TID scan. Also, this comment is\n> ancient, predating the current approach with parameterized paths ---\n> in fact best_inner_indexscan doesn't exist as such anymore. So maybe\n> that old judgment that it'd take a lot of added code is wrong.)\n>\n> regards, tom lane\n>\n\nI can't believe it.I see some recommendations in Internet to do like this (e.g. https://stackoverflow.com/questions/5170546/how-do-i-delete-a-fixed-number-of-rows-with-sorting-in-postgresql).Did it really work in 2011? Are you saying they broke it? It's a shame...Anyway I think the problem is pretty clear: I want to eventually clear the table based on the predicate but I don't want to lock it for a long time.The table does not have a primary key.What should be a proper solution?--Vladпн, 17 дек. 2018 г. в 17:40, Tom Lane <[email protected]>:Vladimir Ryabtsev <[email protected]> writes:\n> I want to clean a large log table by chunks. I write such a query:\n> delete from categorization.log\n> where ctid in (\n> select ctid from categorization.log\n> where timestamp < now() - interval '2 month'\n> limit 1000\n> )\n\n> Why does this query want to use Seq Scan and Sort on a 423M rows table?\n\nThere's no support for using ctid as a join key in this way; specifically,\nnodeTidscan.c doesn't have support for being a parameterized inner scan,\nnor does tidpath.c have code to generate such a plan. 
The header comments\nfor the latter say\n\n * There is currently no special support for joins involving CTID; in\n * particular nothing corresponding to best_inner_indexscan(). Since it's\n * not very useful to store TIDs of one table in another table, there\n * doesn't seem to be enough use-case to justify adding a lot of code\n * for that.\n\nQueries like yours are kinda sorta counterexamples to that, but pretty\nmuch all the ones I've seen seem like crude hacks (and this one is not\nan exception). Writing a bunch of code to support them feels like\nsolving the wrong problem. Admittedly, it's not clear to me what the\nright problem to solve instead would be.\n\n(It's possible that I'm overestimating the amount of new code that would\nbe needed to implement this, however. indxpath.c is pretty huge, but\nthat's mostly because there are so many cases to consider. There'd only\nbe one interesting case for an inner TID scan. Also, this comment is\nancient, predating the current approach with parameterized paths ---\nin fact best_inner_indexscan doesn't exist as such anymore. So maybe\nthat old judgment that it'd take a lot of added code is wrong.)\n\n regards, tom lane",
"msg_date": "Mon, 17 Dec 2018 17:54:19 -0800",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
{
"msg_contents": "Vladimir Ryabtsev <[email protected]> writes:\n> I see some recommendations in Internet to do like this (e.g.\n> https://stackoverflow.com/questions/5170546/how-do-i-delete-a-fixed-number-of-rows-with-sorting-in-postgresql\n> ).\n> Did it really work in 2011?\n\nNo, or at least not any better than today. (For context, \"git blame\"\nsays I wrote the comment I just quoted to you in 2005. The feature it\nsays isn't there wasn't there before that, either.)\n\n> Anyway I think the problem is pretty clear: I want to eventually clear the\n> table based on the predicate but I don't want to lock it for a long time.\n\nDELETE doesn't lock the whole table. What problem are you actually\nfacing?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 17 Dec 2018 21:32:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
{
"msg_contents": "OK, good to know.\nI saw some timeout errors in the code writing to the log table during my\nDELETE and decided they are relevant. Probably they had nothing to do with\nmy actions, need to investigate.\nThanks anyway.\n\nBest regards,\nVlad\n\nпн, 17 дек. 2018 г. в 18:32, Tom Lane <[email protected]>:\n\n>\n> DELETE doesn't lock the whole table. What problem are you actually\n> facing?\n>\n>\n\nOK, good to know.I saw some timeout errors in the code writing to the log table during my DELETE and decided they are relevant. Probably they had nothing to do with my actions, need to investigate.Thanks anyway.Best regards,Vladпн, 17 дек. 2018 г. в 18:32, Tom Lane <[email protected]>:\n\nDELETE doesn't lock the whole table. What problem are you actually\nfacing?",
"msg_date": "Mon, 17 Dec 2018 18:40:49 -0800",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
{
"msg_contents": "On 2018-Dec-17, Tom Lane wrote:\n\n> Queries like yours are kinda sorta counterexamples to that, but pretty\n> much all the ones I've seen seem like crude hacks (and this one is not\n> an exception). Writing a bunch of code to support them feels like\n> solving the wrong problem. Admittedly, it's not clear to me what the\n> right problem to solve instead would be.\n\nYeah, over the years I've confronted several times with situations where\na deletion by ctid (and sometimes updates, IIRC) was the most convenient\nway out of. It's not the kind of thing that you'd do with any\nfrequency, just one-offs. It's always been a bit embarrasing that this\ndoesn't \"work properly\". There's always been some way around it, much\nslower and less convenient ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 18 Dec 2018 13:40:04 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
{
"msg_contents": ">>>>> \"Vladimir\" == Vladimir Ryabtsev <[email protected]> writes:\n\n Vladimir> I can't believe it.\n Vladimir> I see some recommendations in Internet to do like this\n\nwell, 90% of what you read on the Internet is wrong.\n\n Vladimir> Did it really work in 2011? Are you saying they broke it?\n Vladimir> It's a shame...\n\nThe method in that SO link does work, it's just slow. The workaround is\nto do it like this instead:\n\ndelete from mytable\n where ctid = any (array(select ctid from mytable\n where ...\n order by ...\n limit 1000));\n\nBut of course that's still an ugly hack.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Wed, 19 Dec 2018 11:41:27 +0000",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
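A minimal sketch of the workaround Andrew describes, written against the table and predicate from the start of this thread (categorization.log and its "timestamp" column); any looping or batching around it is left to the reader:

-- array(...) forces the PG-specific "= ANY (array expression)" form,
-- so the outer DELETE can use a TID scan instead of a merge semi join.
DELETE FROM categorization.log
WHERE ctid = ANY (ARRAY(SELECT ctid
                        FROM categorization.log
                        WHERE "timestamp" < now() - interval '2 month'
                        LIMIT 1000));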
{
"msg_contents": "> The workaround is to do it like this instead:\n\nStrange, I tried to do like this, but the first thing came into my\nmind was array_agg()\nnot array():\n\ndelete from log\nwhere ctid = any(\n select array_agg(ctid) from (\n select ctid from log\n where timestamp < now() at time zone 'pst' - interval '2 month'\n limit 10\n ) v);\n\nThis query complained like this:\n\nERROR: operator does not exist: tid = tid[]\nLINE 2: where ctid = any(\n ^\nHINT: No operator matches the given name and argument type(s). You might\nneed to add explicit type casts.\n\nWhich is strange because both array(select ...) and select array_agg() ...\nreturn the same datatype ctid[].\n\n> But of course that's still an ugly hack.\n\nCome on... Due to declarative nature of SQL developers sometimes need to\nwrite much dirtier and uglier hacks.\nThis one is just a fluffy hacky.\n\n--\nVlad\n\n> The workaround is to do it like this instead:Strange, I tried to do like this, but the first thing came into my mind was array_agg() not array():delete from logwhere ctid = any( select array_agg(ctid) from ( select ctid from log where timestamp < now() at time zone 'pst' - interval '2 month' limit 10 ) v);This query complained like this:ERROR: operator does not exist: tid = tid[]LINE 2: where ctid = any( ^HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.Which is strange because both array(select ...) and select array_agg() ... return the same datatype ctid[].> But of course that's still an ugly hack.Come on... Due to declarative nature of SQL developers sometimes need to write much dirtier and uglier hacks.This one is just a fluffy hacky.--Vlad",
"msg_date": "Wed, 19 Dec 2018 12:22:03 -0800",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
{
"msg_contents": ">>>>> \"Vladimir\" == Vladimir Ryabtsev <[email protected]> writes:\n\n >> The workaround is to do it like this instead:\n\n Vladimir> Strange, I tried to do like this, but the first thing came\n Vladimir> into my mind was array_agg() not array():\n\n Vladimir> delete from log\n Vladimir> where ctid = any(\n Vladimir> select array_agg(ctid) from (\n Vladimir> select ctid from log\n Vladimir> where timestamp < now() at time zone 'pst' - interval '2 month'\n Vladimir> limit 10\n Vladimir> ) v);\n\n Vladimir> This query complained like this:\n\n Vladimir> ERROR: operator does not exist: tid = tid[]\n Vladimir> LINE 2: where ctid = any(\n Vladimir> ^\n Vladimir> HINT: No operator matches the given name and argument\n Vladimir> type(s). You might need to add explicit type casts.\n\n Vladimir> Which is strange because both array(select ...) and select\n Vladimir> array_agg() ... return the same datatype ctid[].\n\nIt's not so strange when you understand what's going on here. The\nfundamental issue is that \"ANY\" has two meanings in PG, one of them\nfollowing the SQL standard and one not:\n\n x <operator> ANY (<subselect>) -- standard\n x <operator> ANY (<expression>) -- PG-specific\n\nIn the first case, the behavior follows the standard, which makes this a\ngeneralization of IN: specifically, in the standard,\n\n x IN (select ...)\n\nis just alternative syntax for\n\n x = ANY (select ...)\n\nObviously in this form, the result of the subselect is expected to be of\nthe same type and degree as \"x\", hence the error since tid and tid[] are\nnot the same type.\n\n(Because this is the standard form, it's the one chosen when the syntax\nis otherwise ambiguous between the two.)\n\nThe form x = ANY (somearray) is a PG extension, but because of the\nambiguity, the array can only be specified by something that doesn't\nparse as a select. So array() works (as does array[] for the commonly\nused case of an explicit list), but if you want to use a select to get\nthe array value, you have to add some kind of syntax that makes it not\nparse as a select, e.g.:\n\n WHERE ctid = ANY ((select array_agg(...) from ...)::tid[])\n\nIn this case the cast forces it to parse as an expression and not a\nsubquery (it's not enough to just use the parens alone, because PG,\nagain unlike the SQL standard, allows any number of excess parens around\na subquery).\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Wed, 19 Dec 2018 22:23:15 +0000",
"msg_from": "Andrew Gierth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
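Applying the cast trick Andrew explains to the array_agg() variant from the earlier message gives roughly the following; the table name comes from the thread and this is only a hedged sketch:

-- The ::tid[] cast makes the parenthesized SELECT parse as an array
-- expression, so "= ANY" takes the PG-specific array form rather than
-- the SQL-standard subquery form.
DELETE FROM categorization.log
WHERE ctid = ANY ((SELECT array_agg(ctid)
                   FROM (SELECT ctid
                         FROM categorization.log
                         WHERE "timestamp" < now() - interval '2 month'
                         LIMIT 1000) v)::tid[]);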
{
"msg_contents": "> The fundamental issue is that \"ANY\" has two meanings in PG, one of them\nfollowing the SQL standard and one not:\n\nOh yes, I was aware about two forms but it did not come into my mind, I was\nthinking I use the same form in both cases since my query returns only one\nrow and column.\nThanks for pointing me into that.\n\n--\nVlad\n\n> The fundamental issue is that \"ANY\" has two meanings in PG, one of them following the SQL standard and one not:Oh yes, I was aware about two forms but it did not come into my mind, I was thinking I use the same form in both cases since my query returns only one row and column.Thanks for pointing me into that.--Vlad",
"msg_date": "Wed, 19 Dec 2018 15:44:58 -0800",
"msg_from": "Vladimir Ryabtsev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
},
{
"msg_contents": "On Wed, Dec 19, 2018 at 6:45 PM Vladimir Ryabtsev <[email protected]>\nwrote:\n\n> > The fundamental issue is that \"ANY\" has two meanings in PG, one of them\n> following the SQL standard and one not:\n>\n> Oh yes, I was aware about two forms but it did not come into my mind, I\n> was thinking I use the same form in both cases since my query returns only\n> one row and column.\n> Thanks for pointing me into that.\n>\n> --\n> Vlad\n>\n\nFor what it is worth, I have found that if I am checking for the presence\nof an object in an array, while this syntax is easy to understand and more\nintuitive to craft:\n\n select\n *\n from\n mytable\n where\n ' test' = ANY (my_varchar_array_column)\n ;\n\nThis syntax is almost always much faster:\n\n select\n *\n from\n mytable\n where\n ARRAY['test'::varchar] <@ my_varchar_array_column\n ;\n\n(Since this is a performance list after all.)\n\nOn Wed, Dec 19, 2018 at 6:45 PM Vladimir Ryabtsev <[email protected]> wrote:> The fundamental issue is that \"ANY\" has two meanings in PG, one of them following the SQL standard and one not:Oh yes, I was aware about two forms but it did not come into my mind, I was thinking I use the same form in both cases since my query returns only one row and column.Thanks for pointing me into that.--VladFor what it is worth, I have found that if I am checking for the presence of an object in an array, while this syntax is easy to understand and more intuitive to craft: select * from mytable where ' test' = ANY (my_varchar_array_column) ;This syntax is almost always much faster: select * from mytable where ARRAY['test'::varchar] <@ my_varchar_array_column ;(Since this is a performance list after all.)",
"msg_date": "Thu, 20 Dec 2018 08:46:04 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Postgres doesn't use TID scan?"
}
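For Rick's containment form to stay fast on a large table it usually needs a GIN index on the array column; the index below is an assumption added for illustration (mytable, my_varchar_array_column and the index name are his placeholder names, not something stated in the thread):

-- The default GIN operator class for arrays supports <@, @> and &&,
-- which is what lets the containment query use an index at all.
CREATE INDEX mytable_arr_gin ON mytable USING gin (my_varchar_array_column);

SELECT *
FROM mytable
WHERE ARRAY['test'::varchar] <@ my_varchar_array_column;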
] |
[
{
"msg_contents": "Wondering if anyone had any thoughts on how to tweak my setup to get it \nto read many files at once instead of one at a time when using file fdw \nand partitions. We have a bunch of data tied up in files (each file > 4M \nrows, 5,000+ files per year) that I would like to be able to query \ndirectly using FDW. The files are genomic VCF format and I find that \nvcf_fdw ( https://github.com/ergo70/vcf_fdw ) works really well to read \nthe data. We only want to be able to search the data as quickly as \npossible, no updates / deletes / ...\n\nI gave an example below of the basic setup and the output of explain \nanalyze. I get the same performance if I setup the table such that the \nthousands of files end up in one non-partitioned table or setup each \nfile as it's own partition of the table.\n\nI have tried increasing ( / decreasing ) the worker threads and workers, \nbut don't see any change in the number of files open at any given time. \nI tried reducing the cost of parallel queries to force them to run, but \ncan't get them to kick in.\n\nAny ideas or anything I can try?\n\nThanks!\n\nPat\n\nPostgreSQL: PostgreSQL 10.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit\nMulticorn: 1.3.5\nVCF_FDW ( https://github.com/ergo70/vcf_fdw ) : 1.0.0\n\n\nCREATE DATABASE variants;\n\nCREATE EXTENSION multicorn;\n\nCREATE SERVER multicorn_vcf FOREIGN DATA WRAPPER multicorn OPTIONS (wrapper 'vcf_fdw.VCFForeignDataWrapper');\n\nCREATE SCHEMA vcf;\n\nCREATE TABLE vcf.variants ( ..., species text, ... ) PARTITION BY LIST ( species );\n\nCREATE FOREIGN TABLE vcf.human ( ... ) SERVER multicorn_vcf OPTIONS (basedir '/path', species 'human', suffix '.vcf.gz');\nALTER TABLE vcf.variants ATTACH PARTITION vcf.human FOR VALUES IN ( 'human' );\n\nCREATE FOREIGN TABLE vcf.dog ( ... ) SERVER multicorn_vcf OPTIONS (basedir '/path', species 'dog', suffix '.vcf.gz');\nALTER TABLE vcf.variants ATTACH PARTITION vcf.dog FOR VALUES IN ( 'dog' );\n\nCREATE FOREIGN TABLE vcf.cat ( ... ) SERVER multicorn_vcf OPTIONS (basedir '/path', species 'cat', suffix '.vcf.gz');\nALTER TABLE vcf.variants ATTACH PARTITION vcf.cat FOR VALUES IN ( 'cat' );\n\n* My real data repeats this 1000+ more times\n\nEXPLAIN ( ANALYZE, BUFFERS ) SELECT * FROM vcf.variants WHERE chrom = '1' AND pos = 10120 LIMIT 1000;\n\nOn my real data I get the following results:\n--------------------------\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=20.00..352020.00 rows=1000 width=347) (actual time=445.548..101709.307 rows=20 loops=1)\n -> Append (cost=20.00..3555200000.00 rows=10100000 width=347) (actual time=445.547..101709.285 rows=20 loops=1)\n -> Foreign Scan on dog (cost=20.00..3520000.00 rows=10000 width=352) (actual time=198.653..198.654 rows=0 loops=1)\n Filter: ((chrom = '1'::text) AND (pos = 10120))\n -> Foreign Scan on cat (cost=20.00..3520000.00 rows=10000 width=352) (actual time=111.840..111.840 rows=0 loops=1)\n Filter: ((chrom = '1'::text) AND (pos = 10120))\n -> Foreign Scan on human (cost=20.00..3520000.00 rows=10000 width=352) (actual time=135.050..138.534 rows=1 loops=1)\n Filter: ((chrom = '1'::text) AND (pos = 10120))\n ... repeats many more times for each partition\n Planning time: 613.815 ms\n Execution time: 101873.880 ms\n(2024 rows)\n\n\n",
"msg_date": "Tue, 18 Dec 2018 20:39:36 -0800",
"msg_from": "Patrick Mulrooney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Increasing parallelism of queries while using file fdw and partitions"
},
{
"msg_contents": "On Tue, Dec 18, 2018 at 08:39:36PM -0800, Patrick Mulrooney wrote:\n> Wondering if anyone had any thoughts on how to tweak my setup to get it to\n> read many files at once instead of one at a time when using file fdw and\n> partitions.\n\nI found this:\n\nhttps://www.postgresql.org/docs/current/parallel-safety.html\n|The following operations are always parallel restricted.\n|Scans of foreign tables, unless the foreign data wrapper has an IsForeignScanParallelSafe API which indicates otherwise.\n\nhttps://github.com/ergo70/vcf_fdw/blob/master/vcf_fdw/__init__.py\n=> has no such API marker, since it's couple years old, same as multicorn.\n\nJustin\n\n",
"msg_date": "Wed, 19 Dec 2018 00:51:49 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing parallelism of queries while using file fdw and\n partitions"
},
{
"msg_contents": "Justin,\n\nThanks for the idea. I pulled down the source for multicorn and added that to it. I do not see parallel queries in the analyze output (unless I force it and then it only gets one worker), but it does look like it is reading more than one file at once if I go with a non-partitioned table that looks at all the files. Not any better if I have the table split up into partitions. \n\nSo it’s better, but still curious if this would work with partitions. \n\nThanks again. \n\nPat\n\n> On Dec 18, 2018, at 22:51, Justin Pryzby <[email protected]> wrote:\n> \n>> On Tue, Dec 18, 2018 at 08:39:36PM -0800, Patrick Mulrooney wrote:\n>> Wondering if anyone had any thoughts on how to tweak my setup to get it to\n>> read many files at once instead of one at a time when using file fdw and\n>> partitions.\n> \n> I found this:\n> \n> https://www.postgresql.org/docs/current/parallel-safety.html\n> |The following operations are always parallel restricted.\n> |Scans of foreign tables, unless the foreign data wrapper has an IsForeignScanParallelSafe API which indicates otherwise.\n> \n> https://github.com/ergo70/vcf_fdw/blob/master/vcf_fdw/__init__.py\n> => has no such API marker, since it's couple years old, same as multicorn.\n> \n> Justin\n\n",
"msg_date": "Tue, 18 Dec 2018 23:36:45 -0800",
"msg_from": "Patrick Mulrooney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Increasing parallelism of queries while using file fdw and\n partitions"
}
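Once the IsForeignScanParallelSafe change is in place, one way to test whether the planner will even consider a parallel plan is to make parallelism artificially cheap for a single session, roughly as below; the settings and values are only illustrative, and note that a Parallel Append over many partitions only arrived in PostgreSQL 11, so on 10.x this mainly helps the single, non-partitioned foreign table case:

-- Session-level settings that make parallel plans as cheap as possible,
-- then re-run the original query to compare plans.
SET max_parallel_workers_per_gather = 8;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM vcf.variants WHERE chrom = '1' AND pos = 10120 LIMIT 1000;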
] |
[
{
"msg_contents": "Hi All,\n\nI am looking into a performance issue and needed your input and thoughts.\n\nWe have table (non-partitioned) of 500Gb with 11 indexes \n\n+--------------+---------------+--------------+-------------+--------------+---------+--------+------------+--------+\n\n| row_estimate | total_bytes | index_bytes | toast_bytes | table_bytes | \ntotal | index | toast | table |\n\n+--------------+---------------+--------------+-------------+--------------+---------+--------+------------+--------+\n\n| 1.28611e+09 | 1400081645568 | 858281418752 | 8192 | 541800218624 |\n1304 GB | 799 GB | 8192 bytes | 505 GB |\n\n+--------------+---------------+--------------+-------------+--------------+---------+--------+------------+--------+\n\n\nApplication runs a simple sql ,\n\nselect distinct testtbl_.id as col_0_0_ from demo.test_table testtbl_ where\ntesttbl_.entity_id='10001' and testtbl_.last_updated>=to_date('22-10-2018',\n'dd-MM-yyyy') and testtbl_.last_updated<to_date('23-10-2018', 'dd-MM-yyyy')\nand testtbl_.quantity_available>0 and testtbl_.src_name='distribute_item'\nand (testtbl_.item not like 'SHIP%') order by testtbl_.id limit 10000;\n\nThe Execution time for the above sql is 17841.467 ms during normal\noperations but when autovacuum runs on table test_table, the same sql took\n1628495.850 ms (from the postgres log). \n\nWe have noticed this increase in execution times for the sqls only when\nautovacuum runs and it runs with prevent wraparound mode. I think during the\nautovacuum process the Buffers: shared hit are increasing causing increase\nin execution time.\n\nI need help with the approach to debug this issue. Is this expected\nbehaviour wherein sql execution timing incease during the autovacuum? If so\n, what is the reason for the same? \n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Tue, 18 Dec 2018 23:04:40 -0700 (MST)",
"msg_from": "anand086 <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL Perfomance during autovacuum"
},
{
"msg_contents": "On Wed, 19 Dec 2018 at 19:04, anand086 <[email protected]> wrote:\n> We have noticed this increase in execution times for the sqls only when\n> autovacuum runs and it runs with prevent wraparound mode. I think during the\n> autovacuum process the Buffers: shared hit are increasing causing increase\n> in execution time.\n>\n> I need help with the approach to debug this issue. Is this expected\n> behaviour wherein sql execution timing incease during the autovacuum? If so\n> , what is the reason for the same?\n\nThis is unsurprising. There are various GUC settings designed to\nthrottle vacuum to help minimise this problem. The auto-vacuum process\nis competing for the same resources as your query is, and is likely\nloading many new buffers, therefore flushing buffers out of cache that\nmight be useful for your query.\n\nShowing the output of:\n\nselect name,setting from pg_Settings where name like '%vacuum%';\n\nmay be of use here.\n\nYou'll particularly want to pay attention to the settings of\nautovacuum_vacuum_cost_delay, autovacuum_vacuum_cost_limit and\nvacuum_cost_limit. The settings of vacuum_cost_page_dirty,\nvacuum_cost_page_hit, vacuum_cost_page_miss matter too, but these are\nless often changed by users.\n\nYou may be able to learn exactly what's going on with the query by doing:\n\nset track_io_timing = on;\nexplain (analyze, buffers, timing) <your query here>\n\nboth during the auto-vacuum run, and at a time when it's not running.\n\nIf the query plans of each match, then pay attention to the number of\nbuffers read and how long they took to read. If you find that these\ndon't explain the variation then something else is at fault, perhaps\nCPU contention, or perhaps swapping due to high memory usage.\n\nIt also seems pretty strange that you should need to use DISTINCT on a\ncolumn that's named \"id\".\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 19 Dec 2018 19:33:41 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Perfomance during autovacuum"
},
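The throttling GUCs David points at can also be overridden per table, which is a common way to keep the vacuum of one very large table from competing too hard with queries; a hedged sketch using the table name from the original query, with purely illustrative values:

-- Storage parameters override the global autovacuum cost settings for
-- this table only; pick values that match the I/O budget available.
ALTER TABLE demo.test_table SET (
    autovacuum_vacuum_cost_delay = 20,
    autovacuum_vacuum_cost_limit = 200
);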
{
"msg_contents": "On Wed, Dec 19, 2018 at 1:04 AM anand086 <[email protected]> wrote:\n\n>\n> The Execution time for the above sql is 17841.467 ms during normal\n> operations but when autovacuum runs on table test_table, the same sql took\n> 1628495.850 ms (from the postgres log).\n>\n> We have noticed this increase in execution times for the sqls only when\n> autovacuum runs and it runs with prevent wraparound mode.\n\n\nSome competition for resource is to be expected with autovacuum, but making\na one-hundred fold difference in run time is rather extreme. I'd suggest\nthat what you have is a locking issue. Something is trying to take a brief\nAccess Exclusive lock on the table. It blocks on the lock held by the\nautovacuum, and then the Access Share lock needed for your query blocks\nbehind that.\n\nNormally an autovacuum will yield the lock when it notices it is blocking\nsomething else, but will not do so for wraparound.\n\nIf you have log_lock_waits turned on, you should see some evidence in the\nlog file if this is the case.\n\nCheers,\n\nJeff\n\nOn Wed, Dec 19, 2018 at 1:04 AM anand086 <[email protected]> wrote:\nThe Execution time for the above sql is 17841.467 ms during normal\noperations but when autovacuum runs on table test_table, the same sql took\n1628495.850 ms (from the postgres log). \n\nWe have noticed this increase in execution times for the sqls only when\nautovacuum runs and it runs with prevent wraparound mode. Some competition for resource is to be expected with autovacuum, but making a one-hundred fold difference in run time is rather extreme. I'd suggest that what you have is a locking issue. Something is trying to take a brief Access Exclusive lock on the table. It blocks on the lock held by the autovacuum, and then the Access Share lock needed for your query blocks behind that.Normally an autovacuum will yield the lock when it notices it is blocking something else, but will not do so for wraparound.If you have log_lock_waits turned on, you should see some evidence in the log file if this is the case.Cheers,Jeff",
"msg_date": "Fri, 21 Dec 2018 19:13:14 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Perfomance during autovacuum"
}
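Besides log_lock_waits, the blocking chain Jeff describes can be checked live with pg_blocking_pids() (available from 9.6 on); a small sketch, not taken from the thread:

-- Show every backend that is waiting on a lock and which PIDs block it.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;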
] |
[
{
"msg_contents": "I would love to see a feature on psql cli tool where i can point it to a\nconnection pool provider back end instead of a postgres host. I read that\npgpool2 emulates a postgres server, but i've never tried it myself. I know\nthat some languages provide connection pooling, but i'm relying heavily on\npsql and its features and i think built-in master/slave connection pooling\nin psql cli would make my day, and that of any other dirty bash scripters\nout there.\n\nCan anyone comment on their experience with pgpool2 in a high activity\nproduction environment?\nAre there other tools or suggestions anyone can point me to?\nIs there any appetite to support connection pooling natively by either the\npostmaster or the psql cli or some other device that could be contrib to\nthe source tree?\nDoes it even matter? Is server version 10 ddos-proof, other than\nmax_connections?\n\nThanks.\n\nI would love to see a feature on psql cli tool where i can point it to a connection pool provider back end instead of a postgres host. I read that pgpool2 emulates a postgres server, but i've never tried it myself. I know that some languages provide connection pooling, but i'm relying heavily on psql and its features and i think built-in master/slave connection pooling in psql cli would make my day, and that of any other dirty bash scripters out there.Can anyone comment on their experience with pgpool2 in a high activity production environment?Are there other tools or suggestions anyone can point me to?Is there any appetite to support connection pooling natively by either the postmaster or the psql cli or some other device that could be contrib to the source tree?Does it even matter? Is server version 10 ddos-proof, other than max_connections?Thanks.",
"msg_date": "Fri, 21 Dec 2018 21:08:33 +0200",
"msg_from": "DJ Coertzen <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql cli tool and connection pooling"
},
{
"msg_contents": "Hi D.J.,\n\nHope this helps.\n\nGenerally, I tend to think of it like there are three separate features provided by connection poolers:\nConnection pooling, where you are trying to save connection overhead\nOffloading read-only queries to standby’s\nDelivering transparent client failover, where you can failover master/standby transparent to client connections\n\nDepending on the solution you choose, it might implement some of these features.\nReading your mail, you are looking for all of them, and are not clear yet which to focus on.\nI would bring in a specialist at this moment, but let me try to give you a head start:\n\nI am aware if wo main connection pooling implementations and they all deliver some of these features:\nThe one built into the application language\nJava has a connection pooling mechanisme built in\n.NET has one too\nThere might be others \nLibpq has native functionality for transparent client failover (psql is based on libpq)\nConnection poolers that mimic a postgres backend,\nPgpool-II is one like that\nPgbouncer is another example\nThere are others, but let's stick to these two for now.\n\nSince you mention psql, the first implementation will not help you that much (except for transparent client failover).\nThe second implementation will do what you require. You connect to the pooler, and the pooler connects to postgres.\nTo psql, connecting to the pooler is transparent. He connects to a port and gets a Postgres connection.\nWhat happens in the background of that, is transparent.\n\nNow, getting into your comments / questions:\n> I would love to see a feature on psql cli tool where i can point it to a connection pool provider back end instead of a postgres host.\nGreat, look at Pgpool-II and PgBouncer. They have overlapping use cases, but depending on the exact situation, might be that one fits better than the other.\n\n> I read that pgpool2 emulates a postgres server, but i've never tried it myself\nYes it does (as do all connection poolers that mimic a postgres backend,)\n\n> I know that some languages provide connection pooling, but i'm relying heavily on psql and its features and i think built-in master/slave connection pooling in psql cli would make my day, and that of any other dirty bash scripters out there.\nSound like you are looking for client connection failover here.\nRead this: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING <https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING> and specifically '34.1.1.3. Specifying Multiple Hosts’ for the most basic approach to implement this.\n> \n> Can anyone comment on their experience with pgpool2 in a high activity production environment?\nYes. It works, and depending on your use case, it can even add performance enhancing options.\nOn the other hand, it tries to fix many things in one tool, and that makes it a complex solution too.\nAnd it adds limitations to the solution too. 
I have seen a lot of implementations, where people focussed on one thing, but neglected another important thing.\nMy best advice is: Bring in a specialist for this one.\n\n> Are there other tools or suggestions anyone can point me to?\nWell, read the documentation on Pgpool-II: http://www.pgpool.net/mediawiki/index.php/Documentation <http://www.pgpool.net/mediawiki/index.php/Documentation>\nAnd look into PGBouncer too: https://pgbouncer.github.io/faq.html <https://pgbouncer.github.io/faq.html>\n\n> Is there any appetite to support connection pooling natively by either the postmaster or the psql cli or some other device that could be contrib to the source tree?\nThere is client failover in libpq.\nI think t was specifically decided to not fix connection pooling in core, since fixing it in the app layer / external connection poolers keeps Postgres cor code cleaner.\nAnd there are a lot of situations, where you want connection pooler features, so let's keep lean code for that.\nFixing the 'read-only queries’ feature must be done on the client side at all times.\n\n> Does it even matter? Is server version 10 ddos-proof, other than max_connections?\nThere is no real DDOS proof. In the end, any system can be brought down by a DDOS attack if done under the right circumstances.\nAnd all mitigations for DDOS can be circumvented in one way or another.\nThis is not specific to Postgres. It is a very generic thing.\nYou can build a very DDOS-resilient solution with postgres. But that greatly depends on what you want to mitigate and how much effort you want to put into it.\n\nAn example is connection exhaustion: You can manage that in a lot of ways\nSuperuser connections vs normal connections\nLimit max connections per user\nYou can do a lot with customer logon triggers\netc.\nBut every mitigation needs some thinking, setting some limit, and depending on what you want to do, you might need to code (like a logon trigger).\n\nEnterpriseDB has a lot of experience with this regard. And we have a product that even extents possibilities here.\nSo I would say, bring in a professional with a lot of experience.\nIt is probably the best way to build a solution that fits best to the things you mentioned in this question.\n\n \n\n\t \t\nSebastiaan Alexander Mannem\nSenior Consultant\nAnthony Fokkerweg 1\n1059 CM Amsterdam, The Netherlands\nT: +31 6 82521560\nwww.edbpostgres.com\n\n\t\t\t\t\n\n> On 21 Dec 2018, at 20:08, DJ Coertzen <[email protected]> wrote:\n> \n> I would love to see a feature on psql cli tool where i can point it to a connection pool provider back end instead of a postgres host. I read that pgpool2 emulates a postgres server, but i've never tried it myself. I know that some languages provide connection pooling, but i'm relying heavily on psql and its features and i think built-in master/slave connection pooling in psql cli would make my day, and that of any other dirty bash scripters out there.\n> \n> Can anyone comment on their experience with pgpool2 in a high activity production environment?\n> Are there other tools or suggestions anyone can point me to?\n> Is there any appetite to support connection pooling natively by either the postmaster or the psql cli or some other device that could be contrib to the source tree?\n> Does it even matter? Is server version 10 ddos-proof, other than max_connections?\n> \n> Thanks.\n>",
"msg_date": "Fri, 28 Dec 2018 10:04:16 +0100",
"msg_from": "Sebastiaan Mannem <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql cli tool and connection pooling"
}
] |
[
{
"msg_contents": "*PG Version:*\n\nPostgreSQL 9.6.10 on x86_64-pc-linux-gnu (Debian 9.6.10-1.pgdg80+1),\ncompiled by gcc (Debian 4.9.2-10+deb8u1) 4.9.2, 64-bit\n\n*Installed via apt-get:*\n\napt-get install -y postgresql-9.6=9.6.10-1.pgdg80+1\npostgresql-client-9.6=9.6.10-1.pgdg80+1\npostgresql-contrib-9.6=9.6.10-1.pgdg80+1\n\n*On a Debian 9.4 machine, 4.9 Kernel:*\n\nuname -a: Linux srv-7 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3\n(2018-03-02) x86_64 GNU/Linux\n\nRunning inside a Docker 17.05 container.\n\n*Hardware:*\n\nServer: Dell R430 96 GB RAM, 2 Xeon processors with 10 cores, 20 threads\neach, total 40 threads.\n\nConnected to SAN: Dell Compellent SC2020, with 7 x Samsung PM1633 SSDs\nhttps://www.samsung.com/us/labs/pdfs/collateral/pm1633-prodoverview-2015.pdf,\nRAID10+RAID5 configuration, 8GB Cache, read-write battery backed cache\nenabled, connected via dedicated iSCSI switches and dedicated Ethernet\nports, in link aggregation mode (2x1Gbps max bandwidth).\n\nData files and log files on above SAN storage on same volume, dedicated\nvolume for temporary files.\n\n\n\n*Performance issue:*\n\nI’m trying to figure out if PostgreSQL (PG) has some inherent limit on IOPS\nper connection.\n\nRunning pgbench with multiple clients (-c 30) we are able to see 20K+ IOPS\n, which is what we expect. But, if we use just one client, we get 1200\nIOPS, avg disk queue size around 1:\n\npgbench -U postgres -S -T 60 -c 1\n\niotop:\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz\navgqu-sz await r_await w_await svctm %util\n\ndm-10 0.00 0.00 1242.00 1.00 10796.00 20.00\n17.40 0.96 0.78 0.78 0.00 0.68 84.00\n\n\n\nWe tried to increase effective_io_size from 1 to 30, to no effect on\nmultiple tests.\n\nRunning the fio disk benchmarking tool, we found the same number of IOPS\n(1200) on a random read test if we set the io depth to 1.\n\nIf we increase the io depth to 30, we find about the same number of IOPS\n(20K) we see on pgbench with multiple clients:\n\n--fio config file\n\n[job]\n\nbs=8k\n\nrw=randread\n\nrandom_generator=lfsr\n\ndirect=1\n\nioengine=libaio\n\niodepth=30\n\ntime_based\n\nruntime=60s\n\nsize=128M\n\nfilename=/var/lib/postgresql/data_9.6/file.fio\n\niotsat:\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz\navgqu-sz await r_await w_await svctm %util\n\ndm-10 0.00 0.00 19616.00 0.00 156928.00 0.00\n16.00 29.53 1.51 1.51 0.00 0.05 100.00\n\n\n\nWhich leads us to believe PG is limited to an IO depth of 1 per connection\n(PG submits just 1 I/O request per connection, not multiple ones), even\nthough effective_io_concurrency could lead to greater I/O queue and\nprobably greater IOPS as well.\n\nIs this some inherent limitation of PG or am I misunderstanding something?\n\nOne of the issues I’m trying to solve is related to extracting data from a\nlarge table, which users a full table scan. We see the same 1200 IOPS limit\nof pgbench when we SELECT on this table using just one connection. 
If there\nis a limitation per connection, I might set up the application to have\nseveral connections, and then issue SELECTs for different sections of the\ntable, and later join the data, but it looks cumbersome, especially if the\nDB can do extract data using more IOPS.\n\n\nBest regards,\nHaroldo Kerry\nCTO/COO\n\n",
"msg_date": "Thu, 27 Dec 2018 14:44:55 -0200",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Read IOPS limit per connection"
},
{
"msg_contents": "On Thu, Dec 27, 2018 at 02:44:55PM -0200, Haroldo Kerry wrote:\n> PostgreSQL 9.6.10 on x86_64-pc-linux-gnu (Debian 9.6.10-1.pgdg80+1),\n\n> Connected to SAN: Dell Compellent SC2020, with 7 x Samsung PM1633 SSDs\n> https://www.samsung.com/us/labs/pdfs/collateral/pm1633-prodoverview-2015.pdf,\n> RAID10+RAID5 configuration, 8GB Cache, read-write battery backed cache\n> enabled, connected via dedicated iSCSI switches and dedicated Ethernet\n> ports, in link aggregation mode (2x1Gbps max bandwidth).\n\n> I’m trying to figure out if PostgreSQL (PG) has some inherent limit on IOPS\n> per connection.\n\npostgres uses one server backend per client.\n\n> We tried to increase effective_io_size from 1 to 30, to no effect on\n> multiple tests.\n\nhttps://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR\n=> \"Currently, this setting only affects bitmap heap scans.\"\n\n> Is this some inherent limitation of PG or am I misunderstanding something?\n\nIt is a hsitoric limitation, but nowadays there's parallel query, which uses\n2ndary \"backend worker\" processes.\n\nIt's supported in v9.6 but much more versatile in v10 and v11.\n\nJustin\n\n",
"msg_date": "Thu, 27 Dec 2018 10:55:33 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
},
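A hedged sketch of what parallel query means in practice for the full-table-scan extraction mentioned earlier: on 9.6 the per-session knob is max_parallel_workers_per_gather (the related size threshold there is min_parallel_relation_size); the table name below is only a placeholder, not one from the thread:

-- Allow this session to use parallel workers, then check the plan for a
-- Gather node over a Parallel Seq Scan on the large table.
SET max_parallel_workers_per_gather = 8;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM big_table;   -- big_table stands in for the real table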
{
"msg_contents": "Justin,\nThanks for the quick response, I'll check it out.\n\nHappy holidays,\nHaroldo Kerry\n\nOn Thu, Dec 27, 2018 at 2:55 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Dec 27, 2018 at 02:44:55PM -0200, Haroldo Kerry wrote:\n> > PostgreSQL 9.6.10 on x86_64-pc-linux-gnu (Debian 9.6.10-1.pgdg80+1),\n>\n> > Connected to SAN: Dell Compellent SC2020, with 7 x Samsung PM1633 SSDs\n> >\n> https://www.samsung.com/us/labs/pdfs/collateral/pm1633-prodoverview-2015.pdf\n> ,\n> > RAID10+RAID5 configuration, 8GB Cache, read-write battery backed cache\n> > enabled, connected via dedicated iSCSI switches and dedicated Ethernet\n> > ports, in link aggregation mode (2x1Gbps max bandwidth).\n>\n> > I’m trying to figure out if PostgreSQL (PG) has some inherent limit on\n> IOPS\n> > per connection.\n>\n> postgres uses one server backend per client.\n>\n> > We tried to increase effective_io_size from 1 to 30, to no effect on\n> > multiple tests.\n>\n>\n> https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR\n> => \"Currently, this setting only affects bitmap heap scans.\"\n>\n> > Is this some inherent limitation of PG or am I misunderstanding\n> something?\n>\n> It is a hsitoric limitation, but nowadays there's parallel query, which\n> uses\n> 2ndary \"backend worker\" processes.\n>\n> It's supported in v9.6 but much more versatile in v10 and v11.\n>\n> Justin\n>\n\n\n-- \n\nHaroldo Kerry\n\nCTO/COO\n\nRua do Rócio, 220, 7° andar, conjunto 72\n\nSão Paulo – SP / CEP 04552-000\n\[email protected]\n\nwww.callix.com.br\n\nJustin,Thanks for the quick response, I'll check it out.Happy holidays,Haroldo KerryOn Thu, Dec 27, 2018 at 2:55 PM Justin Pryzby <[email protected]> wrote:On Thu, Dec 27, 2018 at 02:44:55PM -0200, Haroldo Kerry wrote:\n> PostgreSQL 9.6.10 on x86_64-pc-linux-gnu (Debian 9.6.10-1.pgdg80+1),\n\n> Connected to SAN: Dell Compellent SC2020, with 7 x Samsung PM1633 SSDs\n> https://www.samsung.com/us/labs/pdfs/collateral/pm1633-prodoverview-2015.pdf,\n> RAID10+RAID5 configuration, 8GB Cache, read-write battery backed cache\n> enabled, connected via dedicated iSCSI switches and dedicated Ethernet\n> ports, in link aggregation mode (2x1Gbps max bandwidth).\n\n> I’m trying to figure out if PostgreSQL (PG) has some inherent limit on IOPS\n> per connection.\n\npostgres uses one server backend per client.\n\n> We tried to increase effective_io_size from 1 to 30, to no effect on\n> multiple tests.\n\nhttps://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR\n=> \"Currently, this setting only affects bitmap heap scans.\"\n\n> Is this some inherent limitation of PG or am I misunderstanding something?\n\nIt is a hsitoric limitation, but nowadays there's parallel query, which uses\n2ndary \"backend worker\" processes.\n\nIt's supported in v9.6 but much more versatile in v10 and v11.\n\nJustin\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]",
"msg_date": "Thu, 27 Dec 2018 17:33:40 -0200",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
},
{
"msg_contents": ">\n>\n> *Performance issue:*\n>\n> I’m trying to figure out if PostgreSQL (PG) has some inherent limit on\n> IOPS per connection.\n>\n> Running pgbench with multiple clients (-c 30) we are able to see 20K+ IOPS\n> , which is what we expect. But, if we use just one client, we get 1200\n> IOPS, avg disk queue size around 1:\n>\n\nThe default transaction done by pgbench simply has no opportunity for\ndispatching multiple io requests per connection. It just a series of\nsingle-row lookups and single-row updates or inserts. You will have to use\na different benchmark if you want to exercise this area. Probably\nsomething analytics heavy.\n\nAlso, you would want to use the newest version of PostgreSQL, as 9.6\ndoesn't have parallel query, which is much more generally applicable than\neffective_io_concurrency is.\n\nOne of the issues I’m trying to solve is related to extracting data from a\n> large table, which users a full table scan. We see the same 1200 IOPS limit\n> of pgbench when we SELECT on this table using just one connection. If there\n> is a limitation per connection, I might set up the application to have\n> several connections, and then issue SELECTs for different sections of the\n> table, and later join the data, but it looks cumbersome, especially if the\n> DB can do extract data using more IOPS.\n>\nThe kernel should detect a sequential read in progress and invoke\nreadahead. That should be able to keep the CPU quite busy with data for\nany decent IO system. Are you sure IO is even the bottleneck for your\nquery?\n\nPerhaps your kernel readahead settings need to be tuned. Also, you may\nbenefit from parallel query features implemented in newer versions of\nPostgreSQL. In any event, the default transactions of pgbench are not\ngoing to be useful for benchmarking what you care about.\n\nCheers,\n\nJeff\n\n \nPerformance issue:\nI’m trying to figure out if PostgreSQL (PG) has some\ninherent limit on IOPS per connection.\nRunning pgbench with multiple clients (-c 30) we are able to\nsee 20K+ IOPS , which is what we expect. But, if we use just one client, we\nget 1200 IOPS, avg disk queue size around 1:The default transaction done by pgbench simply has no opportunity for dispatching multiple io requests per connection. It just a series of single-row lookups and single-row updates or inserts. You will have to use a different benchmark if you want to exercise this area. Probably something analytics heavy.Also, you would want to use the newest version of PostgreSQL, as 9.6 doesn't have parallel query, which is much more generally applicable than effective_io_concurrency is.\nOne of the issues I’m trying to solve is related to extracting\ndata from a large table, which users a full table scan. We see the same 1200\nIOPS limit of pgbench when we SELECT on this table using just one connection.\nIf there is a limitation per connection, I might set up the application to have\nseveral connections, and then issue SELECTs for different sections of the\ntable, and later join the data, but it looks cumbersome, especially if the DB\ncan do extract data using more IOPS.The kernel should detect a sequential read in progress and invoke readahead. That should be able to keep the CPU quite busy with data for any decent IO system. Are you sure IO is even the bottleneck for your query?Perhaps your kernel readahead settings need to be tuned. Also, you may benefit from parallel query features implemented in newer versions of PostgreSQL. 
In any event, the default transactions of pgbench are not going to be useful for benchmarking what you care about.Cheers,Jeff",
"msg_date": "Thu, 27 Dec 2018 20:20:23 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
},
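As a rough illustration of the "different benchmark" Jeff suggests, one option is a custom pgbench script that reads a wide slice of pgbench_accounts instead of doing single-row lookups. This is a sketch under the assumption of a reasonably large scale factor (say -s 100 or more); the file name and the range sizes are made up.

    -- analytics.sql: hypothetical custom pgbench script, run with something like
    --   pgbench -n -c 1 -T 60 -f analytics.sql
    -- so that a single connection drives large range reads rather than point lookups.
    \set start random(1, 100000 * :scale - 1000000)
    SELECT count(*), avg(abalance)
    FROM pgbench_accounts
    WHERE aid BETWEEN :start AND :start + 1000000;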
{
"msg_contents": "On Thu, Dec 27, 2018 at 08:20:23PM -0500, Jeff Janes wrote:\n> Also, you would want to use the newest version of PostgreSQL, as 9.6\n> doesn't have parallel query, which is much more generally applicable than\n> effective_io_concurrency is.\n\nIt *does* have parallel query (early, somewhat limited support),\nbut not enabled by default.\nhttps://www.postgresql.org/docs/9.6/parallel-query.html\n\nThere was some confusion due to being disabled in 9.6, only:\nhttps://www.postgresql.org/message-id/20180620151349.GB7500%40momjian.us\n\nCheers,\nJustin\n\n",
"msg_date": "Thu, 27 Dec 2018 19:29:00 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
},
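A sketch of turning the 9.6 parallel query support on for a session, since it ships with max_parallel_workers_per_gather = 0 there (the default became 2 in v10); the values below are illustrative, not tuning advice.

    -- Parallel plans are only considered in 9.6 once this is above zero.
    SET max_parallel_workers_per_gather = 4;

    -- Tables smaller than min_parallel_relation_size (8MB by default in 9.6)
    -- still will not get a Parallel Seq Scan.
    EXPLAIN (ANALYZE)
    SELECT count(*) FROM pgbench_accounts;

    -- To make it permanent: ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
    -- then SELECT pg_reload_conf();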
{
"msg_contents": "On Thu, Dec 27, 2018 at 7:29 PM Justin Pryzby <[email protected]> wrote:\n>\n> On Thu, Dec 27, 2018 at 08:20:23PM -0500, Jeff Janes wrote:\n> > Also, you would want to use the newest version of PostgreSQL, as 9.6\n> > doesn't have parallel query, which is much more generally applicable than\n> > effective_io_concurrency is.\n\neffective_io_concurrency only applies to certain queries. When it\ndoes apply it can work wonders. See:\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n for an example of how it can benefit.\n\nparallel query is not going to help single threaded pg_bench results.\nyou are going to be entirely latency bound (network from bebench to\npostgres, then postgres to storage). On my dell crapbox I was getting\n2200tps so you have some point of slowness relative to me, probably\nnot the disk itself.\n\nGeetting faster performance is an age-old problem; you need to\naggregate specific requests into more general ones, move the\ncontrolling logic into the database itself, or use various other\nstrategies. Lowering latency is a hardware problem and can force\ntrade-offs (like, don't use a SAN) and has specific boundaries that\nare not easy to bust through.\n\nmerlin\n\n",
"msg_date": "Wed, 9 Jan 2019 13:14:11 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
},
{
"msg_contents": "@Justin @Merlin @ Jeff,\nThanks so much for your time and insights, they improved our understanding\nof the underpinnings of PostgreSQL and allowed us to deal the issues we\nwere facing.\nUsing parallel query on our PG 9.6 improved a lot the query performance -\nit turns out that a lot of our real world queries could benefit of parallel\nquery, we saw about 4x improvements after turning it on, and now we see\nmuch higher storage IOPS thanks to the multiple workers.\nOn our tests effective_io_concurrency did not show such a large effect as\nthe link you sent, I'll have a new look at it, maybe we are doing something\nwrong or the fact that the SSDs are on the SAN and not local affects the\nresults.\nOn the process we also learned that changing the default Linux I/O\nscheduler from CFQ to Deadline worked wonders for our Dell SC2020 SAN\nStorage setup, we used to see latency peaks of 6,000 milliseconds on busy\nperiods (yes, 6 seconds), we now see 80 milliseconds, an almost 100 fold\nimprovement.\n\n\nBest regards,\nHaroldo Kerry\n\n\n\nOn Wed, Jan 9, 2019 at 5:14 PM Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Dec 27, 2018 at 7:29 PM Justin Pryzby <[email protected]>\n> wrote:\n> >\n> > On Thu, Dec 27, 2018 at 08:20:23PM -0500, Jeff Janes wrote:\n> > > Also, you would want to use the newest version of PostgreSQL, as 9.6\n> > > doesn't have parallel query, which is much more generally applicable\n> than\n> > > effective_io_concurrency is.\n>\n> effective_io_concurrency only applies to certain queries. When it\n> does apply it can work wonders. See:\n>\n> https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n> for an example of how it can benefit.\n>\n> parallel query is not going to help single threaded pg_bench results.\n> you are going to be entirely latency bound (network from bebench to\n> postgres, then postgres to storage). On my dell crapbox I was getting\n> 2200tps so you have some point of slowness relative to me, probably\n> not the disk itself.\n>\n> Geetting faster performance is an age-old problem; you need to\n> aggregate specific requests into more general ones, move the\n> controlling logic into the database itself, or use various other\n> strategies. Lowering latency is a hardware problem and can force\n> trade-offs (like, don't use a SAN) and has specific boundaries that\n> are not easy to bust through.\n>\n> merlin\n>\n>\n\n-- \n\nHaroldo Kerry\n\nCTO/COO\n\nRua do Rócio, 220, 7° andar, conjunto 72\n\nSão Paulo – SP / CEP 04552-000\n\[email protected]\n\nwww.callix.com.br\n\n@Justin \n\n@Merlin @ Jeff, Thanks so much for your time and insights, they improved our understanding of the underpinnings of PostgreSQL and allowed us to deal the issues we were facing.Using parallel query on our PG 9.6 improved a lot the query performance - it turns out that a lot of our real world queries could benefit of parallel query, we saw about 4x improvements after turning it on, and now we see much higher storage IOPS thanks to the multiple workers.On our tests effective_io_concurrency did not show such a large effect as the link you sent, I'll have a new look at it, maybe we are doing something wrong or the fact that the SSDs are on the SAN and not local affects the results. 
On the process we also learned that changing the default Linux I/O scheduler from CFQ to Deadline worked wonders for our Dell SC2020 SAN Storage setup, we used to see latency peaks of 6,000 milliseconds on busy periods (yes, 6 seconds), we now see 80 milliseconds, an almost 100 fold improvement.Best regards,Haroldo KerryOn Wed, Jan 9, 2019 at 5:14 PM Merlin Moncure <[email protected]> wrote:On Thu, Dec 27, 2018 at 7:29 PM Justin Pryzby <[email protected]> wrote:\n>\n> On Thu, Dec 27, 2018 at 08:20:23PM -0500, Jeff Janes wrote:\n> > Also, you would want to use the newest version of PostgreSQL, as 9.6\n> > doesn't have parallel query, which is much more generally applicable than\n> > effective_io_concurrency is.\n\neffective_io_concurrency only applies to certain queries. When it\ndoes apply it can work wonders. See:\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n for an example of how it can benefit.\n\nparallel query is not going to help single threaded pg_bench results.\nyou are going to be entirely latency bound (network from bebench to\npostgres, then postgres to storage). On my dell crapbox I was getting\n2200tps so you have some point of slowness relative to me, probably\nnot the disk itself.\n\nGeetting faster performance is an age-old problem; you need to\naggregate specific requests into more general ones, move the\ncontrolling logic into the database itself, or use various other\nstrategies. Lowering latency is a hardware problem and can force\ntrade-offs (like, don't use a SAN) and has specific boundaries that\nare not easy to bust through.\n\nmerlin\n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]",
"msg_date": "Wed, 9 Jan 2019 19:52:42 -0200",
"msg_from": "Haroldo Kerry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
},
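One way to confirm that the 4x parallel-query gain really comes from the extra workers is to compare "Workers Planned" and "Workers Launched" in an actual plan. A sketch, with a hypothetical table name that is not from this thread:

    SET max_parallel_workers_per_gather = 4;

    -- "Workers Launched" can fall short of "Workers Planned" when
    -- max_worker_processes is exhausted by concurrent queries.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT date_trunc('day', created_at) AS day, count(*)
    FROM big_table          -- hypothetical table, not from this thread
    GROUP BY 1;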
{
"msg_contents": "On Wed, Jan 9, 2019 at 3:52 PM Haroldo Kerry <[email protected]> wrote:\n\n> @Justin @Merlin @ Jeff,\n> Thanks so much for your time and insights, they improved our understanding\n> of the underpinnings of PostgreSQL and allowed us to deal the issues we\n> were facing.\n> Using parallel query on our PG 9.6 improved a lot the query performance -\n> it turns out that a lot of our real world queries could benefit of parallel\n> query, we saw about 4x improvements after turning it on, and now we see\n> much higher storage IOPS thanks to the multiple workers.\n> On our tests effective_io_concurrency did not show such a large effect as\n> the link you sent, I'll have a new look at it, maybe we are doing something\n> wrong or the fact that the SSDs are on the SAN and not local affects the\n> results.\n> On the process we also learned that changing the default Linux I/O\n> scheduler from CFQ to Deadline worked wonders for our Dell SC2020 SAN\n> Storage setup, we used to see latency peaks of 6,000 milliseconds on busy\n> periods (yes, 6 seconds), we now see 80 milliseconds, an almost 100 fold\n> improvement.\n>\n\nThe links sent was using a contrived query to force a type of scan that\nbenefits from that kind of query; it's a very situational benefit. It\nwould be interesting if you couldn't reproduce using the same mechanic.\n\nmerlin\n\n>\n\nOn Wed, Jan 9, 2019 at 3:52 PM Haroldo Kerry <[email protected]> wrote:@Justin \n\n@Merlin @ Jeff, Thanks so much for your time and insights, they improved our understanding of the underpinnings of PostgreSQL and allowed us to deal the issues we were facing.Using parallel query on our PG 9.6 improved a lot the query performance - it turns out that a lot of our real world queries could benefit of parallel query, we saw about 4x improvements after turning it on, and now we see much higher storage IOPS thanks to the multiple workers.On our tests effective_io_concurrency did not show such a large effect as the link you sent, I'll have a new look at it, maybe we are doing something wrong or the fact that the SSDs are on the SAN and not local affects the results. On the process we also learned that changing the default Linux I/O scheduler from CFQ to Deadline worked wonders for our Dell SC2020 SAN Storage setup, we used to see latency peaks of 6,000 milliseconds on busy periods (yes, 6 seconds), we now see 80 milliseconds, an almost 100 fold improvement.The links sent was using a contrived query to force a type of scan that benefits from that kind of query; it's a very situational benefit. It would be interesting if you couldn't reproduce using the same mechanic.merlin",
"msg_date": "Wed, 9 Jan 2019 16:47:53 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
},
{
"msg_contents": "Hello,\n\nI am happy to hear that you have received all the help.\n\nPlease feel free to contact us for professional assistance any time you may\nneed in the future.\n\nMost Welcome!\n\n\nRegards,\n\n\nMark Avinash Hogg\n\nDirector of Business Development\n\n2ndQuadrant\n\n+1(647) 770 9821 Cell\n\nwww.2ndquadrant.com\n\[email protected]\n\n\nOn Wed, 9 Jan 2019 at 19:20, Merlin Moncure (via Accelo) <[email protected]>\nwrote:\n\n> On Wed, Jan 9, 2019 at 3:52 PM Haroldo Kerry <[email protected]> wrote:\n>\n>> @Justin @Merlin @ Jeff,\n>> Thanks so much for your time and insights, they improved our\n>> understanding of the underpinnings of PostgreSQL and allowed us to deal the\n>> issues we were facing.\n>> Using parallel query on our PG 9.6 improved a lot the query performance -\n>> it turns out that a lot of our real world queries could benefit of parallel\n>> query, we saw about 4x improvements after turning it on, and now we see\n>> much higher storage IOPS thanks to the multiple workers.\n>> On our tests effective_io_concurrency did not show such a large effect as\n>> the link you sent, I'll have a new look at it, maybe we are doing something\n>> wrong or the fact that the SSDs are on the SAN and not local affects the\n>> results.\n>> On the process we also learned that changing the default Linux I/O\n>> scheduler from CFQ to Deadline worked wonders for our Dell SC2020 SAN\n>> Storage setup, we used to see latency peaks of 6,000 milliseconds on busy\n>> periods (yes, 6 seconds), we now see 80 milliseconds, an almost 100 fold\n>> improvement.\n>>\n>\n> The links sent was using a contrived query to force a type of scan that\n> benefits from that kind of query; it's a very situational benefit. It\n> would be interesting if you couldn't reproduce using the same mechanic.\n>\n> merlin\n>\n>>\n\nHello,I am happy to hear that you have received all the help.Please feel free to contact us for professional assistance any time you may need in the future.Most Welcome! \nRegards,\nMark Avinash Hogg\nDirector of Business Development\n2ndQuadrant\n+1(647) 770 9821 Cell\nwww.2ndquadrant.com\[email protected] Wed, 9 Jan 2019 at 19:20, Merlin Moncure (via Accelo) <[email protected]> wrote:On Wed, Jan 9, 2019 at 3:52 PM Haroldo Kerry <[email protected]> wrote:@Justin \n\n@Merlin @ Jeff, Thanks so much for your time and insights, they improved our understanding of the underpinnings of PostgreSQL and allowed us to deal the issues we were facing.Using parallel query on our PG 9.6 improved a lot the query performance - it turns out that a lot of our real world queries could benefit of parallel query, we saw about 4x improvements after turning it on, and now we see much higher storage IOPS thanks to the multiple workers.On our tests effective_io_concurrency did not show such a large effect as the link you sent, I'll have a new look at it, maybe we are doing something wrong or the fact that the SSDs are on the SAN and not local affects the results. On the process we also learned that changing the default Linux I/O scheduler from CFQ to Deadline worked wonders for our Dell SC2020 SAN Storage setup, we used to see latency peaks of 6,000 milliseconds on busy periods (yes, 6 seconds), we now see 80 milliseconds, an almost 100 fold improvement.The links sent was using a contrived query to force a type of scan that benefits from that kind of query; it's a very situational benefit. It would be interesting if you couldn't reproduce using the same mechanic.merlin",
"msg_date": "Wed, 9 Jan 2019 21:49:00 -0500",
"msg_from": "Mark Hogg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Read IOPS limit per connection"
}
] |
[
{
"msg_contents": "Hi everyone ,\n\nHave this explain analyze output :\n\n*https://explain.depesz.com/s/Pra8a <https://explain.depesz.com/s/Pra8a>*\n\nAppreciated for any help .\n\n*PG version*\n-----------------------------------------------------------------------------------------------------------\n PostgreSQL 9.6.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-28), 64-bit\n\n*OS version :*\nCentOS Linux release 7.5.1804 (Core)\n\nshared_buffers : 4GB\nwork_mem : 8MB\n\nHi everyone , Have this explain analyze output : https://explain.depesz.com/s/Pra8aAppreciated for any help .PG version----------------------------------------------------------------------------------------------------------- PostgreSQL 9.6.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bitOS version : CentOS Linux release 7.5.1804 (Core)shared_buffers : 4GBwork_mem : 8MB",
"msg_date": "Thu, 27 Dec 2018 22:25:47 +0300",
"msg_from": "=?UTF-8?Q?nesli=C5=9Fah_demirci?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Performance Issue"
},
{
"msg_contents": "> *https://explain.depesz.com/s/Pra8a*\n\nCould you share the query itself please?\nAnd the tables definitions including indexes.\n\n> work_mem : 8MB\nThat's not a lot. The 16-batches hash join may have worked faster if you \nhad resources to increase work_mem.\n\n\n\n\n\n\n\n\n\n\n\n\nhttps://explain.depesz.com/s/Pra8a\n\n\n\n\n\n\n Could you share the query itself please?\n And the tables definitions including indexes.\n\n\n\n\n\nwork_mem : 8MB\n\n\n\n\n That's not a lot. The 16-batches hash join may have worked faster if\n you had resources to increase work_mem.",
"msg_date": "Fri, 28 Dec 2018 14:53:58 +0000",
"msg_from": "Alexey Bashtanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Issue"
},
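A self-contained sketch (not the poster's query, which was never shown) of what Alexey is describing: the same hash join spills into many batches at work_mem = 8MB and should run in a single batch once work_mem is raised for the session.

    CREATE TEMP TABLE demo_a AS
        SELECT g AS id, md5(g::text) AS pad FROM generate_series(1, 2000000) g;
    CREATE TEMP TABLE demo_b AS
        SELECT g AS id FROM generate_series(1, 2000000) g;
    ANALYZE demo_a;
    ANALYZE demo_b;

    SET work_mem = '8MB';
    EXPLAIN (ANALYZE)            -- expect "Batches: N" with N > 1 in the Hash node
    SELECT count(*) FROM demo_b JOIN demo_a USING (id);

    SET work_mem = '256MB';
    EXPLAIN (ANALYZE)            -- ideally "Batches: 1" now
    SELECT count(*) FROM demo_b JOIN demo_a USING (id);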
{
"msg_contents": "On Thu, Dec 27, 2018 at 10:25:47PM +0300, neslişah demirci wrote:\n> Have this explain analyze output :\n> \n> *https://explain.depesz.com/s/Pra8a <https://explain.depesz.com/s/Pra8a>*\n\nRow counts are being badly underestimated leading to nested loop joins:\n|Index Scan using product_content_recommendation_main2_recommended_content_id_idx on product_content_recommendation_main2 prm (cost=0.57..2,031.03 ROWS=345 width=8) (actual time=0.098..68.314 ROWS=3,347 loops=1)\n|Index Cond: (recommended_content_id = 3371132)\n|Filter: (version = 1)\n\nApparently, recommended_content_id and version aren't independent condition,\nbut postgres thinks they are.\n\nWould you send statistics about those tables ? MCVs, ndistinct, etc.\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nI think the solution is to upgrade (at least) to PG10 and CREATE STATISTICS\n(dependencies).\n\nhttps://www.postgresql.org/docs/10/catalog-pg-statistic-ext.html\nhttps://www.postgresql.org/docs/10/sql-createstatistics.html\nhttps://www.postgresql.org/docs/10/planner-stats.html#PLANNER-STATS-EXTENDED\nhttps://www.postgresql.org/docs/10/multivariate-statistics-examples.html\n\nJustin\n\n",
"msg_date": "Fri, 28 Dec 2018 09:32:05 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Issue"
},
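A sketch of the PG10+ fix Justin is pointing at, using the table and columns visible in the quoted plan; the statistics object name is made up.

    -- Functional-dependency statistics tell the planner that "version" and
    -- "recommended_content_id" are correlated, instead of multiplying their
    -- selectivities as if they were independent.
    CREATE STATISTICS prm_version_content_dep (dependencies)
        ON version, recommended_content_id
        FROM product_content_recommendation_main2;

    ANALYZE product_content_recommendation_main2;

    -- Inspect what was collected (PG10/11; in PG12+ the data moved to
    -- pg_statistic_ext_data).
    SELECT stxname, stxdependencies
    FROM pg_statistic_ext
    WHERE stxname = 'prm_version_content_dep';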
{
"msg_contents": "On Sat, 29 Dec 2018 at 04:32, Justin Pryzby <[email protected]> wrote:\n> I think the solution is to upgrade (at least) to PG10 and CREATE STATISTICS\n> (dependencies).\n\nUnfortunately, I don't think that'll help this situation. Extended\nstatistics are currently only handled for base quals, not join quals.\nSee dependency_is_compatible_clause().\n\nIt would be interesting to see how far out the estimate is without the\nversion = 1 clause. If just the recommended_content_id clause is\nunderestimated enough it could be enough to have the planner choose\nthe nested loop. Perhaps upping the stats on that column may help, but\nit may only help so far as to reduce the chances of a nested loop. If\nthe number of distinct recommended_content_id values is higher than\nthe statistic targets and is skewed enough then there still may be\nsome magic values in there that end up causing a bad plan.\n\nIt would also be good to know what random_page_cost is set to, and\nalso if effective_cache_size isn't set too high. Increasing\nrandom_page_cost would help reduce the chances of this nested loop\nplan, but it's a pretty global change and could also have a negative\neffect on other queries.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 29 Dec 2018 19:58:28 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Issue"
},
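A sketch of the two knobs David mentions, again using the column name from the quoted plan; the target value of 1000 is just an example.

    -- A larger per-column statistics target gives the MCV list a better chance
    -- of covering skewed recommended_content_id values.
    ALTER TABLE product_content_recommendation_main2
        ALTER COLUMN recommended_content_id SET STATISTICS 1000;
    ANALYZE product_content_recommendation_main2;

    -- Current planner cost settings; raising random_page_cost discourages the
    -- nested loop but affects every query on the server, so change it with care.
    SHOW random_page_cost;
    SHOW effective_cache_size;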
{
"msg_contents": "On Thu, Dec 27, 2018 at 10:25:47PM +0300, neslişah demirci wrote:\n> Have this explain analyze output :\n> \n> *https://explain.depesz.com/s/Pra8a <https://explain.depesz.com/s/Pra8a>*\n\nOn Sat, Dec 29, 2018 at 07:58:28PM +1300, David Rowley wrote:\n> On Sat, 29 Dec 2018 at 04:32, Justin Pryzby <[email protected]> wrote:\n> > I think the solution is to upgrade (at least) to PG10 and CREATE STATISTICS\n> > (dependencies).\n> \n> Unfortunately, I don't think that'll help this situation. Extended\n> statistics are currently only handled for base quals, not join quals.\n> See dependency_is_compatible_clause().\n\nRight, understand.\n\nCorrrect me if I'm wrong though, but I think the first major misestimate is in\nthe scan, not the join:\n\n|Index Scan using product_content_recommendation_main2_recommended_content_id_idx on product_content_recommendation_main2 prm (cost=0.57..2,031.03 rows=345 width=8) (actual time=0.098..68.314 rows=3,347 loops=1)\n|Index Cond: (recommended_content_id = 3371132)\n|Filter: (version = 1)\n|Rows Removed by Filter: 2708\n\nJustin\n\n",
"msg_date": "Sat, 29 Dec 2018 01:15:35 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Issue"
},
{
"msg_contents": "Try a pg_hint_plan Rows hint to explore what would happen to the plan if you\nfixed the bad join cardinality estimate:\n\n/*+ rows(prm prc #2028) */\n\nalternatively you could specify a HashJoin hint, but I think it's better to\nfix the cardinality estimate and then let the optimizer decide what the best\nplan is.\n\nI agree with Justin that it looks like the version and\nrecommended_content_id columns are correlated and that's the likely root\ncause of the problem, but you don't need to upgrade to fix this one query.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 29 Dec 2018 05:00:05 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Issue"
},
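A sketch of how Jim's hint would be attached, assuming the pg_hint_plan extension is installed. Only the prm alias, its columns and the row count come from the thread; the second table name and its join column are placeholders for whatever prc really refers to in the original query.

    LOAD 'pg_hint_plan';   -- or list it in shared_preload_libraries / session_preload_libraries

    /*+ Rows(prm prc #2028) */            -- override the estimated size of the prm/prc join
    EXPLAIN
    SELECT *
    FROM product_content_recommendation_main2 prm
    JOIN product_content prc                  -- placeholder table name
      ON prc.id = prm.recommended_content_id  -- placeholder join column
    WHERE prm.recommended_content_id = 3371132
      AND prm.version = 1;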
{
"msg_contents": "On Sat, Dec 29, 2018 at 1:58 AM David Rowley <[email protected]>\nwrote:\n\n> On Sat, 29 Dec 2018 at 04:32, Justin Pryzby <[email protected]> wrote:\n> > I think the solution is to upgrade (at least) to PG10 and CREATE\n> STATISTICS\n> > (dependencies).\n>\n> Unfortunately, I don't think that'll help this situation. Extended\n> statistics are currently only handled for base quals, not join quals.\n> See dependency_is_compatible_clause().\n>\n>\nBut \"recommended_content_id\" and \"version\" are both in the same table,\ndoesn't that make them base quals?\n\nThe most obvious thing to me would be to vacuum\nproduct_content_recommendation_main2 to get rid of the massive number of\nheap fetches. And to analyze everything to make sure the estimation errors\nare not simply due to out-of-date stats. And to increase work_mem.\n\nIt isn't clear we want to get rid of the nested loop, from the info we have\nto go on the hash join might be even slower yet. Seeing the plan with\nenable_nestloop=off could help there.\n\nCheers,\n\nJeff\n\nOn Sat, Dec 29, 2018 at 1:58 AM David Rowley <[email protected]> wrote:On Sat, 29 Dec 2018 at 04:32, Justin Pryzby <[email protected]> wrote:\n> I think the solution is to upgrade (at least) to PG10 and CREATE STATISTICS\n> (dependencies).\n\nUnfortunately, I don't think that'll help this situation. Extended\nstatistics are currently only handled for base quals, not join quals.\nSee dependency_is_compatible_clause().\nBut \"recommended_content_id\" and \"version\" are both in the same table, doesn't that make them base quals?The most obvious thing to me would be to vacuum product_content_recommendation_main2 to get rid of the massive number of heap fetches. And to analyze everything to make sure the estimation errors are not simply due to out-of-date stats. And to increase work_mem.It isn't clear we want to get rid of the nested loop, from the info we have to go on the hash join might be even slower yet. Seeing the plan with enable_nestloop=off could help there.Cheers,Jeff",
"msg_date": "Sat, 29 Dec 2018 15:27:39 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Issue"
},
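A sketch of the experiment Jeff describes. The EXPLAIN line is a placeholder because the full query was never posted, and SET LOCAL keeps the planner overrides confined to the transaction.

    -- Refresh the visibility map so index-only scans stop doing heap fetches,
    -- and refresh statistics at the same time.
    VACUUM (ANALYZE) product_content_recommendation_main2;

    BEGIN;
    SET LOCAL enable_nestloop = off;    -- only to see what the alternative plan costs
    SET LOCAL work_mem = '64MB';
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- the original query goes here
    ROLLBACK;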
{
"msg_contents": "On Sat, 29 Dec 2018 at 20:15, Justin Pryzby <[email protected]> wrote:\n> On Sat, Dec 29, 2018 at 07:58:28PM +1300, David Rowley wrote:\n> > Unfortunately, I don't think that'll help this situation. Extended\n> > statistics are currently only handled for base quals, not join quals.\n> > See dependency_is_compatible_clause().\n>\n> Right, understand.\n>\n> Corrrect me if I'm wrong though, but I think the first major misestimate is in\n> the scan, not the join:\n\nI should have checked more carefully. Of course, they are base quals.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sun, 30 Dec 2018 11:00:08 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Issue"
}
] |
[
{
"msg_contents": "Hi all\nI would appreciate any hints as this problem looks to me rather strange...I tried to google it but in vain.\nselect t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_id and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5;\ntakes 20mn to execute because it picks up the wrong index...see explain analyse below. I would expect this query to use the (channel_id,smpl_time) but it uses the smpl_time index.\nI have run analyse on the sample table. I have set default_statistics_target = 1000\n\nWhen I removed this index, then the query goes down to a few seconds...\n\nAny ideas, why the planner is not taking the right index?\nPostgresql server is 10.5.1 running on RHEL 7.4\n\nMore details about the table and explain...\nThanks for your help\nLana\n\n\n\\d+ sample\n Table \"public.sample\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n-------------+-----------------------------+-----------+----------+-------------+----------+--------------+-------------\nchannel_id | bigint | | not null | | plain | |\nsmpl_time | timestamp without time zone | | not null | | plain | |\nnanosecs | bigint | | not null | | plain | |\nseverity_id | bigint | | not null | | plain | |\nstatus_id | bigint | | not null | | plain | |\nnum_val | integer | | | | plain | |\nfloat_val | double precision | | | | plain | |\nstr_val | character varying(120) | | | | extended | |\ndatatype | character(1) | | | ' '::bpchar | extended | |\narray_val | bytea | | | | extended | |\nIndexes:\n \"sample_time_1_idx\" btree (channel_id, smpl_time)\n \"sample_time_all_idx\" btree (smpl_time, channel_id)\n \"smpl_time_qa_idx\" btree (smpl_time)\nChild tables: sample_buil,\n sample_ctrl,\n sample_util\n\n\\d+ sample_buil\n Table \"public.sample_buil\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n-------------+-----------------------------+-----------+----------+-------------+----------+--------------+-------------\nchannel_id | bigint | | not null | | plain | |\nsmpl_time | timestamp without time zone | | not null | | plain | |\nnanosecs | bigint | | not null | | plain | |\nseverity_id | bigint | | not null | | plain | |\nstatus_id | bigint | | not null | | plain | |\nnum_val | integer | | | | plain | |\nfloat_val | double precision | | | | plain | |\nstr_val | character varying(120) | | | | extended | |\ndatatype | character(1) | | | ' '::bpchar | extended | |\narray_val | bytea | | | | extended | |\nIndexes:\n \"sample_time_b1_idx\" btree (smpl_time, channel_id)\n \"sample_time_b_idx\" btree (channel_id, smpl_time)\n \"smpl_time_bx0_idx\" btree (smpl_time)\nInherits: sample\nChild tables: sample_buil_month,\n sample_buil_year\n\n\\d+ sample_buil_month\n Table \"public.sample_buil_month\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n-------------+-----------------------------+-----------+----------+-------------+----------+--------------+-------------\nchannel_id | bigint | | not null | | plain | |\nsmpl_time | timestamp without time zone | | not null | | plain | |\nnanosecs | bigint | | not null | | plain | |\nseverity_id | bigint | | not null | | plain | |\nstatus_id | bigint | | not null | | plain | |\nnum_val | integer | | | | plain | |\nfloat_val | double precision | | | | plain | |\nstr_val | character varying(120) | | | | extended | |\ndatatype | character(1) | | | ' 
'::bpchar | extended | |\narray_val | bytea | | | | extended | |\nIndexes:\n \"sample_time_bm_idx\" btree (channel_id, smpl_time)\n \"sample_time_mb1_idx\" btree (smpl_time, channel_id)\n \"smpl_time_bx1_idx\" btree (smpl_time)\nCheck constraints:\n \"sample_buil_month_smpl_time_check\" CHECK (smpl_time >= (now() - '32 days'::interval)::timestamp without time zone AND smpl_time <= now())\nInherits: sample_buil\n\n\ncss_archive_3_0_0=# explain analyze select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_ val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_i d and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------\n--------------------------------------------------------------------------------------------\n-------------\nGather (cost=1004.71..125606.08 rows=5 width=150) (actual time=38737.443..1220277.244 rows\n=3 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n Single Copy: true\n -> Limit (cost=4.71..124605.58 rows=5 width=150) (actual time=38731.488..1220117.046 ro\nws=3 loops=1)\n -> Nested Loop (cost=4.71..240130785.25 rows=9636 width=150) (actual time=38731.4\n86..1220117.040 rows=3 loops=1)\n Join Filter: (c.channel_id = t.channel_id)\n Rows Removed by Join Filter: 322099471\n -> Merge Append (cost=4.71..235298377.47 rows=322099464 width=125) (actual\ntime=0.681..943623.198 rows=322099474 loops=1)\n Sort Key: c.smpl_time DESC\n -> Index Scan Backward using smpl_time_qa_idx on sample c (cost=0.12.\n.8.14 rows=1 width=334) (actual time=0.010..0.010 rows=0 loops=1)\n -> Index Scan Backward using smpl_time_bx0_idx on sample_buil c_1 (co\nst=0.42..3543026.23 rows=1033169 width=328) (actual time=0.122..723.286 rows=1033169 loops=1\n)\n -> Index Scan Backward using smpl_time_cmx0_idx on sample_ctrl c_2 (c\nost=0.42..2891856.90 rows=942520 width=328) (actual time=0.069..712.386 rows=942520 loops=1)\n -> Index Scan Backward using smpl_time_ux0_idx on sample_util c_3 (co\nst=0.43..11310958.12 rows=5282177 width=328) (actual time=0.066..3688.980 rows=5282177 loops\n=1)\n -> Index Scan Backward using smpl_time_bx1_idx on sample_buil_month c_\n4 (cost=0.43..49358435.15 rows=14768705 width=82) (actual time=0.070..9341.396 rows=1476870\n5 loops=1)\n -> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5\n (cost=0.56..1897430.89 rows=50597832 width=328) (actual time=0.068..139840.439 rows=505978\n34 loops=1)\n -> Index Scan Backward using smpl_time_cmx1_idx on sample_ctrl_month c\n_6 (cost=0.44..55253292.21 rows=18277124 width=85) (actual time=0.061..14610.389 rows=18277\n123 loops=1)\n -> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_\n7 (cost=0.57..2987358.31 rows=79579072 width=76) (actual time=0.067..286316.865 rows=795790\n75 loops=1)\n -> Index Scan Backward using smpl_time_ux1_idx on sample_util_month c_\n8 (cost=0.57..98830163.45 rows=70980976 width=82) (actual time=0.071..60766.643 rows=709809\n80 loops=1)\n -> Index Scan Backward using smpl_time_ux2_idx on sample_util_year c_9\n (cost=0.57..3070642.94 rows=80637888 width=83) (actual time=0.069..307091.673 rows=8063789\n1 loops=1)\n -> Materialize (cost=0.00..915.83 rows=1 width=41) (actual time=0.000..0.00\n0 rows=1 loops=322099474)\n -> Seq Scan on channel t (cost=0.00..915.83 rows=1 width=41) (actual\ntime=4.683..7.885 rows=1 loops=1)\n Filter: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Rows Removed by 
Filter: 33425\nPlanning time: 31.392 ms\nExecution time: 1220277.424 ms\n(26 rows)\n\n\n\n\n\n\n\n\n\n\nHi all\nI would appreciate any hints as this problem looks to me rather strange…I tried to google it but in vain.\nselect t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_id and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5; \ntakes 20mn to execute because it picks up the wrong index…see explain analyse below. I would expect this query to use the (channel_id,smpl_time) but it uses the smpl_time index.\nI have run analyse on the sample table. I have set default_statistics_target = 1000\n \nWhen I removed this index, then the query goes down to a few seconds…\n \nAny ideas, why the planner is not taking the right index?\nPostgresql server is 10.5.1 running on RHEL 7.4\n \nMore details about the table and explain…\nThanks for your help\nLana\n \n \n\\d+ sample\n Table \"public.sample\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n-------------+-----------------------------+-----------+----------+-------------+----------+--------------+-------------\nchannel_id | bigint | | not null | | plain | |\nsmpl_time | timestamp without time zone | | not null | | plain | |\nnanosecs | bigint | | not null | | plain | |\nseverity_id | bigint | | not null | | plain | |\nstatus_id | bigint | | not null | | plain | |\nnum_val | integer | | | | plain | |\nfloat_val | double precision | | | | plain | |\nstr_val | character varying(120) | | | | extended | |\ndatatype | character(1) | | | ' '::bpchar | extended | |\narray_val | bytea | | | | extended | |\nIndexes:\n \"sample_time_1_idx\" btree (channel_id, smpl_time)\n \"sample_time_all_idx\" btree (smpl_time, channel_id)\n \"smpl_time_qa_idx\" btree (smpl_time)\nChild tables: sample_buil,\n sample_ctrl,\n sample_util\n \n\\d+ sample_buil\n Table \"public.sample_buil\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n-------------+-----------------------------+-----------+----------+-------------+----------+--------------+-------------\nchannel_id | bigint | | not null | | plain | |\nsmpl_time | timestamp without time zone | | not null | | plain | |\nnanosecs | bigint | | not null | | plain | |\nseverity_id | bigint | | not null | | plain | |\nstatus_id | bigint | | not null | | plain | |\nnum_val | integer | | | | plain | |\nfloat_val | double precision | | | | plain | |\nstr_val | character varying(120) | | | | extended | |\ndatatype | character(1) | | | ' '::bpchar | extended | |\narray_val | bytea | | | | extended | |\nIndexes:\n \"sample_time_b1_idx\" btree (smpl_time, channel_id)\n \"sample_time_b_idx\" btree (channel_id, smpl_time)\n \"smpl_time_bx0_idx\" btree (smpl_time)\nInherits: sample\nChild tables: sample_buil_month,\n sample_buil_year\n \n\\d+ sample_buil_month\n Table \"public.sample_buil_month\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n-------------+-----------------------------+-----------+----------+-------------+----------+--------------+-------------\nchannel_id | bigint | | not null | | plain | |\nsmpl_time | timestamp without time zone | | not null | | plain | |\nnanosecs | bigint | | not null | | plain | |\nseverity_id | bigint | | not null | | plain | |\nstatus_id | bigint | | not null | | plain | |\nnum_val | integer | | | | plain | |\nfloat_val | double precision | | | | plain | 
|\nstr_val | character varying(120) | | | | extended | |\ndatatype | character(1) | | | ' '::bpchar | extended | |\narray_val | bytea | | | | extended | |\nIndexes:\n \"sample_time_bm_idx\" btree (channel_id, smpl_time)\n \"sample_time_mb1_idx\" btree (smpl_time, channel_id)\n \"smpl_time_bx1_idx\" btree (smpl_time)\nCheck constraints:\n \"sample_buil_month_smpl_time_check\" CHECK (smpl_time >= (now() - '32 days'::interval)::timestamp without time zone AND smpl_time <= now())\nInherits: sample_buil\n \n \ncss_archive_3_0_0=# explain analyze select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_ val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_i \n d and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5; \n\n QUERY PLAN\n \n--------------------------------------------------------------------------------------------\n--------------------------------------------------------------------------------------------\n-------------\nGather (cost=1004.71..125606.08 rows=5 width=150) (actual time=38737.443..1220277.244 rows\n=3 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n Single Copy: true\n -> Limit (cost=4.71..124605.58 rows=5 width=150) (actual time=38731.488..1220117.046 ro\nws=3 loops=1)\n -> Nested Loop (cost=4.71..240130785.25 rows=9636 width=150) (actual time=38731.4\n86..1220117.040 rows=3 loops=1)\n Join Filter: (c.channel_id = t.channel_id)\n Rows Removed by Join Filter: 322099471\n -> Merge Append (cost=4.71..235298377.47 rows=322099464 width=125) (actual\ntime=0.681..943623.198 rows=322099474 loops=1)\n Sort Key: c.smpl_time DESC\n -> Index Scan Backward using smpl_time_qa_idx on sample c (cost=0.12.\n.8.14 rows=1 width=334) (actual time=0.010..0.010 rows=0 loops=1)\n -> Index Scan Backward using smpl_time_bx0_idx on sample_buil c_1 (co\nst=0.42..3543026.23 rows=1033169 width=328) (actual time=0.122..723.286 rows=1033169 loops=1\n)\n -> Index Scan Backward using smpl_time_cmx0_idx on sample_ctrl c_2 (c\nost=0.42..2891856.90 rows=942520 width=328) (actual time=0.069..712.386 rows=942520 loops=1)\n -> Index Scan Backward using smpl_time_ux0_idx on sample_util c_3 (co\nst=0.43..11310958.12 rows=5282177 width=328) (actual time=0.066..3688.980 rows=5282177 loops\n=1)\n -> Index Scan Backward using smpl_time_bx1_idx on sample_buil_month c_\n4 (cost=0.43..49358435.15 rows=14768705 width=82) (actual time=0.070..9341.396 rows=1476870\n5 loops=1)\n -> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5\n (cost=0.56..1897430.89 rows=50597832 width=328) (actual time=0.068..139840.439 rows=505978\n34 loops=1)\n -> Index Scan Backward using smpl_time_cmx1_idx on sample_ctrl_month c\n_6 (cost=0.44..55253292.21 rows=18277124 width=85) (actual time=0.061..14610.389 rows=18277\n123 loops=1)\n -> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_\n7 (cost=0.57..2987358.31 rows=79579072 width=76) (actual time=0.067..286316.865 rows=795790\n75 loops=1)\n -> Index Scan Backward using smpl_time_ux1_idx on sample_util_month c_\n8 (cost=0.57..98830163.45 rows=70980976 width=82) (actual time=0.071..60766.643 rows=709809\n80 loops=1)\n -> Index Scan Backward using smpl_time_ux2_idx on sample_util_year c_9\n (cost=0.57..3070642.94 rows=80637888 width=83) (actual time=0.069..307091.673 rows=8063789\n1 loops=1)\n -> Materialize (cost=0.00..915.83 rows=1 width=41) (actual time=0.000..0.00\n0 rows=1 loops=322099474)\n -> Seq Scan on channel t (cost=0.00..915.83 rows=1 width=41) (actual\ntime=4.683..7.885 rows=1 
loops=1)\n Filter: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Rows Removed by Filter: 33425\nPlanning time: 31.392 ms\nExecution time: 1220277.424 ms\n(26 rows)",
"msg_date": "Wed, 2 Jan 2019 16:28:41 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "select query does not pick up the right index"
},
{
"msg_contents": "On Wed, Jan 02, 2019 at 04:28:41PM +0000, Abadie Lana wrote:\n> css_archive_3_0_0=# explain analyze select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_ val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_i d and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5;\n> QUERY PLAN\n> \n> --------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------\n> -------------\n> Gather (cost=1004.71..125606.08 rows=5 width=150) (actual time=38737.443..1220277.244 rows\n> =3 loops=1)\n> Workers Planned: 1\n> Workers Launched: 1\n> Single Copy: true\n\nDo you have force_parallel_mode set ?\n\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n",
"msg_date": "Wed, 2 Jan 2019 10:45:25 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
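The "Single Copy: true" Gather node in the plan is the usual symptom of what Justin is asking about; a quick way to check the setting and undo it, first for the session and then cluster-wide if it turns out to be set in the configuration:

    SHOW force_parallel_mode;                 -- 'on' explains the single-copy Gather node

    SET force_parallel_mode = off;            -- just for this session

    -- If it is set in postgresql.conf or via ALTER SYSTEM:
    ALTER SYSTEM SET force_parallel_mode = off;
    SELECT pg_reload_conf();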
{
"msg_contents": "On Thu, 3 Jan 2019 at 05:28, Abadie Lana <[email protected]> wrote:\n> I would appreciate any hints as this problem looks to me rather strange…I tried to google it but in vain.\n>\n> select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_id and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5;\n>\n> takes 20mn to execute because it picks up the wrong index…see explain analyse below. I would expect this query to use the (channel_id,smpl_time) but it uses the smpl_time index.\n\n[...]\n\n> Any ideas, why the planner is not taking the right index?\n\nThe planner assumes that the required channel values are evenly\ndistributed through the scan of the index on smpl_time. If your\nrequired 5 rows were found quickly (i.e channels with recent sample\nvalues), then the plan would have worked out well. It looks like\n'BUIL-B36-VA-RT-RT1:CL0001-2-ABW' is probably a channel which has some\nvery old sample values. I can see that from \"Rows Removed by Join\nFilter: 322099471\", meaning that on backwards scanning the smpl_time\nindex, that many rows were found not to match the channel you\nrequested.\n\nThe planner, by default only has statistics to say how common each\nchannel is in the sample table. I think in this case since the planner\nhas no knowledge of which channel_id it will be searching for (that's\nonly determined during execution), then I suppose it must be using the\nn_distinct of the sample.channel_id table. It would be interesting to\nknow how far off the n_distinct estimation is. You can find out with:\n\nselect stadistinct from pg_statistic where starelid='sample'::regclass\nand staattnum = 1;\nselect count(*) from (select distinct channel_id from sample) s; --\nthis may take a while to run...\n\nIf the stadistinct estimate is far out from the reality, then you\ncould consider setting this manually with:\n\nalter table sample alter column channel_id set (n_distinct = <actual\nvalue here>);\n\nbut keep in mind, that as the table evolves, whatever you set there\ncould become outdated.\n\nAnother method to fix you could try would be to coax the planner into\ndoing something different would be to give it a better index to work\nwith.\n\ncreate index on channel(name, channel_id);\n\nYou didn't show us the details from the channel table, but if there's\nnot an index like this then this might reduce the cost of a Merge\nJoin, but since the order rows output from that join would be in\nchannel_id order, a Sort would be required, which would require\njoining all matching rows, not just the first 5 matches. Depending on\nhow many rows actually match will determine if that's faster or not.\n\nIf you don't have luck with either of the above then, one other thing\nyou could try would be to disallow the planner from using the\nsmpl_time index by changing the order by to \"ORDER BY c.smpl_time +\nINTERVAL '0 sec'; that's a bit of a hack, but we don't have anything\nwe officially call \"query hints\" in PostgreSQL, so often we're left to\nsolve issues like this with ugly tricks like that.\n\nAlso, going by:\n\n> -> Seq Scan on channel t (cost=0.00..915.83 rows=1 width=41) (actual time=4.683..7.885 rows=1 loops=1)\n\nperhaps \"name\" is unique on the channel table? 
(I doubt there's an\nindex/constraint to back that up, however, since such an index would\nhave likely been used here instead of the Seq Scan)\n\nIf so, and you can add a constraint to back that up, you might be\nable to reform the query to be:\n\nselect 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',\nc.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val\nfrom sample c\nWHERE c.channel_id = (SELECT channel_id FROM channel WHERE\nname='BUIL-B36-VA-RT-RT1:CL0001-2-ABW')\norder by c.smpl_time desc limit 5;\n\nIf you can do that then it's highly likely to be *very* fast to\nexecute since I see there's an index on (channel_id, smpl_time) on\neach of the inherited tables.\n\n(If our planner was smarter then in the presence of the correct unique\nindex, we could have rewritten the query as such automatically.... but\nit's not / we don't. I believe I've mentioned about improving this\nsomewhere in the distant past of the -hackers mailing list, although I\ncan't find it right now. I recall naming the idea \"scalar value lookup\njoins\", but development didn't get much beyond thinking of that name)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 13:15:30 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
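For reference, the original query from the thread with only the ORDER BY rewritten as David suggests; wrapping smpl_time in an expression stops the planner from treating the bare smpl_time index as a way to satisfy the ordering.

    SELECT t.name, c.smpl_time, c.nanosecs, c.float_val, c.num_val,
           c.str_val, c.datatype, c.array_val
    FROM sample c, channel t
    WHERE t.channel_id = c.channel_id
      AND t.name = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'
    ORDER BY c.smpl_time + INTERVAL '0 sec' DESC   -- the "+ INTERVAL '0 sec'" hack
    LIMIT 5;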
{
"msg_contents": "\r\n\r\n\r\nLana ABADIE\r\nDatabase Engineer\r\nCODAC Section\r\n\r\nITER Organization, Building 72/4108, SCOD, Control System Division\r\nRoute de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex - France\r\nPhone: +33 4 42 17 84 02\r\nGet the latest ITER news on http://www.iter.org/whatsnew\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]> \r\nSent: 03 January 2019 01:16\r\nTo: Abadie Lana <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: select query does not pick up the right index\r\n\r\nOn Thu, 3 Jan 2019 at 05:28, Abadie Lana <[email protected]> wrote:\r\n> I would appreciate any hints as this problem looks to me rather strange…I tried to google it but in vain.\r\n>\r\n> select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_id and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5;\r\n>\r\n> takes 20mn to execute because it picks up the wrong index…see explain analyse below. I would expect this query to use the (channel_id,smpl_time) but it uses the smpl_time index.\r\n\r\n[...]\r\n\r\n> Any ideas, why the planner is not taking the right index?\r\n\r\nThe planner assumes that the required channel values are evenly distributed through the scan of the index on smpl_time. If your required 5 rows were found quickly (i.e channels with recent sample values), then the plan would have worked out well. It looks like 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW' is probably a channel which has some very old sample values. I can see that from \"Rows Removed by Join\r\nFilter: 322099471\", meaning that on backwards scanning the smpl_time index, that many rows were found not to match the channel you requested.\r\n\r\nThe planner, by default only has statistics to say how common each channel is in the sample table. I think in this case since the planner has no knowledge of which channel_id it will be searching for (that's only determined during execution), then I suppose it must be using the n_distinct of the sample.channel_id table. It would be interesting to know how far off the n_distinct estimation is. You can find out with:\r\n\r\nselect stadistinct from pg_statistic where starelid='sample'::regclass and staattnum = 1; select count(*) from (select distinct channel_id from sample) s; -- this may take a while to run...\r\n\r\nIf the stadistinct estimate is far out from the reality, then you could consider setting this manually with:\r\n\r\nalter table sample alter column channel_id set (n_distinct = <actual value here>);\r\n\r\nbut keep in mind, that as the table evolves, whatever you set there could become outdated.\r\n\r\nAnother method to fix you could try would be to coax the planner into doing something different would be to give it a better index to work with.\r\n\r\ncreate index on channel(name, channel_id);\r\n\r\nYou didn't show us the details from the channel table, but if there's not an index like this then this might reduce the cost of a Merge Join, but since the order rows output from that join would be in channel_id order, a Sort would be required, which would require joining all matching rows, not just the first 5 matches. 
Depending on how many rows actually match will determine if that's faster or not.\r\n\r\nIf you don't have luck with either of the above then, one other thing you could try would be to disallow the planner from using the smpl_time index by changing the order by to \"ORDER BY c.smpl_time +\r\nINTERVAL '0 sec'; that's a bit of a hack, but we don't have anything\r\nwe officially call \"query hints\" in PostgreSQL, so often we're left to solve issues like this with ugly tricks like that.\r\n\r\nAlso, going by:\r\n\r\n> -> Seq Scan on channel t (cost=0.00..915.83 rows=1 width=41) (actual \r\n> -> time=4.683..7.885 rows=1 loops=1)\r\n\r\nperhaps \"name\" is unique on the channel table? (I doubt there's an index/constraint to back that up, however, since such an index would have likely been used here instead of the Seq Scan)\r\n\r\nIf so, and you can add a constraint to back that up, you might be able to reform the query to be:\r\n\r\nselect 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',\r\nc.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val\r\nfrom sample c\r\nWHERE c.channel_id = (SELECT channel_id FROM channel WHERE\r\nname='BUIL-B36-VA-RT-RT1:CL0001-2-ABW')\r\norder by c.smpl_time desc limit 5;\r\n\r\nIf you can do that then it's highly likely to be *very* fast to execute since I see there's an index on (channel_id, smpl_time) on each of the inherited tables.\r\n\r\n(If our planner was smarter then in the presence of the correct unique index, we could have rewritten the query as such automatically.... but it's not / we don't. I believe I've mentioned about improving this somewhere in the distant past of the -hackers mailing list, although I can't find it right now. I recall naming the idea \"scalar value lookup joins\", but development didn't get much beyond thinking of that name)\r\n\r\n-- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n\r\n\r\n\r\n\r\nHi David\r\nThanks for your tips. So answers to your questions/comments\r\n1) About n_distinct\r\nfirst query returns 2136, second query returns 33425. So it seems that there is some discrepancies...Also the sample table is very big...roughly 322099474 rows.\r\nI did the alter statement but without success. Still long execution time with wrong index\r\n2) index on channel(name,channel_id)\r\nThere was no indexes on channel. So I created it. Same execution time, still wrong index used regardless of the n_distinct values\r\n3)The \"trick\" (+ interval '0s') did the job. The index on channel_id, smpl_time is used. Query time can vary between a few ms to 25 sec\r\n4) name is unique, constraint and index created. Right index is picked up and query time is rather constant there 40sec.\r\n\r\nA few comments : \r\n- I have disabled force_parallel_mode when running all the tests. \r\n- The difference between the two plans is in the case of query with the trick, the planner is using a bitmap index scan, in the second one it uses index scan backward.\r\n- when I execute the initial query, there is a big read access on disk almost 17.7 GB...whereas the total size of the smpl_time index is roughly 7GB...Could it be a wrong configuration on my side?\r\nDuring the tests, no insert/delete/or update was performed...only my select queries...\r\nMain parameters : effective_cache_size : 4GB, shared_buffers 4GB, work_mem 4MB\r\n\r\nThanks a lot !\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 3 Jan 2019 12:57:27 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
{
"msg_contents": "\n\n\nLana ABADIE\nDatabase Engineer\nCODAC Section\n\nITER Organization, Building 72/4108, SCOD, Control System Division\nRoute de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex - France\nPhone: +33 4 42 17 84 02\nGet the latest ITER news on http://www.iter.org/whatsnew\n\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: 02 January 2019 17:45\nTo: Abadie Lana <[email protected]>\nCc: [email protected]\nSubject: Re: select query does not pick up the right index\n\nOn Wed, Jan 02, 2019 at 04:28:41PM +0000, Abadie Lana wrote:\n> css_archive_3_0_0=# explain analyze select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_ val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_i d and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5;\n> QUERY PLAN\n> \n> --------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------\n> -------------\n> Gather (cost=1004.71..125606.08 rows=5 width=150) (actual time=38737.443..1220277.244 rows\n> =3 loops=1)\n> Workers Planned: 1\n> Workers Launched: 1\n> Single Copy: true\n\nDo you have force_parallel_mode set ?\n\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\nHi Justin\nIndeed force_parallel_mode was set to on. Even after disabling it, same issue...\ncheers\n\n",
"msg_date": "Thu, 3 Jan 2019 12:58:06 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
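Since force_parallel_mode turned out to be enabled, a quick sketch of how to check it and turn it off, first for the current session and then persistently:

    SELECT name, setting, source FROM pg_settings WHERE name = 'force_parallel_mode';

    SET force_parallel_mode = off;               -- current session only

    ALTER SYSTEM SET force_parallel_mode = off;  -- written to postgresql.auto.conf
    SELECT pg_reload_conf();                     -- apply without a restart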
{
"msg_contents": "On Fri, 4 Jan 2019 at 01:57, Abadie Lana <[email protected]> wrote:\n> 4) name is unique, constraint and index created. Right index is picked up and query time is rather constant there 40sec.\n\nThat's surprisingly slow. Can you share the EXPLAIN (ANALYZE, BUFFERS) of that?\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 02:01:09 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
{
"msg_contents": "\r\n\r\n\r\nLana ABADIE\r\nDatabase Engineer\r\nCODAC Section\r\n\r\nITER Organization, Building 72/4108, SCOD, Control System Division\r\nRoute de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex - France\r\nPhone: +33 4 42 17 84 02\r\nGet the latest ITER news on http://www.iter.org/whatsnew\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]> \r\nSent: 03 January 2019 14:01\r\nTo: Abadie Lana <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: select query does not pick up the right index\r\n\r\nOn Fri, 4 Jan 2019 at 01:57, Abadie Lana <[email protected]> wrote:\r\n> 4) name is unique, constraint and index created. Right index is picked up and query time is rather constant there 40sec.\r\n\r\nThat's surprisingly slow. Can you share the EXPLAIN (ANALYZE, BUFFERS) of that?\r\n\r\n\r\n-- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n\r\n\r\n\r\n\r\n\r\nexplain (analyze,buffers) select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\r\n QUERY PLAN\r\n\r\n----------------------------------------------------------------------------------------------------------------------------\r\n-------------------------------------------------------------\r\n Limit (cost=13.40..20.22 rows=5 width=233) (actual time=41023.057..41027.412 rows=3 loops=1)\r\n Buffers: shared hit=75782139 read=1834969\r\n InitPlan 1 (returns $0)\r\n -> Index Scan using unique_chname on channel (cost=0.41..8.43 rows=1 width=8) (actual time=2.442..2.443 rows=1 loops=\r\n1)\r\n Index Cond: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\r\n Buffers: shared read=4\r\n -> Result (cost=4.96..8344478.65 rows=6117323 width=233) (actual time=41023.055..41027.408 rows=3 loops=1)\r\n Buffers: shared hit=75782139 read=1834969\r\n -> Merge Append (cost=4.96..8283305.42 rows=6117323 width=201) (actual time=41023.054..41027.404 rows=3 loops=1)\r\n Sort Key: c.smpl_time DESC\r\n Buffers: shared hit=75782139 read=1834969\r\n -> Index Scan Backward using smpl_time_qa_idx on sample c (cost=0.12..8.14 rows=1 width=326) (actual time=0\r\n.008..0.009 rows=0 loops=1)\r\n Filter: (channel_id = $0)\r\n Buffers: shared hit=1\r\n -> Index Scan Backward using sample_time_b_idx on sample_buil c_1 (cost=0.42..22318.03 rows=6300 width=320)\r\n (actual time=2.478..2.478 rows=0 loops=1)\r\n Index Cond: (channel_id = $0)\r\n Buffers: shared read=7\r\n -> Index Scan Backward using sample_time_c_idx on sample_ctrl c_2 (cost=0.42..116482.81 rows=33661 width=32\r\n0) (actual time=0.022..0.022 rows=0 loops=1)\r\n Index Cond: (channel_id = $0)\r\n Buffers: shared read=3\r\n -> Index Scan Backward using sample_time_u_idx on sample_util c_3 (cost=0.43..35366.72 rows=9483 width=320)\r\n (actual time=0.022..0.022 rows=0 loops=1)\r\n Index Cond: (channel_id = $0)\r\n Buffers: shared read=3\r\n -> Index Scan Backward using sample_time_bm_idx on sample_buil_month c_4 (cost=0.56..60293.88 rows=15711 wi\r\ndth=74) (actual time=5.499..9.847 rows=3 loops=1)\r\n Index Cond: (channel_id = $0)\r\n Buffers: shared read=8\r\n -> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5 (cost=0.56..2023925.30 rows=3162364\r\nwidth=320) (actual time=15167.330..15167.330 rows=0 loops=1)\r\n Filter: (channel_id = $0)\r\n Rows Removed 
by Filter: 50597834\r\n Buffers: shared hit=25913147 read=713221\r\n -> Index Scan Backward using sample_time_cm_idx on sample_ctrl_month c_6 (cost=0.56..1862587.12 rows=537562\r\n width=77) (actual time=0.048..0.048 rows=0 loops=1)\r\n Index Cond: (channel_id = $0)\r\n Buffers: shared read=4\r\n -> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_7 (cost=0.57..3186305.67 rows=2094186\r\n width=68) (actual time=25847.549..25847.549 rows=0 loops=1)\r\n Filter: (channel_id = $0)\r\n Rows Removed by Filter: 79579075\r\n Buffers: shared hit=49868991 read=1121715\r\n -> Index Scan Backward using sample_time_um_idx on sample_util_month c_8 (cost=0.57..360454.53 rows=97101 w\r\nidth=74) (actual time=0.058..0.059 rows=0 loops=1)\r\n Index Cond: (channel_id = $0)\r\n Buffers: shared read=4\r\n -> Index Scan Backward using sample_time_uy_idx on sample_util_year c_9 (cost=0.57..498663.22 rows=160954 w\r\nidth=75) (actual time=0.030..0.030 rows=0 loops=1)\r\n Index Cond: (channel_id = $0)\r\n Buffers: shared read=4\r\n Planning time: 0.782 ms\r\n Execution time: 41027.570 ms\r\n(45 rows)\r\n\r\n",
"msg_date": "Thu, 3 Jan 2019 13:13:14 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
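The Buffers figures above are counted in blocks (8 kB by default), which shows where the 41 seconds go: nearly all of the page accesses come from the two "_year" partitions whose scans fall back to the plain smpl_time indexes and filter out tens of millions of rows. A rough size check:

    -- shared hit + read blocks for sample_buil_year and sample_ctrl_year
    SELECT pg_size_pretty((25913147 + 713221)  * 8192::bigint) AS buil_year_pages,  -- ~203 GB
           pg_size_pretty((49868991 + 1121715) * 8192::bigint) AS ctrl_year_pages;  -- ~389 GB

Most of those are cache hits, but the same index pages are visited over and over while the whole smpl_time index is walked backwards looking for a channel that contributes few or no rows.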
{
"msg_contents": "> From: David Rowley <[email protected]>\n> Sent: 03 January 2019 14:01\n> That's surprisingly slow. Can you share the EXPLAIN (ANALYZE, BUFFERS) of that?\n>\n> explain (analyze,buffers) select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n\n\n> -> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5 (cost=0.56..2023925.30 rows=3162364\n> width=320) (actual time=15167.330..15167.330 rows=0 loops=1)\n> Filter: (channel_id = $0)\n> Rows Removed by Filter: 50597834\n> Buffers: shared hit=25913147 read=713221\n> -> Index Scan Backward using sample_time_cm_idx on sample_ctrl_month c_6 (cost=0.56..1862587.12 rows=537562\n> width=77) (actual time=0.048..0.048 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared read=4\n> -> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_7 (cost=0.57..3186305.67 rows=2094186\n> width=68) (actual time=25847.549..25847.549 rows=0 loops=1)\n> Filter: (channel_id = $0)\n> Rows Removed by Filter: 79579075\n> Buffers: shared hit=49868991 read=1121715\n\nRight, so you need to check your indexes on sample_ctrl_year and\nsample_buil_year. You need an index on (channel_id, smpl_time) on\nthose.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 02:17:30 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
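Before creating anything, it is worth confirming whether such indexes already exist and are usable (the next message shows that they do). A sketch using the system catalogs:

    SELECT indexrelid::regclass AS index_name,
           indisvalid, indisready,
           pg_get_indexdef(indexrelid) AS definition
    FROM   pg_index
    WHERE  indrelid IN ('sample_ctrl_year'::regclass, 'sample_buil_year'::regclass);

An index left behind by a failed CREATE INDEX CONCURRENTLY would show indisvalid = false and be ignored by the planner.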
{
"msg_contents": "\r\n\r\n\r\nLana ABADIE\r\nDatabase Engineer\r\nCODAC Section\r\n\r\nITER Organization, Building 72/4108, SCOD, Control System Division\r\nRoute de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex - France\r\nPhone: +33 4 42 17 84 02\r\nGet the latest ITER news on http://www.iter.org/whatsnew\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]> \r\nSent: 03 January 2019 14:18\r\nTo: Abadie Lana <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: select query does not pick up the right index\r\n\r\n> From: David Rowley <[email protected]>\r\n> Sent: 03 January 2019 14:01\r\n> That's surprisingly slow. Can you share the EXPLAIN (ANALYZE, BUFFERS) of that?\r\n>\r\n> explain (analyze,buffers) select \r\n> 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c\r\n> .num_val,c.str_val,c.datatype,c.array_val from sample c WHERE \r\n> c.channel_id = (SELECT channel_id FROM channel WHERE \r\n> name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc \r\n> limit 5;\r\n\r\n\r\n> -> Index Scan Backward using smpl_time_bx2_idx on \r\n> sample_buil_year c_5 (cost=0.56..2023925.30 rows=3162364\r\n> width=320) (actual time=15167.330..15167.330 rows=0 loops=1)\r\n> Filter: (channel_id = $0)\r\n> Rows Removed by Filter: 50597834\r\n> Buffers: shared hit=25913147 read=713221\r\n> -> Index Scan Backward using sample_time_cm_idx on \r\n> sample_ctrl_month c_6 (cost=0.56..1862587.12 rows=537562\r\n> width=77) (actual time=0.048..0.048 rows=0 loops=1)\r\n> Index Cond: (channel_id = $0)\r\n> Buffers: shared read=4\r\n> -> Index Scan Backward using smpl_time_cmx2_idx on \r\n> sample_ctrl_year c_7 (cost=0.57..3186305.67 rows=2094186\r\n> width=68) (actual time=25847.549..25847.549 rows=0 loops=1)\r\n> Filter: (channel_id = $0)\r\n> Rows Removed by Filter: 79579075\r\n> Buffers: shared hit=49868991 read=1121715\r\n\r\nRight, so you need to check your indexes on sample_ctrl_year and sample_buil_year. 
You need an index on (channel_id, smpl_time) on those.\r\n\r\n\r\nThese indexes exist already\r\n\\d sample_ctrl_year\r\n Table \"public.sample_ctrl_year\"\r\n Column | Type | Collation | Nullable | Default\r\n-------------+-----------------------------+-----------+----------+-------------\r\n channel_id | bigint | | not null |\r\n smpl_time | timestamp without time zone | | not null |\r\n nanosecs | bigint | | not null |\r\n severity_id | bigint | | not null |\r\n status_id | bigint | | not null |\r\n num_val | integer | | |\r\n float_val | double precision | | |\r\n str_val | character varying(120) | | |\r\n datatype | character(1) | | | ' '::bpchar\r\n array_val | bytea | | |\r\nIndexes:\r\n \"sample_time_cy_idx\" btree (channel_id, smpl_time)\r\n \"sample_time_yc1_idx\" btree (smpl_time, channel_id)\r\n \"smpl_time_cmx2_idx\" btree (smpl_time)\r\nCheck constraints:\r\n \"sample_ctrl_year_smpl_time_check\" CHECK (smpl_time >= (now() - '1 year 1 mon'::interval)::timestamp without time zone AND smpl_time <= now())\r\nInherits: sample_ctrl\r\n\r\ncss_archive_3_0_0=# \\d sample_buil_year\r\n Table \"public.sample_buil_year\"\r\n Column | Type | Collation | Nullable | Default\r\n-------------+-----------------------------+-----------+----------+-------------\r\n channel_id | bigint | | not null |\r\n smpl_time | timestamp without time zone | | not null |\r\n nanosecs | bigint | | not null |\r\n severity_id | bigint | | not null |\r\n status_id | bigint | | not null |\r\n num_val | integer | | |\r\n float_val | double precision | | |\r\n str_val | character varying(120) | | |\r\n datatype | character(1) | | | ' '::bpchar\r\n array_val | bytea | | |\r\nIndexes:\r\n \"sample_time_by_idx\" btree (channel_id, smpl_time)\r\n \"sample_time_yb1_idx\" btree (smpl_time, channel_id)\r\n \"smpl_time_bx2_idx\" btree (smpl_time)\r\nCheck constraints:\r\n \"sample_buil_year_smpl_time_check\" CHECK (smpl_time >= (now() - '1 year 1 mon'::interval)::timestamp without time zone AND smpl_time <= now())\r\nInherits: sample_buil\r\n\r\ncss_archive_3_0_0=#\r\n\r\n-- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n",
"msg_date": "Thu, 3 Jan 2019 13:20:30 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
{
"msg_contents": "\r\n\r\n\r\nLana ABADIE\r\nDatabase Engineer\r\nCODAC Section\r\n\r\nITER Organization, Building 72/4108, SCOD, Control System Division\r\nRoute de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex - France\r\nPhone: +33 4 42 17 84 02\r\nGet the latest ITER news on http://www.iter.org/whatsnew\r\n\r\n-----Original Message-----\r\nFrom: [email protected] <[email protected]> On Behalf Of Abadie Lana\r\nSent: 03 January 2019 14:21\r\nTo: David Rowley <[email protected]>\r\nCc: [email protected]\r\nSubject: [Possible Spoof] RE: select query does not pick up the right index\r\n\r\nWarning: This message was sent by [email protected] supposedly on behalf of Abadie Lana <[email protected]>. Please contact\r\n\r\n\r\n\r\n\r\nLana ABADIE\r\nDatabase Engineer\r\nCODAC Section\r\n\r\nITER Organization, Building 72/4108, SCOD, Control System Division Route de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex - France\r\nPhone: +33 4 42 17 84 02\r\nGet the latest ITER news on http://www.iter.org/whatsnew\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]>\r\nSent: 03 January 2019 14:18\r\nTo: Abadie Lana <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: select query does not pick up the right index\r\n\r\n> From: David Rowley <[email protected]>\r\n> Sent: 03 January 2019 14:01\r\n> That's surprisingly slow. Can you share the EXPLAIN (ANALYZE, BUFFERS) of that?\r\n>\r\n> explain (analyze,buffers) select\r\n> 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c\r\n> .num_val,c.str_val,c.datatype,c.array_val from sample c WHERE \r\n> c.channel_id = (SELECT channel_id FROM channel WHERE\r\n> name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc \r\n> limit 5;\r\n\r\n\r\n> -> Index Scan Backward using smpl_time_bx2_idx on \r\n> sample_buil_year c_5 (cost=0.56..2023925.30 rows=3162364\r\n> width=320) (actual time=15167.330..15167.330 rows=0 loops=1)\r\n> Filter: (channel_id = $0)\r\n> Rows Removed by Filter: 50597834\r\n> Buffers: shared hit=25913147 read=713221\r\n> -> Index Scan Backward using sample_time_cm_idx on \r\n> sample_ctrl_month c_6 (cost=0.56..1862587.12 rows=537562\r\n> width=77) (actual time=0.048..0.048 rows=0 loops=1)\r\n> Index Cond: (channel_id = $0)\r\n> Buffers: shared read=4\r\n> -> Index Scan Backward using smpl_time_cmx2_idx on \r\n> sample_ctrl_year c_7 (cost=0.57..3186305.67 rows=2094186\r\n> width=68) (actual time=25847.549..25847.549 rows=0 loops=1)\r\n> Filter: (channel_id = $0)\r\n> Rows Removed by Filter: 79579075\r\n> Buffers: shared hit=49868991 read=1121715\r\n\r\nRight, so you need to check your indexes on sample_ctrl_year and sample_buil_year. 
You need an index on (channel_id, smpl_time) on those.\r\n\r\n\r\nThese indexes exist already\r\n\\d sample_ctrl_year\r\n Table \"public.sample_ctrl_year\"\r\n Column | Type | Collation | Nullable | Default\r\n-------------+-----------------------------+-----------+----------+-----\r\n-------------+-----------------------------+-----------+----------+-----\r\n-------------+-----------------------------+-----------+----------+---\r\n channel_id | bigint | | not null |\r\n smpl_time | timestamp without time zone | | not null |\r\n nanosecs | bigint | | not null |\r\n severity_id | bigint | | not null |\r\n status_id | bigint | | not null |\r\n num_val | integer | | |\r\n float_val | double precision | | |\r\n str_val | character varying(120) | | |\r\n datatype | character(1) | | | ' '::bpchar\r\n array_val | bytea | | |\r\nIndexes:\r\n \"sample_time_cy_idx\" btree (channel_id, smpl_time)\r\n \"sample_time_yc1_idx\" btree (smpl_time, channel_id)\r\n \"smpl_time_cmx2_idx\" btree (smpl_time) Check constraints:\r\n \"sample_ctrl_year_smpl_time_check\" CHECK (smpl_time >= (now() - '1 year 1 mon'::interval)::timestamp without time zone AND smpl_time <= now())\r\nInherits: sample_ctrl\r\n\r\ncss_archive_3_0_0=# \\d sample_buil_year\r\n Table \"public.sample_buil_year\"\r\n Column | Type | Collation | Nullable | Default\r\n-------------+-----------------------------+-----------+----------+-----\r\n-------------+-----------------------------+-----------+----------+-----\r\n-------------+-----------------------------+-----------+----------+---\r\n channel_id | bigint | | not null |\r\n smpl_time | timestamp without time zone | | not null |\r\n nanosecs | bigint | | not null |\r\n severity_id | bigint | | not null |\r\n status_id | bigint | | not null |\r\n num_val | integer | | |\r\n float_val | double precision | | |\r\n str_val | character varying(120) | | |\r\n datatype | character(1) | | | ' '::bpchar\r\n array_val | bytea | | |\r\nIndexes:\r\n \"sample_time_by_idx\" btree (channel_id, smpl_time)\r\n \"sample_time_yb1_idx\" btree (smpl_time, channel_id)\r\n \"smpl_time_bx2_idx\" btree (smpl_time) Check constraints:\r\n \"sample_buil_year_smpl_time_check\" CHECK (smpl_time >= (now() - '1 year 1 mon'::interval)::timestamp without time zone AND smpl_time <= now())\r\nInherits: sample_buil\r\n\r\ncss_archive_3_0_0=#\r\n\r\n-- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n\r\nIn case I'm also posting the explain analyse of the other query\r\nexplain (analyze,buffers) select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_id and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time +INTERVAL '0 sec' desc limit 5; \r\n QUERY PLAN\r\n\r\n----------------------------------------------------------------------------------------------------------------------------\r\n-------------------------------------\r\n Limit (cost=2746650.57..2746650.59 rows=5 width=158) (actual time=119.927..119.929 rows=3 loops=1)\r\n Buffers: shared hit=3 read=531\r\n -> Sort (cost=2746650.57..2746674.66 rows=9636 width=158) (actual time=119.925..119.926 rows=3 loops=1)\r\n Sort Key: ((c.smpl_time + '00:00:00'::interval)) DESC\r\n Sort Method: quicksort Memory: 25kB\r\n Buffers: shared hit=3 read=531\r\n -> Nested Loop (cost=0.00..2746490.52 rows=9636 width=158) (actual time=46.946..119.897 rows=3 loops=1)\r\n Buffers: shared hit=3 read=531\r\n -> Seq Scan on channel t 
(cost=0.00..915.83 rows=1 width=41) (actual time=16.217..18.257 rows=1 loops=1)\r\n Filter: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\r\n Rows Removed by Filter: 33425\r\n Buffers: shared hit=1 read=497\r\n -> Append (cost=0.00..2684377.38 rows=6117323 width=125) (actual time=30.717..101.624 rows=3 loops=1)\r\n Buffers: shared hit=2 read=34\r\n -> Seq Scan on sample c (cost=0.00..0.00 rows=1 width=334) (actual time=0.002..0.002 rows=0 loops=1)\r\n Filter: (t.channel_id = channel_id)\r\n -> Bitmap Heap Scan on sample_buil c_1 (cost=149.25..10404.32 rows=6300 width=328) (actual time=9.241\r\n..9.242 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=3\r\n -> Bitmap Index Scan on sample_time_b_idx (cost=0.00..147.68 rows=6300 width=0) (actual time=9.\r\n237..9.237 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=3\r\n -> Bitmap Heap Scan on sample_ctrl c_2 (cost=781.30..11912.06 rows=33661 width=328) (actual time=0.02\r\n0..0.020 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=3\r\n -> Bitmap Index Scan on sample_time_c_idx (cost=0.00..772.88 rows=33661 width=0) (actual time=0\r\n.018..0.018 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=3\r\n -> Bitmap Heap Scan on sample_util c_3 (cost=221.93..25401.37 rows=9483 width=328) (actual time=7.888\r\n..7.888 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=3\r\n -> Bitmap Index Scan on sample_time_u_idx (cost=0.00..219.56 rows=9483 width=0) (actual time=7.\r\n886..7.886 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=3\r\n -> Bitmap Heap Scan on sample_buil_month c_4 (cost=366.32..47118.08 rows=15711 width=82) (actual time\r\n=13.556..24.870 rows=3 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Heap Blocks: exact=3\r\n Buffers: shared read=7\r\n -> Bitmap Index Scan on sample_time_bm_idx (cost=0.00..362.39 rows=15711 width=0) (actual time=\r\n6.712..6.712 rows=3 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=4\r\n -> Bitmap Heap Scan on sample_buil_year c_5 (cost=73216.89..687718.44 rows=3162364 width=328) (actual\r\n time=18.015..18.015 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=4\r\n -> Bitmap Index Scan on sample_time_by_idx (cost=0.00..72426.29 rows=3162364 width=0) (actual t\r\nime=18.011..18.011 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=4\r\n -> Bitmap Heap Scan on sample_ctrl_month c_6 (cost=12446.67..226848.19 rows=537562 width=85) (actual\r\ntime=0.029..0.029 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=4\r\n -> Bitmap Index Scan on sample_time_cm_idx (cost=0.00..12312.28 rows=537562 width=0) (actual ti\r\nme=0.026..0.026 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=4\r\n -> Bitmap Heap Scan on sample_ctrl_year c_7 (cost=48486.51..978945.83 rows=2094186 width=76) (actual\r\ntime=23.088..23.088 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=4\r\n -> Bitmap Index Scan on sample_time_cy_idx (cost=0.00..47962.96 rows=2094186 width=0) (actual t\r\nime=23.086..23.086 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared read=4\r\n -> Bitmap Heap Scan on sample_util_month c_8 (cost=2249.10..277115.63 rows=97101 width=82) (actual 
ti\r\nme=7.623..7.623 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared hit=1 read=3\r\n -> Bitmap Index Scan on sample_time_um_idx (cost=0.00..2224.82 rows=97101 width=0) (actual time\r\n=7.619..7.619 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared hit=1 read=3\r\n -> Bitmap Heap Scan on sample_util_year c_9 (cost=3727.96..418913.45 rows=160954 width=83) (actual ti\r\nme=10.815..10.815 rows=0 loops=1)\r\n Recheck Cond: (channel_id = t.channel_id)\r\n Buffers: shared hit=1 read=3\r\n -> Bitmap Index Scan on sample_time_uy_idx (cost=0.00..3687.72 rows=160954 width=0) (actual tim\r\ne=10.811..10.811 rows=0 loops=1)\r\n Index Cond: (channel_id = t.channel_id)\r\n Buffers: shared hit=1 read=3\r\n Planning time: 15.656 ms\r\n Execution time: 120.062 ms\r\n(73 rows)\r\n\r\ncss_archive_3_0_0=#\r\n",
"msg_date": "Thu, 3 Jan 2019 14:34:03 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
{
"msg_contents": "On Fri, 4 Jan 2019 at 02:20, Abadie Lana <[email protected]> wrote:\n> > From: David Rowley <[email protected]>\n> > Sent: 03 January 2019 14:01\n> Right, so you need to check your indexes on sample_ctrl_year and sample_buil_year. You need an index on (channel_id, smpl_time) on those.\n\n> These indexes exist already\n\nThat's interesting. The \\d output indicates that the indexes are not\nINVALID, so it's not all that obvious why the planner would choose a\nlesser index to provide the required rows. One thought is that the\nmore suitable index is very bloated. This would increase the\nestimated cost of scanning the index and reduce the chances of the\nindex being selected by the query planner.\n\nIf you execute:\n\nselect indrelid::regclass as table_name, indexrelid::Regclass as\nindex_name,pg_size_pretty(pg_relation_size(indrelid))\ntable_size,pg_size_pretty(pg_relation_size(indexrelid)) index_size\nfrom pg_index\nwhere indrelid in('sample_ctrl_year'::regclass, 'sample_buil_year'::regclass)\norder by indrelid::regclass::name, indexrelid::regclass::name;\n\nThis should show you the size of the tables and indexes in question.\nIf the sample_time_cy_idx and sample_time_by_idx indexes are very\nlarge when compared with the size of their table, then it is likely\nworth building a new index for these then dropping the old index then\nretrying the re-written version of the query. If this is a live\nsystem then you can build the new indexes by using the CREATE INDEX\nCONCURRENTLY command. This will allow other DML operations to work\nwithout being blocked. The old indexes can then be dropped with DROP\nINDEX CONCURRENTLY.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 10:42:28 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
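If the size comparison had pointed at a bloated index, a non-blocking rebuild along the lines described above would look roughly like this (the _new suffix is just a temporary placeholder name):

    CREATE INDEX CONCURRENTLY sample_time_cy_idx_new
        ON sample_ctrl_year (channel_id, smpl_time);
    DROP INDEX CONCURRENTLY sample_time_cy_idx;
    ALTER INDEX sample_time_cy_idx_new RENAME TO sample_time_cy_idx;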
{
"msg_contents": "On Thu, Jan 03, 2019 at 12:57:27PM +0000, Abadie Lana wrote:\n> Main parameters : effective_cache_size : 4GB, shared_buffers 4GB, work_mem 4MB\n\nI doubt it will help much, but you should consider increasing work_mem, unless\nyou have many expensive queries running at once.\n\nCould you also send the rest of the pg_statistic for that table ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='...' AND tablename='...' ORDER BY 1 DESC; \n\n",
"msg_date": "Thu, 3 Jan 2019 17:47:43 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
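A sketch of the work_mem suggestion: it can be raised for a single session while testing, and later persisted for the application role only rather than globally (the role name below is hypothetical):

    SET work_mem = '64MB';   -- session only, safe to experiment with
    SHOW work_mem;

    ALTER ROLE css_archive_app SET work_mem = '64MB';  -- hypothetical role name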
{
"msg_contents": "\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]> \r\nSent: 03 January 2019 22:42\r\nTo: Abadie Lana <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: select query does not pick up the right index\r\n\r\nOn Fri, 4 Jan 2019 at 02:20, Abadie Lana <[email protected]> wrote:\r\n> > From: David Rowley <[email protected]>\r\n> > Sent: 03 January 2019 14:01\r\n> Right, so you need to check your indexes on sample_ctrl_year and sample_buil_year. You need an index on (channel_id, smpl_time) on those.\r\n\r\n> These indexes exist already\r\n\r\nThat's interesting. The \\d output indicates that the indexes are not INVALID, so it's not all that obvious why the planner would choose a lesser index to provide the required rows. One thought is that the more suitable index is very bloated. This would increase the estimated cost of scanning the index and reduce the chances of the index being selected by the query planner.\r\n\r\nIf you execute:\r\n\r\nselect indrelid::regclass as table_name, indexrelid::Regclass as\r\nindex_name,pg_size_pretty(pg_relation_size(indrelid))\r\ntable_size,pg_size_pretty(pg_relation_size(indexrelid)) index_size from pg_index where indrelid in('sample_ctrl_year'::regclass, 'sample_buil_year'::regclass) order by indrelid::regclass::name, indexrelid::regclass::name;\r\n\r\nThis should show you the size of the tables and indexes in question.\r\nIf the sample_time_cy_idx and sample_time_by_idx indexes are very large when compared with the size of their table, then it is likely worth building a new index for these then dropping the old index then retrying the re-written version of the query. If this is a live system then you can build the new indexes by using the CREATE INDEX CONCURRENTLY command. This will allow other DML operations to work without being blocked. The old indexes can then be dropped with DROP INDEX CONCURRENTLY.\r\n\r\n-- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n\r\nHere the result...For me it does not sound that it is bloated...Also still a mystery why wrong indexes are picked up for buil and ctrl and not for util...\r\n\r\nselect indrelid::regclass as table_name, indexrelid::Regclass as\r\nindex_name,pg_size_pretty(pg_relation_size(indrelid))\r\ntable_size,pg_size_pretty(pg_relation_size(indexrelid)) index_size from pg_index where indrelid in('sample_ctrl_year'::regclass, 'sample_buil_year'::regclass,'sample_util_year'::regclass) order by indrelid::regclass::name, indexrelid::regclass::name;\r\n table_name | index_name | table_size | index_size\r\n------------------+---------------------+------------+------------\r\n sample_buil_year | sample_time_by_idx | 4492 MB | 1522 MB\r\n sample_buil_year | sample_time_yb1_idx | 4492 MB | 1522 MB\r\n sample_buil_year | smpl_time_bx2_idx | 4492 MB | 1084 MB\r\n sample_ctrl_year | sample_time_cy_idx | 7065 MB | 2394 MB\r\n sample_ctrl_year | sample_time_yc1_idx | 7065 MB | 2394 MB\r\n sample_ctrl_year | smpl_time_cmx2_idx | 7065 MB | 1705 MB\r\n sample_util_year | sample_time_uy_idx | 7140 MB | 2426 MB\r\n sample_util_year | sample_time_yu1_idx | 7140 MB | 2426 MB\r\n sample_util_year | smpl_time_ux2_idx | 7140 MB | 1727 MB\r\n(9 rows)\r\n\r\nI have recreated the indexes for sample_ctrl_year and sample_buil_year and same index size.\r\nI rerun the query... and still the same plan execution as previously sent....\r\nThanks for your support...One thing I spot is the I/O on this machine is rather slow... 
the very first time I run this query it takes 247503.006 ms to execute (I can see that the postgres process is in state D with low CPU; using iotop I can see that the I/O read speed cannot go beyond 20MB/sec). The second time I run the query, the CPU goes up to 100% and there is no D state.\r\n\r\n",
"msg_date": "Fri, 4 Jan 2019 08:10:44 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
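Given the 20 MB/sec read rate seen in iotop, it may help to make EXPLAIN report how much of the runtime is spent waiting on storage. With track_io_timing enabled, EXPLAIN (ANALYZE, BUFFERS) adds I/O Timings lines, which separates slow disk from a bad plan:

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;   -- re-run the problem query from the earlier messages here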
{
"msg_contents": "\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: 04 January 2019 00:48\nTo: Abadie Lana <[email protected]>\nCc: David Rowley <[email protected]>; [email protected]\nSubject: Re: select query does not pick up the right index\n\nOn Thu, Jan 03, 2019 at 12:57:27PM +0000, Abadie Lana wrote:\n> Main parameters : effective_cache_size : 4GB, shared_buffers 4GB, \n> work_mem 4MB\n\nI doubt it will help much, but you should consider increasing work_mem, unless you have many expensive queries running at once.\n\nCould you also send the rest of the pg_statistic for that table ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='...' AND tablename='...' ORDER BY 1 DESC; \n\nHmm. Is it normal that the couple (tablename,attname ) is not unique? I'm surprised to see sample_{ctrl,util,buil} quoted twice\n\ncss_archive_3_0_0=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='channel_id' AND tablename like 'sample%' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | null_frac | n_distinct | n_mcv | n_hist\n----------+-------------------+------------+-----------+------------+-------+--------\n 1 | sample_buil_year | channel_id | 0 | 16 | 16 |\n 0.98249 | sample_ctrl | channel_id | 0 | 26 | 17 | 9\n 0.982333 | sample_ctrl_month | channel_id | 0 | 34 | 17 | 17\n 0.981533 | sample_ctrl | channel_id | 0 | 28 | 18 | 10\n 0.9371 | sample_ctrl_year | channel_id | 0 | 38 | 16 | 22\n 0.928767 | sample_buil_month | channel_id | 0 | 940 | 54 | 101\n 0.92535 | sample | channel_id | 0 | 2144 | 167 | 1001\n 0.907501 | sample_buil | channel_id | 0 | 565 | 43 | 101\n 0.8876 | sample_util_year | channel_id | 0 | 501 | 45 | 101\n 0.815 | sample_util | channel_id | 0 | 557 | 82 | 101\n 0.807667 | sample_buil | channel_id | 0 | 164 | 31 | 101\n 0.806267 | sample_util | channel_id | 0 | 732 | 100 | 101\n 0.803766 | sample_util_month | channel_id | 0 | 731 | 100 | 101\n(13 rows)\n\nAh...sample_ctrl_year and sample_buil_year have n_distinct -1? Unlike sample_util_year. Could that explain the wrong choice? 
\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='smpl_time' AND tablename like 'sample%' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | null_frac | n_distinct | n_mcv | n_hist\n------------+-------------------+-----------+-----------+-------------+-------+--------\n | sample_ctrl_month | smpl_time | 0 | -1 | | 101\n | sample_ctrl_year | smpl_time | 0 | -1 | | 101\n | sample_ctrl | smpl_time | 0 | -1 | | 101\n | sample_ctrl | smpl_time | 0 | -1 | | 101\n | sample_buil_year | smpl_time | 0 | -1 | | 101\n 0.0154667 | sample_buil_month | smpl_time | 0 | 1.03857e+06 | 100 | 101\n 0.0154523 | sample_buil | smpl_time | 0 | 854250 | 100 | 101\n 0.0115 | sample_util | smpl_time | 0 | 405269 | 100 | 101\n 0.0112333 | sample_util | smpl_time | 0 | 537030 | 100 | 101\n 0.0106667 | sample_util_month | smpl_time | 0 | 539001 | 100 | 101\n 0.00946667 | sample_buil | smpl_time | 0 | -0.328554 | 100 | 101\n 0.00852342 | sample | smpl_time | 0 | 1.5125e+07 | 1000 | 1001\n 0.00780001 | sample_util_year | smpl_time | 0 | 1.73199e+06 | 100 | 101\n(13 rows)\n\n\n\n",
"msg_date": "Fri, 4 Jan 2019 08:17:46 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
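The apparent duplicates are explained by the inherited column of pg_stats: for each inheritance parent there is one row covering the whole tree and one covering only the parent's own rows. A quick way to see both side by side:

    SELECT tablename, attname, inherited, n_distinct, null_frac
    FROM   pg_stats
    WHERE  tablename IN ('sample_buil', 'sample_ctrl', 'sample_util')
    AND    attname = 'channel_id'
    ORDER  BY tablename, inherited;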
{
"msg_contents": "-----Original Message-----\nFrom: [email protected] <[email protected]> On Behalf Of Abadie Lana\nSent: 04 January 2019 09:18\nTo: Justin Pryzby <[email protected]>\nCc: David Rowley <[email protected]>; [email protected]\nSubject: [Possible Spoof] RE: select query does not pick up the right index\n\nWarning: This message was sent by [email protected] supposedly on behalf of Abadie Lana <[email protected]>. Please contact\n\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]>\nSent: 04 January 2019 00:48\nTo: Abadie Lana <[email protected]>\nCc: David Rowley <[email protected]>; [email protected]\nSubject: Re: select query does not pick up the right index\n\nOn Thu, Jan 03, 2019 at 12:57:27PM +0000, Abadie Lana wrote:\n> Main parameters : effective_cache_size : 4GB, shared_buffers 4GB, \n> work_mem 4MB\n\nI doubt it will help much, but you should consider increasing work_mem, unless you have many expensive queries running at once.\n\nCould you also send the rest of the pg_statistic for that table ?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='...' AND tablename='...' ORDER BY 1 DESC; \n\nHmm. Is it normal that the couple (tablename,attname ) is not unique? I'm surprised to see sample_{ctrl,util,buil} quoted twice\n\ncss_archive_3_0_0=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='channel_id' AND tablename like 'sample%' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | null_frac | n_distinct | n_mcv | n_hist\n----------+-------------------+------------+-----------+------------+-------+--------\n 1 | sample_buil_year | channel_id | 0 | 16 | 16 |\n 0.98249 | sample_ctrl | channel_id | 0 | 26 | 17 | 9\n 0.982333 | sample_ctrl_month | channel_id | 0 | 34 | 17 | 17\n 0.981533 | sample_ctrl | channel_id | 0 | 28 | 18 | 10\n 0.9371 | sample_ctrl_year | channel_id | 0 | 38 | 16 | 22\n 0.928767 | sample_buil_month | channel_id | 0 | 940 | 54 | 101\n 0.92535 | sample | channel_id | 0 | 2144 | 167 | 1001\n 0.907501 | sample_buil | channel_id | 0 | 565 | 43 | 101\n 0.8876 | sample_util_year | channel_id | 0 | 501 | 45 | 101\n 0.815 | sample_util | channel_id | 0 | 557 | 82 | 101\n 0.807667 | sample_buil | channel_id | 0 | 164 | 31 | 101\n 0.806267 | sample_util | channel_id | 0 | 732 | 100 | 101\n 0.803766 | sample_util_month | channel_id | 0 | 731 | 100 | 101\n(13 rows)\n\nAh...sample_ctrl_year and sample_buil_year have n_distinct -1? Unlike sample_util_year. Could that explain the wrong choice? 
\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='smpl_time' AND tablename like 'sample%' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | null_frac | n_distinct | n_mcv | n_hist\n------------+-------------------+-----------+-----------+-------------+-------+--------\n | sample_ctrl_month | smpl_time | 0 | -1 | | 101\n | sample_ctrl_year | smpl_time | 0 | -1 | | 101\n | sample_ctrl | smpl_time | 0 | -1 | | 101\n | sample_ctrl | smpl_time | 0 | -1 | | 101\n | sample_buil_year | smpl_time | 0 | -1 | | 101\n 0.0154667 | sample_buil_month | smpl_time | 0 | 1.03857e+06 | 100 | 101\n 0.0154523 | sample_buil | smpl_time | 0 | 854250 | 100 | 101\n 0.0115 | sample_util | smpl_time | 0 | 405269 | 100 | 101\n 0.0112333 | sample_util | smpl_time | 0 | 537030 | 100 | 101\n 0.0106667 | sample_util_month | smpl_time | 0 | 539001 | 100 | 101\n 0.00946667 | sample_buil | smpl_time | 0 | -0.328554 | 100 | 101\n 0.00852342 | sample | smpl_time | 0 | 1.5125e+07 | 1000 | 1001\n 0.00780001 | sample_util_year | smpl_time | 0 | 1.73199e+06 | 100 | 101\n(13 rows)\n\nBased on your feedback...i rerun analyse directly on the two table sample_ctrl_year and sample_buil_year\nThe new values are\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='channel_id' AND tablename like 'sample%' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | null_frac | n_distinct | n_mcv | n_hist\n----------+-------------------+------------+-----------+------------+-------+--------\n 0.99987 | sample_buil_year | channel_id | 0 | 76 | 16 | 60\n 0.999632 | sample_ctrl_year | channel_id | 0 | 132 | 31 | 101\n 0.999628 | sample_ctrl_month | channel_id | 0 | 84 | 23 | 61\n 0.999627 | sample_ctrl | channel_id | 0 | 132 | 31 | 101\n 0.999599 | sample_ctrl | channel_id | 0 | 42 | 22 | 20\n 0.998074 | sample_buil | channel_id | 0 | 493 | 122 | 371\n 0.997693 | sample_util | channel_id | 0 | 1379 | 509 | 870\n 0.991841 | sample_buil | channel_id | 0 | 9867 | 107 | 9740\n 0.991567 | sample_util_month | channel_id | 0 | 5716 | 504 | 5209\n 0.990369 | sample_util_year | channel_id | 0 | 4946 | 255 | 4689\n 0.990062 | sample_util | channel_id | 0 | 5804 | 641 | 5160\n 0.972386 | sample_buil_month | channel_id | 0 | 19946 | 148 | 10001\n 0.967391 | sample | channel_id | 0 | 7597 | 409 | 7178\n(13 rows)\n\n\nNow when running the query again, only for sample_buil_year table the wrong index is picked up...\nexplain (analyze, buffers) select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------\n Limit (cost=13.40..30.01 rows=5 width=112) (actual time=13554.536..13554.570 rows=3 loops=1)\n Buffers: shared hit=26626389 read=17\n InitPlan 1 (returns $0)\n -> Index Scan using unique_chname on channel (cost=0.41..8.43 rows=1 width=8) (actual time=26.858..26.860 rows=1 loop\ns=1)\n Index Cond: ((name)::text = 
'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Buffers: shared hit=2 read=2\n -> Result (cost=4.96..5131208.65 rows=1544048 width=112) (actual time=13554.534..13554.567 rows=3 loops=1)\n Buffers: shared hit=26626389 read=17\n -> Merge Append (cost=4.96..5115768.17 rows=1544048 width=80) (actual time=13554.531..13554.562 rows=3 loops=1)\n Sort Key: c.smpl_time DESC\n Buffers: shared hit=26626389 read=17\n -> Index Scan Backward using smpl_time_qa_idx on sample c (cost=0.12..8.14 rows=1 width=326) (actual time=0\n.005..0.005 rows=0 loops=1)\n Filter: (channel_id = $0)\n Buffers: shared hit=1\n -> Index Scan Backward using sample_time_b_idx on sample_buil c_1 (cost=0.42..7775.26 rows=2096 width=320)\n(actual time=38.931..38.932 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=3 read=4\n -> Index Scan Backward using sample_time_c_idx on sample_ctrl c_2 (cost=0.42..77785.57 rows=22441 width=320\n) (actual time=0.010..0.010 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=3\n -> Index Scan Backward using sample_time_u_idx on sample_util c_3 (cost=0.43..14922.72 rows=3830 width=320)\n (actual time=8.939..8.939 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=1 read=2\n -> Index Scan Backward using sample_time_bm_idx on sample_buil_month c_4 (cost=0.56..2967.10 rows=740 width\n=74) (actual time=260.282..260.311 rows=3 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=3 read=5\n -> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5 (cost=0.56..2023054.76 rows=665761 w\nidth=75) (actual time=13216.589..13216.589 rows=0 loops=1)\n Filter: (channel_id = $0)\n Rows Removed by Filter: 50597834\n Buffers: shared hit=26626368\n -> Index Scan Backward using sample_time_cm_idx on sample_ctrl_month c_6 (cost=0.56..759241.36 rows=217585\nwidth=75) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=4\n -> Index Scan Backward using sample_time_cy_idx on sample_ctrl_year c_7 (cost=0.57..2097812.02 rows=602872\nwidth=76) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=4\n -> Index Scan Backward using sample_time_um_idx on sample_util_month c_8 (cost=0.57..48401.65 rows=12418 wi\ndth=75) (actual time=18.999..19.000 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=1 read=3\n -> Index Scan Backward using sample_time_uy_idx on sample_util_year c_9 (cost=0.57..54293.22 rows=16304 wid\nth=74) (actual time=10.739..10.739 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=1 read=3\n Planning time: 0.741 ms\n Execution time: 13554.666 ms\n(44 rows)\nLooking more closely to the sample_buil_year table\nselect count(distinct channel_id),count(*) from sample_buil_year;\n count | count\n-------+----------\n 100 | 50597834\n(1 row)\n\nNow, the channel name I gave has no entries in sample_buil_year...(and when I run the query directly against sample_buil_year the right index is picked up).... So maybe something related with the partitioning?\n\nselect 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample_buil_year c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n ?column? 
| smpl_time | nanosecs | float_val | num_val | str_val | datatype | array_val\n----------+-----------+----------+-----------+---------+---------+----------+-----------\n(0 rows)\n\ncss_archive_3_0_0=# explain analyze select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample_buil_year c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------\n---------------------------------------\n Limit (cost=9.00..21.31 rows=5 width=107) (actual time=0.055..0.055 rows=0 loops=1)\n InitPlan 1 (returns $0)\n -> Index Scan using unique_chname on channel (cost=0.41..8.43 rows=1 width=8) (actual time=0.038..0.040 rows=1 loops=\n1)\n Index Cond: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n -> Index Scan Backward using sample_time_by_idx on sample_buil_year c (cost=0.56..1639944.37 rows=665761 width=107) (ac\ntual time=0.054..0.054 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Planning time: 0.178 ms\n Execution time: 0.088 ms\n(8 rows)\n\n\n",
"msg_date": "Fri, 4 Jan 2019 08:58:57 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
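Since analysing the two children directly changed their statistics, a sketch of re-analysing the whole inheritance tree in one go (the parents are listed explicitly, the children are discovered through pg_inherits):

    DO $$
    DECLARE
        t regclass;
    BEGIN
        FOR t IN
            SELECT inhrelid::regclass
            FROM   pg_inherits
            WHERE  inhparent IN ('sample'::regclass, 'sample_buil'::regclass,
                                 'sample_ctrl'::regclass, 'sample_util'::regclass)
            UNION ALL
            SELECT unnest(ARRAY['sample', 'sample_buil',
                                'sample_ctrl', 'sample_util'])::regclass
        LOOP
            EXECUTE format('ANALYZE %s', t);  -- ANALYZE is allowed inside a transaction
        END LOOP;
    END $$;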
{
"msg_contents": "On Fri, Jan 04, 2019 at 08:58:57AM +0000, Abadie Lana wrote:\n> SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE attname='...' AND tablename='...' ORDER BY 1 DESC; \n> \n> Hmm. Is it normal that the couple (tablename,attname ) is not unique? I'm surprised to see sample_{ctrl,util,buil} quoted twice\n\nOne of the rows is for \"inherited stats\" (including child tables) stats and one\nis \"noninherited stats\".\n\nThe unique index on the table behind that view is:\n \"pg_statistic_relid_att_inh_index\" UNIQUE, btree (starelid, staattnum, stainherit)\n\nOn the wiki, I added inherited and correlation columns. Would you rerun that query ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nI'm also interested to see \\d and channel_id statistics for the channel table.\n\n> explain (analyze, buffers) select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n\nYou originally wrote this as a implicit comma join. Does the original query\nstill have an issue ? The =(subselect query) doesn't allow the planner to\noptimize for the given channel, which seems to be a fundamental problem.\n\nOn Fri, Jan 04, 2019 at 08:58:57AM +0000, Abadie Lana wrote:\n> Based on your feedback...i rerun analyse directly on the two table sample_ctrl_year and sample_buil_year\n> [...] Now when running the query again, only for sample_buil_year table the wrong index is picked up...\n\nIt looks like statistics on your tables were completely wrong; not just\nsample_ctrl_year and sample_buil_year. Right ?\n\nAutoanalyze would normally handle this on nonempty tables (children or\notherwise) and you should manually run ANALZYE on the parents (both levels of\nthem) whenever statistics change, like after running a big DELETE or DROP or\nafter a significant interval of time has passed relative to the range of time\nin the table's timestamp columns.\n\nDo you know why autoanalze didn't handle the nonempty tables on its own ?\n\n> Now, the channel name I gave has no entries in sample_buil_year...(and when I run the query directly against sample_buil_year the right index is picked up).... 
So maybe something related with the partitioning?\n\n> -> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5 (cost=0.56..2023054.76 rows=665761 width=75) (actual time=13216.589..13216.589 rows=0 loops=1)\n> Filter: (channel_id = $0)\n> Rows Removed by Filter: 50597834\n> Buffers: shared hit=26626368\n\nSo it scanned the entire index expecting to find 5 matching channel IDs \"pretty\nsoon\", based on the generic distribution of channel IDs, without the benefit of\nknowing that this channel ID doesn't exist at all (due to =(subquery)).\n\n26e6 buffers is 200GB, apparently accessing some pages many\ntimes (even if cached).\n\n table_name | index_name | table_size | index_size \n sample_buil_year | smpl_time_bx2_idx | 4492 MB | 1084 MB \n\nGeneral comments:\n\nOn Wed, Jan 02, 2019 at 04:28:41PM +0000, Abadie Lana wrote:\n> \"sample_time_bm_idx\" btree (channel_id, smpl_time)\n> \"sample_time_mb1_idx\" btree (smpl_time, channel_id)\n> \"smpl_time_bx1_idx\" btree (smpl_time)\n\nThe smpl_time index is loosely redundant with index on (smpl_time,channel_id).\nYou might consider dropping it, or otherwise dropping the smpl_time,channel_id\nindex and making two separate indices on smpl_time and channel. That would\nallow bitmap ANDing them together.\n\nOr possibly (depending on detail of your data loading) leaving the composite\nindex and changing smpl_time to a BRIN index - it's nice to be able to CLUSTER\non the btree index to maximize the efficiency of the brin index.\n\n>Check constraints:\n> \"sample_buil_month_smpl_time_check\" CHECK (smpl_time >= (now() - '32 days'::interval)::timestamp without time zone AND smpl_time <= now())\n\nI'm surprised that works, and not really sure what it's doing..but in any case\nit's maybe not doing what you wanted(??). I'm guessing you never get\nconstraint exclusion (which is irrelevant for this query but still).\n\nJustin\n\n",
"msg_date": "Fri, 4 Jan 2019 22:23:57 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
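For the BRIN idea above, a sketch on one partition, using the index names from the quoted \d output of sample_buil_month: keep the composite (channel_id, smpl_time) btree for channel lookups and replace the standalone smpl_time btree with a BRIN index, which is typically a small fraction of the size:

    CREATE INDEX CONCURRENTLY smpl_time_bx1_brin
        ON sample_buil_month USING brin (smpl_time);
    DROP INDEX CONCURRENTLY smpl_time_bx1_idx;

    -- Optional: physically ordering the table by time keeps the BRIN block ranges tight,
    -- but CLUSTER takes an ACCESS EXCLUSIVE lock, so it is only practical in a maintenance window.
    CLUSTER sample_buil_month USING sample_time_mb1_idx;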
{
"msg_contents": "\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: 05 January 2019 05:24\nTo: Abadie Lana <[email protected]>\nCc: David Rowley <[email protected]>; [email protected]\nSubject: Re: select query does not pick up the right index\n\nOn Fri, Jan 04, 2019 at 08:58:57AM +0000, Abadie Lana wrote:\n> SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, \n> tablename, attname, null_frac, n_distinct, \n> array_length(most_common_vals,1) n_mcv, \n> array_length(histogram_bounds,1) n_hist FROM pg_stats WHERE \n> attname='...' AND tablename='...' ORDER BY 1 DESC;\n> \n> Hmm. Is it normal that the couple (tablename,attname ) is not unique? \n> I'm surprised to see sample_{ctrl,util,buil} quoted twice\n\nOne of the rows is for \"inherited stats\" (including child tables) stats and one is \"noninherited stats\".\n\nThe unique index on the table behind that view is:\n \"pg_statistic_relid_att_inh_index\" UNIQUE, btree (starelid, staattnum, stainherit)\n\nOn the wiki, I added inherited and correlation columns. Would you rerun that query ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n/*********************REPLY**********************************************************/\ncss_archive_3_0_0=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='smpl_time' AND tablename like 'sample%' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation\n-------------+-------------------+-----------+-----------+-----------+------------+-------+--------+-------------\n 0.124457 | sample_buil | smpl_time | f | 0 | -0.752503 | 10000 | 10001 | 0.0802559\n 0.100454 | sample_util | smpl_time | f | 0 | -0.323349 | 10000 | 10001 | 0.614187\n 0.0393624 | sample_buil_month | smpl_time | f | 0 | -0.617567 | 10000 | 10001 | 0.181361\n 0.0305711 | sample_util_month | smpl_time | f | 0 | -0.169437 | 10000 | 10001 | 0.781718\n 0.0194441 | sample_util_year | smpl_time | f | 0 | -0.428909 | 10000 | 10001 | 0.999893\n 0.0172493 | sample_util | smpl_time | t | 0 | -0.179957 | 10000 | 10001 | -0.563603\n 0.0117653 | sample | smpl_time | t | 0 | -0.235397 | 10000 | 10001 | 0.0880253\n 0.0116284 | sample_buil | smpl_time | t | 0 | -0.743071 | 10000 | 10001 | -0.100979\n 2.66667e-05 | sample_ctrl_month | smpl_time | f | 0 | -0.999848 | 32 | 10001 | -0.356626\n 8.48788e-06 | sample_ctrl | smpl_time | f | 0 | -0.999996 | 4 | 10001 | 0.331492\n 6.33333e-06 | sample_ctrl_year | smpl_time | f | 0 | -0.999835 | 9 | 10001 | 0.999971\n 5.33333e-06 | sample_ctrl | smpl_time | t | 0 | -0.999827 | 8 | 10001 | 0.0492292\n 5e-06 | sample_buil_year | smpl_time | f | 0 | -0.999918 | 7 | 10001 | 0.999978\n(13 rows)\n\ncss_archive_3_0_0=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='channel_id' AND tablename like 'sample%' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation\n----------+-------------------+------------+-----------+-----------+------------+-------+--------+-------------\n 0.99987 | sample_buil_year | channel_id | f | 0 | 76 | 16 | 60 | 
0.207932\n 0.999632 | sample_ctrl_year | channel_id | f | 0 | 132 | 31 | 101 | 0.201352\n 0.999628 | sample_ctrl_month | channel_id | f | 0 | 84 | 23 | 61 | 0.104656\n 0.999627 | sample_ctrl | channel_id | t | 0 | 132 | 31 | 101 | 0.143691\n 0.999599 | sample_ctrl | channel_id | f | 0 | 42 | 22 | 20 | 0.0874279\n 0.998074 | sample_buil | channel_id | f | 0 | 493 | 122 | 371 | 0.0206452\n 0.997693 | sample_util | channel_id | f | 0 | 1379 | 509 | 870 | 0.079591\n 0.991841 | sample_buil | channel_id | t | 0 | 9867 | 107 | 9740 | 0.00540782\n 0.991567 | sample_util_month | channel_id | f | 0 | 5716 | 504 | 5209 | 0.216868\n 0.990369 | sample_util_year | channel_id | f | 0 | 4946 | 255 | 4689 | 0.547934\n 0.990062 | sample_util | channel_id | t | 0 | 5804 | 641 | 5160 | -0.31778\n 0.972386 | sample_buil_month | channel_id | f | 0 | 19946 | 148 | 10001 | 0.0932767\n 0.967391 | sample | channel_id | t | 0 | 7597 | 409 | 7178 | 0.501865\n(13 rows)\n\ncss_archive_3_0_0=\n/**********************ENDREPLY************************************************/\n\nI'm also interested to see \\d and channel_id statistics for the channel table.\n\n/***********************REPLY***********************************************/\n\\d channel\n Table \"public.channel\"\n Column | Type | Collation | Nullable | Default\n--------------+------------------------+-----------+----------+-----------------------------------\n channel_id | bigint | | not null | nextval('channel_chid'::regclass)\n name | character varying(100) | | not null |\n descr | character varying(100) | | |\n grp_id | bigint | | |\n smpl_mode_id | bigint | | |\n smpl_val | double precision | | |\n smpl_per | double precision | | |\n retent_id | bigint | | | 1\n retent_val | double precision | | |\nIndexes:\n \"channel_pkey\" PRIMARY KEY, btree (channel_id)\n \"unique_chname\" UNIQUE CONSTRAINT, btree (name)\n \"channel_name_channel_id_idx\" btree (name, channel_id)\n\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname in ('name','channel_id') AND tablename ='channel' ORDER BY 1 DESC;\n frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation\n----------+-----------+------------+-----------+-----------+------------+-------+--------+-------------\n | channel | channel_id | f | 0 | -1 | | 10001 | 0.0200338\n | channel | name | f | 0 | -1 | | 10001 | -0.257645\n\n\n\n/*********************ENDREPLY****************************************************************/\n\n> explain (analyze, buffers) select \n> 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c\n> .num_val,c.str_val,c.datatype,c.array_val from sample c WHERE \n> c.channel_id = (SELECT channel_id FROM channel WHERE \n> name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc \n> limit 5;\n\nYou originally wrote this as a implicit comma join. Does the original query still have an issue ? The =(subselect query) doesn't allow the planner to optimize for the given channel, which seems to be a fundamental problem.\n/****************************REPLY***************************************************/\nYes the original query still picks up the wrong index. This query actually was suggested by David Rowley and actually with this one the planner is taking the wring index for only sample_ctrl_year and sample_buil_year tables. 
With some proper analyse, now only sample_ctrl_year.\n/*****************************ENDREPLY**************************************************/\nOn Fri, Jan 04, 2019 at 08:58:57AM +0000, Abadie Lana wrote:\n> Based on your feedback...i rerun analyse directly on the two table \n> sample_ctrl_year and sample_buil_year [...] Now when running the query again, only for sample_buil_year table the wrong index is picked up...\n\nIt looks like statistics on your tables were completely wrong; not just sample_ctrl_year and sample_buil_year. Right ?\n/*****************************REPLY*******************************************************/\nI would say that when you have a partitioned table, running analyse on the parent table (which includes the children) does not give the same result as running analyse on each individual child table. I don't know if it is an expected behaviour?\n\n/********************************ENDREPLY****************************************************/\nAutoanalyze would normally handle this on nonempty tables (children or\notherwise) and you should manually run ANALZYE on the parents (both levels of\nthem) whenever statistics change, like after running a big DELETE or DROP or after a significant interval of time has passed relative to the range of time in the table's timestamp columns.\n\nDo you know why autoanalze didn't handle the nonempty tables on its own ?\n/******************************REPLY***************************************************************/\nThis database has been loaded via a dump. After there was no change in the actual tables'content apart from creating/droping.\nindexes. \nSo I guess that's why autoanalyze didn't run (also I didn't change the default configuration for this part in postgresql.conf)\n/*******************************ENDREPLY**********************************************************/\n> Now, the channel name I gave has no entries in sample_buil_year...(and when I run the query directly against sample_buil_year the right index is picked up).... So maybe something related with the partitioning?\n\n> -> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5 (cost=0.56..2023054.76 rows=665761 width=75) (actual time=13216.589..13216.589 rows=0 loops=1)\n> Filter: (channel_id = $0)\n> Rows Removed by Filter: 50597834\n> Buffers: shared hit=26626368\n\nSo it scanned the entire index expecting to find 5 matching channel IDs \"pretty soon\", based on the generic distribution of channel IDs, without the benefit of knowing that this channel ID doesn't exist at all (due to =(subquery)).\n/*********************************REPLY******************************************************/\nExactly it took hearethe wrong index smpl_time_bx2_idx instead of sample_time_by_idx.\n/*********************************ENDREPLY**************************************************/\n26e6 buffers is 200GB, apparently accessing some pages many times (even if cached).\n/**********************************REPLY********************************************************/\nYes this is what I observed when running iotop...more than 17GB was read from disk. I'm surprised as I would expect that the max. would be the index size...~7GB. 
We also get a swap alert...because it uses swap...\n/********************************ENDREPLY**************************************************/\n table_name        | index_name        | table_size | index_size \n sample_buil_year  | smpl_time_bx2_idx | 4492 MB    | 1084 MB \n\nGeneral comments:\n\nOn Wed, Jan 02, 2019 at 04:28:41PM +0000, Abadie Lana wrote:\n> \"sample_time_bm_idx\" btree (channel_id, smpl_time)\n> \"sample_time_mb1_idx\" btree (smpl_time, channel_id)\n> \"smpl_time_bx1_idx\" btree (smpl_time)\n\nThe smpl_time index is loosely redundant with the index on (smpl_time,channel_id).\nYou might consider dropping it, or otherwise dropping the smpl_time,channel_id index and making two separate indices on smpl_time and channel. That would allow bitmap ANDing them together.\n/******************************REPLY***********************************************************/\nYes, I know. The thing is I had to find a quick solution, as my application was taking ages for two types of queries (one which requires channel_id=XX + order by time, and another one by time range (all channels between T1 and T2)).\nAs smpl_time_bx1_idx was slowing down the first query, I created sample_time_mb1_idx and dropped smpl_time_bx1_idx.\nNow it has been recreated as I wanted to understand why the planner picked up the wrong indexes. \n/*****************************ENDREPLY**********************************************************/\nOr possibly (depending on detail of your data loading) leaving the composite index and changing smpl_time to a BRIN index - it's nice to be able to CLUSTER on the btree index to maximize the efficiency of the BRIN index.\n\n>Check constraints:\n> \"sample_buil_month_smpl_time_check\" CHECK (smpl_time >= (now() - \n>'32 days'::interval)::timestamp without time zone AND smpl_time <= \n>now())\n\nI'm surprised that works, and not really sure what it's doing...but in any case it's maybe not doing what you wanted(??). I'm guessing you never get constraint exclusion (which is irrelevant for this query but still).\n/*********************************REPLY************************************************/\nI know that the partitioning is not exclusive in this one. In fact the insert is done at the sample_{util/buil/ctrl} table. The data is in this table. Then there are some scripts which move data from sample -> sample_month and then sample_month -> sample_year. \nI'm not the owner of this schema...so cannot comment why it has been done like that... \nAnd same for indexes. I cannot change them. 
\nI did it in that case, because I did a copy of the database and launched the apps on this one (part of annual maintenance activities).\nI created the BRIN index on smpl_time and now the original query runs fine because it uses the right index, the one on (channel_id,smpl_time)\n\nexplain analyze select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_id and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------\n Limit (cost=1869725.53..1869725.54 rows=5 width=113) (actual time=3.898..3.900 rows=3 loops=1)\n -> Sort (cost=1869725.53..1869749.62 rows=9636 width=113) (actual time=3.896..3.897 rows=3 loops=1)\n Sort Key: c.smpl_time DESC\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..1869565.48 rows=9636 width=113) (actual time=2.270..3.878 rows=3 loops=1)\n -> Seq Scan on channel t (cost=0.00..915.83 rows=1 width=41) (actual time=2.212..3.773 rows=1 loops=1)\n Filter: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Rows Removed by Filter: 33425\n -> Append (cost=0.00..1853209.17 rows=1544048 width=88) (actual time=0.053..0.099 rows=3 loops=1)\n -> Seq Scan on sample c (cost=0.00..0.00 rows=1 width=334) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (t.channel_id = channel_id)\n -> Bitmap Heap Scan on sample_buil c_1 (cost=52.67..5440.29 rows=2096 width=328) (actual time=0.016.\n.0.016 rows=0 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n -> Bitmap Index Scan on sample_time_b_idx (cost=0.00..52.14 rows=2096 width=0) (actual time=0.\n008..0.008 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Bitmap Heap Scan on sample_ctrl c_2 (cost=522.34..11512.86 rows=22441 width=328) (actual time=0.0\n05..0.006 rows=0 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n -> Bitmap Index Scan on sample_time_c_idx (cost=0.00..516.73 rows=22441 width=0) (actual time=\n0.005..0.005 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Bitmap Heap Scan on sample_util c_3 (cost=90.11..12215.14 rows=3830 width=328) (actual time=0.009\n..0.009 rows=0 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n -> Bitmap Index Scan on sample_time_u_idx (cost=0.00..89.16 rows=3830 width=0) (actual time=0.\n006..0.006 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Bitmap Heap Scan on sample_buil_month c_4 (cost=18.29..2836.29 rows=740 width=82) (actual time=0.\n017..0.021 rows=3 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n Heap Blocks: exact=3\n -> Bitmap Index Scan on sample_time_bm_idx (cost=0.00..18.11 rows=740 width=0) (actual time=0.\n012..0.012 rows=3 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Bitmap Heap Scan on sample_buil_year c_5 (cost=15416.21..627094.50 rows=665761 width=83) (actual\ntime=0.008..0.008 rows=0 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n -> Bitmap Index Scan on sample_time_by_idx (cost=0.00..15249.77 rows=665761 width=0) (actual t\nime=0.007..0.007 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Bitmap Heap Scan on sample_ctrl_month c_6 (cost=5038.85..223721.75 rows=217585 width=83) (actual\ntime=0.006..0.007 rows=0 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n -> Bitmap Index Scan on sample_time_cm_idx (cost=0.00..4984.45 rows=217585 width=0) (actual 
ti\nme=0.006..0.006 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Bitmap Heap Scan on sample_ctrl_year c_7 (cost=13960.83..870933.00 rows=602872 width=84) (actual\ntime=0.006..0.006 rows=0 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n -> Bitmap Index Scan on sample_time_cy_idx (cost=0.00..13810.11 rows=602872 width=0) (actual t\nime=0.005..0.015 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Bitmap Heap Scan on sample_util_month c_8 (cost=288.81..45162.12 rows=12418 width=83) (actual tim\ne=0.008..0.008 rows=0 loops=1)\n Recheck Cond: (channel_id = t.channel_id)\n -> Bitmap Index Scan on sample_time_um_idx (cost=0.00..285.70 rows=12418 width=0) (actual time\n=0.007..0.007 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n -> Index Scan using sample_time_uy_idx on sample_util_year c_9 (cost=0.57..54293.22 rows=16304 width\n=82) (actual time=0.010..0.010 rows=0 loops=1)\n Index Cond: (channel_id = t.channel_id)\n Planning time: 1.752 ms\n Execution time: 4.004\n\n\nBut not the other query...still time-consuming because still using the wrong index in case of sample_buil_year (but curiously not the BRIN index)\n\nexplain (analyze, buffers) select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5; QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------\n Limit (cost=13.40..30.54 rows=5 width=112) (actual time=63411.725..63411.744 rows=3 loops=1)\n Buffers: shared hit=38 read=193865\n InitPlan 1 (returns $0)\n -> Index Scan using unique_chname on channel (cost=0.41..8.43 rows=1 width=8) (actual time=0.039..0.040 rows=1 loops\n=1)\n Index Cond: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Buffers: shared hit=4\n -> Result (cost=4.96..5294364.58 rows=1544048 width=112) (actual time=63411.723..63411.740 rows=3 loops=1)\n Buffers: shared hit=38 read=193865\n -> Merge Append (cost=4.96..5278924.10 rows=1544048 width=80) (actual time=63411.719..63411.735 rows=3 loops=1)\n Sort Key: c.smpl_time DESC\n Buffers: shared hit=38 read=193865\n -> Index Scan Backward using sample_time_all_idx on sample c (cost=0.12..8.14 rows=1 width=326) (actual ti\nme=0.048..0.048 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=5\n -> Index Scan Backward using sample_time_b_idx on sample_buil c_1 (cost=0.42..7775.26 rows=2096 width=320)\n (actual time=0.008..0.009 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=3\n -> Index Scan Backward using sample_time_c_idx on sample_ctrl c_2 (cost=0.42..77785.57 rows=22441 width=32\n0) (actual time=0.006..0.006 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=3\n -> Index Scan Backward using sample_time_u_idx on sample_util c_3 (cost=0.43..14922.72 rows=3830 width=320\n) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=3\n -> Index Scan Backward using sample_time_bm_idx on sample_buil_month c_4 (cost=0.56..2967.10 rows=740 widt\nh=74) (actual time=0.011..0.025 rows=3 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=8\n -> Index Scan Backward using sample_time_yb1_idx on sample_buil_year c_5 (cost=0.56..2186210.68 rows=66576\n1 width=75) (actual 
time=63411.573..63411.574 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared read=193865\n -> Index Scan Backward using sample_time_cm_idx on sample_ctrl_month c_6 (cost=0.56..759241.36 rows=217585\n width=75) (actual time=0.030..0.030 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=4\n -> Index Scan Backward using sample_time_cy_idx on sample_ctrl_year c_7 (cost=0.57..2097812.02 rows=602872\n width=76) (actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=4\n -> Index Scan Backward using sample_time_um_idx on sample_util_month c_8 (cost=0.57..48401.65 rows=12418 w\nidth=75) (actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=4\n -> Index Scan Backward using sample_time_uy_idx on sample_util_year c_9 (cost=0.57..54293.22 rows=16304 wi\ndth=74) (actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: (channel_id = $0)\n Buffers: shared hit=4\n Planning time: 0.727 ms\n Execution time: 63411.858 ms\n(43 rows)\n\n\n\\d sample_buil_year\n Table \"public.sample_buil_year\"\n Column | Type | Collation | Nullable | Default\n-------------+-----------------------------+-----------+----------+-------------\n channel_id | bigint | | not null |\n smpl_time | timestamp without time zone | | not null |\n nanosecs | bigint | | not null |\n severity_id | bigint | | not null |\n status_id | bigint | | not null |\n num_val | integer | | |\n float_val | double precision | | |\n str_val | character varying(120) | | |\n datatype | character(1) | | | ' '::bpchar\n array_val | bytea | | |\nIndexes:\n \"sample_time_by_idx\" btree (channel_id, smpl_time)\n \"sample_time_yb1_idx\" btree (smpl_time, channel_id)\n \"smpl__by_brin_idx\" brin (smpl_time) WITH (pages_per_range='128')\nCheck constraints:\n \"sample_buil_year_smpl_time_check\" CHECK (smpl_time >= (now() - '1 year 1 mon'::interval)::timestamp without time zone AND smpl_time <= now())\nInherits: sample_buil\n\nIt works when I dropped the other index sample_time_yb1_idx\n\nThe BRIN works well with the other query. Thanks for the tip I will look into more details on this BRIN.\nThanks for your help\n/********************************ENDREPLY*********************************************/\n\nJustin\n\n",
"msg_date": "Mon, 7 Jan 2019 16:09:50 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
{
"msg_contents": "On Mon, Jan 07, 2019 at 04:09:50PM +0000, Abadie Lana wrote:\n\n> \"channel_pkey\" PRIMARY KEY, btree (channel_id)\n> \"unique_chname\" UNIQUE CONSTRAINT, btree (name)\n> \"channel_name_channel_id_idx\" btree (name, channel_id)\n\nNote, the third index is more or less redundant.\n\n> I would say that when you have a partitioned table, running analyse on the parent table (which includes the children) does not give the same result as running analyse on each individual child table. I don't know if it is an expected behaviour?\n\nRight, for relkind='r' inheritence, ANALYZE parent gathers 1) stats for the\nparent ONLY (stored with pg_stats inherited='f'); and, 2) stats for the parent\nand its children (stored in pg_stats with inherited='t').\n\nIt *doesn't* update statistics for each of the children themselves. Note\nhowever that for partitions of relkind='p' tables (available since postgres 10)\nANALYZE parent *ALSO* updates stats for the children.\n\n> But not the other query...still time-consuming because still using the wrong index in case of sample_buil_year (but curiously not the BRIN index)\n> explain (analyze, buffers) select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n> Limit (cost=13.40..30.54 rows=5 width=112) (actual time=63411.725..63411.744 rows=3 loops=1)\n> Buffers: shared hit=38 read=193865\n> InitPlan 1 (returns $0)\n> -> Index Scan using unique_chname on channel (cost=0.41..8.43 rows=1 width=8) (actual time=0.039..0.040 rows=1 loops =1)\n> Index Cond: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n> Buffers: shared hit=4\n> -> Result (cost=4.96..5294364.58 rows=1544048 width=112) (actual time=63411.723..63411.740 rows=3 loops=1)\n> Buffers: shared hit=38 read=193865\n> -> Merge Append (cost=4.96..5278924.10 rows=1544048 width=80) (actual time=63411.719..63411.735 rows=3 loops=1)\n> Sort Key: c.smpl_time DESC\n> Buffers: shared hit=38 read=193865\n> -> Index Scan Backward using sample_time_all_idx on sample c (cost=0.12..8.14 rows=1 width=326) (actual time=0.048..0.048 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=5\n> -> Index Scan Backward using sample_time_b_idx on sample_buil c_1 (cost=0.42..7775.26 rows=2096 width=320) (actual time=0.008..0.009 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=3\n> -> Index Scan Backward using sample_time_c_idx on sample_ctrl c_2 (cost=0.42..77785.57 rows=22441 width=320) (actual time=0.006..0.006 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=3\n> -> Index Scan Backward using sample_time_u_idx on sample_util c_3 (cost=0.43..14922.72 rows=3830 width=320) (actual time=0.008..0.008 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=3\n> -> Index Scan Backward using sample_time_bm_idx on sample_buil_month c_4 (cost=0.56..2967.10 rows=740 width=74) (actual time=0.011..0.025 rows=3 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=8\n> -> Index Scan Backward using sample_time_yb1_idx on sample_buil_year c_5 (cost=0.56..2186210.68 rows=665761 width=75) (actual time=63411.573..63411.574 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared read=193865\n\nI think I see the issue..\n\nNote, this is different than before.\n\nInitially the query was slow due to reading the indices for the 
entire\nheirarchy, then sorting them, then joining:\n|\t-> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5(cost=0.56..1897430.89 rows=50597832 width=328) (actual time=0.068..139840.439 rows=50597834 loops=1)\n|\t-> Index Scan Backward using smpl_time_cmx1_idx on sample_ctrl_month c_6 (cost=0.44..55253292.21 rows=18277124 width=85) (actual time=0.061..14610.389 rows=18277123 loops=1)\n|\t-> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_7 (cost=0.57..2987358.31 rows=79579072 width=76) (actual time=0.067..286316.865 rows=79579075 loops=1)\n|\t-> Index Scan Backward using smpl_time_ux1_idx on sample_util_month c_8 (cost=0.57..98830163.45 rows=70980976 width=82) (actual time=0.071..60766.643 rows=70980980 loops=1)\n|\t-> Index Scan Backward using smpl_time_ux2_idx on sample_util_year c_9 (cost=0.57..3070642.94 rows=80637888 width=83) (actual time=0.069..307091.673 rows=80637891 loops=1)\n\nThen you ANALYZEd parent tables and added indices and constraints and started\ngetting bitmap scans, with new query using David's INTERVAL '0 sec':\n|\t...\n|\t-> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5 (cost=0.56..2023925.30 rows=3162364 width=320) (actual time=15167.330..15167.330 rows=0 loops=1)\n|\t Filter: (channel_id = $0)\n|\t Rows Removed by Filter: 50597834\n|\t Buffers: shared hit=25913147 read=713221\n|\t-> Index Scan Backward using sample_time_cm_idx on sample_ctrl_month c_6 (cost=0.56..1862587.12 rows=537562 width=77) (actual time=0.048..0.048 rows=0 loops=1)\n|\t Index Cond: (channel_id = $0)\n|\t Buffers: shared read=4\n|\t-> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_7 (cost=0.57..3186305.67 rows=2094186 width=68) (actual time=25847.549..25847.549 rows=0 loops=1)\n|\t Filter: (channel_id = $0)\n|\t Rows Removed by Filter: 79579075\n|\t Buffers: shared hit=49868991 read=1121715\n|\t...\n\nI didn't notice this at first, but compare the two slow scans with the fast one.\nThe slow scans have no index condition: they're reading the entire index and\nFILTERING on channel_id rather than searching the index for it.\n\nNow for the \"bad\" query you're getting:\n|\t...\n|\t-> Index Scan Backward using sample_time_bm_idx on sample_buil_month c_4 (cost=0.56..2967.10 rows=740 width=74) (actual time=0.011..0.025 rows=3 loops=1)\n|\t Index Cond: (channel_id = $0)\n|\t Buffers: shared hit=8\n|\t-> Index Scan Backward using sample_time_yb1_idx on sample_buil_year c_5 (cost=0.56..2186210.68 rows=665761 width=75) (actual time=63411.573..63411.574 rows=0 loops=1)\n|\t Index Cond: (channel_id = $0)\n|\t Buffers: shared read=193865\n|\t...\n\nThis time, the bad scan *is* kind-of searching on channel_id, but reading the\nentire 1500MB index to do it ... 
because channel_id is not a leading column:\n| \"sample_time_yb1_idx\" btree (smpl_time, channel_id)\n\nAnd I think the explanation is here:\n\n> css_archive_3_0_0=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='channel_id' AND tablename like 'sample%' ORDER BY 1 DESC;\n> frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation\n> ----------+-------------------+------------+-----------+-----------+------------+-------+--------+-------------\n> 5e-06 | sample_buil_year | smpl_time | f | 0 | -0.999918 | 7 | 10001 | 0.999978\n...\n> 0.99987 | sample_buil_year | channel_id | f | 0 | 76 | 16 | 60 | 0.207932\n\n\nThe table is highly correlated in its physical storage WRT smpl_time, and\npoorly correlated WRT channel_id. That matters since it thinks the index will\nbe almost entirely cached, but not the table:\n| sample_buil_year | sample_time_yb1_idx | 4492 MB | 1522 MB\n\nSo the planner thinks that reading up to 1500MB of index from cache will pay off\nin the ability to read the table sequentially. If it searches the index on\nchannel_id, it would have to read 665761 tuples across a wide fraction of the\ntable (a page here and a page there), defeating readahead, rather than reading\npages clustered/clumped together.\n\nOn Thu, Jan 03, 2019 at 12:57:27PM +0000, Abadie Lana wrote:\n> Main parameters : effective_cache_size : 4GB, shared_buffers 4GB, work_mem 4MB\n\nThe issue here may just be that you have effective_cache_size=4GB, so the planner\nthinks that sample_time_yb1_idx is likely to be cached. Try decreasing that\na lot, since it's clearly not cached? Also,\neffective_cache_size==shared_buffers is only accurate if you've allocated the\nserver's entire RAM to shared_buffers, which is unreasonable. (Or perhaps if\nthe OS cache is 10x busier with other processes than postgres).\n\nI'm not sure why your query plan changed with a BRIN index...it wasn't actually\nused, preferring to scan the original index on channel_id, as you hoped.\n\n| -> Bitmap Heap Scan on sample_buil_year c_5 (cost=15416.21..627094.50 rows=665761 width=83) (actual time=0.008..0.008 rows=0 loops=1)\n|\t Recheck Cond: (channel_id = t.channel_id)\n|\t -> Bitmap Index Scan on sample_time_by_idx (cost=0.00..15249.77 rows=665761 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n|\t\t Index Cond: (channel_id = t.channel_id) \n\nJustin\n\n",
"msg_date": "Tue, 8 Jan 2019 02:15:28 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
{
"msg_contents": "\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: 08 January 2019 09:15\nTo: Abadie Lana <[email protected]>\nCc: David Rowley <[email protected]>; [email protected]\nSubject: Re: select query does not pick up the right index\n\nOn Mon, Jan 07, 2019 at 04:09:50PM +0000, Abadie Lana wrote:\n\n> \"channel_pkey\" PRIMARY KEY, btree (channel_id)\n> \"unique_chname\" UNIQUE CONSTRAINT, btree (name)\n> \"channel_name_channel_id_idx\" btree (name, channel_id)\n\nNote, the third index is more or less redundant.\n\n> I would say that when you have a partitioned table, running analyse on the parent table (which includes the children) does not give the same result as running analyse on each individual child table. I don't know if it is an expected behaviour?\n\nRight, for relkind='r' inheritence, ANALYZE parent gathers 1) stats for the parent ONLY (stored with pg_stats inherited='f'); and, 2) stats for the parent and its children (stored in pg_stats with inherited='t').\n\nIt *doesn't* update statistics for each of the children themselves. Note however that for partitions of relkind='p' tables (available since postgres 10) ANALYZE parent *ALSO* updates stats for the children.\n\n> But not the other query...still time-consuming because still using the \n> wrong index in case of sample_buil_year (but curiously not the BRIN \n> index) explain (analyze, buffers) select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW',c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c WHERE c.channel_id = (SELECT channel_id FROM channel WHERE name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5; Limit (cost=13.40..30.54 rows=5 width=112) (actual time=63411.725..63411.744 rows=3 loops=1)\n> Buffers: shared hit=38 read=193865\n> InitPlan 1 (returns $0)\n> -> Index Scan using unique_chname on channel (cost=0.41..8.43 rows=1 width=8) (actual time=0.039..0.040 rows=1 loops =1)\n> Index Cond: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n> Buffers: shared hit=4\n> -> Result (cost=4.96..5294364.58 rows=1544048 width=112) (actual time=63411.723..63411.740 rows=3 loops=1)\n> Buffers: shared hit=38 read=193865\n> -> Merge Append (cost=4.96..5278924.10 rows=1544048 width=80) (actual time=63411.719..63411.735 rows=3 loops=1)\n> Sort Key: c.smpl_time DESC\n> Buffers: shared hit=38 read=193865\n> -> Index Scan Backward using sample_time_all_idx on sample c (cost=0.12..8.14 rows=1 width=326) (actual time=0.048..0.048 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=5\n> -> Index Scan Backward using sample_time_b_idx on sample_buil c_1 (cost=0.42..7775.26 rows=2096 width=320) (actual time=0.008..0.009 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=3\n> -> Index Scan Backward using sample_time_c_idx on sample_ctrl c_2 (cost=0.42..77785.57 rows=22441 width=320) (actual time=0.006..0.006 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=3\n> -> Index Scan Backward using sample_time_u_idx on sample_util c_3 (cost=0.43..14922.72 rows=3830 width=320) (actual time=0.008..0.008 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=3\n> -> Index Scan Backward using sample_time_bm_idx on sample_buil_month c_4 (cost=0.56..2967.10 rows=740 width=74) (actual time=0.011..0.025 rows=3 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared hit=8\n> -> Index Scan Backward using sample_time_yb1_idx on sample_buil_year c_5 (cost=0.56..2186210.68 rows=665761 
width=75) (actual time=63411.573..63411.574 rows=0 loops=1)\n> Index Cond: (channel_id = $0)\n> Buffers: shared read=193865\n\nI think I see the issue..\n\nNote, this is different than before.\n\nInitially the query was slow due to reading the indices for the entire heirarchy, then sorting them, then joining:\n|\t-> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5(cost=0.56..1897430.89 rows=50597832 width=328) (actual time=0.068..139840.439 rows=50597834 loops=1)\n|\t-> Index Scan Backward using smpl_time_cmx1_idx on sample_ctrl_month c_6 (cost=0.44..55253292.21 rows=18277124 width=85) (actual time=0.061..14610.389 rows=18277123 loops=1)\n|\t-> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_7 (cost=0.57..2987358.31 rows=79579072 width=76) (actual time=0.067..286316.865 rows=79579075 loops=1)\n|\t-> Index Scan Backward using smpl_time_ux1_idx on sample_util_month c_8 (cost=0.57..98830163.45 rows=70980976 width=82) (actual time=0.071..60766.643 rows=70980980 loops=1)\n|\t-> Index Scan Backward using smpl_time_ux2_idx on sample_util_year \n|c_9 (cost=0.57..3070642.94 rows=80637888 width=83) (actual \n|time=0.069..307091.673 rows=80637891 loops=1)\n\nThen you ANALYZEd parent tables and added indices and constraints and started getting bitmap scans, with new query using David's INTERVAL '0 sec':\n|\t...\n|\t-> Index Scan Backward using smpl_time_bx2_idx on sample_buil_year c_5 (cost=0.56..2023925.30 rows=3162364 width=320) (actual time=15167.330..15167.330 rows=0 loops=1)\n|\t Filter: (channel_id = $0)\n|\t Rows Removed by Filter: 50597834\n|\t Buffers: shared hit=25913147 read=713221\n|\t-> Index Scan Backward using sample_time_cm_idx on sample_ctrl_month c_6 (cost=0.56..1862587.12 rows=537562 width=77) (actual time=0.048..0.048 rows=0 loops=1)\n|\t Index Cond: (channel_id = $0)\n|\t Buffers: shared read=4\n|\t-> Index Scan Backward using smpl_time_cmx2_idx on sample_ctrl_year c_7 (cost=0.57..3186305.67 rows=2094186 width=68) (actual time=25847.549..25847.549 rows=0 loops=1)\n|\t Filter: (channel_id = $0)\n|\t Rows Removed by Filter: 79579075\n|\t Buffers: shared hit=49868991 read=1121715\n|\t...\n\nI didn't notice this at first, but compare the two slow scans with the fast one.\nThe slow scans have no index condition: they're reading the entire index and FILTERING on channel_id rather than searching the index for it.\n\nNow for the \"bad\" query you're getting:\n|\t...\n|\t-> Index Scan Backward using sample_time_bm_idx on sample_buil_month c_4 (cost=0.56..2967.10 rows=740 width=74) (actual time=0.011..0.025 rows=3 loops=1)\n|\t Index Cond: (channel_id = $0)\n|\t Buffers: shared hit=8\n|\t-> Index Scan Backward using sample_time_yb1_idx on sample_buil_year c_5 (cost=0.56..2186210.68 rows=665761 width=75) (actual time=63411.573..63411.574 rows=0 loops=1)\n|\t Index Cond: (channel_id = $0)\n|\t Buffers: shared read=193865\n|\t...\n\nThis time, the bad scan *is* kind-of searching on channel_id, but reading the entire 1500MB index to do it ... 
because channel_id is not a leading column:\n| \"sample_time_yb1_idx\" btree (smpl_time, channel_id)\n\nAnd I think the explanation is here:\n\n> css_archive_3_0_0=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='channel_id' AND tablename like 'sample%' ORDER BY 1 DESC;\n> frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation\n> ----------+-------------------+------------+-----------+-----------+------------+-------+--------+-------------\n> 5e-06 | sample_buil_year | smpl_time | f | 0 | -0.999918 | 7 | 10001 | 0.999978\n...\n> 0.99987 | sample_buil_year | channel_id | f | 0 | 76 | 16 | 60 | 0.207932\n\n\nThe table is highly correlated in its physical storage WRT correlation, and poorly correlated WRT channel_id. Thats matter since it thinks the index will be almost entirely cached, but not the table:\n| sample_buil_year | sample_time_yb1_idx | 4492 MB | 1522 MB\n\nSo the planner thinks that reading up to 1500MB index from cache will pay off in ability to read the table sequentially. If it searches the index on channel_id, it would have to read 665761 tuples across a wide fraction of the table (a pages here and a page there), defeating readahead, rather than reading pages clustered/clumped together.\n\nOn Thu, Jan 03, 2019 at 12:57:27PM +0000, Abadie Lana wrote:\n> Main parameters : effective_cache_size : 4GB, shared_buffers 4GB, \n> work_mem 4MB\n\nThe issue here may just be that you have effective_cache_size=4GB, so planner thinks that sample_time_yb1_idx is likely to be cached. Try decreasing that alot, since it's clearly not cached ? Also, effective_cache_size==shared_buffers is only accurate if you've allocated the server's entire RAM to shared_buffers, which is unreasonable. (Or perhaps if the OS cache is 10x busier with other processes than postgres).\n\nI'm not sure why your query plan changed with a brin indx...it wasn't actually used, preferring to scan the original index on channel_id, as you hoped.\n\n| -> Bitmap Heap Scan on sample_buil_year c_5 (cost=15416.21..627094.50 rows=665761 width=83) (actual time=0.008..0.008 rows=0 loops=1)\n|\t Recheck Cond: (channel_id = t.channel_id)\n|\t -> Bitmap Index Scan on sample_time_by_idx (cost=0.00..15249.77 rows=665761 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n|\t\t Index Cond: (channel_id = t.channel_id) \n\nJustin\n\nHi,\nFirst I'm using postgresql 10.5, so it means that running analyse on sample table was also triggering a analyse on sample children. However as I said, it is not what I observed. Analyze sample is not the same as analyse children tables. Maybe because in that case it is two-level partitioning, i.e. 
children of children\n\nI run the tests once more with all your inputs...but this time I change the postgres settings - but no real success\nEffective_cache_size=512MB (was 6GB)\nShared_buffers=2GB (was 6GB)\nWork=512MB (was 4MB)\n\noriginal query still expensive (no filter + wrong index) : still I can see some swap activities even though I have plenty of memory....\n\nexplain analyze select t.name, c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c, channel t where t.channel_id=c.channel_id and t.name='BUIL-B36-VA-RT-RT1:CL0001-2-ABW' order by c.smpl_time desc limit 5; QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------\n Limit (cost=5.13..140077.00 rows=5 width=114) (actual time=159467.334..1486064.079 rows=3 loops=1)\n -> Nested Loop (cost=5.13..269946514.15 rows=9636 width=114) (actual time=159467.332..1486064.073 rows=3 loops=1)\n Join Filter: (c.channel_id = t.channel_id)\n Rows Removed by Join Filter: 322099471\n -> Merge Append (cost=4.71..265115013.75 rows=322099464 width=89) (actual time=170.874..1205525.136 rows=322099474 loops=1)\n Sort Key: c.smpl_time DESC\n -> Index Scan Backward using smpl_time_a_idx on sample c (cost=0.12..8.14 rows=1 width=334) (actual time=0.004..0.004 ro\nws=0 loops=1)\n -> Index Scan Backward using smpl_time_b_idx on sample_buil c_1 (cost=0.42..4059177.39 rows=1033169 width=328) (actual t\nime=14.487..13290.596 rows=1033169 loops=1)\n -> Index Scan Backward using smpl_time_c_idx on sample_ctrl c_2 (cost=0.42..3321314.50 rows=942520 width=328) (actual ti\nme=12.598..11553.956 rows=942520 loops=1)\n -> Index Scan Backward using smpl_time_u_idx on sample_util c_3 (cost=0.43..13064997.74 rows=5282177 width=328) (actual\ntime=17.136..33692.383 rows=5282177 loops=1)\n -> Index Scan Backward using smpl_time_bm_idx on sample_buil_month c_4 (cost=0.43..56507719.34 rows=14768705 width=82) (\nactual time=12.616..69994.281 rows=14768705 loops=1)\n -> Index Scan Backward using smpl_time_by_idx on sample_buil_year c_5 (cost=0.56..1897685.68 rows=50597832 width=84) (ac\ntual time=33.374..221346.806 rows=50597834 loops=1)\n -> Index Scan Backward using smpl_time_cm_idx on sample_ctrl_month c_6 (cost=0.44..63167512.05 rows=18277124 width=84) (\nactual time=17.823..80242.045 rows=18277123 loops=1)\n -> Index Scan Backward using smpl_time_cy_idx on sample_ctrl_year c_7 (cost=0.57..2988555.40 rows=79579072 width=84) (ac\ntual time=18.082..195370.352 rows=79579075 loops=1)\n -> Index Scan Backward using smpl_time_um_idx on sample_util_month c_8 (cost=0.57..110877026.27 rows=70980976 width=83)\n(actual time=26.942..184412.358 rows=70980980 loops=1)\n -> Index Scan Backward using smpl_time_uy_idx on sample_util_year c_9 (cost=0.57..3075812.13 rows=80637888 width=83) (ac\ntual time=17.794..275571.960 rows=80637891 loops=1)\n -> Materialize (cost=0.41..8.44 rows=1 width=41) (actual time=0.000..0.000 rows=1 loops=322099474)\n -> Index Only Scan using channel_name_channel_id_idx on channel t (cost=0.41..8.43 rows=1 width=41) (actual time=15.385.\n.15.388 rows=1 loops=1)\n Index Cond: (name = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Heap Fetches: 1\n Planning time: 1.677 ms\n Execution time: 1486064.165 ms\n(22 rows)\n\nThe other query suggested by D.Rowley has the same issue : still swap activity is higher.\nexplain analyze select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW', 
c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c where c.channel_id in (select channel_id from channel where name ='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------\n Limit (cost=5.13..140077.00 rows=5 width=113) (actual time=38582.017..1549136.681 rows=3 loops=1)\n -> Nested Loop (cost=5.13..269946514.15 rows=9636 width=113) (actual time=38582.014..1549136.674 rows=3 loops=1)\n Join Filter: (c.channel_id = channel.channel_id)\n Rows Removed by Join Filter: 322099471\n -> Merge Append (cost=4.71..265115013.75 rows=322099464 width=89) (actual time=0.437..1269913.701 rows=322099474 loops=1)\n Sort Key: c.smpl_time DESC\n -> Index Scan Backward using smpl_time_a_idx on sample c (cost=0.12..8.14 rows=1 width=334) (actual time=0.006..0.006 rows=0 loops=1)\n -> Index Scan Backward using smpl_time_b_idx on sample_buil c_1 (cost=0.42..4059177.39 rows=1033169 width=328) (actual time=0.055..702.253 ro\nws=1033169 loops=1)\n -> Index Scan Backward using smpl_time_c_idx on sample_ctrl c_2 (cost=0.42..3321314.50 rows=942520 width=328) (actual time=0.039..684.282 row\ns=942520 loops=1)\n -> Index Scan Backward using smpl_time_u_idx on sample_util c_3 (cost=0.43..13064997.74 rows=5282177 width=328) (actual time=0.045..3624.667\nrows=5282177 loops=1)\n -> Index Scan Backward using smpl_time_bm_idx on sample_buil_month c_4 (cost=0.43..56507719.34 rows=14768705 width=82) (actual time=0.039..65\n099.797 rows=14768705 loops=1)\n -> Index Scan Backward using smpl_time_by_idx on sample_buil_year c_5 (cost=0.56..1897685.68 rows=50597832 width=84) (actual time=0.053..1173\n26.709 rows=50597834 loops=1)\n -> Index Scan Backward using smpl_time_cm_idx on sample_ctrl_month c_6 (cost=0.44..63167512.05 rows=18277124 width=84) (actual time=0.037..76\n905.550 rows=18277123 loops=1)\n -> Index Scan Backward using smpl_time_cy_idx on sample_ctrl_year c_7 (cost=0.57..2988555.40 rows=79579072 width=84) (actual time=0.052..4150\n67.696 rows=79579075 loops=1)\n -> Index Scan Backward using smpl_time_um_idx on sample_util_month c_8 (cost=0.57..110877026.27 rows=70980976 width=83) (actual time=0.053..1\n41602.620 rows=70980980 loops=1)\n -> Index Scan Backward using smpl_time_uy_idx on sample_util_year c_9 (cost=0.57..3075812.13 rows=80637888 width=83) (actual time=0.050..3298\n99.409 rows=80637891 loops=1)\n -> Materialize (cost=0.41..8.44 rows=1 width=8) (actual time=0.000..0.000 rows=1 loops=322099474)\n -> Index Only Scan using channel_name_channel_id_idx on channel (cost=0.41..8.43 rows=1 width=8) (actual time=0.102..0.103 rows=1 loops=1)\n Index Cond: (name = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Heap Fetches: 1\n Planning time: 11.566 ms\n Execution time: 1549156.273 ms\n\n\n\nThe query which works the best so far - no swap\nexplain analyze select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW', c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c where c.channel_id in (select channel_id from channel where name ='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time + interval '0 sec' desc limit 5;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1865809.94..1865809.95 
rows=5 width=121) (actual time=220.854..220.856 rows=3 loops=1)\n -> Sort (cost=1865809.94..1865834.03 rows=9636 width=121) (actual time=220.852..220.853 rows=3 loops=1)\n Sort Key: ((c.smpl_time + '00:00:00'::interval)) DESC\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..1865649.88 rows=9636 width=121) (actual time=133.087..220.823 rows=3 loops=1)\n -> Seq Scan on channel (cost=0.00..915.83 rows=1 width=8) (actual time=19.561..21.602 rows=1 loops=1)\n Filter: ((name)::text = 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW'::text)\n Rows Removed by Filter: 33425\n -> Append (cost=0.00..1849451.58 rows=1525839 width=89) (actual time=113.510..199.202 rows=3 loops=1)\n -> Seq Scan on sample c (cost=0.00..0.00 rows=1 width=334) (actual time=0.010..0.010 rows=0 loops=1)\n Filter: (channel.channel_id = channel_id)\n -> Bitmap Heap Scan on sample_buil c_1 (cost=52.67..5440.29 rows=2096 width=328) (actual time=12.217..12.217 rows=0 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n -> Bitmap Index Scan on sample_time_b_idx (cost=0.00..52.14 rows=2096 width=0) (actual time=12.207..12.208 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Bitmap Heap Scan on sample_ctrl c_2 (cost=522.34..11512.86 rows=22441 width=328) (actual time=23.037..23.037 rows=0 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n -> Bitmap Index Scan on sample_time_c_idx (cost=0.00..516.73 rows=22441 width=0) (actual time=23.032..23.033 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Bitmap Heap Scan on sample_util c_3 (cost=89.99..12171.59 rows=3814 width=328) (actual time=52.641..52.642 rows=0 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n -> Bitmap Index Scan on sample_time_u_idx (cost=0.00..89.04 rows=3814 width=0) (actual time=52.636..52.636 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Bitmap Heap Scan on sample_buil_month c_4 (cost=18.28..2828.85 rows=738 width=82) (actual time=25.584..25.617 rows=3 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n Heap Blocks: exact=3\n -> Bitmap Index Scan on sample_time_bm_idx (cost=0.00..18.09 rows=738 width=0) (actual time=22.164..22.164 rows=3 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Bitmap Heap Scan on sample_buil_year c_5 (cost=15217.21..626249.52 rows=657115 width=84) (actual time=15.325..15.325 rows=0 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n -> Bitmap Index Scan on sample_by_time_idx (cost=0.00..15052.93 rows=657115 width=0) (actual time=15.319..15.319 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Bitmap Heap Scan on sample_ctrl_month c_6 (cost=4923.63..222921.32 rows=212525 width=84) (actual time=16.785..16.786 rows=0 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n -> Bitmap Index Scan on sample_time_cm_idx (cost=0.00..4870.50 rows=212525 width=0) (actual time=16.779..16.779 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Bitmap Heap Scan on sample_ctrl_year c_7 (cost=13853.69..868668.59 rows=598339 width=84) (actual time=21.316..21.316 rows=0 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n -> Bitmap Index Scan on sample_cy_time_idx (cost=0.00..13704.11 rows=598339 width=0) (actual time=21.312..21.312 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Bitmap Heap Scan on sample_util_month c_8 (cost=288.92..45214.06 rows=12433 width=83) (actual time=17.240..17.240 rows=0 loops=1)\n Recheck Cond: (channel_id = channel.channel_id)\n -> Bitmap 
Index Scan on sample_time_um_idx (cost=0.00..285.81 rows=12433 width=0) (actual time=17.235..17.235 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n -> Index Scan using sample_time_uy_idx on sample_util_year c_9 (cost=0.57..54444.50 rows=16337 width=83) (actual time=14.964..14.964 rows=0 loops=1)\n Index Cond: (channel_id = channel.channel_id)\n Planning time: 1.976 ms\n Execution time: 221.009 ms\n\nSo it seems that the possible solutions (without a schema change on tables) are either to drop the index and use a composite index or to use the trick mentioned by D.Rowley...\nThanks Justin and David for your help and time, I learnt quite a lot with your feedback.\nLana\n\n\n\n\n",
"msg_date": "Wed, 9 Jan 2019 12:55:24 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: select query does not pick up the right index"
},
{
"msg_contents": "On Thu, 10 Jan 2019 at 01:55, Abadie Lana <[email protected]> wrote:\n> The other query suggested by D.Rowley has the same issue : still swap activity is higher.\n> explain analyze select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW', c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c where c.channel_id in (select channel_id from channel where name ='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n\nThis is not the query I suggested. I mentioned if channel.name had a\nunique index, you'd be able to do WHERE c.channel_id = (select\nchannel_id from channel where name = '...'). That's pretty different\nto what you have above.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 10 Jan 2019 05:41:24 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select query does not pick up the right index"
},
{
"msg_contents": "Oups wrong copy and paste. I did run your query with equal instead of in but it resulted in the same plan\n________________________________\nFrom: David Rowley <[email protected]>\nSent: 09 January 2019 17:41:24\nTo: Abadie Lana\nCc: Justin Pryzby; [email protected]\nSubject: Re: select query does not pick up the right index\n\nOn Thu, 10 Jan 2019 at 01:55, Abadie Lana <[email protected]> wrote:\n> The other query suggested by D.Rowley has the same issue : still swap activity is higher.\n> explain analyze select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW', c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c where c.channel_id in (select channel_id from channel where name ='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order by c.smpl_time desc limit 5;\n\nThis is not the query I suggested. I mentioned if channel.name had a\nunique index, you'd be able to do WHERE c.channel_id = (select\nchannel_id from channel where name = '...'). That's pretty different\nto what you have above.\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\n\nOups wrong copy and paste. I did run your query with equal instead of in but it resulted in the same plan\n\nFrom: David Rowley <[email protected]>\nSent: 09 January 2019 17:41:24\nTo: Abadie Lana\nCc: Justin Pryzby; [email protected]\nSubject: Re: select query does not pick up the right index\n \n\n\n\nOn Thu, 10 Jan 2019 at 01:55, Abadie Lana <[email protected]> wrote:\n> The other query suggested by D.Rowley has the same issue : still swap activity is higher.\n> explain analyze select 'BUIL-B36-VA-RT-RT1:CL0001-2-ABW', c.smpl_time,c.nanosecs,c.float_val,c.num_val,c.str_val,c.datatype,c.array_val from sample c where c.channel_id in (select channel_id from channel where name ='BUIL-B36-VA-RT-RT1:CL0001-2-ABW') order\n by c.smpl_time desc limit 5;\n\nThis is not the query I suggested. I mentioned if channel.name had a\nunique index, you'd be able to do WHERE c.channel_id = (select\nchannel_id from channel where name = '...'). That's pretty different\nto what you have above.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 9 Jan 2019 19:22:45 +0000",
"msg_from": "Abadie Lana <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select query does not pick up the right index"
}
] |
[
{
"msg_contents": "Hi,\nI'm trying to understand some issues that I'm having with the unix_socket\nsettings and pgsql.\nI have 2 machines with pg v9.2.5 with the same next settings :\n#listen_addresses = 'localhost'\n#unix_socket_directory = ''\n\nin both of the machines I run netstat to check on what socket the postgres\nlistens for connections and I got the same output :\nmachine 1\nnetstat -nlp | grep postgres\ntcp 0 0 127.0.0.1:5432 0.0.0.0:*\n LISTEN 2049/postgres\nunix 2 [ ACC ] STREAM LISTENING 12086 2049/postgres\n /tmp/.s.PGSQL.5432\n\nmachine 2\ntcp 0 0 127.0.0.1:5432 0.0.0.0:*\n LISTEN 3729/postgres\nunix 2 [ ACC ] STREAM LISTENING 51587140 3729/postgres\n /tmp/.s.PGSQL.5432\n\n\n\nIn both of the machines I tried to check if there are some PG environment\nvariables but nothing was set :\nenv | grep PG\n\nThe pg_hba in both cases is the default pg_hba.\n\nNow, In machine 1 when I run psql I get the prompt password but in machine\n2 I keep getting the next error :\n\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket\n\"/var/run/postgresql/.s.PGSQL.5432\"?\n\nOne important thing that I didnt mention, is that I installed in machine 2\npackage postgresql-libs.x86_64 0:8.4.20-8.el6_9 from the postgres\nrepository (in order to upgrade it to 9.6).\n\nI solved it in machine 2 by setting the unix_socket_directory to\n/var/run/postgresql/.s.PGSQL.5432 and restarting the database.\n\nMy questions are :\n1)Why in machine 1, where I dont have a soft link\n/var/run/postgresql/.s.PGSQL.5432 that directs to the temp dir I can\nconnect succesfully ? (env|grep PG didnt show anything).\n2)What might explain the issue on machine 2? Or maybe machine2 works\nnormally but machine1 has an issue ?\n\nHi,I'm trying to understand some issues that I'm having with the unix_socket settings and pgsql.I have 2 machines with pg v9.2.5 with the same next settings :#listen_addresses = 'localhost'#unix_socket_directory = ''in both of the machines I run netstat to check on what socket the postgres listens for connections and I got the same output : machine 1netstat -nlp | grep postgrestcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 2049/postgresunix 2 [ ACC ] STREAM LISTENING 12086 2049/postgres /tmp/.s.PGSQL.5432machine 2tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 3729/postgresunix 2 [ ACC ] STREAM LISTENING 51587140 3729/postgres /tmp/.s.PGSQL.5432In both of the machines I tried to check if there are some PG environment variables but nothing was set : env | grep PGThe pg_hba in both cases is the default pg_hba.Now, In machine 1 when I run psql I get the prompt password but in machine 2 I keep getting the next error : psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?One important thing that I didnt mention, is that I installed in machine 2 package postgresql-libs.x86_64 0:8.4.20-8.el6_9 from the postgres repository (in order to upgrade it to 9.6).I solved it in machine 2 by setting the unix_socket_directory to /var/run/postgresql/.s.PGSQL.5432 and restarting the database.My questions are :1)Why in machine 1, where I dont have a soft link /var/run/postgresql/.s.PGSQL.5432 that directs to the temp dir I can connect succesfully ? (env|grep PG didnt show anything). 2)What might explain the issue on machine 2? Or maybe machine2 works normally but machine1 has an issue ?",
"msg_date": "Wed, 9 Jan 2019 10:35:41 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql unix socket connections"
},
{
"msg_contents": ">\n> I installed on machine 2 the next packages and not what I mentioned on my\n> last comment :\n>\n---> Package postgresql96.x86_64 0:9.6.10-1PGDG.rhel6 will be installed\n---> Package postgresql96-contrib.x86_64 0:9.6.10-1PGDG.rhel6 will be\ninstalled\n---> Package postgresql96-libs.x86_64 0:9.6.10-1PGDG.rhel6 will be installed\n---> Package postgresql96-server.x86_64 0:9.6.10-1PGDG.rhel6 will be\ninstalled\n\nI installed on machine 2 the next packages and not what I mentioned on my last comment : ---> Package postgresql96.x86_64 0:9.6.10-1PGDG.rhel6 will be installed---> Package postgresql96-contrib.x86_64 0:9.6.10-1PGDG.rhel6 will be installed---> Package postgresql96-libs.x86_64 0:9.6.10-1PGDG.rhel6 will be installed---> Package postgresql96-server.x86_64 0:9.6.10-1PGDG.rhel6 will be installed",
"msg_date": "Wed, 9 Jan 2019 10:46:58 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> I'm trying to understand some issues that I'm having with the unix_socket\n> settings and pgsql.\n> I have 2 machines with pg v9.2.5 with the same next settings :\n> #listen_addresses = 'localhost'\n> #unix_socket_directory = ''\n\nThis will result in the server creating the socket in whatever it thinks\nis the default socket directory. Traditionally PG uses /tmp as the\ndefault socket directory, and your netstat result is consistent with that:\n\n> unix 2 [ ACC ] STREAM LISTENING 51587140 3729/postgres\n> /tmp/.s.PGSQL.5432\n\nHowever, this:\n\n> psql: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.5432\"?\n\nshows that your psql is using a libpq that thinks the default socket\ndirectory is /var/run/postgresql. That's a build-time option, and\nI recall that Red Hat builds their postgresql package that way.\nI'm not 100% sure which way the PGDG RPMs do it.\n\nYou could override libpq's default, for instance via \"psql -h /tmp\".\nBut probably you'd be better off removing any packages that provide\nlibpq versions that don't match your server.\n\nAlternatively, you could configure the server to create socket\nfiles in both places.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 09 Jan 2019 09:55:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "Hey Tom,\nI'm aware of how I can solve it. I wanted to understand why after\ninstalling the pg 9.6 packages suddenly psql tries to access the socket on\n/var/run/postgresql. Does the libpq default unix socket is changed between\nthose two versions ? (9.6,9.2)\n\nבתאריך יום ד׳, 9 בינו׳ 2019 ב-16:55 מאת Tom Lane <[email protected]\n>:\n\n> Mariel Cherkassky <[email protected]> writes:\n> > I'm trying to understand some issues that I'm having with the unix_socket\n> > settings and pgsql.\n> > I have 2 machines with pg v9.2.5 with the same next settings :\n> > #listen_addresses = 'localhost'\n> > #unix_socket_directory = ''\n>\n> This will result in the server creating the socket in whatever it thinks\n> is the default socket directory. Traditionally PG uses /tmp as the\n> default socket directory, and your netstat result is consistent with that:\n>\n> > unix 2 [ ACC ] STREAM LISTENING 51587140 3729/postgres\n> > /tmp/.s.PGSQL.5432\n>\n> However, this:\n>\n> > psql: could not connect to server: No such file or directory\n> > Is the server running locally and accepting\n> > connections on Unix domain socket\n> > \"/var/run/postgresql/.s.PGSQL.5432\"?\n>\n> shows that your psql is using a libpq that thinks the default socket\n> directory is /var/run/postgresql. That's a build-time option, and\n> I recall that Red Hat builds their postgresql package that way.\n> I'm not 100% sure which way the PGDG RPMs do it.\n>\n> You could override libpq's default, for instance via \"psql -h /tmp\".\n> But probably you'd be better off removing any packages that provide\n> libpq versions that don't match your server.\n>\n> Alternatively, you could configure the server to create socket\n> files in both places.\n>\n> regards, tom lane\n>\n\nHey Tom,I'm aware of how I can solve it. I wanted to understand why after installing the pg 9.6 packages suddenly psql tries to access the socket on /var/run/postgresql. Does the libpq default unix socket is changed between those two versions ? (9.6,9.2)בתאריך יום ד׳, 9 בינו׳ 2019 ב-16:55 מאת Tom Lane <[email protected]>:Mariel Cherkassky <[email protected]> writes:\n> I'm trying to understand some issues that I'm having with the unix_socket\n> settings and pgsql.\n> I have 2 machines with pg v9.2.5 with the same next settings :\n> #listen_addresses = 'localhost'\n> #unix_socket_directory = ''\n\nThis will result in the server creating the socket in whatever it thinks\nis the default socket directory. Traditionally PG uses /tmp as the\ndefault socket directory, and your netstat result is consistent with that:\n\n> unix 2 [ ACC ] STREAM LISTENING 51587140 3729/postgres\n> /tmp/.s.PGSQL.5432\n\nHowever, this:\n\n> psql: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.5432\"?\n\nshows that your psql is using a libpq that thinks the default socket\ndirectory is /var/run/postgresql. That's a build-time option, and\nI recall that Red Hat builds their postgresql package that way.\nI'm not 100% sure which way the PGDG RPMs do it.\n\nYou could override libpq's default, for instance via \"psql -h /tmp\".\nBut probably you'd be better off removing any packages that provide\nlibpq versions that don't match your server.\n\nAlternatively, you could configure the server to create socket\nfiles in both places.\n\n regards, tom lane",
"msg_date": "Wed, 9 Jan 2019 17:08:53 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 3:35 AM Mariel Cherkassky <\[email protected]> wrote:\n\n>\n> Now, In machine 1 when I run psql I get the prompt password but in machine\n> 2 I keep getting the next error :\n>\n> psql: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.5432\"?\n>\n> One important thing that I didnt mention, is that I installed in machine 2\n> package postgresql-libs.x86_64 0:8.4.20-8.el6_9 from the postgres\n> repository (in order to upgrade it to 9.6).\n>\n\nThe front end and the backend have compiled-in defaults for the socket\ndirectory. If you installed them from different sources, they may have\ndifferent compiled-in defaults. Which means they may not be able to\nrendezvous using the default settings for both of them.\n\nYou can override the default using unix_socket_directory on the server (as\nyou discovered). On the client you can override it by using -h (or PGHOST\nor host= or whatever mechanism), with an argument that looks like a\ndirectory, starting with a '/'.\n\nCheers,\n\nJeff\n\nOn Wed, Jan 9, 2019 at 3:35 AM Mariel Cherkassky <[email protected]> wrote:Now, In machine 1 when I run psql I get the prompt password but in machine 2 I keep getting the next error : psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?One important thing that I didnt mention, is that I installed in machine 2 package postgresql-libs.x86_64 0:8.4.20-8.el6_9 from the postgres repository (in order to upgrade it to 9.6).The front end and the backend have compiled-in defaults for the socket directory. If you installed them from different sources, they may have different compiled-in defaults. Which means they may not be able to rendezvous using the default settings for both of them. You can override the default using unix_socket_directory on the server (as you discovered). On the client you can override it by using -h (or PGHOST or host= or whatever mechanism), with an argument that looks like a directory, starting with a '/'.Cheers,Jeff",
"msg_date": "Wed, 9 Jan 2019 10:09:04 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 10:09 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hey Tom,\n> I'm aware of how I can solve it. I wanted to understand why after\n> installing the pg 9.6 packages suddenly psql tries to access the socket on\n> /var/run/postgresql. Does the libpq default unix socket is changed between\n> those two versions ? (9.6,9.2)\n>\n\nIt is not a version issue, but a packaging issue. Different systems have\ndifferent conventions on where sockets should go, and the packager imposes\ntheir opinion on the things they package.\n\nCheers,\n\nJeff\n\nOn Wed, Jan 9, 2019 at 10:09 AM Mariel Cherkassky <[email protected]> wrote:Hey Tom,I'm aware of how I can solve it. I wanted to understand why after installing the pg 9.6 packages suddenly psql tries to access the socket on /var/run/postgresql. Does the libpq default unix socket is changed between those two versions ? (9.6,9.2)It is not a version issue, but a packaging issue. Different systems have different conventions on where sockets should go, and the packager imposes their opinion on the things they package.Cheers,Jeff",
"msg_date": "Wed, 9 Jan 2019 10:13:17 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "But in both of the machines I have the same os and I used the same\nrepository - postgresql rpm repository. The only difference is that in\nmachine 2 I also installed all pg 9.6 packages. Even When I try to use\n/usr/pgsql-9.2/bin/psql the psql still tries to access the\n/var/run/run/postgresql dir as the socket dir. Does those packages include\na different libpq ? What postgres package change the libpq ?\n\nבתאריך יום ד׳, 9 בינו׳ 2019 ב-17:13 מאת Jeff Janes <\[email protected]>:\n\n> On Wed, Jan 9, 2019 at 10:09 AM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Hey Tom,\n>> I'm aware of how I can solve it. I wanted to understand why after\n>> installing the pg 9.6 packages suddenly psql tries to access the socket on\n>> /var/run/postgresql. Does the libpq default unix socket is changed between\n>> those two versions ? (9.6,9.2)\n>>\n>\n> It is not a version issue, but a packaging issue. Different systems have\n> different conventions on where sockets should go, and the packager imposes\n> their opinion on the things they package.\n>\n> Cheers,\n>\n> Jeff\n>\n\nBut in both of the machines I have the same os and I used the same repository - postgresql rpm repository. The only difference is that in machine 2 I also installed all pg 9.6 packages. Even When I try to use /usr/pgsql-9.2/bin/psql the psql still tries to access the /var/run/run/postgresql dir as the socket dir. Does those packages include a different libpq ? What postgres package change the libpq ?בתאריך יום ד׳, 9 בינו׳ 2019 ב-17:13 מאת Jeff Janes <[email protected]>:On Wed, Jan 9, 2019 at 10:09 AM Mariel Cherkassky <[email protected]> wrote:Hey Tom,I'm aware of how I can solve it. I wanted to understand why after installing the pg 9.6 packages suddenly psql tries to access the socket on /var/run/postgresql. Does the libpq default unix socket is changed between those two versions ? (9.6,9.2)It is not a version issue, but a packaging issue. Different systems have different conventions on where sockets should go, and the packager imposes their opinion on the things they package.Cheers,Jeff",
"msg_date": "Wed, 9 Jan 2019 17:17:03 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> But in both of the machines I have the same os and I used the same\n> repository - postgresql rpm repository. The only difference is that in\n> machine 2 I also installed all pg 9.6 packages. Even When I try to use\n> /usr/pgsql-9.2/bin/psql the psql still tries to access the\n> /var/run/run/postgresql dir as the socket dir. Does those packages include\n> a different libpq ? What postgres package change the libpq ?\n\n\"rpm -ql\" would tell you about which packages supply what.\n\nAssuming there's more than one libpq.so on your machine, which it sounds\nlike there is, which one gets used depends on the dynamic linker's\nconfiguration -- see /etc/ld.so.conf and \"man ldconfig\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 09 Jan 2019 11:11:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql unix socket connections"
},
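For example, on the affected machine the following standard RHEL/CentOS commands would show which libpq the 9.2 psql actually loads and which package owns each copy (the paths are the ones mentioned in this thread):

    # which libpq the binary resolves at run time
    ldd /usr/pgsql-9.2/bin/psql | grep libpq

    # which installed package owns each copy
    rpm -qf /usr/lib64/libpq.so.5
    rpm -qf /usr/pgsql-9.6/lib/libpq.so.5

    # what the dynamic linker currently has cached
    ldconfig -p | grep libpq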
{
"msg_contents": "In machine 2 :\nI found 4 libpq.so files :\n[root@~]# locate libpq.so\n/usr/lib64/libpq.so.5\n/usr/lib64/libpq.so.5.2\n/usr/pgsql-9.6/lib/libpq.so.5\n/usr/pgsql-9.6/lib/libpq.so.5.9\n\n cat /etc/ld.so.conf\ninclude ld.so.conf.d/*.conf\n ld.so.conf.d]# cat /etc/ld.so.conf.d/postgresql-pgdg-libs.conf\n/usr/pgsql-9.6/lib/\n\n\nIn machine 1 :\nlocate libpq.so\n/usr/lib64/libpq.so.5\n/usr/lib64/libpq.so.5.2\n/usr/pgsql-9.2/lib/libpq.so.5\n/usr/pgsql-9.2/lib/libpq.so.5.5\n\n\n\nI checked with rpm -ql the packge postgresql96-libs.x86_64\n0:9.6.10-1PGDG.rhel6 and it seems that it indeed put the new libpq.so in\nthe system. My question is, is it possible that it also deleted the 9.2\nlibpq file ?\n\n\nבתאריך יום ד׳, 9 בינו׳ 2019 ב-18:11 מאת Tom Lane <[email protected]\n>:\n\n> Mariel Cherkassky <[email protected]> writes:\n> > But in both of the machines I have the same os and I used the same\n> > repository - postgresql rpm repository. The only difference is that in\n> > machine 2 I also installed all pg 9.6 packages. Even When I try to use\n> > /usr/pgsql-9.2/bin/psql the psql still tries to access the\n> > /var/run/run/postgresql dir as the socket dir. Does those packages\n> include\n> > a different libpq ? What postgres package change the libpq ?\n>\n> \"rpm -ql\" would tell you about which packages supply what.\n>\n> Assuming there's more than one libpq.so on your machine, which it sounds\n> like there is, which one gets used depends on the dynamic linker's\n> configuration -- see /etc/ld.so.conf and \"man ldconfig\".\n>\n> regards, tom lane\n>\n\nIn machine 2 : I found 4 libpq.so files : [root@~]# locate libpq.so/usr/lib64/libpq.so.5/usr/lib64/libpq.so.5.2/usr/pgsql-9.6/lib/libpq.so.5/usr/pgsql-9.6/lib/libpq.so.5.9 cat /etc/ld.so.confinclude ld.so.conf.d/*.conf ld.so.conf.d]# cat /etc/ld.so.conf.d/postgresql-pgdg-libs.conf/usr/pgsql-9.6/lib/In machine 1 :locate libpq.so/usr/lib64/libpq.so.5/usr/lib64/libpq.so.5.2/usr/pgsql-9.2/lib/libpq.so.5/usr/pgsql-9.2/lib/libpq.so.5.5I checked with rpm -ql the packge postgresql96-libs.x86_64 0:9.6.10-1PGDG.rhel6 and it seems that it indeed put the new libpq.so in the system. My question is, is it possible that it also deleted the 9.2 libpq file ?בתאריך יום ד׳, 9 בינו׳ 2019 ב-18:11 מאת Tom Lane <[email protected]>:Mariel Cherkassky <[email protected]> writes:\n> But in both of the machines I have the same os and I used the same\n> repository - postgresql rpm repository. The only difference is that in\n> machine 2 I also installed all pg 9.6 packages. Even When I try to use\n> /usr/pgsql-9.2/bin/psql the psql still tries to access the\n> /var/run/run/postgresql dir as the socket dir. Does those packages include\n> a different libpq ? What postgres package change the libpq ?\n\n\"rpm -ql\" would tell you about which packages supply what.\n\nAssuming there's more than one libpq.so on your machine, which it sounds\nlike there is, which one gets used depends on the dynamic linker's\nconfiguration -- see /etc/ld.so.conf and \"man ldconfig\".\n\n regards, tom lane",
"msg_date": "Thu, 10 Jan 2019 10:22:44 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 7:09 AM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hey Tom,\n> I'm aware of how I can solve it. I wanted to understand why after\n> installing the pg 9.6 packages suddenly psql tries to access the socket on\n> /var/run/postgresql. Does the libpq default unix socket is changed between\n> those two versions ? (9.6,9.2)\n>\n> I hit this kind of problem too. Per Devrim in this thread, the default\nsocket location changed in v. 9.4.\n\nhttps://www.postgresql.org/message-id/flat/CAD3a31XLfN0hgEVJPzfKj9JzVqEOpLrn6eE06PGNMq5JsFngPA%40mail.gmail.com\n\nCheers,\nKen\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n\nOn Wed, Jan 9, 2019 at 7:09 AM Mariel Cherkassky <[email protected]> wrote:Hey Tom,I'm aware of how I can solve it. I wanted to understand why after installing the pg 9.6 packages suddenly psql tries to access the socket on /var/run/postgresql. Does the libpq default unix socket is changed between those two versions ? (9.6,9.2)I hit this kind of problem too. Per Devrim in this thread, the default socket location changed in v. 9.4.https://www.postgresql.org/message-id/flat/CAD3a31XLfN0hgEVJPzfKj9JzVqEOpLrn6eE06PGNMq5JsFngPA%40mail.gmail.comCheers,Ken-- AGENCY Software A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/[email protected](253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.",
"msg_date": "Thu, 10 Jan 2019 01:41:56 -0800",
"msg_from": "Ken Tanzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql unix socket connections"
},
{
"msg_contents": "Thanks Ken. I just wanted to make sure that it happened because of 9.6\npackages installation and not because of any other reason.\n\nבתאריך יום ה׳, 10 בינו׳ 2019 ב-11:42 מאת Ken Tanzer <\[email protected]>:\n\n> On Wed, Jan 9, 2019 at 7:09 AM Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Hey Tom,\n>> I'm aware of how I can solve it. I wanted to understand why after\n>> installing the pg 9.6 packages suddenly psql tries to access the socket on\n>> /var/run/postgresql. Does the libpq default unix socket is changed between\n>> those two versions ? (9.6,9.2)\n>>\n>> I hit this kind of problem too. Per Devrim in this thread, the default\n> socket location changed in v. 9.4.\n>\n>\n> https://www.postgresql.org/message-id/flat/CAD3a31XLfN0hgEVJPzfKj9JzVqEOpLrn6eE06PGNMq5JsFngPA%40mail.gmail.com\n>\n> Cheers,\n> Ken\n>\n>\n> --\n> AGENCY Software\n> A Free Software data system\n> By and for non-profits\n> *http://agency-software.org/ <http://agency-software.org/>*\n> *https://demo.agency-software.org/client\n> <https://demo.agency-software.org/client>*\n> [email protected]\n> (253) 245-3801\n>\n> Subscribe to the mailing list\n> <[email protected]?body=subscribe> to\n> learn more about AGENCY or\n> follow the discussion.\n>\n\nThanks Ken. I just wanted to make sure that it happened because of 9.6 packages installation and not because of any other reason.בתאריך יום ה׳, 10 בינו׳ 2019 ב-11:42 מאת Ken Tanzer <[email protected]>:On Wed, Jan 9, 2019 at 7:09 AM Mariel Cherkassky <[email protected]> wrote:Hey Tom,I'm aware of how I can solve it. I wanted to understand why after installing the pg 9.6 packages suddenly psql tries to access the socket on /var/run/postgresql. Does the libpq default unix socket is changed between those two versions ? (9.6,9.2)I hit this kind of problem too. Per Devrim in this thread, the default socket location changed in v. 9.4.https://www.postgresql.org/message-id/flat/CAD3a31XLfN0hgEVJPzfKj9JzVqEOpLrn6eE06PGNMq5JsFngPA%40mail.gmail.comCheers,Ken-- AGENCY Software A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/[email protected](253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.",
"msg_date": "Thu, 10 Jan 2019 12:36:25 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql unix socket connections"
}
] |
[
{
"msg_contents": "Hey,\nIt is clear that when we query some data, if that data isnt in the shared\nbuffers pg will go bring the relevant blocks from the disk to the shared\nbuffers. I wanted to ask if the same logic works with\ndml(insert/update/delete). I'm familiar with the writing logic, that the\ncheckpointer is the process that writing the data changes into the data\nfiles during every checkpoint and that the commit write the changes from\nthe wal buffers to to the wal files. I wanted to ask about a situation\nwhere we run dmls and that data isnt available in the shared buffers.\n\nHey,It is clear that when we query some data, if that data isnt in the shared buffers pg will go bring the relevant blocks from the disk to the shared buffers. I wanted to ask if the same logic works with dml(insert/update/delete). I'm familiar with the writing logic, that the checkpointer is the process that writing the data changes into the data files during every checkpoint and that the commit write the changes from the wal buffers to to the wal files. I wanted to ask about a situation where we run dmls and that data isnt available in the shared buffers.",
"msg_date": "Thu, 10 Jan 2019 10:06:51 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "does dml operations load the blocks to the shared buffers ?"
},
{
"msg_contents": "Le jeu. 10 janv. 2019 à 09:07, Mariel Cherkassky <\[email protected]> a écrit :\n\n> Hey,\n> It is clear that when we query some data, if that data isnt in the shared\n> buffers pg will go bring the relevant blocks from the disk to the shared\n> buffers. I wanted to ask if the same logic works with\n> dml(insert/update/delete). I'm familiar with the writing logic, that the\n> checkpointer is the process that writing the data changes into the data\n> files during every checkpoint and that the commit write the changes from\n> the wal buffers to to the wal files. I wanted to ask about a situation\n> where we run dmls and that data isnt available in the shared buffers.\n>\n>\nIt works the same.\n\n\n-- \nGuillaume.\n\nLe jeu. 10 janv. 2019 à 09:07, Mariel Cherkassky <[email protected]> a écrit :Hey,It is clear that when we query some data, if that data isnt in the shared buffers pg will go bring the relevant blocks from the disk to the shared buffers. I wanted to ask if the same logic works with dml(insert/update/delete). I'm familiar with the writing logic, that the checkpointer is the process that writing the data changes into the data files during every checkpoint and that the commit write the changes from the wal buffers to to the wal files. I wanted to ask about a situation where we run dmls and that data isnt available in the shared buffers.It works the same. -- Guillaume.",
"msg_date": "Thu, 10 Jan 2019 09:55:04 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does dml operations load the blocks to the shared buffers ?"
},
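One way to watch this happen is the pg_buffercache contrib extension: count buffers per relation, run some DML against a table that is not yet cached, and count again. The UPDATE below uses placeholder table and column names.

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- shared buffers currently used per relation in this database
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase = (SELECT oid FROM pg_database
                          WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 20;

    -- placeholder DML; afterwards the table shows up (or grows) in the query
    -- above, because its blocks had to be read into shared buffers before
    -- they could be modified
    UPDATE my_table SET some_col = some_col WHERE id < 1000;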
{
"msg_contents": ". Lets assume the amount of data I insert is bigger than the\nshared_buffers. I didnt commit the transaction yet, the data will be saved\non temp files until I commit ?\nWhat happens if I have in my transaction,I did a lot of changes and I\nfilled the wal_buffers / shared buffers but I still didnt commit. How the\ndatabase will handle it ?\n\nבתאריך יום ה׳, 10 בינו׳ 2019 ב-10:55 מאת Guillaume Lelarge <\[email protected]>:\n\n> Le jeu. 10 janv. 2019 à 09:07, Mariel Cherkassky <\n> [email protected]> a écrit :\n>\n>> Hey,\n>> It is clear that when we query some data, if that data isnt in the shared\n>> buffers pg will go bring the relevant blocks from the disk to the shared\n>> buffers. I wanted to ask if the same logic works with\n>> dml(insert/update/delete). I'm familiar with the writing logic, that the\n>> checkpointer is the process that writing the data changes into the data\n>> files during every checkpoint and that the commit write the changes from\n>> the wal buffers to to the wal files. I wanted to ask about a situation\n>> where we run dmls and that data isnt available in the shared buffers.\n>>\n>>\n> It works the same.\n>\n>\n> --\n> Guillaume.\n>\n\n. Lets assume the amount of data I insert is bigger than the shared_buffers. I didnt commit the transaction yet, the data will be saved on temp files until I commit ? What happens if I have in my transaction,I did a lot of changes and I filled the wal_buffers / shared buffers but I still didnt commit. How the database will handle it ?בתאריך יום ה׳, 10 בינו׳ 2019 ב-10:55 מאת Guillaume Lelarge <[email protected]>:Le jeu. 10 janv. 2019 à 09:07, Mariel Cherkassky <[email protected]> a écrit :Hey,It is clear that when we query some data, if that data isnt in the shared buffers pg will go bring the relevant blocks from the disk to the shared buffers. I wanted to ask if the same logic works with dml(insert/update/delete). I'm familiar with the writing logic, that the checkpointer is the process that writing the data changes into the data files during every checkpoint and that the commit write the changes from the wal buffers to to the wal files. I wanted to ask about a situation where we run dmls and that data isnt available in the shared buffers.It works the same. -- Guillaume.",
"msg_date": "Thu, 10 Jan 2019 12:40:22 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: does dml operations load the blocks to the shared buffers ?"
},
{
"msg_contents": "Le jeu. 10 janv. 2019 à 11:40, Mariel Cherkassky <\[email protected]> a écrit :\n\n> . Lets assume the amount of data I insert is bigger than the\n> shared_buffers. I didnt commit the transaction yet, the data will be saved\n> on temp files until I commit ?\n> What happens if I have in my transaction,I did a lot of changes and I\n> filled the wal_buffers / shared buffers but I still didnt commit. How the\n> database will handle it ?\n>\n\nPlease, don't top-post. It makes it hard to follow the thread.\n\nWhatever happens, it will get to disk in the data files, and it doesn't\nactually matter. The database has system informations on the datafiles (on\neach tuple actually) that will allow to make the distinction between\ncommited tuples, rollbacked tuples and not-yet-committed-or-rollbacked\ntuples.\n\n\n> בתאריך יום ה׳, 10 בינו׳ 2019 ב-10:55 מאת Guillaume Lelarge <\n> [email protected]>:\n>\n>> Le jeu. 10 janv. 2019 à 09:07, Mariel Cherkassky <\n>> [email protected]> a écrit :\n>>\n>>> Hey,\n>>> It is clear that when we query some data, if that data isnt in the\n>>> shared buffers pg will go bring the relevant blocks from the disk to the\n>>> shared buffers. I wanted to ask if the same logic works with\n>>> dml(insert/update/delete). I'm familiar with the writing logic, that the\n>>> checkpointer is the process that writing the data changes into the data\n>>> files during every checkpoint and that the commit write the changes from\n>>> the wal buffers to to the wal files. I wanted to ask about a situation\n>>> where we run dmls and that data isnt available in the shared buffers.\n>>>\n>>>\n>> It works the same.\n>>\n>>\n>> --\n>> Guillaume.\n>>\n>\n\n-- \nGuillaume.\n\nLe jeu. 10 janv. 2019 à 11:40, Mariel Cherkassky <[email protected]> a écrit :. Lets assume the amount of data I insert is bigger than the shared_buffers. I didnt commit the transaction yet, the data will be saved on temp files until I commit ? What happens if I have in my transaction,I did a lot of changes and I filled the wal_buffers / shared buffers but I still didnt commit. How the database will handle it ?Please, don't top-post. It makes it hard to follow the thread.Whatever happens, it will get to disk in the data files, and it doesn't actually matter. The database has system informations on the datafiles (on each tuple actually) that will allow to make the distinction between commited tuples, rollbacked tuples and not-yet-committed-or-rollbacked tuples.בתאריך יום ה׳, 10 בינו׳ 2019 ב-10:55 מאת Guillaume Lelarge <[email protected]>:Le jeu. 10 janv. 2019 à 09:07, Mariel Cherkassky <[email protected]> a écrit :Hey,It is clear that when we query some data, if that data isnt in the shared buffers pg will go bring the relevant blocks from the disk to the shared buffers. I wanted to ask if the same logic works with dml(insert/update/delete). I'm familiar with the writing logic, that the checkpointer is the process that writing the data changes into the data files during every checkpoint and that the commit write the changes from the wal buffers to to the wal files. I wanted to ask about a situation where we run dmls and that data isnt available in the shared buffers.It works the same. -- Guillaume.\n\n-- Guillaume.",
"msg_date": "Thu, 10 Jan 2019 19:42:33 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does dml operations load the blocks to the shared buffers ?"
}
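The per-tuple bookkeeping described above can be seen through the hidden xmin/xmax system columns; a small sketch with a placeholder table:

    BEGIN;
    SELECT txid_current();                          -- this transaction's xid
    UPDATE my_table SET val = val + 1 WHERE id = 1; -- placeholder DML
    SELECT xmin, xmax, id, val FROM my_table WHERE id = 1;
    -- the new row version carries our xid in xmin; other sessions keep seeing
    -- the old version until (and unless) this xid is recorded as committed
    ROLLBACK;
    -- the aborted version may well have reached the data files already,
    -- but it is never considered visible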
] |
[
{
"msg_contents": "Is there a way to detect missing combined indexes automatically\n\nI am managing a lot of databases and I think a lot of performance\ncould get gained.\n\nBut I don't want to do this manually.\n\nMy focus is on missing combined indexes, since for missing\nsingle indexes there are already tools available.\n\nRegards,\n Thomas Güttler\n\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n",
"msg_date": "Thu, 10 Jan 2019 13:56:02 +0100",
"msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Detect missing combined indexes (automatically)"
},
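As a rough first pass (it cannot tell single-column from combined candidates), the cumulative statistics views already show where sequential scans dominate, which is usually where missing indexes of either kind surface:

    -- tables read mostly by sequential scans, largest read volume first
    SELECT schemaname, relname, seq_scan, seq_tup_read, idx_scan, n_live_tup
    FROM pg_stat_user_tables
    WHERE seq_scan > 0
    ORDER BY seq_tup_read DESC
    LIMIT 20;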
{
"msg_contents": "By the term 'combined indexes', do you mean a multi-column index, or a set of\nsingle-column indexes that need to be combined by the planner? What\nmethodology are you using to recommend missing indexes?\n\nYou may be able to enlist help from more people if you provide a specific\nexample of a query that you have that isn't performing well (with the\nexplain (analyze, verbose, buffers) plan on https://explain.depesz.com/),\nthe index(es) that improve performance (with the plan on\nhttps://explain.depesz.com/), and the 'single index' tools / methodology\nthat you're currently using to suggest missing indexes.\n\n /Jim F\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sat, 12 Jan 2019 08:47:09 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detect missing combined indexes (automatically)"
},
{
"msg_contents": "Thomas Güttler schrieb am 10.01.2019 um 13:56:\n> Is there a way to detect missing combined indexes automatically\n> \n> I am managing a lot of databases and I think a lot of performance\n> could get gained.\n> \n> But I don't want to do this manually.\n> \n> My focus is on missing combined indexes, since for missing\n> single indexes there are already tools available.\n\nThe PoWA monitoring tool contains an extension to suggest missing indexes. \n\nI don't know if that includes multi-column indexes though, but it might be worth a try:\n\n https://powa.readthedocs.io/en/latest/stats_extensions/pg_qualstats.html\n\nThomas\n\n\n\n\n",
"msg_date": "Mon, 14 Jan 2019 08:19:50 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detect missing combined indexes (automatically)"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 8:20 AM Thomas Kellerer <[email protected]> wrote:\n>\n> Thomas Güttler schrieb am 10.01.2019 um 13:56:\n> > Is there a way to detect missing combined indexes automatically\n> >\n> > I am managing a lot of databases and I think a lot of performance\n> > could get gained.\n> >\n> > But I don't want to do this manually.\n> >\n> > My focus is on missing combined indexes, since for missing\n> > single indexes there are already tools available.\n>\n> The PoWA monitoring tool contains an extension to suggest missing indexes.\n>\n> I don't know if that includes multi-column indexes though, but it might be worth a try:\n>\n> https://powa.readthedocs.io/en/latest/stats_extensions/pg_qualstats.html\n\nYes, it can handle multi-column indexes.\n\n",
"msg_date": "Mon, 14 Jan 2019 08:42:56 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detect missing combined indexes (automatically)"
},
{
"msg_contents": "Hi Julien Rouhaud,\n\npowa can handle multi-column indexes now? Great news. This must be a new\nfeature. I checked this roughly one year ago and it was not possible at this time.\nThank you very much powa!\n\nRegards,\n Thomas Güttler\n\nAm 14.01.19 um 08:42 schrieb Julien Rouhaud:\n> On Mon, Jan 14, 2019 at 8:20 AM Thomas Kellerer <[email protected]> wrote:\n>>\n>> Thomas Güttler schrieb am 10.01.2019 um 13:56:\n>>> Is there a way to detect missing combined indexes automatically\n>>>\n>>> I am managing a lot of databases and I think a lot of performance\n>>> could get gained.\n>>>\n>>> But I don't want to do this manually.\n>>>\n>>> My focus is on missing combined indexes, since for missing\n>>> single indexes there are already tools available.\n>>\n>> The PoWA monitoring tool contains an extension to suggest missing indexes.\n>>\n>> I don't know if that includes multi-column indexes though, but it might be worth a try:\n>>\n>> https://powa.readthedocs.io/en/latest/stats_extensions/pg_qualstats.html\n> \n> Yes, it can handle multi-column indexes.\n> \n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n",
"msg_date": "Tue, 15 Jan 2019 10:23:12 +0100",
"msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detect missing combined indexes (automatically)"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 15, 2019 at 10:22 AM Thomas Güttler\n<[email protected]> wrote:\n>\n> Hi Julien Rouhaud,\n>\n> powa can handle multi-column indexes now? Great news. This must be a new\n> feature. I checked this roughly one year ago and it was not possible at this time.\n> Thank you very much powa!\n\nOh, that's unexpected. The first version of the \"wizard\" (the\n\"optimize this database\" button on the database page) we published was\nsupposed to handle multi-column indexes. We had few naive tests for\nthat, so at least some cases were working. What it's doing is\ngathering all the quals that have been sampled by pg_qualstats in the\ngiven interval on the given database, and then try to combine them\n(possibly merging a single column qual into a multi-column qual),\norder them by number of distinct queryid so it can come up with a\nquite good set of indexes. So if there are queries with multiple\nAND-ed quals on the same table in your workload, it should be able to\nsuggest a multi-column index. If it doesn't, you should definitely\nopen a bug on the powa-web repo :)\n\nWhat it won't do is to suggest to replace a single column index with a\nmulti-column one, or create a multi-column index if one of the column\nis already indexes since only one of the column will be seen as\nneeding optimization.\n\n",
"msg_date": "Tue, 15 Jan 2019 11:25:06 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detect missing combined indexes (automatically)"
}
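A hand-rolled example of the pattern described above - two AND-ed quals that keep appearing together on one table - using made-up table and column names:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders
    WHERE customer_id = 42 AND status = 'open';
    -- a Seq Scan, or a single-column index scan plus a Filter on the other
    -- column, is the hint that a combined index like this one may be worth
    -- suggesting:
    CREATE INDEX orders_customer_status_idx ON orders (customer_id, status);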
] |
[
{
"msg_contents": "Hi team,\n\nWe have enabled the monitoring to monitor the vacuuming of tables via check_postgres_last_vacuum plugin but we are getting the below warning message.\n\nNotification Type: PROBLEM\nService: PostgreSQL last vacuum ()\nHost Alias: vmshowcasedb2.vpc.prod.scl1.us.tribalfusion.net\nAddress: 10.26.12.89\nState: UNKNOWN\nInfo: POSTGRES_LAST_VACUUM UNKNOWN: DB postgres (host:vmshowcasedb2.vpc.prod.scl1.us.tribalfusion.net) No matching tables have ever been vacuumed\n\nKindly suggest how we can overcome on this.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\nHi team,\n \nWe have enabled the monitoring to monitor the vacuuming of tables via check_postgres_last_vacuum plugin but we are getting the below warning message.\n \nNotification Type: PROBLEM\nService: PostgreSQL last vacuum ()\nHost Alias: vmshowcasedb2.vpc.prod.scl1.us.tribalfusion.net\nAddress: 10.26.12.89\nState: UNKNOWN\nInfo: POSTGRES_LAST_VACUUM UNKNOWN: DB postgres (host:vmshowcasedb2.vpc.prod.scl1.us.tribalfusion.net) No matching tables have ever been vacuumed\n \nKindly suggest how we can overcome on this.\n \nRegards,\nDaulat",
"msg_date": "Wed, 16 Jan 2019 07:06:00 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "No matching tables have ever been vacuumed"
},
{
"msg_contents": "Daulat Ram wrote:\n> We have enabled the monitoring to monitor the vacuuming of tables via check_postgres_last_vacuum plugin but we are getting the below warning message.\n> \n> Notification Type: PROBLEM\n> Service: PostgreSQL last vacuum ()\n> Host Alias: vmshowcasedb2.vpc.prod.scl1.us.tribalfusion.net\n> Address: 10.26.12.89\n> State: UNKNOWN\n> Info: POSTGRES_LAST_VACUUM UNKNOWN: DB postgres (host:vmshowcasedb2.vpc.prod.scl1.us.tribalfusion.net) No matching tables have ever been vacuumed\n> \n> Kindly suggest how we can overcome on this.\n\nDisable the test, it is mostly pointless.\n\nOnly tables that regularly receive updates and deletes need to be vacuumed.\nA table that is never modified needs to be vacuumed at most once during its lifetime\nfor transaction wraparound, but there are other checks for problems with that.\n\nAlternatively, you can just manually vacuum all tables once - if all it\nchecks is if it *ever* has been vacuumed.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 17 Jan 2019 10:49:37 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No matching tables have ever been vacuumed"
}
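For the "vacuum every table once" option, something along these lines would do (the first query just lists what the check is complaining about; a bare VACUUM processes every table the role is allowed to vacuum):

    -- tables never vacuumed, neither manually nor by autovacuum
    SELECT schemaname, relname, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE last_vacuum IS NULL
      AND last_autovacuum IS NULL;

    -- one-off pass over the whole database
    VACUUM (VERBOSE);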
] |
[
{
"msg_contents": "Hi folks -\n\nI'm having trouble understanding what some of the stats mean in the \nexecution plan output when parallel workers are used. I've tried to read \nup about it, but I haven't been able to find anything that explains what \nI'm seeing. Apologies in advance if there's documentation I've been too \nstupid to find.\n\nI've run the following query. The \"towns\" table is a massive table that \nI created in order to get some big numbers on a parallel query - don't \nworry, this isn't a real query I want to make faster, just a silly \nexample I'd like to understand.\n\nEXPLAIN (ANALYZE, FORMAT JSON, BUFFERS, VERBOSE)\nSELECT name, code, article\nFROM towns\nORDER BY nameASC,\n codeDESC;\n\nThe output looks like this:\n\n[\n {\n \"Plan\": {\n \"Node Type\": \"Gather Merge\", \"Parallel Aware\": false, \"Startup Cost\": \n1013948.54, \"Total Cost\": 1986244.55, \"Plan Rows\": 8333384, \"Plan \nWidth\": 77, \"Actual Startup Time\": 42978.838, \"Actual Total Time\": \n60628.982, \"Actual Rows\": 10000010, \"Actual Loops\": 1, \"Output\": [\"name\", \"code\", \"article\"], \"Workers Planned\": 2, \"Workers Launched\": 2, \"Shared Hit Blocks\": 29, \n\"Shared Read Blocks\": 47641, \"Shared Dirtied Blocks\": 0, \"Shared Written \nBlocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local \nDirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read Blocks\": \n91342, \"Temp Written Blocks\": 91479, \"Plans\": [\n {\n \"Node Type\": \"Sort\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": \nfalse, \"Startup Cost\": 1012948.52, \"Total Cost\": 1023365.25, \"Plan \nRows\": 4166692, \"Plan Width\": 77, \"Actual Startup Time\": 42765.496, \n\"Actual Total Time\": 48526.168, \"Actual Rows\": 3333337, \"Actual Loops\": \n3, \"Output\": [\"name\", \"code\", \"article\"], \"Sort Key\": [\"towns.name\", \"towns.code DESC\"], \"Sort Method\": \"external merge\", \"Sort Space Used\": 283856, \"Sort \nSpace Type\": \"Disk\", \"Shared Hit Blocks\": 170, \"Shared Read Blocks\": \n142762, \"Shared Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local \nHit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \n\"Local Written Blocks\": 0, \"Temp Read Blocks\": 273289, \"Temp Written \nBlocks\": 273700, \"Workers\": [\n {\n \"Worker Number\": 0, \"Actual Startup Time\": 42588.662, \"Actual Total \nTime\": 48456.662, \"Actual Rows\": 3277980, \"Actual Loops\": 1, \"Shared Hit \nBlocks\": 72, \"Shared Read Blocks\": 46794, \"Shared Dirtied Blocks\": 0, \n\"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": \n0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read \nBlocks\": 89067, \"Temp Written Blocks\": 89202 }, {\n \"Worker Number\": 1, \"Actual Startup Time\": 42946.705, \"Actual Total \nTime\": 48799.414, \"Actual Rows\": 3385130, \"Actual Loops\": 1, \"Shared Hit \nBlocks\": 69, \"Shared Read Blocks\": 48327, \"Shared Dirtied Blocks\": 0, \n\"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": \n0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read \nBlocks\": 92880, \"Temp Written Blocks\": 93019 }\n ], \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\", \"Parent Relationship\": \"Outer\", \"Parallel \nAware\": true, \"Relation Name\": \"towns\", \"Schema\": \"public\", \"Alias\": \n\"towns\", \"Startup Cost\": 0.00, \"Total Cost\": 184524.92, \"Plan Rows\": \n4166692, \"Plan Width\": 77, \"Actual Startup Time\": 0.322, \"Actual Total \nTime\": 8305.886, \"Actual 
Rows\": 3333337, \"Actual Loops\": 3, \"Output\": [\"name\", \"code\", \"article\"], \"Shared Hit Blocks\": 96, \"Shared Read Blocks\": 142762, \"Shared Dirtied \nBlocks\": 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local \nRead Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \n\"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0, \"Workers\": [\n {\n \"Worker Number\": 0, \"Actual Startup Time\": 0.105, \"Actual Total Time\": \n8394.629, \"Actual Rows\": 3277980, \"Actual Loops\": 1, \"Shared Hit \nBlocks\": 35, \"Shared Read Blocks\": 46794, \"Shared Dirtied Blocks\": 0, \n\"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": \n0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read \nBlocks\": 0, \"Temp Written Blocks\": 0 }, {\n \"Worker Number\": 1, \"Actual Startup Time\": 0.113, \"Actual Total Time\": \n8139.382, \"Actual Rows\": 3385130, \"Actual Loops\": 1, \"Shared Hit \nBlocks\": 32, \"Shared Read Blocks\": 48327, \"Shared Dirtied Blocks\": 0, \n\"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": \n0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read \nBlocks\": 0, \"Temp Written Blocks\": 0 }\n ]\n }\n ]\n }\n ]\n }, \"Planning Time\": 22.898, \"Triggers\": [\n ], \"Execution Time\": 61133.161 }\n]\n\nOr a more slimmed-down version, with just the confusing fields:\n\n[\n {\n \"Plan\": {\n \"Node Type\": \"Gather Merge\", \"Parallel Aware\": false,\"Actual Total \nTime\": 60628.982, \"Actual Rows\": 10000010, \"Actual Loops\": 1,\"Workers \nPlanned\": 2, \"Workers Launched\": 2,\"Plans\": [\n {\n \"Node Type\": \"Sort\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": \nfalse,\"Actual Total Time\": 48526.168, \"Actual Rows\": 3333337, \"Actual \nLoops\": 3,\"Workers\": [\n {\n \"Worker Number\": 0,\"Actual Total Time\": 48456.662, \"Actual Rows\": \n3277980, \"Actual Loops\": 1}, {\n \"Worker Number\": 1,\"Actual Total Time\": 48799.414, \"Actual Rows\": \n3385130, \"Actual Loops\": 1}\n ], \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\"Parallel Aware\": true,\"Actual Total Time\": \n8305.886, \"Actual Rows\": 3333337, \"Actual Loops\": 3,\"Workers\": [\n {\n \"Worker Number\": 0,\"Actual Total Time\": 8394.629, \"Actual Rows\": \n3277980, \"Actual Loops\": 1}, {\n \"Worker Number\": 1,\"Actual Total Time\": 8139.382, \"Actual Rows\": \n3385130, \"Actual Loops\": 1}\n ]\n }\n ]\n }\n ]\n },\"Execution Time\": 61133.161 }\n]\n\nThe things I'm struggling to understand are:\n\n * How the time values combine with parallelism. For example, each\n execution of the sort node takes an average of 48.5s, over three\n loops. This makes a total running time of 145.5s. Even if this was\n perfectly distributed between the two workers, I would expect this\n to take 72.75s, which is more than the total execution time, so it\n can't take this long.\n * How the row numbers combine with those in the \"Workers\" subkey.\n For example, in the Sort node, worker #0 produces 3,277,980 rows,\n while worker #1 produces 3,385,130 rows. The Sort node as a whole\n produces 3,333,337 rows per loop, for a total of 10,000,010 (the\n value in the gather merge node). I would have expected the number of\n rows produced by the two workers to sum to the number produced by\n the Sort node as a whole, either per loop or in total.\n * How the \"Actual Loops\" values combine with those in the \"Workers\"\n subkey. 
For example, the \"Sort\" node has 3 loops, but each of the\n workers inside it have 1 loop. I would have expected either:\n o each of the workers to have done 3 loops (since the sort is\n executed 3 times), or\n o the number of loops in the two workers to sum to three (if the\n three executions of the sort are distributed across the two workers)\n\nOther info about my setup:\n\n * Postgres version: \"PostgreSQL 10.4, compiled by Visual C++ build\n 1800, 64-bit\"\n * I installed Postgres using the EnterpriseDB one-click installer\n * OS: Windows 10, v1803\n * I'm using Jetbrains Datagrip to connect to Postgres\n * No errors are logged.\n * Altered config settings:\n *\n\n\n application_name \tPostgreSQL JDBC Driver \tsession\n client_encoding \tUTF8 \tclient\n DateStyle \tISO, MDY \tclient\n default_text_search_config \tpg_catalog.english \tconfiguration file\n dynamic_shared_memory_type \twindows \tconfiguration file\n extra_float_digits \t3 \tsession\n lc_messages \tEnglish_United States.1252 \tconfiguration file\n lc_monetary \tEnglish_United States.1252 \tconfiguration file\n lc_numeric \tEnglish_United States.1252 \tconfiguration file\n lc_time \tEnglish_United States.1252 \tconfiguration file\n listen_addresses \t* \tconfiguration file\n log_destination \tstderr \tconfiguration file\n logging_collector \ton \tconfiguration file\n max_connections \t100 \tconfiguration file\n max_stack_depth \t2MB \tenvironment variable\n port \t5432 \tconfiguration file\n shared_buffers \t128MB \tconfiguration file\n TimeZone \tUTC \tclient\n\nThanks in advance, I'm sure I've done something silly or misunderstood \nsomething obvious but I can't work out what it is for the life of me.\n\nDave\n\n\n\n\n\n\n\nHi folks -\n\nI'm having trouble understanding what some of the stats mean in\n the execution plan output when parallel workers are used. I've\n tried to read up about it, but I haven't been able to find\n anything that explains what I'm seeing. Apologies in advance if\n there's documentation I've been too stupid to find.\nI've run the following query. 
The \"towns\" table is a massive\n table that I created in order to get some big numbers on a\n parallel query - don't worry, this isn't a real query I want to\n make faster, just a silly example I'd like to understand.\n\nEXPLAIN (ANALYZE, FORMAT JSON, BUFFERS, VERBOSE)\nSELECT\n name, code, article\nFROM\n towns\nORDER BY\n name ASC,\n code DESC;\nThe output looks like this:\n[\n {\n \"Plan\": {\n \"Node Type\": \"Gather Merge\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 1013948.54,\n \"Total Cost\": 1986244.55,\n \"Plan Rows\": 8333384,\n \"Plan Width\": 77,\n \"Actual Startup Time\": 42978.838,\n \"Actual Total Time\": 60628.982,\n \"Actual Rows\": 10000010,\n \"Actual Loops\": 1,\n \"Output\": [\"name\", \"code\", \"article\"],\n \"Workers Planned\": 2,\n \"Workers Launched\": 2,\n \"Shared Hit Blocks\": 29,\n \"Shared Read Blocks\": 47641,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 91342,\n \"Temp Written Blocks\": 91479,\n \"Plans\": [\n {\n \"Node Type\": \"Sort\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 1012948.52,\n \"Total Cost\": 1023365.25,\n \"Plan Rows\": 4166692,\n \"Plan Width\": 77,\n \"Actual Startup Time\": 42765.496,\n \"Actual Total Time\": 48526.168,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Output\": [\"name\", \"code\", \"article\"],\n \"Sort Key\": [\"towns.name\", \"towns.code DESC\"],\n \"Sort Method\": \"external merge\",\n \"Sort Space Used\": 283856,\n \"Sort Space Type\": \"Disk\",\n \"Shared Hit Blocks\": 170,\n \"Shared Read Blocks\": 142762,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 273289,\n \"Temp Written Blocks\": 273700,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Startup Time\": 42588.662,\n \"Actual Total Time\": 48456.662,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 72,\n \"Shared Read Blocks\": 46794,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 89067,\n \"Temp Written Blocks\": 89202\n },\n {\n \"Worker Number\": 1,\n \"Actual Startup Time\": 42946.705,\n \"Actual Total Time\": 48799.414,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 69,\n \"Shared Read Blocks\": 48327,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 92880,\n \"Temp Written Blocks\": 93019\n }\n ],\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": true,\n \"Relation Name\": \"towns\",\n \"Schema\": \"public\",\n \"Alias\": \"towns\",\n \"Startup Cost\": 0.00,\n \"Total Cost\": 184524.92,\n \"Plan Rows\": 4166692,\n \"Plan Width\": 77,\n \"Actual Startup Time\": 0.322,\n \"Actual Total Time\": 8305.886,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Output\": [\"name\", \"code\", \"article\"],\n \"Shared Hit Blocks\": 96,\n \"Shared Read Blocks\": 142762,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n 
\"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Startup Time\": 0.105,\n \"Actual Total Time\": 8394.629,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 35,\n \"Shared Read Blocks\": 46794,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0\n },\n {\n \"Worker Number\": 1,\n \"Actual Startup Time\": 0.113,\n \"Actual Total Time\": 8139.382,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 32,\n \"Shared Read Blocks\": 48327,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0\n }\n ]\n }\n ]\n }\n ]\n },\n \"Planning Time\": 22.898,\n \"Triggers\": [\n ],\n \"Execution Time\": 61133.161\n }\n]\nOr a more slimmed-down version, with just the confusing fields:\n\n[\n {\n \"Plan\": {\n \"Node Type\": \"Gather Merge\",\n \"Parallel Aware\": false,\n \"Actual Total Time\": 60628.982,\n \"Actual Rows\": 10000010,\n \"Actual Loops\": 1,\n \"Workers Planned\": 2,\n \"Workers Launched\": 2,\n \"Plans\": [\n {\n \"Node Type\": \"Sort\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Actual Total Time\": 48526.168,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Total Time\": 48456.662,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1\n },\n {\n \"Worker Number\": 1,\n \"Actual Total Time\": 48799.414,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1\n }\n ],\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parallel Aware\": true,\n \"Actual Total Time\": 8305.886,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Total Time\": 8394.629,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1\n },\n {\n \"Worker Number\": 1,\n \"Actual Total Time\": 8139.382,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1\n }\n ]\n }\n ]\n }\n ]\n },\n \"Execution Time\": 61133.161\n }\n]\nThe things I'm struggling to understand are:\n\nHow the time values combine with parallelism. For example,\n each execution of the sort node takes an average of 48.5s, over\n three loops. This makes a total running time of 145.5s. Even if\n this was perfectly distributed between the two workers, I would\n expect this to take 72.75s, which is more than the total\n execution time, so it can't take this long.\n\n How the row numbers combine with those in the \"Workers\"\n subkey. For example, in the Sort node, worker #0 produces\n 3,277,980 rows, while worker #1 produces 3,385,130 rows. The\n Sort node as a whole produces 3,333,337 rows per loop, for a\n total of 10,000,010 (the value in the gather merge node). I\n would have expected the number of rows produced by the two\n workers to sum to the number produced by the Sort node as a\n whole, either per loop or in total.\nHow the \"Actual Loops\" values combine with those in the\n \"Workers\" subkey. For example, the \"Sort\" node has 3 loops, but\n each of the workers inside it have 1 loop. 
I would have expected\n either: \n\n\neach of the workers to have done 3 loops (since the sort is\n executed 3 times), or \n\nthe number of loops in the two workers to sum to three (if\n the three executions of the sort are distributed across the\n two workers)\n\n\nOther info about my setup:\n\nPostgres version: \"PostgreSQL 10.4, compiled by Visual C++\n build 1800, 64-bit\"\nI installed Postgres using the EnterpriseDB one-click\n installer\nOS: Windows 10, v1803\nI'm using Jetbrains Datagrip to connect to Postgres\nNo errors are logged.\n\nAltered config settings: \n\n\n\n \n\napplication_name\nPostgreSQL JDBC Driver\nsession\n\n\nclient_encoding\nUTF8\nclient\n\n\nDateStyle\nISO, MDY\nclient\n\n\ndefault_text_search_config\npg_catalog.english\nconfiguration file\n\n\ndynamic_shared_memory_type\nwindows\nconfiguration file\n\n\nextra_float_digits\n3\nsession\n\n\nlc_messages\nEnglish_United States.1252\nconfiguration file\n\n\nlc_monetary\nEnglish_United States.1252\nconfiguration file\n\n\nlc_numeric\nEnglish_United States.1252\nconfiguration file\n\n\nlc_time\nEnglish_United States.1252\nconfiguration file\n\n\nlisten_addresses\n*\nconfiguration file\n\n\nlog_destination\nstderr\nconfiguration file\n\n\nlogging_collector\non\nconfiguration file\n\n\nmax_connections\n100\nconfiguration file\n\n\nmax_stack_depth\n2MB\nenvironment variable\n\n\nport\n5432\nconfiguration file\n\n\nshared_buffers\n128MB\nconfiguration file\n\n\nTimeZone\nUTC\nclient\n\n\n\n\n\n\n\nThanks in advance, I'm sure I've done something silly or\n misunderstood something obvious but I can't work out what it is\n for the life of me.\nDave",
"msg_date": "Wed, 16 Jan 2019 11:31:08 +0000",
"msg_from": "David Conlin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel stats in execution plans"
},
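Reading the numbers above with the convention that the leader process also runs the plan, that a node's displayed figures are its totals divided by the loop count, and that the "Workers" entries cover only the two background workers, everything appears to line up:

    Sort node: loops = 3 (leader + 2 workers each ran it once; that is also
    why each Workers entry shows "Actual Loops": 1 and the leader has no
    separate entry).

    Rows: the displayed 3,333,337 is the per-loop average 10,000,010 / 3
      worker 0        3,277,980
      worker 1        3,385,130
      leader (rest)  10,000,010 - 3,277,980 - 3,385,130 = 3,336,900

    Time: the displayed 48,526 ms is likewise the average per process
      leader ~ 3 x 48,526.168 - 48,456.662 - 48,799.414 ~ 48,322 ms
      The three processes ran concurrently, so ~48.5 s in each of them fits
      inside the ~61 s wall-clock "Execution Time" - the times overlap
      rather than add.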
{
"msg_contents": "It seems like no-one has any ideas on this - does anyone know anywhere \nelse I can try to look/ask to find out more?\n\nIs it possible that this is a bug?\n\nThanks\n\nDave\n\nOn 16/01/2019 11:31, David Conlin wrote:\n>\n> Hi folks -\n>\n> I'm having trouble understanding what some of the stats mean in the \n> execution plan output when parallel workers are used. I've tried to \n> read up about it, but I haven't been able to find anything that \n> explains what I'm seeing. Apologies in advance if there's \n> documentation I've been too stupid to find.\n>\n> I've run the following query. The \"towns\" table is a massive table \n> that I created in order to get some big numbers on a parallel query - \n> don't worry, this isn't a real query I want to make faster, just a \n> silly example I'd like to understand.\n>\n> EXPLAIN (ANALYZE, FORMAT JSON, BUFFERS, VERBOSE)\n> SELECT name, code, article\n> FROM towns\n> ORDER BY nameASC,\n> codeDESC;\n>\n> The output looks like this:\n>\n> [\n> {\n> \"Plan\": {\n> \"Node Type\": \"Gather Merge\", \"Parallel Aware\": false, \"Startup Cost\": \n> 1013948.54, \"Total Cost\": 1986244.55, \"Plan Rows\": 8333384, \"Plan \n> Width\": 77, \"Actual Startup Time\": 42978.838, \"Actual Total Time\": \n> 60628.982, \"Actual Rows\": 10000010, \"Actual Loops\": 1, \"Output\": [\"name\", \"code\", \"article\"], \"Workers Planned\": 2, \"Workers Launched\": 2, \"Shared Hit Blocks\": \n> 29, \"Shared Read Blocks\": 47641, \"Shared Dirtied Blocks\": 0, \"Shared \n> Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read Blocks\": 0, \n> \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \"Temp Read \n> Blocks\": 91342, \"Temp Written Blocks\": 91479, \"Plans\": [\n> {\n> \"Node Type\": \"Sort\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": \n> false, \"Startup Cost\": 1012948.52, \"Total Cost\": 1023365.25, \"Plan \n> Rows\": 4166692, \"Plan Width\": 77, \"Actual Startup Time\": 42765.496, \n> \"Actual Total Time\": 48526.168, \"Actual Rows\": 3333337, \"Actual \n> Loops\": 3, \"Output\": [\"name\", \"code\", \"article\"], \"Sort Key\": [\"towns.name\", \"towns.code DESC\"], \"Sort Method\": \"external merge\", \"Sort Space Used\": 283856, \"Sort \n> Space Type\": \"Disk\", \"Shared Hit Blocks\": 170, \"Shared Read Blocks\": \n> 142762, \"Shared Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local \n> Hit Blocks\": 0, \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \n> \"Local Written Blocks\": 0, \"Temp Read Blocks\": 273289, \"Temp Written \n> Blocks\": 273700, \"Workers\": [\n> {\n> \"Worker Number\": 0, \"Actual Startup Time\": 42588.662, \"Actual Total \n> Time\": 48456.662, \"Actual Rows\": 3277980, \"Actual Loops\": 1, \"Shared \n> Hit Blocks\": 72, \"Shared Read Blocks\": 46794, \"Shared Dirtied Blocks\": \n> 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read \n> Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \n> \"Temp Read Blocks\": 89067, \"Temp Written Blocks\": 89202 }, {\n> \"Worker Number\": 1, \"Actual Startup Time\": 42946.705, \"Actual Total \n> Time\": 48799.414, \"Actual Rows\": 3385130, \"Actual Loops\": 1, \"Shared \n> Hit Blocks\": 69, \"Shared Read Blocks\": 48327, \"Shared Dirtied Blocks\": \n> 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read \n> Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \n> \"Temp Read Blocks\": 92880, \"Temp Written Blocks\": 93019 }\n> ], \"Plans\": [\n> {\n> \"Node Type\": \"Seq 
Scan\", \"Parent Relationship\": \"Outer\", \"Parallel \n> Aware\": true, \"Relation Name\": \"towns\", \"Schema\": \"public\", \"Alias\": \n> \"towns\", \"Startup Cost\": 0.00, \"Total Cost\": 184524.92, \"Plan Rows\": \n> 4166692, \"Plan Width\": 77, \"Actual Startup Time\": 0.322, \"Actual Total \n> Time\": 8305.886, \"Actual Rows\": 3333337, \"Actual Loops\": 3, \"Output\": [\"name\", \"code\", \"article\"], \"Shared Hit Blocks\": 96, \"Shared Read Blocks\": 142762, \"Shared \n> Dirtied Blocks\": 0, \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \n> \"Local Read Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written \n> Blocks\": 0, \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0, \"Workers\": [\n> {\n> \"Worker Number\": 0, \"Actual Startup Time\": 0.105, \"Actual Total Time\": \n> 8394.629, \"Actual Rows\": 3277980, \"Actual Loops\": 1, \"Shared Hit \n> Blocks\": 35, \"Shared Read Blocks\": 46794, \"Shared Dirtied Blocks\": 0, \n> \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read \n> Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \n> \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0 }, {\n> \"Worker Number\": 1, \"Actual Startup Time\": 0.113, \"Actual Total Time\": \n> 8139.382, \"Actual Rows\": 3385130, \"Actual Loops\": 1, \"Shared Hit \n> Blocks\": 32, \"Shared Read Blocks\": 48327, \"Shared Dirtied Blocks\": 0, \n> \"Shared Written Blocks\": 0, \"Local Hit Blocks\": 0, \"Local Read \n> Blocks\": 0, \"Local Dirtied Blocks\": 0, \"Local Written Blocks\": 0, \n> \"Temp Read Blocks\": 0, \"Temp Written Blocks\": 0 }\n> ]\n> }\n> ]\n> }\n> ]\n> }, \"Planning Time\": 22.898, \"Triggers\": [\n> ], \"Execution Time\": 61133.161 }\n> ]\n>\n> Or a more slimmed-down version, with just the confusing fields:\n>\n> [\n> {\n> \"Plan\": {\n> \"Node Type\": \"Gather Merge\", \"Parallel Aware\": false,\"Actual Total \n> Time\": 60628.982, \"Actual Rows\": 10000010, \"Actual Loops\": 1,\"Workers \n> Planned\": 2, \"Workers Launched\": 2,\"Plans\": [\n> {\n> \"Node Type\": \"Sort\", \"Parent Relationship\": \"Outer\", \"Parallel Aware\": \n> false,\"Actual Total Time\": 48526.168, \"Actual Rows\": 3333337, \"Actual \n> Loops\": 3,\"Workers\": [\n> {\n> \"Worker Number\": 0,\"Actual Total Time\": 48456.662, \"Actual Rows\": \n> 3277980, \"Actual Loops\": 1}, {\n> \"Worker Number\": 1,\"Actual Total Time\": 48799.414, \"Actual Rows\": \n> 3385130, \"Actual Loops\": 1}\n> ], \"Plans\": [\n> {\n> \"Node Type\": \"Seq Scan\",\"Parallel Aware\": true,\"Actual Total Time\": \n> 8305.886, \"Actual Rows\": 3333337, \"Actual Loops\": 3,\"Workers\": [\n> {\n> \"Worker Number\": 0,\"Actual Total Time\": 8394.629, \"Actual Rows\": \n> 3277980, \"Actual Loops\": 1}, {\n> \"Worker Number\": 1,\"Actual Total Time\": 8139.382, \"Actual Rows\": \n> 3385130, \"Actual Loops\": 1}\n> ]\n> }\n> ]\n> }\n> ]\n> },\"Execution Time\": 61133.161 }\n> ]\n>\n> The things I'm struggling to understand are:\n>\n> * How the time values combine with parallelism. For example, each\n> execution of the sort node takes an average of 48.5s, over three\n> loops. This makes a total running time of 145.5s. Even if this was\n> perfectly distributed between the two workers, I would expect this\n> to take 72.75s, which is more than the total execution time, so it\n> can't take this long.\n> * How the row numbers combine with those in the \"Workers\" subkey.\n> For example, in the Sort node, worker #0 produces 3,277,980 rows,\n> while worker #1 produces 3,385,130 rows. 
The Sort node as a whole\n> produces 3,333,337 rows per loop, for a total of 10,000,010 (the\n> value in the gather merge node). I would have expected the number\n> of rows produced by the two workers to sum to the number produced\n> by the Sort node as a whole, either per loop or in total.\n> * How the \"Actual Loops\" values combine with those in the \"Workers\"\n> subkey. For example, the \"Sort\" node has 3 loops, but each of the\n> workers inside it have 1 loop. I would have expected either:\n> o each of the workers to have done 3 loops (since the sort is\n> executed 3 times), or\n> o the number of loops in the two workers to sum to three (if the\n> three executions of the sort are distributed across the two\n> workers)\n>\n> Other info about my setup:\n>\n> * Postgres version: \"PostgreSQL 10.4, compiled by Visual C++ build\n> 1800, 64-bit\"\n> * I installed Postgres using the EnterpriseDB one-click installer\n> * OS: Windows 10, v1803\n> * I'm using Jetbrains Datagrip to connect to Postgres\n> * No errors are logged.\n> * Altered config settings:\n> *\n>\n>\n> application_name \tPostgreSQL JDBC Driver \tsession\n> client_encoding \tUTF8 \tclient\n> DateStyle \tISO, MDY \tclient\n> default_text_search_config \tpg_catalog.english \tconfiguration file\n> dynamic_shared_memory_type \twindows \tconfiguration file\n> extra_float_digits \t3 \tsession\n> lc_messages \tEnglish_United States.1252 \tconfiguration file\n> lc_monetary \tEnglish_United States.1252 \tconfiguration file\n> lc_numeric \tEnglish_United States.1252 \tconfiguration file\n> lc_time \tEnglish_United States.1252 \tconfiguration file\n> listen_addresses \t* \tconfiguration file\n> log_destination \tstderr \tconfiguration file\n> logging_collector \ton \tconfiguration file\n> max_connections \t100 \tconfiguration file\n> max_stack_depth \t2MB \tenvironment variable\n> port \t5432 \tconfiguration file\n> shared_buffers \t128MB \tconfiguration file\n> TimeZone \tUTC \tclient\n>\n> Thanks in advance, I'm sure I've done something silly or misunderstood \n> something obvious but I can't work out what it is for the life of me.\n>\n> Dave\n>\n\n\n\n\n\n\nIt seems like no-one has any ideas on this - does anyone know\n anywhere else I can try to look/ask to find out more?\nIs it possible that this is a bug?\nThanks\nDave\n\nOn 16/01/2019 11:31, David Conlin\n wrote:\n\n\n\nHi folks -\n\nI'm having trouble understanding what some of the stats mean in\n the execution plan output when parallel workers are used. I've\n tried to read up about it, but I haven't been able to find\n anything that explains what I'm seeing. Apologies in advance if\n there's documentation I've been too stupid to find.\nI've run the following query. 
The \"towns\" table is a massive\n table that I created in order to get some big numbers on a\n parallel query - don't worry, this isn't a real query I want to\n make faster, just a silly example I'd like to understand.\n\nEXPLAIN (ANALYZE, FORMAT JSON, BUFFERS, VERBOSE)\nSELECT\n name, code, article\nFROM\n towns\nORDER BY\n name ASC,\n code DESC;\nThe output looks like this:\n[\n {\n \"Plan\": {\n \"Node Type\": \"Gather Merge\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 1013948.54,\n \"Total Cost\": 1986244.55,\n \"Plan Rows\": 8333384,\n \"Plan Width\": 77,\n \"Actual Startup Time\": 42978.838,\n \"Actual Total Time\": 60628.982,\n \"Actual Rows\": 10000010,\n \"Actual Loops\": 1,\n \"Output\": [\"name\", \"code\", \"article\"],\n \"Workers Planned\": 2,\n \"Workers Launched\": 2,\n \"Shared Hit Blocks\": 29,\n \"Shared Read Blocks\": 47641,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 91342,\n \"Temp Written Blocks\": 91479,\n \"Plans\": [\n {\n \"Node Type\": \"Sort\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 1012948.52,\n \"Total Cost\": 1023365.25,\n \"Plan Rows\": 4166692,\n \"Plan Width\": 77,\n \"Actual Startup Time\": 42765.496,\n \"Actual Total Time\": 48526.168,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Output\": [\"name\", \"code\", \"article\"],\n \"Sort Key\": [\"towns.name\", \"towns.code DESC\"],\n \"Sort Method\": \"external merge\",\n \"Sort Space Used\": 283856,\n \"Sort Space Type\": \"Disk\",\n \"Shared Hit Blocks\": 170,\n \"Shared Read Blocks\": 142762,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 273289,\n \"Temp Written Blocks\": 273700,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Startup Time\": 42588.662,\n \"Actual Total Time\": 48456.662,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 72,\n \"Shared Read Blocks\": 46794,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 89067,\n \"Temp Written Blocks\": 89202\n },\n {\n \"Worker Number\": 1,\n \"Actual Startup Time\": 42946.705,\n \"Actual Total Time\": 48799.414,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 69,\n \"Shared Read Blocks\": 48327,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 92880,\n \"Temp Written Blocks\": 93019\n }\n ],\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": true,\n \"Relation Name\": \"towns\",\n \"Schema\": \"public\",\n \"Alias\": \"towns\",\n \"Startup Cost\": 0.00,\n \"Total Cost\": 184524.92,\n \"Plan Rows\": 4166692,\n \"Plan Width\": 77,\n \"Actual Startup Time\": 0.322,\n \"Actual Total Time\": 8305.886,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Output\": [\"name\", \"code\", \"article\"],\n \"Shared Hit Blocks\": 96,\n \"Shared Read Blocks\": 142762,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n 
\"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Startup Time\": 0.105,\n \"Actual Total Time\": 8394.629,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 35,\n \"Shared Read Blocks\": 46794,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0\n },\n {\n \"Worker Number\": 1,\n \"Actual Startup Time\": 0.113,\n \"Actual Total Time\": 8139.382,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 32,\n \"Shared Read Blocks\": 48327,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0\n }\n ]\n }\n ]\n }\n ]\n },\n \"Planning Time\": 22.898,\n \"Triggers\": [\n ],\n \"Execution Time\": 61133.161\n }\n]\nOr a more slimmed-down version, with just the confusing fields:\n\n[\n {\n \"Plan\": {\n \"Node Type\": \"Gather Merge\",\n \"Parallel Aware\": false,\n \"Actual Total Time\": 60628.982,\n \"Actual Rows\": 10000010,\n \"Actual Loops\": 1,\n \"Workers Planned\": 2,\n \"Workers Launched\": 2,\n \"Plans\": [\n {\n \"Node Type\": \"Sort\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Actual Total Time\": 48526.168,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Total Time\": 48456.662,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1\n },\n {\n \"Worker Number\": 1,\n \"Actual Total Time\": 48799.414,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1\n }\n ],\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parallel Aware\": true,\n \"Actual Total Time\": 8305.886,\n \"Actual Rows\": 3333337,\n \"Actual Loops\": 3,\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Total Time\": 8394.629,\n \"Actual Rows\": 3277980,\n \"Actual Loops\": 1\n },\n {\n \"Worker Number\": 1,\n \"Actual Total Time\": 8139.382,\n \"Actual Rows\": 3385130,\n \"Actual Loops\": 1\n }\n ]\n }\n ]\n }\n ]\n },\n \"Execution Time\": 61133.161\n }\n]\nThe things I'm struggling to understand are:\n\nHow the time values combine with parallelism. For example,\n each execution of the sort node takes an average of 48.5s,\n over three loops. This makes a total running time of 145.5s.\n Even if this was perfectly distributed between the two\n workers, I would expect this to take 72.75s, which is more\n than the total execution time, so it can't take this long.\n\n How the row numbers combine with those in the \"Workers\"\n subkey. For example, in the Sort node, worker #0 produces\n 3,277,980 rows, while worker #1 produces 3,385,130 rows. The\n Sort node as a whole produces 3,333,337 rows per loop, for a\n total of 10,000,010 (the value in the gather merge node). I\n would have expected the number of rows produced by the two\n workers to sum to the number produced by the Sort node as a\n whole, either per loop or in total.\nHow the \"Actual Loops\" values combine with those in the\n \"Workers\" subkey. For example, the \"Sort\" node has 3 loops,\n but each of the workers inside it have 1 loop. 
I would have\n expected either: \n\n\neach of the workers to have done 3 loops (since the sort\n is executed 3 times), or \n\nthe number of loops in the two workers to sum to three (if\n the three executions of the sort are distributed across the\n two workers)\n\n\nOther info about my setup:\n\nPostgres version: \"PostgreSQL 10.4, compiled by Visual C++\n build 1800, 64-bit\"\nI installed Postgres using the EnterpriseDB one-click\n installer\nOS: Windows 10, v1803\nI'm using Jetbrains Datagrip to connect to Postgres\nNo errors are logged.\n\nAltered config settings: \n\n \n\n\n \n\napplication_name\nPostgreSQL JDBC\n Driver\nsession\n\n\nclient_encoding\nUTF8\nclient\n\n\nDateStyle\nISO, MDY\nclient\n\n\ndefault_text_search_config\npg_catalog.english\nconfiguration file\n\n\ndynamic_shared_memory_type\nwindows\nconfiguration file\n\n\nextra_float_digits\n3\nsession\n\n\nlc_messages\nEnglish_United States.1252\nconfiguration file\n\n\nlc_monetary\nEnglish_United States.1252\nconfiguration file\n\n\nlc_numeric\nEnglish_United States.1252\nconfiguration file\n\n\nlc_time\nEnglish_United States.1252\nconfiguration file\n\n\nlisten_addresses\n*\nconfiguration file\n\n\nlog_destination\nstderr\nconfiguration file\n\n\nlogging_collector\non\nconfiguration file\n\n\nmax_connections\n100\nconfiguration file\n\n\nmax_stack_depth\n2MB\nenvironment variable\n\n\nport\n5432\nconfiguration file\n\n\nshared_buffers\n128MB\nconfiguration file\n\n\nTimeZone\nUTC\nclient\n\n\n\n\n\nThanks in advance, I'm sure I've done something silly or\n misunderstood something obvious but I can't work out what it is\n for the life of me.\nDave",
"msg_date": "Thu, 24 Jan 2019 08:18:03 +0000",
"msg_from": "David Conlin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel stats in execution plans"
},
{
"msg_contents": "On Thu, 17 Jan 2019 at 00:31, David Conlin <[email protected]> wrote:\n> How the time values combine with parallelism. For example, each execution of the sort node takes an average of 48.5s, over three loops. This makes a total running time of 145.5s. Even if this was perfectly distributed between the two workers, I would expect this to take 72.75s, which is more than the total execution time, so it can't take this long.\n> How the row numbers combine with those in the \"Workers\" subkey. For example, in the Sort node, worker #0 produces 3,277,980 rows, while worker #1 produces 3,385,130 rows. The Sort node as a whole produces 3,333,337 rows per loop, for a total of 10,000,010 (the value in the gather merge node). I would have expected the number of rows produced by the two workers to sum to the number produced by the Sort node as a whole, either per loop or in total.\n> How the \"Actual Loops\" values combine with those in the \"Workers\" subkey. For example, the \"Sort\" node has 3 loops, but each of the workers inside it have 1 loop. I would have expected either:\n>\n> each of the workers to have done 3 loops (since the sort is executed 3 times), or\n> the number of loops in the two workers to sum to three (if the three executions of the sort are distributed across the two workers)\n\nIt's important to know that all of the actual row counts and actual\ntime are divided by the number of loops, which in this case is 3, one\nper process working on that part of the plan. There are two workers,\nbut also the main process helps out too.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 25 Jan 2019 00:55:42 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel stats in execution plans"
}
] |
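A quick way to sanity-check David Rowley's explanation is to redo the arithmetic from the plan quoted above: the per-loop figures on the Sort node are averages over the three cooperating processes (leader plus two workers), and the leader's share is not listed separately, so it has to be inferred. A worked check in SQL, using only numbers taken from that plan:

    -- The per-loop "Actual Rows" is an average, so the node's total output is:
    SELECT 3333337 * 3;                     -- 10000011, i.e. ~10,000,010 after rounding
    -- The workers report absolute counts, so the leader produced the remainder:
    SELECT 10000010 - 3277980 - 3385130;    -- 3336900 rows from the leader process
    -- Times are likewise per-process averages, and the three processes run
    -- concurrently, so 3 x 48.5 s of sort work can still fit inside the
    -- 61 s of wall-clock execution time.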
[
{
"msg_contents": "Hey,\nI have a table with 3 columns and one of those columns is bytea type\nA(int,int,bytea).\nEvery row that I insert is pretty big and thats why postgresql decided to\nsave that column in a toasted table(pg_toasted_Aid). I had a lot of bloat\nissues with that table so I set the vacuum_threshold of the original\ntable(A) into 0.05. Usually the A table has about 1000+ rows but the\ntoasted table has more then 25M . Now, I realized from the autovacuum\nlogging, that when autovacuum runs on the original table (A) it doesn't\nnecessary run on the toasted table and this is very weird.\n\nI tried to set the same threshold for the toasted table but got an error\nthat it is a catalog table and therefore permission is denied.\n2019-01-17 12:04:15 EST db116109 ERROR: permission denied:\n\"pg_toast_13388392\" is a system catalog\n2019-01-17 12:04:15 EST db116109 STATEMENT: alter table\npg_toast.pg_toast_13388392 set (autovacuum_vacuum_scale_factor=0.05);\n\n\nAn example for the autovacuum run :\n2019-01-17 00:00:51 EST 15652 LOG: automatic vacuum of table\n\"db1.public.A\": index scans: 1\n pages: 0 removed, 117 remain\n tuples: 142 removed, 1466 remain\n buffer usage: 162 hits, 34 misses, 29 dirtied\n avg read rate: 1.356 MiB/s, avg write rate: 1.157 MiB/s\n--\n2019-01-17 00:07:51 EST 25666 LOG: automatic vacuum of table\n\"db1.public.A\": index scans: 1\n pages: 0 removed, 117 remain\n tuples: 144 removed, 1604 remain\n buffer usage: 157 hits, 41 misses, 27 dirtied\n avg read rate: 1.651 MiB/s, avg write rate: 1.087 MiB/s\n--\n*2019-01-17 00:12:39 EST 3902 LOG: automatic vacuum of table\n\"db1.pg_toast.pg_toast_13388392\": index scans: 17*\n* pages: 459 removed, 25973888 remain*\n* tuples: 45130560 removed, 54081616 remain*\n* buffer usage: 30060044 hits, 43418591 misses, 37034834 dirtied*\n* avg read rate: 2.809 MiB/s, avg write rate: 2.396 MiB/s*\n--\n2019-01-17 00:13:51 EST 2684 LOG: automatic vacuum of table\n\"db1.public.A\": index scans: 1\n pages: 0 removed, 117 remain\n tuples: 122 removed, 1470 remain\n buffer usage: 152 hits, 41 misses, 30 dirtied\n avg read rate: 2.981 MiB/s, avg write rate: 2.181 MiB/s\n--\n2019-01-17 00:19:51 EST 10935 LOG: automatic vacuum of table\n\"db1.public.A\": index scans: 1\n pages: 0 removed, 117 remain\n tuples: 120 removed, 1471 remain\n buffer usage: 145 hits, 41 misses, 28 dirtied\n avg read rate: 3.637 MiB/s, avg write rate: 2.484 MiB/s\n--\n2019-01-17 00:42:51 EST 24385 LOG: automatic vacuum of table\n\"db1.public.A\": index scans: 1\n pages: 0 removed, 117 remain\n tuples: 130 removed, 1402 remain\n buffer usage: 175 hits, 76 misses, 34 dirtied\n\nAny idea why the autovacuum doesnt vacuum both tables ?\n\nHey,I have a table with 3 columns and one of those columns is bytea type A(int,int,bytea).Every row that I insert is pretty big and thats why postgresql decided to save that column in a toasted table(pg_toasted_Aid). I had a lot of bloat issues with that table so I set the vacuum_threshold of the original table(A) into 0.05. Usually the A table has about 1000+ rows but the toasted table has more then 25M . Now, I realized from the autovacuum logging, that when autovacuum runs on the original table (A) it doesn't necessary run on the toasted table and this is very weird. 
I tried to set the same threshold for the toasted table but got an error that it is a catalog table and therefore permission is denied.2019-01-17 12:04:15 EST db116109 ERROR: permission denied: \"pg_toast_13388392\" is a system catalog2019-01-17 12:04:15 EST db116109 STATEMENT: alter table pg_toast.pg_toast_13388392 set (autovacuum_vacuum_scale_factor=0.05);An example for the autovacuum run : 2019-01-17 00:00:51 EST 15652 LOG: automatic vacuum of table \"db1.public.A\": index scans: 1 pages: 0 removed, 117 remain tuples: 142 removed, 1466 remain buffer usage: 162 hits, 34 misses, 29 dirtied avg read rate: 1.356 MiB/s, avg write rate: 1.157 MiB/s--2019-01-17 00:07:51 EST 25666 LOG: automatic vacuum of table \"db1.public.A\": index scans: 1 pages: 0 removed, 117 remain tuples: 144 removed, 1604 remain buffer usage: 157 hits, 41 misses, 27 dirtied avg read rate: 1.651 MiB/s, avg write rate: 1.087 MiB/s--2019-01-17 00:12:39 EST 3902 LOG: automatic vacuum of table \"db1.pg_toast.pg_toast_13388392\": index scans: 17 pages: 459 removed, 25973888 remain tuples: 45130560 removed, 54081616 remain buffer usage: 30060044 hits, 43418591 misses, 37034834 dirtied avg read rate: 2.809 MiB/s, avg write rate: 2.396 MiB/s--2019-01-17 00:13:51 EST 2684 LOG: automatic vacuum of table \"db1.public.A\": index scans: 1 pages: 0 removed, 117 remain tuples: 122 removed, 1470 remain buffer usage: 152 hits, 41 misses, 30 dirtied avg read rate: 2.981 MiB/s, avg write rate: 2.181 MiB/s--2019-01-17 00:19:51 EST 10935 LOG: automatic vacuum of table \"db1.public.A\": index scans: 1 pages: 0 removed, 117 remain tuples: 120 removed, 1471 remain buffer usage: 145 hits, 41 misses, 28 dirtied avg read rate: 3.637 MiB/s, avg write rate: 2.484 MiB/s--2019-01-17 00:42:51 EST 24385 LOG: automatic vacuum of table \"db1.public.A\": index scans: 1 pages: 0 removed, 117 remain tuples: 130 removed, 1402 remain buffer usage: 175 hits, 76 misses, 34 dirtiedAny idea why the autovacuum doesnt vacuum both tables ?",
"msg_date": "Thu, 17 Jan 2019 19:28:52 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum doesnt run on the pg_toast_id table"
},
{
"msg_contents": "On Thu, Jan 17, 2019 at 07:28:52PM +0200, Mariel Cherkassky wrote:\n...\n> Now, I realized from the autovacuum\n> logging, that when autovacuum runs on the original table (A) it doesn't\n> necessary run on the toasted table and this is very weird.\n...\n> Any idea why the autovacuum doesnt vacuum both tables ?\n\nIt *does* vacuum both, just not *necessarily*, as you saw.\n\nThe toast is a separate table, so it's tracked separately.\n\nNote that:\n|If a table parameter value is set and the\n|equivalent <literal>toast.</literal> parameter is not, the TOAST table\n|will use the table's parameter value.\n\nYou could look in pg_stat_all_tables, to see how frequently the toast is being\nautovacuumed relative to its table.\n\nJustin\n\n",
"msg_date": "Thu, 17 Jan 2019 11:46:51 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum doesnt run on the pg_toast_id table"
},
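Following up on the suggestion to watch pg_stat_all_tables: a small sketch of how to compare autovacuum activity on a table and on its TOAST table (the table name "a" is a placeholder for the real table):

    -- Find the TOAST table backing table "a":
    SELECT reltoastrelid::regclass AS toast_table
    FROM pg_class
    WHERE oid = 'public.a'::regclass;

    -- Compare autovacuum activity for the main table and its TOAST table:
    SELECT relname, n_dead_tup, last_autovacuum, autovacuum_count
    FROM pg_stat_all_tables
    WHERE relid = 'public.a'::regclass
       OR relid = (SELECT reltoastrelid FROM pg_class
                   WHERE oid = 'public.a'::regclass);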
{
"msg_contents": "On 2019-Jan-17, Mariel Cherkassky wrote:\n\n> I tried to set the same threshold for the toasted table but got an error\n> that it is a catalog table and therefore permission is denied.\n> 2019-01-17 12:04:15 EST db116109 ERROR: permission denied:\n> \"pg_toast_13388392\" is a system catalog\n> 2019-01-17 12:04:15 EST db116109 STATEMENT: alter table\n> pg_toast.pg_toast_13388392 set (autovacuum_vacuum_scale_factor=0.05);\n\nThe right way to do this is\n alter table main_table set (toast.autovacuum_vacuum_scale_factor = 0.05);\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 17 Jan 2019 14:52:41 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum doesnt run on the pg_toast_id table"
},
{
"msg_contents": "I did it for the original table. But I see in the logs that the autovacuun\non the toasted table isn't synced with the autovacuun of the original\ntable. Therefore I thought that it worth to set it also for the toasted\ntable. Can you explain why in the logs I see more vacuums of the original\ntable then the toasted table ? Should they vacuumed together ?\n\nOn Jan 17, 2019 7:52 PM, \"Alvaro Herrera\" <[email protected]> wrote:\n\nOn 2019-Jan-17, Mariel Cherkassky wrote:\n\n> I tried to set the same threshold for the toasted table but got an error\n> that it is a catalog table and therefore permission is denied.\n> 2019-01-17 12:04:15 EST db116109 ERROR: permission denied:\n> \"pg_toast_13388392\" is a system catalog\n> 2019-01-17 12:04:15 EST db116109 STATEMENT: alter table\n> pg_toast.pg_toast_13388392 set (autovacuum_vacuum_scale_factor=0.05);\n\nThe right way to do this is\n alter table main_table set (toast.autovacuum_vacuum_scale_factor = 0.05);\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nI did it for the original table. But I see in the logs that the autovacuun on the toasted table isn't synced with the autovacuun of the original table. Therefore I thought that it worth to set it also for the toasted table. Can you explain why in the logs I see more vacuums of the original table then the toasted table ? Should they vacuumed together ?On Jan 17, 2019 7:52 PM, \"Alvaro Herrera\" <[email protected]> wrote:On 2019-Jan-17, Mariel Cherkassky wrote:\n\n> I tried to set the same threshold for the toasted table but got an error\n> that it is a catalog table and therefore permission is denied.\n> 2019-01-17 12:04:15 EST db116109 ERROR: permission denied:\n> \"pg_toast_13388392\" is a system catalog\n> 2019-01-17 12:04:15 EST db116109 STATEMENT: alter table\n> pg_toast.pg_toast_13388392 set (autovacuum_vacuum_scale_factor=0.05);\n\nThe right way to do this is\n alter table main_table set (toast.autovacuum_vacuum_scale_factor = 0.05);\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 17 Jan 2019 19:58:17 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum doesnt run on the pg_toast_id table"
},
{
"msg_contents": "On 2019-Jan-17, Mariel Cherkassky wrote:\n\n> I did it for the original table. But I see in the logs that the autovacuun\n> on the toasted table isn't synced with the autovacuun of the original\n> table. Therefore I thought that it worth to set it also for the toasted\n> table. Can you explain why in the logs I see more vacuums of the original\n> table then the toasted table ? Should they vacuumed together ?\n\nNo, they are processed separately, according to the formula explained in\nthe documentation.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 17 Jan 2019 16:09:01 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum doesnt run on the pg_toast_id table"
},
{
"msg_contents": "But you said that the threshold that is chosen for the toasted table is\nidentical to the originals table threshold right ? Is that a normal\nbehavior that the original table has 1000recrods but the toasted has more\nthan 10m? How can I set a different threshold for the toasted table ? As it\nseems right now the threshold for the original table is set to 0.05 and it\nit to often for the original but for the toasted table it isn't enough\nbecause it has more then 10 m records..\n\nOn Jan 17, 2019 9:09 PM, \"Alvaro Herrera\" <[email protected]> wrote:\n\nOn 2019-Jan-17, Mariel Cherkassky wrote:\n\n> I did it for the original table. But I see in the logs that the autovacuun\n> on the toasted table isn't synced with the autovacuun of the original\n> table. Therefore I thought that it worth to set it also for the toasted\n> table. Can you explain why in the logs I see more vacuums of the original\n> table then the toasted table ? Should they vacuumed together ?\n\nNo, they are processed separately, according to the formula explained in\nthe documentation.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nBut you said that the threshold that is chosen for the toasted table is identical to the originals table threshold right ? Is that a normal behavior that the original table has 1000recrods but the toasted has more than 10m? How can I set a different threshold for the toasted table ? As it seems right now the threshold for the original table is set to 0.05 and it it to often for the original but for the toasted table it isn't enough because it has more then 10 m records..On Jan 17, 2019 9:09 PM, \"Alvaro Herrera\" <[email protected]> wrote:On 2019-Jan-17, Mariel Cherkassky wrote:\n\n> I did it for the original table. But I see in the logs that the autovacuun\n> on the toasted table isn't synced with the autovacuun of the original\n> table. Therefore I thought that it worth to set it also for the toasted\n> table. Can you explain why in the logs I see more vacuums of the original\n> table then the toasted table ? Should they vacuumed together ?\n\nNo, they are processed separately, according to the formula explained in\nthe documentation.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 17 Jan 2019 21:59:54 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum doesnt run on the pg_toast_id table"
},
{
"msg_contents": "On 2019-Jan-17, Mariel Cherkassky wrote:\n\n> But you said that the threshold that is chosen for the toasted table is\n> identical to the originals table threshold right ?\n\nYou can configure them identical, or different. Up to you.\n\n> Is that a normal behavior that the original table has 1000recrods but\n> the toasted has more than 10m?\n\nSure -- each large record in the main table is split into many 2kb\nrecords in the toast table.\n\n> How can I set a different threshold for the toasted table ?\n\nJust choose a different value in the command I showed.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 17 Jan 2019 17:16:59 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum doesnt run on the pg_toast_id table"
},
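Since the TOAST table here holds tens of millions of rows while the main table holds only about a thousand, a scale factor alone can still let a large number of dead TOAST tuples accumulate before autovacuum fires. A sketch of choosing different values for the two relations using the command Alvaro showed; the numbers are purely illustrative and the table name is a placeholder:

    ALTER TABLE public.a SET (
        autovacuum_vacuum_scale_factor       = 0.05,    -- main table
        toast.autovacuum_vacuum_scale_factor = 0.01,    -- TOAST table
        toast.autovacuum_vacuum_threshold    = 100000   -- absolute floor for the TOAST table
    );

    -- Verify that the options were stored on both relations:
    SELECT c.relname, c.reloptions, t.relname AS toast_table, t.reloptions AS toast_options
    FROM pg_class c
    LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE c.oid = 'public.a'::regclass;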
{
"msg_contents": "Got it, I didn't see the toast word in the command. Thanks !\n\nOn Thu, Jan 17, 2019, 10:17 PM Alvaro Herrera <[email protected]\nwrote:\n\n> On 2019-Jan-17, Mariel Cherkassky wrote:\n>\n> > But you said that the threshold that is chosen for the toasted table is\n> > identical to the originals table threshold right ?\n>\n> You can configure them identical, or different. Up to you.\n>\n> > Is that a normal behavior that the original table has 1000recrods but\n> > the toasted has more than 10m?\n>\n> Sure -- each large record in the main table is split into many 2kb\n> records in the toast table.\n>\n> > How can I set a different threshold for the toasted table ?\n>\n> Just choose a different value in the command I showed.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nGot it, I didn't see the toast word in the command. Thanks !On Thu, Jan 17, 2019, 10:17 PM Alvaro Herrera <[email protected] wrote:On 2019-Jan-17, Mariel Cherkassky wrote:\n\n> But you said that the threshold that is chosen for the toasted table is\n> identical to the originals table threshold right ?\n\nYou can configure them identical, or different. Up to you.\n\n> Is that a normal behavior that the original table has 1000recrods but\n> the toasted has more than 10m?\n\nSure -- each large record in the main table is split into many 2kb\nrecords in the toast table.\n\n> How can I set a different threshold for the toasted table ?\n\nJust choose a different value in the command I showed.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 17 Jan 2019 22:18:29 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum doesnt run on the pg_toast_id table"
}
] |
[
{
"msg_contents": "Hello,\n\n we noticed that in the presence of a schema with many partitions the jitting overhead penalizes the total query execution time so much that the planner should have decided not to jit at all. For example without jitting we go a 8.3 s execution time and with jitting enabled 13.8 s.\n\n\n\n Attached you can find the TPC-h schema, a query to trigger it and the plans that we obtained.\n\n\nSetup:\n\n Current master from PSQL git repo, only compiled with llvm\n\n TPC-h schema attached, plus a single index per table, and a scale factor of 10. Tables where analyzed.\n\n The query is variation of query 12 to make the effect more relevant.\n\n Max_workers_per_gather is 8\n\n And we only vary the jit flag, we do not modify the costs.\n\n\n\nIs this behavior expected? Is the cost function for jitting missing some circumstances?\n\n\nCheers\nLuis\n\n\nDr. Luis M. Carril Rodríguez\nSenior Software Engineer\[email protected]<mailto:[email protected]>\n\nSwarm64 AS\nParkveien 41 B | 0258 Oslo | Norway\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\nCEO/Geschäftsführer (Daglig Leder): Dr. Karsten Rönner; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\n\nSwarm64 AS Zweigstelle Hive\nUllsteinstr. 120 | 12109 Berlin | Germany\n\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\n\n[cid:6213a8cc-8698-4af8-a8ac-7e2b6f201592]",
"msg_date": "Fri, 18 Jan 2019 14:12:23 +0000",
"msg_from": "Luis Carril <[email protected]>",
"msg_from_op": true,
"msg_subject": "JIT overhead slowdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-18 14:12:23 +0000, Luis Carril wrote:\n> Is this behavior expected? Is the cost function for jitting missing some circumstances?\n\nThe costing doesn't take the effect of overhead of repeated JITing in\neach worker into account. I could give you a test patch that does, if\nyou want to play around with it?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 18 Jan 2019 08:42:54 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JIT overhead slowdown"
},
{
"msg_contents": "Hi Andres,\n\n yes please it would be much apreciated. Would it not be possible to share the jitted program across the workers?\n\n\nCheers,\nLuis\n\n\nDr. Luis M. Carril Rodríguez\nSenior Software Engineer\[email protected]<mailto:[email protected]>\n\nSwarm64 AS\nParkveien 41 B | 0258 Oslo | Norway\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\nCEO/Geschäftsführer (Daglig Leder): Dr. Karsten Rönner; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\n\nSwarm64 AS Zweigstelle Hive\nUllsteinstr. 120 | 12109 Berlin | Germany\n\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\n\n[cid:01158560-9a6f-4435-8170-02d99a0c9cd0]\n________________________________\nFrom: Andres Freund <[email protected]>\nSent: Friday, January 18, 2019 5:42:54 PM\nTo: Luis Carril\nCc: [email protected]\nSubject: Re: JIT overhead slowdown\n\nHi,\n\nOn 2019-01-18 14:12:23 +0000, Luis Carril wrote:\n> Is this behavior expected? Is the cost function for jitting missing some circumstances?\n\nThe costing doesn't take the effect of overhead of repeated JITing in\neach worker into account. I could give you a test patch that does, if\nyou want to play around with it?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 18 Jan 2019 18:02:43 +0000",
"msg_from": "Luis Carril <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JIT overhead slowdown"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jan 18, 2019 at 02:12:23PM +0000, Luis Carril wrote:\n> we noticed that in the presence of a schema with many partitions the jitting overhead penalizes the total query execution time so much that the planner should have decided not to jit at all. For example without jitting we go a 8.3 s execution time and with jitting enabled 13.8 s.\n...\n> Is this behavior expected? Is the cost function for jitting missing some circumstances?\n\nOn Fri, Jan 18, 2019 at 08:42:54AM -0800, Andres Freund wrote:\n> The costing doesn't take the effect of overhead of repeated JITing in\n> each worker into account. I could give you a test patch that does, if\n> you want to play around with it?\n\nOn Fri, Jan 18, 2019 at 06:02:43PM +0000, Luis Carril wrote:\n> yes please it would be much apreciated.\n\nI'm also interested to try that ; on re-enabling JIT in 11.2, I see that JITed\nqueries seem to be universally slower than non-JIT.\n\nI found that was discussed here:\nhttps://www.postgresql.org/message-id/20180822161241.je6nghzjsktbb57b%40alap3.anarazel.de\nhttps://www.postgresql.org/message-id/20180624203633.uxirvmigzdhcyjsd%40alap3.anarazel.de\n\nMultiplying JIT cost by nworkers seems like an obvious thing to try, but I\nwondered whether it's really correct? Certainly repeated JITing takes N times\nmore CPU time, but doesn't make the query slower...unless the CPU resources are\nstarved and limiting ?\n\nJustin\n\n",
"msg_date": "Thu, 14 Feb 2019 15:03:34 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JIT overhead slowdown"
}
] |
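Until the planner's cost model accounts for the per-worker JIT overhead discussed above, the practical knobs are the ordinary JIT settings; a sketch of the session-level workarounds (the raised thresholds are illustrative):

    -- Disable JIT for a session (ALTER DATABASE/ROLE ... SET jit = off also works):
    SET jit = off;

    -- Or keep JIT but raise the cost thresholds so that only genuinely expensive
    -- plans get compiled (defaults: 100000 / 500000 / 500000):
    SET jit_above_cost          = 1000000;
    SET jit_optimize_above_cost = 5000000;
    SET jit_inline_above_cost   = 5000000;

    -- EXPLAIN (ANALYZE) prints a "JIT:" section with generation, inlining,
    -- optimization and emission timings, which makes the overhead visible.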
[
{
"msg_contents": "Hi,\n\nMemory and hard ware calculation :\n\n\nHow much memory required to achieve performance with the 6GB RAM, 8 Core, Max connection 1100 concurrent connection and O/p memory from procedure 1GB ?\n\n\nIs there any changes required in hardware and work memory expansion ?\n\n\nRegards\nRANGARAJ G\n\n\n\n\n\n\n\n\n\n\nHi,\n \nMemory and hard ware calculation :\n \n \nHow much memory required to achieve performance with the 6GB RAM, 8 Core, Max connection 1100 concurrent connection and O/p memory from procedure\n 1GB ?\n \n \nIs there any changes required in hardware and work memory expansion ? \n \n \nRegards\nRANGARAJ G",
"msg_date": "Mon, 21 Jan 2019 11:24:07 +0000",
"msg_from": "Rangaraj G <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory and hard ware calculation :"
},
{
"msg_contents": "On Mon, Jan 21, 2019 at 5:35 PM Rangaraj G <[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> Memory and hard ware calculation :\n>\n>\n>\n>\n>\n> How much memory required to achieve performance with the 6GB RAM, 8 Core,\n> Max connection 1100 concurrent connection and O/p memory from procedure 1GB\n> ?\n>\n>\n>\nhttps://pgtune.leopard.in.ua/#/\n\n>\n>\n> Is there any changes required in hardware and work memory expansion ?\n>\n>\n>\n>\n>\n> Regards\n>\n> RANGARAJ G\n>\n>\n>\n\nOn Mon, Jan 21, 2019 at 5:35 PM Rangaraj G <[email protected]> wrote:\n\n\nHi,\n \nMemory and hard ware calculation :\n \n \nHow much memory required to achieve performance with the 6GB RAM, 8 Core, Max connection 1100 concurrent connection and O/p memory from procedure\n 1GB ?\n https://pgtune.leopard.in.ua/#/ \n \nIs there any changes required in hardware and work memory expansion ? \n \n \nRegards\nRANGARAJ G",
"msg_date": "Mon, 21 Jan 2019 17:43:20 +0000",
"msg_from": "Cleiton Luiz Domazak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory and hard ware calculation :"
},
{
"msg_contents": "Hi,\n\nAs per '1100 concurrent connections' I think you want to have a close\nlook to your CPU.\n\nIt looks to me a lot of connections, much more than what 8 cores can\nhandle and you will probably end up in a situation where your CPU spends\na lot of time in handling the connections rather than serving them.\n\n\nregards,\n\nfabio pardi\n\n\nOn 1/21/19 12:24 PM, Rangaraj G wrote:\n> Hi,\n> \n> �\n> \n> Memory and hard ware calculation :\n> \n> �\n> \n> �\n> \n> How much memory required to achieve performance with the 6GB RAM, 8\n> Core, Max connection 1100 concurrent connection and O/p memory from\n> procedure 1GB ?\n> \n> �\n> \n> �\n> \n> Is there any changes required in hardware and work memory expansion ? �\n> \n> �\n> \n> �\n> \n> Regards\n> \n> RANGARAJ G\n> \n> �\n> \n\n",
"msg_date": "Tue, 22 Jan 2019 09:11:15 +0100",
"msg_from": "Fabio Pardi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory and hard ware calculation :"
},
{
"msg_contents": "Hi,\r\n\r\nMy question\r\nOur connection is 1100 parallel connection and 1 GB I/p data and 1 GB o/p data in each connection, currently we are using 64 GB RAM and 8 core.\r\n\r\nBut we need all the reports below 3 seconds.\r\n\r\nSo kindly suggest expanding my hard ware and work memory.\r\n\r\nIs there any possibility to get your mobile number ?\r\n\r\nRegards,\r\nRANGARAJ G\r\n\r\n\r\nFrom: Cleiton Luiz Domazak <[email protected]>\r\nSent: Monday, January 21, 2019 11:13 PM\r\nTo: Rangaraj G <[email protected]>\r\nCc: [email protected]; [email protected]; [email protected]\r\nSubject: Re: Memory and hard ware calculation :\r\n\r\n\r\n\r\nOn Mon, Jan 21, 2019 at 5:35 PM Rangaraj G <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\n\r\nMemory and hard ware calculation :\r\n\r\n\r\nHow much memory required to achieve performance with the 6GB RAM, 8 Core, Max connection 1100 concurrent connection and O/p memory from procedure 1GB ?\r\n\r\nhttps://pgtune.leopard.in.ua/#/\r\n\r\nIs there any changes required in hardware and work memory expansion ?\r\n\r\n\r\nRegards\r\nRANGARAJ G\r\n\r\n\n\n\n\n\n\n\n\n\nHi,\n \nMy question \nOur connection is 1100 parallel connection and 1 GB I/p data and 1 GB o/p data in each connection, currently we are using 64 GB RAM and 8 core.\n \nBut we need all the reports below 3 seconds.\n \nSo kindly suggest expanding my hard ware and work memory.\n \nIs there any possibility to get your mobile number ?\n \nRegards,\nRANGARAJ G\n \n\n \nFrom: Cleiton Luiz Domazak <[email protected]>\r\n\nSent: Monday, January 21, 2019 11:13 PM\nTo: Rangaraj G <[email protected]>\nCc: [email protected]; [email protected]; [email protected]\nSubject: Re: Memory and hard ware calculation :\n \n\n\n\n\n \n\n\n \n\n\nOn Mon, Jan 21, 2019 at 5:35 PM Rangaraj G <[email protected]> wrote:\n\n\n\n\nHi,\n \nMemory and hard ware calculation :\n \n \nHow much memory required to achieve performance with the 6GB RAM, 8 Core, Max connection\r\n 1100 concurrent connection and O/p memory from procedure 1GB ?\n \n\n\n\n\nhttps://pgtune.leopard.in.ua/#/ \n\n\n\n\n \nIs there any changes required in hardware and work memory expansion ? \n \n \nRegards\nRANGARAJ G",
"msg_date": "Tue, 22 Jan 2019 12:54:06 +0000",
"msg_from": "Rangaraj G <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Memory and hard ware calculation :"
},
{
"msg_contents": "Have you analyzed the queries to ensure that they are efficient?\nHave you examined the tables to ensure that they have indexes to support the \njoins?\nHave you minimized the amount of data selected?\n\nOn 1/22/19 6:54 AM, Rangaraj G wrote:\n>\n> Hi,\n>\n> My question\n>\n> Our connection is 1100 parallel connection and 1 GB I/p data and 1 GB o/p \n> data in each connection, currently we are using 64 GB RAM and 8 core.\n>\n> But we need all the reports below 3 seconds.\n>\n> So kindly suggest expanding my hard ware and work memory.\n>\n> Is there any possibility to get your mobile number ?\n>\n> Regards,\n>\n> RANGARAJ G\n>\n> *From:* Cleiton Luiz Domazak <[email protected]>\n> *Sent:* Monday, January 21, 2019 11:13 PM\n> *To:* Rangaraj G <[email protected]>\n> *Cc:* [email protected]; [email protected]; \n> [email protected]\n> *Subject:* Re: Memory and hard ware calculation :\n>\n> On Mon, Jan 21, 2019 at 5:35 PM Rangaraj G <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> Memory and hard ware calculation :\n>\n> How much memory required to achieve performance with the 6GB RAM, 8\n> Core, Max connection 1100 concurrent connection and O/p memory from\n> procedure 1GB ?\n>\n> https://pgtune.leopard.in.ua/#/\n>\n> Is there any changes required in hardware and work memory expansion ?\n>\n> Regards\n>\n> RANGARAJ G\n>\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n\n Have you analyzed the queries to ensure that they are efficient?\n Have you examined the tables to ensure that they have indexes to\n support the joins?\n Have you minimized the amount of data selected?\n\nOn 1/22/19 6:54 AM, Rangaraj G wrote:\n\n\n\n\n\n\nHi,\n \nMy question \nOur connection is 1100 parallel connection\n and 1 GB I/p data and 1 GB o/p data in each connection,\n currently we are using 64 GB RAM and 8 core.\n \nBut we need all the reports below 3\n seconds.\n \nSo kindly suggest expanding my hard ware\n and work memory.\n \nIs there any possibility to get your mobile\n number ?\n \nRegards,\nRANGARAJ G\n \n\n \nFrom: Cleiton Luiz Domazak\n <[email protected]>\n\nSent: Monday, January 21, 2019 11:13 PM\nTo: Rangaraj G <[email protected]>\nCc: [email protected];\n [email protected]; [email protected]\nSubject: Re: Memory and hard ware calculation :\n \n\n\n\n\n \n\n\n \n\n\nOn Mon, Jan 21, 2019 at 5:35 PM\n Rangaraj G <[email protected]>\n wrote:\n\n\n\n\nHi,\n \nMemory\n and hard ware calculation :\n \n \nHow\n much memory required to achieve performance with\n the 6GB RAM, 8 Core, Max connection 1100\n concurrent connection and O/p memory from\n procedure 1GB ?\n \n\n\n\n\nhttps://pgtune.leopard.in.ua/#/ \n\n\n\n\n \nIs\n there any changes required in hardware and work\n memory expansion ? \n \n \nRegards\nRANGARAJ\n G\n \n\n\n\n\n\n\n\n\n\n-- \n Angular momentum makes the world go 'round.",
"msg_date": "Tue, 22 Jan 2019 14:05:25 -0600",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory and hard ware calculation :"
}
] |
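A rough way to see why 1100 concurrent backends on 8 cores is a problem, as noted in the thread above, is to put numbers on the per-connection memory. The figures below are only illustrative (work_mem defaults to 4MB, and a single complex query can use several multiples of it), and with numbers like these a connection pooler such as pgbouncer is usually a better first step than simply adding RAM:

    -- Worst-case work_mem demand if every connection runs one sort/hash at once:
    SELECT pg_size_pretty(1100 * pg_size_bytes(current_setting('work_mem')));

    -- Settings worth reviewing for this kind of sizing exercise:
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('max_connections', 'work_mem', 'shared_buffers', 'effective_cache_size');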
[
{
"msg_contents": "Hey everyone,\n\nI have a PostgreSQL 10 database that contains two tables which both have\ntwo levels of partitioning (by list and using a single value). Meaning that\na partitioned table gets repartitioned again.\n\nThe data in my use case is stored on 5K to 10K partitioned tables (children\nand grand-children of the two tables mentioned above) depending on usage\nlevels.\n\nThree indexes are set on the grand-child partition. The partitioning\ncolumns are not covered by them.\n(I don't believe that it is needed to index partition columns no?)\n\nWith this setup, I experience queries that have very slow planning times\nbut fast execution times.\nEven for simple queries where only a couple partitions are searched on and\nthe partition values are hard-coded.\n\nResearching the issue, I thought that the linear search in use by\nPostgreSQL 10 to find the partition table metadata was the cause.\n\ncf: https://blog.2ndquadrant.com/partition-elimination-postgresql-11/\n\nSo I decided to try ou PostgreSQL 11 which included the two aforementioned\nfixes:\n\n-\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=499be013de65242235ebdde06adb08db887f0ea5\n-\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9fdb675fc5d2de825414e05939727de8b120ae81\n\nHelas, it seems that the version update didn't change anything.\nI ran an `ANALYZE` before doing my tests so I believe that the statistics\nare calculated and fresh.\n\nNow I know that PostgreSQL doesn't like having lots of partitions but I\nstill would like to understand why the query planner is so slow in\nPostgreSQL 10 and PostgreSQL 11.\n(I was also wondering what \"a lot\" of partitions is in PostgreSQL? When I\nlook at use cases of extensions like TimescaleDB, I would expect that 5K to\n10K partitions wouldn't be a whole lot.)\n\nAn example of a simple query that I run on both PostgreSQL version would be:\n\nEXPLAIN ANALYZE\n> SELECT\n> table_a.a,\n> table_b.a\n> FROM\n> (\n> SELECT\n> a,\n> b\n> FROM\n> table_a\n> WHERE\n> partition_level_1_column = 'foo'\n> AND\n> partition_level_2_column = 'bar'\n> )\n> AS table_a\n> INNER JOIN\n> (\n> SELECT\n> a,\n> b\n> FROM\n> table_b\n> WHERE\n> partition_level_1_column = 'baz'\n> AND\n> partition_level_2_column = 'bat'\n> )\n> AS table_b\n> ON table_b.b = table_a.b\n> LIMIT\n> 10;\n\n\nRunning this query on my database with 5K partitions (split roughly 2/3rds\nof the partitions for table_b and 1/3rd of the partitions for table_a) will\nreturn:\n\n- Planning Time: 7155.647 ms\n- Execution Time: 2.827 ms\n\nThank you in advance for your help!\n\nMickael\n\nHey everyone,I have a PostgreSQL 10 database that contains two tables which both have two levels of partitioning (by list and using a single value). Meaning that a partitioned table gets repartitioned again.The data in my use case is stored on 5K to 10K partitioned tables (children and grand-children of the two tables mentioned above) depending on usage levels.Three indexes are set on the grand-child partition. 
The partitioning columns are not covered by them.(I don't believe that it is needed to index partition columns no?)With this setup, I experience queries that have very slow planning times but fast execution times.Even for simple queries where only a couple partitions are searched on and the partition values are hard-coded.Researching the issue, I thought that the linear search in use by PostgreSQL 10 to find the partition table metadata was the cause.cf: https://blog.2ndquadrant.com/partition-elimination-postgresql-11/So I decided to try ou PostgreSQL 11 which included the two aforementioned fixes:- https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=499be013de65242235ebdde06adb08db887f0ea5- https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9fdb675fc5d2de825414e05939727de8b120ae81Helas, it seems that the version update didn't change anything.I ran an `ANALYZE` before doing my tests so I believe that the statistics are calculated and fresh.Now I know that PostgreSQL doesn't like having lots of partitions but I still would like to understand why the query planner is so slow in PostgreSQL 10 and PostgreSQL 11.(I was also wondering what \"a lot\" of partitions is in PostgreSQL? When I look at use cases of extensions like TimescaleDB, I would expect that 5K to 10K partitions wouldn't be a whole lot.) An example of a simple query that I run on both PostgreSQL version would be:EXPLAIN ANALYZESELECT table_a.a, table_b.aFROM ( SELECT a, b FROM table_a WHERE partition_level_1_column = 'foo' AND partition_level_2_column = 'bar' ) AS table_aINNER JOIN ( SELECT a, b FROM table_b WHERE partition_level_1_column = 'baz' AND partition_level_2_column = 'bat' ) AS table_b ON table_b.b = table_a.bLIMIT 10;Running this query on my database with 5K partitions (split roughly 2/3rds of the partitions for table_b and 1/3rd of the partitions for table_a) will return:- Planning Time: 7155.647 ms- Execution Time: 2.827 msThank you in advance for your help!Mickael",
"msg_date": "Tue, 22 Jan 2019 14:44:29 +0100",
"msg_from": "Mickael van der Beek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very long query planning times for database with lots of partitions"
},
{
"msg_contents": "On Tue, Jan 22, 2019 at 02:44:29PM +0100, Mickael van der Beek wrote:\n> Hey everyone,\n> \n> I have a PostgreSQL 10 database that contains two tables which both have\n> two levels of partitioning (by list and using a single value). Meaning that\n> a partitioned table gets repartitioned again.\n> \n> The data in my use case is stored on 5K to 10K partitioned tables (children\n> and grand-children of the two tables mentioned above) depending on usage\n> levels.\n> \n> Three indexes are set on the grand-child partition. The partitioning\n> columns are not covered by them.\n> (I don't believe that it is needed to index partition columns no?)\n> \n> With this setup, I experience queries that have very slow planning times\n> but fast execution times.\n> Even for simple queries where only a couple partitions are searched on and\n> the partition values are hard-coded.\n> \n> Researching the issue, I thought that the linear search in use by\n> PostgreSQL 10 to find the partition table metadata was the cause.\n> \n> cf: https://blog.2ndquadrant.com/partition-elimination-postgresql-11/\n> \n> So I decided to try ou PostgreSQL 11 which included the two aforementioned\n> fixes:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=499be013de65242235ebdde06adb08db887f0ea5\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9fdb675fc5d2de825414e05939727de8b120ae81\n\nThose reduce the CPU time needed, but that's not the most significant issue.\n\nFor postgres up through 11, including for relkind=p, planning requires 1)\nstat()ing every 1GB file for every partition, even those partitions which are\neventually excluded by constraints or partition bounds ; AND, 2) open()ing\nevery index on every partition, even if it's excluded later.\n\nPostgres 12 is expected to resolve this and allow \"many\" (perhaps 10k) of\npartitions: https://commitfest.postgresql.org/20/1778/\n\nI think postgres through 11 would consider 1000 partitions to be \"too many\".\n\nYou *might* be able to mitigate the high cost of stat()ing tables by ensuring\nthat the table metadata stays in OS cache, by running something like:\n find /var/lib/pgsql /tablespace -ls\n\nYou *might* be able to mitigate the high cost of open()ing the indices by\nkeeping their first page in cache (preferably postgres buffer cache)..either by\nrunning a cronjob to run explain, or perhaps something like pg_prewarm on the\nindices. (I did something like this for our largest customers to improve\nperformance as a stopgap).\n\nJustin\n\n",
"msg_date": "Tue, 22 Jan 2019 08:02:05 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long query planning times for database with lots of\n partitions"
},
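A sketch of the index-prewarming stopgap described above, walking pg_inherits recursively so that grandchild partitions are reached; the parent name "table_a" is a placeholder, and pg_prewarm ships as a contrib extension:

    CREATE EXTENSION IF NOT EXISTS pg_prewarm;

    -- Pull the first page of every physical index under table_a into shared
    -- buffers, so that their first pages are already cached when the planner
    -- opens them:
    WITH RECURSIVE parts AS (
        SELECT 'public.table_a'::regclass::oid AS relid
        UNION ALL
        SELECT i.inhrelid
        FROM pg_inherits i
        JOIN parts p ON i.inhparent = p.relid
    )
    SELECT idx.indexrelid::regclass AS index_name,
           pg_prewarm(idx.indexrelid::regclass, 'buffer', 'main', 0, 0) AS blocks_loaded
    FROM parts
    JOIN pg_index idx ON idx.indrelid = parts.relid
    JOIN pg_class ic ON ic.oid = idx.indexrelid
    WHERE ic.relkind = 'i';   -- skip partitioned indexes, which have no storage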
{
"msg_contents": "Do you have constraint_exclusion set correctly (i.e. ‘on’ or ‘partition’)?\r\nIf so, does the EXPLAIN output mention all of your parent partitions, or are some being successfully pruned?\r\nPlanning times can be sped up significantly if the planner can exclude parent partitions, without ever having to examine the constraints of the child (and grandchild) partitions. If this is not the case, take another look at your query and try to figure out why the planner might believe a parent partition cannot be outright disregarded from the query – does the query contain a filter on the parent partitions’ partition key, for example?\r\n\r\nI believe Timescaledb has its own query planner optimisations for discarding partitions early at planning time.\r\n\r\nGood luck,\r\nSteve.\r\n\r\n\r\n\r\nThis email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. \r\nThe registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. \r\nSee - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. \r\nIf you cannot access this link, please notify us by reply message and we will send the contents to you.\r\n\r\nGAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. \r\nFull details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. \r\nPlease familiarise yourself with this policy and check it from time to time for updates as it supplements this notice.\r\n\n\n\n\n\n\n\n\n\n\n\n\nDo you have constraint_exclusion set correctly (i.e. ‘on’ or ‘partition’)?\nIf so, does the EXPLAIN output mention all of your parent partitions, or are some being successfully pruned?\nPlanning times can be sped up significantly if the planner can exclude parent partitions, without ever having to examine the constraints of the child (and grandchild)\r\n partitions. If this is not the case, take another look at your query and try to figure out why the planner might believe a parent partition cannot be outright disregarded from the query – does the query contain a filter on the parent partitions’ partition\r\n key, for example?\n \nI believe Timescaledb has its own query planner optimisations for discarding partitions early at planning time.\n \nGood luck,\nSteve.\n\n\n\n\n\n\n\n\n\n\n This email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. The registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. If you cannot access this link, please notify us by reply message and we will send the contents to you.GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. 
Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice",
"msg_date": "Tue, 22 Jan 2019 14:07:38 +0000",
"msg_from": "Steven Winfield <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Very long query planning times for database with lots of\n partitions"
},
{
"msg_contents": "Thank both of you for your quick answers,\n\n@Justin Based on your answer it would seem to confirm that partitioning or\nat least partitioning this much is not the correct direction to take.\nThe reason I originally wanted to use partitioning was that I'm storing a\nmulti-tenant graph and that as the data grew, so did the indexes and once\nthey were larger than the available RAM, query performance went down the\ndrain.\nThe two levels of partitioning let me create one level for the tenant-level\npartitioning and one level for the business logic where I could further\npartition the tables into the different types of nodes and edges I was\nstoring.\n(The table_a and table_b in my example query. There is also a table_c which\nconnect table_a and table_b but I wanted to keep it simple.)\nAnother reason was that we do regular, automated cleanups of the data and\ndropping all the data (hundreds of thousands of rows) for a tenant is very\nfast with DROP TABLE of a partition and rather slow with a regular DELETE\nquery (even if indexed).\nWith the redesign of the database schema (that included the partitioning\nchanges), I also dramatically reduced the amounts and size of data per row\non the nodes and edges by storing the large and numerous metadata fields on\nseparate tables that are not part of the graph traversal process.\nBased on the usage number I see, I would expect around 12K tenants in the\nmedium future which means that even partitioning per tenant on those two\ntables would lead to 24K partitions which is way above your approximate\nlimit of 1K partitions.\nQueries are always limited to one tenant's data which was one of the\nmotivations behind partitioning in the first place.\nNot sure what you would advise in this case for a multi-tenant graph?\n\n@Steven, yes, constaint_exclusion is set to the default value of\n'partition'.\nThe EXPLAIN ANALYZE output also successfully prunes the partitions\ncorrectly.\nSo the query plan looks sounds and the query execution confirms this.\nBut reaching that point is really what the issue is for me.\n\n\n\nOn Tue, Jan 22, 2019 at 3:07 PM Steven Winfield <\[email protected]> wrote:\n\n> Do you have constraint_exclusion set correctly (i.e. ‘on’ or ‘partition’)?\n>\n> If so, does the EXPLAIN output mention all of your parent partitions, or\n> are some being successfully pruned?\n>\n> Planning times can be sped up significantly if the planner can exclude\n> parent partitions, without ever having to examine the constraints of the\n> child (and grandchild) partitions. If this is not the case, take another\n> look at your query and try to figure out why the planner might believe a\n> parent partition cannot be outright disregarded from the query – does the\n> query contain a filter on the parent partitions’ partition key, for example?\n>\n>\n>\n> I believe Timescaledb has its own query planner optimisations for\n> discarding partitions early at planning time.\n>\n>\n>\n> Good luck,\n>\n> Steve.\n>\n>\n>\n> ------------------------------\n>\n>\n> *This email is confidential. If you are not the intended recipient, please\n> advise us immediately and delete this message. The registered name of\n> Cantab- part of GAM Systematic is Cantab Capital Partners LLP. 
See -\n> http://www.gam.com/en/Legal/Email+disclosures+EU\n> <http://www.gam.com/en/Legal/Email+disclosures+EU> for further information\n> on confidentiality, the risks of non-secure electronic communication, and\n> certain disclosures which we are required to make in accordance with\n> applicable legislation and regulations. If you cannot access this link,\n> please notify us by reply message and we will send the contents to you.GAM\n> Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and\n> use information about you in the course of your interactions with us. Full\n> details about the data types we collect and what we use this for and your\n> related rights is set out in our online privacy policy at\n> https://www.gam.com/en/legal/privacy-policy\n> <https://www.gam.com/en/legal/privacy-policy>. Please familiarise yourself\n> with this policy and check it from time to time for updates as it\n> supplements this notice------------------------------ *\n>\n\nThank both of you for your quick answers,@Justin Based on your answer it would seem to confirm that partitioning or at least partitioning this much is not the correct direction to take.The reason I originally wanted to use partitioning was that I'm storing a multi-tenant graph and that as the data grew, so did the indexes and once they were larger than the available RAM, query performance went down the drain.The two levels of partitioning let me create one level for the tenant-level partitioning and one level for the business logic where I could further partition the tables into the different types of nodes and edges I was storing.(The table_a and table_b in my example query. There is also a table_c which connect table_a and table_b but I wanted to keep it simple.)Another reason was that we do regular, automated cleanups of the data and dropping all the data (hundreds of thousands of rows) for a tenant is very fast with DROP TABLE of a partition and rather slow with a regular DELETE query (even if indexed).With the redesign of the database schema (that included the partitioning changes), I also dramatically reduced the amounts and size of data per row on the nodes and edges by storing the large and numerous metadata fields on separate tables that are not part of the graph traversal process.Based on the usage number I see, I would expect around 12K tenants in the medium future which means that even partitioning per tenant on those two tables would lead to 24K partitions which is way above your approximate limit of 1K partitions.Queries are always limited to one tenant's data which was one of the motivations behind partitioning in the first place.Not sure what you would advise in this case for a multi-tenant graph?@Steven, yes, constaint_exclusion is set to the default value of 'partition'.The EXPLAIN ANALYZE output also successfully prunes the partitions correctly.So the query plan looks sounds and the query execution confirms this.But reaching that point is really what the issue is for me. On Tue, Jan 22, 2019 at 3:07 PM Steven Winfield <[email protected]> wrote:\n\n\n\n\n\n\n\nDo you have constraint_exclusion set correctly (i.e. ‘on’ or ‘partition’)?\nIf so, does the EXPLAIN output mention all of your parent partitions, or are some being successfully pruned?\nPlanning times can be sped up significantly if the planner can exclude parent partitions, without ever having to examine the constraints of the child (and grandchild)\n partitions. 
If this is not the case, take another look at your query and try to figure out why the planner might believe a parent partition cannot be outright disregarded from the query – does the query contain a filter on the parent partitions’ partition\n key, for example?\n \nI believe Timescaledb has its own query planner optimisations for discarding partitions early at planning time.\n \nGood luck,\nSteve.\n\n\n\n\n\n\n\n\n\n\n This email is confidential. If you are not the intended recipient, please advise us immediately and delete this message. The registered name of Cantab- part of GAM Systematic is Cantab Capital Partners LLP. See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information on confidentiality, the risks of non-secure electronic communication, and certain disclosures which we are required to make in accordance with applicable legislation and regulations. If you cannot access this link, please notify us by reply message and we will send the contents to you.GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and use information about you in the course of your interactions with us. Full details about the data types we collect and what we use this for and your related rights is set out in our online privacy policy at https://www.gam.com/en/legal/privacy-policy. Please familiarise yourself with this policy and check it from time to time for updates as it supplements this notice",
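A minimal sketch of how to check the behaviour Steven describes above. The schema is not shown in the thread, so table_a and tenant_id below are placeholders, not the poster's actual objects:

-- Confirm how plan-time partition exclusion is configured
SHOW constraint_exclusion;           -- expected: 'partition' (the default) or 'on'

-- Confirm that only the matching children appear in the plan, and note the
-- "Planning time" line, which is where the overhead discussed above shows up
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM table_a WHERE tenant_id = 42;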
"msg_date": "Tue, 22 Jan 2019 16:24:20 +0100",
"msg_from": "Mickael van der Beek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long query planning times for database with lots of\n partitions"
}
] |
[
{
"msg_contents": "I just notice that one of my Hibernate JPA SELECTs against my Heroku PG\n10.4 instance is taking a l o o o g to complete\n<https://explain.depesz.com/s/r2GU> as this EXPLAIN (ANALYZE, BUFFERS)\nshows. The database is 591MB running in PG 10.4 on Heroku with the\nfollowing row counts and index use:\n\n\n relname | percent_of_times_index_used | rows_in_table\n----------------+-----------------------------+---------------\nfm_order | 99 | 2233237\nfm_grant | Insufficient data | 204282\nfm_trader | 5 | 89037\nfm_capital | 99 | 84267\nfm_session | 99 | 7182\nfm_person | 99 | 4365\nfm_allocation | 96 | 4286\nfm_approval | Insufficient data | 920\nfm_market | 97 | 583\nfm_account | 93 | 451\nfm_marketplace | 22 | 275\n\n\nand the offending JPA JPQL is:\n\n @Query(\"SELECT o FROM Order o WHERE \"\n + \" o.type = 'LIMIT' \"\n + \" AND o.session.original = :originalSessionId \"\n + \" AND ( ( \"\n + \" o.consumer IS NULL \"\n + \" ) OR ( \"\n + \" o.consumer IS NOT NULL \"\n + \" AND o.consumer > 0 \"\n + \" AND EXISTS ( \"\n + \" SELECT 1 FROM Order oo WHERE \"\n + \" oo.id = o.consumer \"\n + \" AND oo.session.original = :originalSessionId \"\n + \" AND oo.type = 'LIMIT' \"\n + \" AND oo.owner != o.owner \"\n + \" ) \"\n + \" ) \"\n + \" ) \"\n + \" ORDER BY o.lastModifiedDate DESC \")\n\nI'd like get this SELECT to complete in a few milliseconds again instead of\nthe several minutes (!) it is now taking. Any ideas what I might try?\n\nThanks for your time,\n\nJan\n\nI just notice that one of my Hibernate JPA SELECTs against my Heroku PG 10.4 instance is taking a l o o o g to complete as this EXPLAIN (ANALYZE, BUFFERS) shows. The database is 591MB running in PG 10.4 on Heroku with the following row counts and index use: relname | percent_of_times_index_used | rows_in_table ----------------+-----------------------------+---------------\r\n fm_order | 99 | 2233237\r\n fm_grant | Insufficient data | 204282\r\n fm_trader | 5 | 89037\r\n fm_capital | 99 | 84267\r\n fm_session | 99 | 7182\r\n fm_person | 99 | 4365\r\n fm_allocation | 96 | 4286\r\n fm_approval | Insufficient data | 920\r\n fm_market | 97 | 583\r\n fm_account | 93 | 451\r\n fm_marketplace | 22 | 275\nand the offending JPA JPQL is: @Query(\"SELECT o FROM Order o WHERE \" + \" o.type = 'LIMIT' \" + \" AND o.session.original = :originalSessionId \" + \" AND ( ( \" + \" o.consumer IS NULL \" + \" ) OR ( \" + \" o.consumer IS NOT NULL \" + \" AND o.consumer > 0 \" + \" AND EXISTS ( \" + \" SELECT 1 FROM Order oo WHERE \" + \" oo.id = o.consumer \" + \" AND oo.session.original = :originalSessionId \" + \" AND oo.type = 'LIMIT' \" + \" AND oo.owner != o.owner \" + \" ) \" + \" ) \" + \" ) \" + \" ORDER BY o.lastModifiedDate DESC \")I'd like get this SELECT to complete in a few milliseconds again instead of the several minutes (!) it is now taking. Any ideas what I might try?Thanks for your time,Jan",
"msg_date": "Tue, 22 Jan 2019 13:04:38 -0700",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT performance drop"
},
{
"msg_contents": "On Tue, Jan 22, 2019 at 1:04 PM Jan Nielsen <[email protected]>\nwrote:\n\n> I just notice that one of my Hibernate JPA SELECTs against my Heroku PG\n> 10.4 instance is taking a l o o o g to complete\n> <https://explain.depesz.com/s/r2GU> as this EXPLAIN (ANALYZE, BUFFERS)\n> shows. The database is 591MB running in PG 10.4 on Heroku with the\n> following row counts and index use:\n>\n>\n> relname | percent_of_times_index_used | rows_in_table\n> ----------------+-----------------------------+---------------\n> fm_order | 99 | 2233237\n> fm_grant | Insufficient data | 204282\n> fm_trader | 5 | 89037\n> fm_capital | 99 | 84267\n> fm_session | 99 | 7182\n> fm_person | 99 | 4365\n> fm_allocation | 96 | 4286\n> fm_approval | Insufficient data | 920\n> fm_market | 97 | 583\n> fm_account | 93 | 451\n> fm_marketplace | 22 | 275\n>\n>\n> and the offending JPA JPQL is:\n>\n> @Query(\"SELECT o FROM Order o WHERE \"\n> + \" o.type = 'LIMIT' \"\n> + \" AND o.session.original = :originalSessionId \"\n> + \" AND ( ( \"\n> + \" o.consumer IS NULL \"\n> + \" ) OR ( \"\n> + \" o.consumer IS NOT NULL \"\n> + \" AND o.consumer > 0 \"\n> + \" AND EXISTS ( \"\n> + \" SELECT 1 FROM Order oo WHERE \"\n> + \" oo.id = o.consumer \"\n> + \" AND oo.session.original = :originalSessionId \"\n> + \" AND oo.type = 'LIMIT' \"\n> + \" AND oo.owner != o.owner \"\n> + \" ) \"\n> + \" ) \"\n> + \" ) \"\n> + \" ORDER BY o.lastModifiedDate DESC \")\n>\n>\n...which Hibernate converts to:\n\nSELECT order0_.id AS id1_7_,\n order0_.created_by AS created_2_7_,\n order0_.created_date AS created_3_7_,\n order0_.last_modified_by AS last_mod4_7_,\n order0_.last_modified_date AS last_mod5_7_,\n order0_.consumer AS consumer6_7_,\n order0_.market_id AS market_14_7_,\n order0_.original AS original7_7_,\n order0_.owner_id AS owner_i15_7_,\n order0_.owner_target AS owner_ta8_7_,\n order0_.price AS price9_7_,\n order0_.session_id AS session16_7_,\n order0_.side AS side10_7_,\n order0_.supplier AS supplie11_7_,\n order0_.type AS type12_7_,\n order0_.units AS units13_7_\nFROM fm_order order0_\n CROSS JOIN fm_session session1_\nWHERE order0_.session_id = session1_.id\n AND order0_.type = 'LIMIT'\n AND session1_.original = 7569\n AND ( order0_.consumer IS NULL\n OR ( order0_.consumer IS NOT NULL )\n AND order0_.consumer > 0\n AND ( EXISTS (SELECT 1\n FROM fm_order order2_\n CROSS JOIN fm_session session3_\n WHERE order2_.session_id = session3_.id\n AND order2_.id = order0_.consumer\n AND session3_.original = 7569\n AND order2_.type = 'LIMIT'\n AND\n order2_.owner_id <> order0_.owner_id) ) )\nORDER BY order0_.last_modified_date DESC;\n\n\n\n\n> I'd like get this SELECT to complete in a few milliseconds again instead\n> of the several minutes (!) it is now taking. Any ideas what I might try?\n>\n> Thanks for your time,\n>\n> Jan\n>\n\nOn Tue, Jan 22, 2019 at 1:04 PM Jan Nielsen <[email protected]> wrote:I just notice that one of my Hibernate JPA SELECTs against my Heroku PG 10.4 instance is taking a l o o o g to complete as this EXPLAIN (ANALYZE, BUFFERS) shows. 
The database is 591MB running in PG 10.4 on Heroku with the following row counts and index use: relname | percent_of_times_index_used | rows_in_table ----------------+-----------------------------+---------------\n fm_order | 99 | 2233237\n fm_grant | Insufficient data | 204282\n fm_trader | 5 | 89037\n fm_capital | 99 | 84267\n fm_session | 99 | 7182\n fm_person | 99 | 4365\n fm_allocation | 96 | 4286\n fm_approval | Insufficient data | 920\n fm_market | 97 | 583\n fm_account | 93 | 451\n fm_marketplace | 22 | 275\nand the offending JPA JPQL is: @Query(\"SELECT o FROM Order o WHERE \" + \" o.type = 'LIMIT' \" + \" AND o.session.original = :originalSessionId \" + \" AND ( ( \" + \" o.consumer IS NULL \" + \" ) OR ( \" + \" o.consumer IS NOT NULL \" + \" AND o.consumer > 0 \" + \" AND EXISTS ( \" + \" SELECT 1 FROM Order oo WHERE \" + \" oo.id = o.consumer \" + \" AND oo.session.original = :originalSessionId \" + \" AND oo.type = 'LIMIT' \" + \" AND oo.owner != o.owner \" + \" ) \" + \" ) \" + \" ) \" + \" ORDER BY o.lastModifiedDate DESC \")...which Hibernate converts to:SELECT order0_.id AS id1_7_, order0_.created_by AS created_2_7_, order0_.created_date AS created_3_7_, order0_.last_modified_by AS last_mod4_7_, order0_.last_modified_date AS last_mod5_7_, order0_.consumer AS consumer6_7_, order0_.market_id AS market_14_7_, order0_.original AS original7_7_, order0_.owner_id AS owner_i15_7_, order0_.owner_target AS owner_ta8_7_, order0_.price AS price9_7_, order0_.session_id AS session16_7_, order0_.side AS side10_7_, order0_.supplier AS supplie11_7_, order0_.type AS type12_7_, order0_.units AS units13_7_FROM fm_order order0_ CROSS JOIN fm_session session1_WHERE order0_.session_id = session1_.id AND order0_.type = 'LIMIT' AND session1_.original = 7569 AND ( order0_.consumer IS NULL OR ( order0_.consumer IS NOT NULL ) AND order0_.consumer > 0 AND ( EXISTS (SELECT 1 FROM fm_order order2_ CROSS JOIN fm_session session3_ WHERE order2_.session_id = session3_.id AND order2_.id = order0_.consumer AND session3_.original = 7569 AND order2_.type = 'LIMIT' AND order2_.owner_id <> order0_.owner_id) ) )ORDER BY order0_.last_modified_date DESC; I'd like get this SELECT to complete in a few milliseconds again instead of the several minutes (!) it is now taking. Any ideas what I might try?Thanks for your time,Jan",
"msg_date": "Tue, 22 Jan 2019 14:00:06 -0700",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT performance drop"
},
{
"msg_contents": "Hello,\ncould you check that statistics for fm_session are accurate ?\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Tue, 22 Jan 2019 14:55:23 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT performance drop"
},
{
"msg_contents": "On Tue, Jan 22, 2019 at 2:55 PM legrand legrand <[email protected]>\nwrote:\n\n> Hello,\n> could you check that statistics for fm_session are accurate ?\n>\n> Regards\n> PAscal\n>\n\nheroku pg:psql -c \"SELECT schemaname, relname, last_analyze FROM\npg_stat_all_tables WHERE relname LIKE 'fm_%'\"\nschemaname | relname | last_analyze\n------------+----------------+--------------\npublic | fm_account |\npublic | fm_allocation |\npublic | fm_approval |\npublic | fm_capital |\npublic | fm_grant |\npublic | fm_market |\npublic | fm_marketplace |\npublic | fm_order |\npublic | fm_person |\npublic | fm_session |\npublic | fm_trader |\n\n\nI suspect you'd say \"not accurate\"? :-o After ANALYZE, the performance is\nmuch better <https://explain.depesz.com/s/p9KX>. Thank you so much!\n\nOn Tue, Jan 22, 2019 at 2:55 PM legrand legrand <[email protected]> wrote:Hello,\ncould you check that statistics for fm_session are accurate ?\n\nRegards\nPAscalheroku pg:psql -c \"SELECT schemaname, relname, last_analyze FROM pg_stat_all_tables WHERE relname LIKE 'fm_%'\" schemaname | relname | last_analyze ------------+----------------+--------------\n public | fm_account | public | fm_allocation | public | fm_approval | public | fm_capital | public | fm_grant | public | fm_market | public | fm_marketplace | public | fm_order | public | fm_person | public | fm_session | public | fm_trader | \nI suspect you'd say \"not accurate\"? :-o After ANALYZE, the performance is much better. Thank you so much!",
"msg_date": "Tue, 22 Jan 2019 15:28:31 -0700",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT performance drop"
},
{
"msg_contents": "One thing that isn't helping is that you have a redundant predicate. The\nselectivity of this predicate is also estimated too low, so removing the\nredundant predicate might improve the estimate and change the plan:\n\n( \"\n + \" o.consumer IS NULL \"\n + \" ) OR ( \"\n + \" o.consumer IS NOT NULL \"\n + \" AND o.consumer > 0 \n\nremove \"o.consumer IS NOT NULL AND\", which is implied by o.consumer > 0. \nThis predicate should have been automatically removed, but the filter shown\nin depesz shows that it was not.\n\nIf you can find out what the faster plan was, that would be helpful to know.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Wed, 23 Jan 2019 06:50:52 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT performance drop"
},
{
"msg_contents": "On Wed, Jan 23, 2019 at 6:51 AM Jim Finnerty <[email protected]> wrote:\n\n> One thing that isn't helping is that you have a redundant predicate. The\n> selectivity of this predicate is also estimated too low, so removing the\n> redundant predicate might improve the estimate and change the plan:\n>\n> ( \"\n> + \" o.consumer IS NULL \"\n> + \" ) OR ( \"\n> + \" o.consumer IS NOT NULL \"\n> + \" AND o.consumer > 0\n>\n> remove \"o.consumer IS NOT NULL AND\", which is implied by o.consumer > 0.\n> This predicate should have been automatically removed, but the filter shown\n> in depesz shows that it was not.\n>\n\nGood point -- the new generated SQL is\n\n select\n order0_.id as id1_7_,\n order0_.created_by as created_2_7_,\n order0_.created_date as created_3_7_,\n order0_.last_modified_by as last_mod4_7_,\n order0_.last_modified_date as last_mod5_7_,\n order0_.consumer as consumer6_7_,\n order0_.market_id as market_14_7_,\n order0_.original as original7_7_,\n order0_.owner_id as owner_i15_7_,\n order0_.owner_target as owner_ta8_7_,\n order0_.price as price9_7_,\n order0_.session_id as session16_7_,\n order0_.side as side10_7_,\n order0_.supplier as supplie11_7_,\n order0_.type as type12_7_,\n order0_.units as units13_7_\n from\n fm_order order0_ cross\n join\n fm_session session1_\n where\n order0_.session_id=session1_.id\n and order0_.type='LIMIT'\n and session1_.original=7569\n and (\n order0_.consumer is null\n or order0_.consumer>0\n and (\n exists (\n select\n 1\n from\n fm_order order2_ cross\n join\n fm_session session3_\n where\n order2_.session_id=session3_.id\n and order2_.id=order0_.consumer\n and session3_.original=7569\n and order2_.type='LIMIT'\n and order2_.owner_id<>order0_.owner_id\n )\n )\n )\n order by\n order0_.last_modified_date DESC;\n\n\n> If you can find out what the faster plan was, that would be helpful to\n> know.\n>\n\nwhich results in:\n\n https://explain.depesz.com/s/vGVo\n\n\n\n\n>\n>\n>\n> -----\n> Jim Finnerty, AWS, Amazon Aurora PostgreSQL\n> --\n> Sent from:\n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n>\n>\n\nOn Wed, Jan 23, 2019 at 6:51 AM Jim Finnerty <[email protected]> wrote:One thing that isn't helping is that you have a redundant predicate. The\nselectivity of this predicate is also estimated too low, so removing the\nredundant predicate might improve the estimate and change the plan:\n\n( \"\n + \" o.consumer IS NULL \"\n + \" ) OR ( \"\n + \" o.consumer IS NOT NULL \"\n + \" AND o.consumer > 0 \n\nremove \"o.consumer IS NOT NULL AND\", which is implied by o.consumer > 0. 
\nThis predicate should have been automatically removed, but the filter shown\nin depesz shows that it was not.Good point -- the new generated SQL is select order0_.id as id1_7_, order0_.created_by as created_2_7_, order0_.created_date as created_3_7_, order0_.last_modified_by as last_mod4_7_, order0_.last_modified_date as last_mod5_7_, order0_.consumer as consumer6_7_, order0_.market_id as market_14_7_, order0_.original as original7_7_, order0_.owner_id as owner_i15_7_, order0_.owner_target as owner_ta8_7_, order0_.price as price9_7_, order0_.session_id as session16_7_, order0_.side as side10_7_, order0_.supplier as supplie11_7_, order0_.type as type12_7_, order0_.units as units13_7_ from fm_order order0_ cross join fm_session session1_ where order0_.session_id=session1_.id and order0_.type='LIMIT' and session1_.original=7569 and ( order0_.consumer is null or order0_.consumer>0 and ( exists ( select 1 from fm_order order2_ cross join fm_session session3_ where order2_.session_id=session3_.id and order2_.id=order0_.consumer and session3_.original=7569 and order2_.type='LIMIT' and order2_.owner_id<>order0_.owner_id ) ) ) order by order0_.last_modified_date DESC; If you can find out what the faster plan was, that would be helpful to know.which results in: https://explain.depesz.com/s/vGVo \n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html",
"msg_date": "Wed, 23 Jan 2019 10:28:52 -0700",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT performance drop"
},
{
"msg_contents": "Hi,\nis there an index on\n fm_order(session_id,type)\n?\n\nregards\nPAscal\n",
"msg_date": "Wed, 23 Jan 2019 19:37:24 +0000",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE:SELECT performance drop"
},
{
"msg_contents": "On Wed, 2019-01-23 at 10:28 -0700, Jan Nielsen wrote:\n> select\n> order0_.id as id1_7_,\n> order0_.created_by as created_2_7_,\n> order0_.created_date as created_3_7_,\n> order0_.last_modified_by as last_mod4_7_,\n> order0_.last_modified_date as last_mod5_7_,\n> order0_.consumer as consumer6_7_,\n> order0_.market_id as market_14_7_,\n> order0_.original as original7_7_,\n> order0_.owner_id as owner_i15_7_,\n> order0_.owner_target as owner_ta8_7_,\n> order0_.price as price9_7_,\n> order0_.session_id as session16_7_,\n> order0_.side as side10_7_,\n> order0_.supplier as supplie11_7_,\n> order0_.type as type12_7_,\n> order0_.units as units13_7_ \n> from\n> fm_order order0_ cross \n> join\n> fm_session session1_ \n> where\n> order0_.session_id=session1_.id \n> and order0_.type='LIMIT' \n> and session1_.original=7569 \n> and (\n> order0_.consumer is null \n> or order0_.consumer>0 \n> and (\n> exists (\n> select\n> 1 \n> from\n> fm_order order2_ cross \n> join\n> fm_session session3_ \n> where\n> order2_.session_id=session3_.id \n> and order2_.id=order0_.consumer \n> and session3_.original=7569 \n> and order2_.type='LIMIT' \n> and order2_.owner_id<>order0_.owner_id\n> )\n> )\n> ) \n> order by\n> order0_.last_modified_date DESC;\n\nIt might be more efficient to rewrite that along these lines:\n\nSELECT DISTINCT order0_.*\nFROM fm_order order0_\n JOIN fm_session session1_ ON order0_.session_id = session1_.id\n LEFT JOIN fm_order order2_ ON order2_.id = order0_.consumer\n LEFT JOIN fm_session session3_ ON order2_.session_id = session3_.id\nWHERE coalesce(order2_.id, 1) > 0\nAND /* all the other conditions */;\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 23 Jan 2019 23:41:03 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT performance drop"
},
{
"msg_contents": "On Wed, Jan 23, 2019 at 12:37 PM legrand legrand <\[email protected]> wrote:\n\n> Hi,\n> is there an index on\n> fm_order(session_id,type)?\n>\n\nThere isn't at the moment:\n\n table_name | index_name | column_name\n----------------+------------------------------+-------------\nfm_account | fm_account_pkey | id\nfm_account | uk_5p6qalvucbxmw9u64wf0aif9d | name\nfm_allocation | fm_allocation_pkey | id\nfm_approval | fm_approval_pkey | id\nfm_capital | fm_capital_pkey | id\nfm_grant | fm_grant_pkey | id\nfm_market | fm_market_pkey | id\nfm_marketplace | fm_marketplace_pkey | id\nfm_order | fm_order_pkey | id\nfm_person | fm_person_pkey | id\nfm_session | fm_session_pkey | id\nfm_trader | fm_trader_pkey | id\n\n\n\n\n>\n> regards\n> PAscal\n\nOn Wed, Jan 23, 2019 at 12:37 PM legrand legrand <[email protected]> wrote:Hi,\nis there an index on\n fm_order(session_id,type)?There isn't at the moment: table_name | index_name | column_name ----------------+------------------------------+-------------\n fm_account | fm_account_pkey | id\n fm_account | uk_5p6qalvucbxmw9u64wf0aif9d | name\n fm_allocation | fm_allocation_pkey | id\n fm_approval | fm_approval_pkey | id\n fm_capital | fm_capital_pkey | id\n fm_grant | fm_grant_pkey | id\n fm_market | fm_market_pkey | id\n fm_marketplace | fm_marketplace_pkey | id\n fm_order | fm_order_pkey | id\n fm_person | fm_person_pkey | id\n fm_session | fm_session_pkey | id\n fm_trader | fm_trader_pkey | id\n \n\nregards\nPAscal",
"msg_date": "Thu, 24 Jan 2019 09:52:03 -0700",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT performance drop"
},
{
"msg_contents": "With Pascal's suggestion, I added a new index:\n\n CREATE INDEX fm_order_sid_type_idx ON fm_order (session_id, type);\n\nwhich improved the query to 2mS!\n\n https://explain.depesz.com/s/oxvs\n\nThank you, Pascal!\n\nOn Thu, Jan 24, 2019 at 9:52 AM Jan Nielsen <[email protected]>\nwrote:\n\n>\n>\n> On Wed, Jan 23, 2019 at 12:37 PM legrand legrand <\n> [email protected]> wrote:\n>\n>> Hi,\n>> is there an index on\n>> fm_order(session_id,type)?\n>>\n>\n> There isn't at the moment:\n>\n> table_name | index_name | column_name\n> ----------------+------------------------------+-------------\n> fm_account | fm_account_pkey | id\n> fm_account | uk_5p6qalvucbxmw9u64wf0aif9d | name\n> fm_allocation | fm_allocation_pkey | id\n> fm_approval | fm_approval_pkey | id\n> fm_capital | fm_capital_pkey | id\n> fm_grant | fm_grant_pkey | id\n> fm_market | fm_market_pkey | id\n> fm_marketplace | fm_marketplace_pkey | id\n> fm_order | fm_order_pkey | id\n> fm_person | fm_person_pkey | id\n> fm_session | fm_session_pkey | id\n> fm_trader | fm_trader_pkey | id\n>\n>\n>\n>\n>>\n>> regards\n>> PAscal\n>\n>\n\nWith Pascal's suggestion, I added a new index: CREATE INDEX fm_order_sid_type_idx ON fm_order (session_id, type);which improved the query to 2mS! https://explain.depesz.com/s/oxvsThank you, Pascal!On Thu, Jan 24, 2019 at 9:52 AM Jan Nielsen <[email protected]> wrote:On Wed, Jan 23, 2019 at 12:37 PM legrand legrand <[email protected]> wrote:Hi,\nis there an index on\n fm_order(session_id,type)?There isn't at the moment: table_name | index_name | column_name ----------------+------------------------------+-------------\n fm_account | fm_account_pkey | id\n fm_account | uk_5p6qalvucbxmw9u64wf0aif9d | name\n fm_allocation | fm_allocation_pkey | id\n fm_approval | fm_approval_pkey | id\n fm_capital | fm_capital_pkey | id\n fm_grant | fm_grant_pkey | id\n fm_market | fm_market_pkey | id\n fm_marketplace | fm_marketplace_pkey | id\n fm_order | fm_order_pkey | id\n fm_person | fm_person_pkey | id\n fm_session | fm_session_pkey | id\n fm_trader | fm_trader_pkey | id\n \n\nregards\nPAscal",
"msg_date": "Thu, 24 Jan 2019 11:04:57 -0700",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT performance drop"
}
] |
[
{
"msg_contents": "It has been noted in various places that ANALYZE does not estimate n_distinct\nvalues accurately on large tables. For example, here:\n\nhttps://www.postgresql-archive.org/serious-under-estimation-of-n-distinct-for-clustered-distributions-tt5738290.html#a5738707\n\nANALYZE uses a random page-level sample to perform as well as it does. With\nthe default statistics target of 100, that might correspond to a 30000 row\nsample. If the table cardinality is very large, the extrapolated estimate\nmight have a lot of error. If there are many nulls in a column, then the\neffective sample size is even smaller. If the values are correlated with\nheap order, then the n_distinct will be under-estimated more.\n\nWhat may be surprising is just how bad things are on a table containing a\nfew 10's of millions of rows, with some columns that have realistic data\ndistributions. Such is the case on the Internet Movie Database, where with\nthe default_statistics_target = 100 I see errors as large as 190x, with\nseveral columns having errors in the 50x - 75x range, a bunch that are off\nby roughly 10x, and a bevy that are off by 2x.\n\nIf that is not surprising, then it may yet be surprising that increasing the\ndefault_statistics_target to 1000 doesn't reduce some of the big errors all\nthat much. On a few columns that I noticed first, there were still errors\nof 50x, 30x, and 35x, and these may not be the largest. The errors are\nstill huge, even at 1000.\n\nn_distinct is used as the basis for non-frequent equality selectivity, but\nit's also used for the threshold of how many mcv's will be collected, so\nit's pretty important for n_distinct to be accurate to get good plans.\n\nEstimating distinct values from a sample is an intrinsically hard problem. \nThere are distributions that will require sampling nearly all of the data in\norder to get an estimate that has a 5% error.\n\nWell, we had an RDS Hackathon last week, and I wrote a little script that\nwould estimate n_distinct accurately. It will be accurate because it scans\nall rows, but it gains efficiency because it can estimate many or all\ncolumns in a single pass. It can do many columns on the same scan by using\nthe HyperLogLog algorithm to estimate the distinct counts with a small\namount of memory (1.2kB per column), and it further gains performance by\nusing a type-specific hash function so that it can avoid a dynamic dispatch\nat run time. If there are so many columns that it can't be handled in a\nsingle pass, then it chunks up the columns into groups, and handles as many\ncolumns in a single pass as it can.\n\nSince it was a hackathon, I added a hack-around for handling partitioned\ntables, where if the child table has a name that is a prefix of the name of\nits parent, then it will be analyzed, too.\n\nAt the moment it is implemented as a SQL script that defines a schema, a\ncouple of tables, a view, and a couple of pl/pgsql functions. One function,\ncurrently named 'analyze', accepts a comma-separated list of schema names, a\ncomma-separated list of table names, and a flag that says whether you want\nto apply it to foreign tables that could deal with an approximate count\ndistinct function call (external tables not tested yet), and it calculates\nall the n_distinct values and does an ALTER TABLE ... SET column (n_distinct\n= v), as well as inflating the statistics target for any column with a high\nnull_frac. 
The other function resets all the column overrides back to their\ndefaults for the specified schemas and tables, in case you want to go back\nto the way it was.\n\nSince setting pg_statistic requires superuser privs, you run the\naux_stats.analyze function, and then you run ANALYZE. Before running\nANALYZE, you can view the stats as currently reflected in pg_statistic, and\nwhat the new stats will be.\n\nWe would like to offer this up to the community if there is interest in it,\nand if anyone would like to work with me on it to polish it up and try it on\nsome large databases to see what we get, and to see how long it takes, etc. \nI envision this either as a script that we could post alongside some of the\nhelpful scripts that we have in our documentation, or possibly as an\nextension. It would be easy to package it as an extension, but it's pretty\nsimple to use as just a script that you install with \\i. I'm open to\nsuggestion.\n\nSo, if you're interested in trying this out and making some helpful\nsuggestions, please respond. After enough people kick the tires and are\nhappy with it, I'll post the code.\n\n /Jim Finnerty\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
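A minimal illustration of the kind of per-column override the script described above ends up applying; the table, column, and numbers below are hypothetical, not taken from the post:

-- Compare the planner's current estimate with an exact count
SELECT n_distinct FROM pg_stats WHERE tablename = 'movies' AND attname = 'director_id';
SELECT count(DISTINCT director_id) FROM movies;

-- Pin the estimate so future ANALYZE runs use it
-- (a positive value is an absolute count, a negative value is a fraction of the row count)
ALTER TABLE movies ALTER COLUMN director_id SET (n_distinct = 250000);
ANALYZE movies;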
"msg_date": "Tue, 22 Jan 2019 14:13:48 -0700 (MST)",
"msg_from": "Jim Finnerty <[email protected]>",
"msg_from_op": true,
"msg_subject": "ANALYZE accuracy problems for n_distinct, and a solution"
}
] |
[
{
"msg_contents": "Hey,\nI'm trying to help a guy that is using pg9.6 but I'm not so familiar with\nthe error message :\nERROR: found xmin 16804535 from before relfrozenxid 90126924\nCONTEXT: automatic vacuum of table db1.public.table_1\"\n\n\nIt seems that the error has started appearing two weeks ago. Data that I\ncollected :\n\n-all the autovacuum params are set to default\n\n-SELECT relname, age(relfrozenxid) as xid_age,\n pg_size_pretty(pg_table_size(oid)) as table_size\nFROM pg_class\nWHERE relkind = 'r' and pg_table_size(oid) > 1073741824\nORDER BY age(relfrozenxid) DESC LIMIT 4;\n relname | xid_age | table_size\n-------------------------------+-----------+------------\n table_1 | 180850538 | 10 GB\ntable_2 | 163557812 | 10 GB\ntable_3 | 143732477 | 1270 MB\ntable_4 | 70464685 | 3376 MB\n\npg_controldata :\nLatest checkpoint's NextXID: 0:270977386\nLatest checkpoint's NextOID: 25567991\nLatest checkpoint's NextMultiXactId: 1079168\nLatest checkpoint's NextMultiOffset: 68355\nLatest checkpoint's oldestXID: 77980003\nLatest checkpoint's oldestXID's DB: 16403\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 1047846\nLatest checkpoint's oldestMulti's DB: 16403\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\n\nIt seems that the autovacuum cant vacuum table_1 and it has alot of\ndead_tuples. Moreover, it seems that the indexes are bloated.\n\nschemaname relname n_tup_upd n_tup_del n_tup_hot_upd n_live_tup n_dead_tup\nn_mod_since_analyze last_vacuum last_autovacuum last_analyze\npublic table_1 0 5422370 0 382222 109582923 10760701\nI tried to vacuum the table (full,freeze) but it didnt help.\nI read about the wrap that can happen but to be honest I'm not sure that I\nunderstood id.\nWhat can I do to vacuum the table ? Can some one explain the logic behind\nthe error message ?\n\nThanks.\n\nHey,I'm trying to help a guy that is using pg9.6 but I'm not so familiar with the error message : ERROR: found xmin 16804535 from before relfrozenxid 90126924CONTEXT: automatic vacuum of table db1.public.table_1\"It seems that the error has started appearing two weeks ago. Data that I collected : -all the autovacuum params are set to default-SELECT relname, age(relfrozenxid) as xid_age, pg_size_pretty(pg_table_size(oid)) as table_sizeFROM pg_class WHERE relkind = 'r' and pg_table_size(oid) > 1073741824ORDER BY age(relfrozenxid) DESC LIMIT 4; relname | xid_age | table_size -------------------------------+-----------+------------ \n\ntable_1 | 180850538 | 10 GBtable_2 | 163557812 | 10 GBtable_3 | 143732477 | 1270 MBtable_4 | 70464685 | 3376 MBpg_controldata : Latest checkpoint's NextXID: 0:270977386Latest checkpoint's NextOID: 25567991Latest checkpoint's NextMultiXactId: 1079168Latest checkpoint's NextMultiOffset: 68355Latest checkpoint's oldestXID: 77980003Latest checkpoint's oldestXID's DB: 16403Latest checkpoint's oldestActiveXID: 0Latest checkpoint's oldestMultiXid: 1047846Latest checkpoint's oldestMulti's DB: 16403Latest checkpoint's oldestCommitTsXid:0Latest checkpoint's newestCommitTsXid:0It seems that the autovacuum cant vacuum table_1 and it has alot of dead_tuples. 
Moreover, it seems that the indexes are bloated.\n\n\n\n\n\n\n\n\n\n\n\n\nschemaname\nrelname\nn_tup_upd\nn_tup_del\nn_tup_hot_upd\nn_live_tup\nn_dead_tup\nn_mod_since_analyze\nlast_vacuum\nlast_autovacuum\nlast_analyze\n\n\npublic\ntable_10\n5422370\n0\n382222\n109582923\n10760701\n\n\n\n\nI tried to vacuum the table (full,freeze) but it didnt help.I read about the wrap that can happen but to be honest I'm not sure that I understood id. What can I do to vacuum the table ? Can some one explain the logic behind the error message ?Thanks.",
"msg_date": "Wed, 23 Jan 2019 17:25:33 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n\n> Hey,\n> I'm trying to help a guy that is using pg9.6 but I'm not so familiar\n> with the error message : \n> ERROR: found xmin 16804535 from before relfrozenxid 90126924\n> CONTEXT: automatic vacuum of table db1.public.table_1\"\n\n9.6.?...\n\nThat error or a very similar one was fixed in a recent point release.\n\nHTH\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\n\n",
"msg_date": "Wed, 23 Jan 2019 13:42:49 -0600",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "Yeah 9.6 !\n\nOn Wed, Jan 23, 2019, 9:51 PM Jerry Sievers <[email protected] wrote:\n\n> Mariel Cherkassky <[email protected]> writes:\n>\n> > Hey,\n> > I'm trying to help a guy that is using pg9.6 but I'm not so familiar\n> > with the error message :\n> > ERROR: found xmin 16804535 from before relfrozenxid 90126924\n> > CONTEXT: automatic vacuum of table db1.public.table_1\"\n>\n> 9.6.?...\n>\n> That error or a very similar one was fixed in a recent point release.\n>\n> HTH\n>\n> --\n> Jerry Sievers\n> Postgres DBA/Development Consulting\n> e: [email protected]\n>\n\nYeah 9.6 !On Wed, Jan 23, 2019, 9:51 PM Jerry Sievers <[email protected] wrote:Mariel Cherkassky <[email protected]> writes:\n\n> Hey,\n> I'm trying to help a guy that is using pg9.6 but I'm not so familiar\n> with the error message : \n> ERROR: found xmin 16804535 from before relfrozenxid 90126924\n> CONTEXT: automatic vacuum of table db1.public.table_1\"\n\n9.6.?...\n\nThat error or a very similar one was fixed in a recent point release.\n\nHTH\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]",
"msg_date": "Thu, 24 Jan 2019 08:32:49 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "I'm checking the full version.\nAs you said I saw that in 9.6.9 there was a fix for the next bug :\n\nAvoid spuriously marking pages as all-visible (Dan Wood, Pavan Deolasee,\nÁlvaro Herrera)\n\nThis could happen if some tuples were locked (but not deleted). While\nqueries would still function correctly, vacuum would normally ignore such\npages, with the long-term effect that the tuples were never frozen. In\nrecent releases this would eventually result in errors such as \"found\nmultixact nnnnn from before relminmxid nnnnn\".\n\nSo basically, he just need to upgrade in order to fix it ? Or there is\nsomething else that need to be done?\n\nבתאריך יום ד׳, 23 בינו׳ 2019 ב-21:51 מאת Jerry Sievers <\[email protected]>:\n\n> Mariel Cherkassky <[email protected]> writes:\n>\n> > Hey,\n> > I'm trying to help a guy that is using pg9.6 but I'm not so familiar\n> > with the error message :\n> > ERROR: found xmin 16804535 from before relfrozenxid 90126924\n> > CONTEXT: automatic vacuum of table db1.public.table_1\"\n>\n> 9.6.?...\n>\n> That error or a very similar one was fixed in a recent point release.\n>\n> HTH\n>\n> --\n> Jerry Sievers\n> Postgres DBA/Development Consulting\n> e: [email protected]\n>\n\nI'm checking the full version. As you said I saw that in 9.6.9 there was a fix for the next bug : Avoid spuriously marking pages as all-visible (Dan Wood, Pavan Deolasee, Álvaro Herrera)This could happen if some tuples were locked (but not deleted). While queries would still function correctly, vacuum would normally ignore such pages, with the long-term effect that the tuples were never frozen. In recent releases this would eventually result in errors such as \"found multixact nnnnn from before relminmxid nnnnn\".So basically, he just need to upgrade in order to fix it ? Or there is something else that need to be done?בתאריך יום ד׳, 23 בינו׳ 2019 ב-21:51 מאת Jerry Sievers <[email protected]>:Mariel Cherkassky <[email protected]> writes:\n\n> Hey,\n> I'm trying to help a guy that is using pg9.6 but I'm not so familiar\n> with the error message : \n> ERROR: found xmin 16804535 from before relfrozenxid 90126924\n> CONTEXT: automatic vacuum of table db1.public.table_1\"\n\n9.6.?...\n\nThat error or a very similar one was fixed in a recent point release.\n\nHTH\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]",
"msg_date": "Thu, 24 Jan 2019 16:14:21 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "On 1/24/19 3:14 PM, Mariel Cherkassky wrote:\n> I'm checking the full version.\n> As you said I saw that in 9.6.9 there was a fix for the next bug :\n> \n> Avoid spuriously marking pages as all-visible (Dan Wood, Pavan Deolasee, \n> Álvaro Herrera)\n> \n> This could happen if some tuples were locked (but not deleted). While \n> queries would still function correctly, vacuum would normally ignore \n> such pages, with the long-term effect that the tuples were never frozen. \n> In recent releases this would eventually result in errors such as \"found \n> multixact nnnnn from before relminmxid nnnnn\".\n> \n> So basically, he just need to upgrade in order to fix it ? Or there is \n> something else that need to be done?\n> \n> \n\nHello,\n\nThe fix prevent this error occur, but it doesn't fix tuples impacted by \nthis bug.\n\nDid you try this : psql -o /dev/null -c \"select * from table for update\" \ndatabase\n\n\nAs suggested by Alexandre Arruda : \nhttps://www.postgresql.org/message-id/CAGewt-ukbL6WL8cc-G%2BiN9AVvmMQkhA9i2TKP4-6wJr6YOQkzA%40mail.gmail.com\n\n\n\nRegards,\n\n",
"msg_date": "Fri, 25 Jan 2019 09:18:55 +0100",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "I'm getting this issue when I try to connect to a specific db. Does it\nmatters what table I specify ? Should I just choose a random table from the\nproblematic db? If I'll dump the db and restore it it can help ?\n\nOn Fri, Jan 25, 2019, 10:19 AM Adrien NAYRAT <[email protected]\nwrote:\n\n> On 1/24/19 3:14 PM, Mariel Cherkassky wrote:\n> > I'm checking the full version.\n> > As you said I saw that in 9.6.9 there was a fix for the next bug :\n> >\n> > Avoid spuriously marking pages as all-visible (Dan Wood, Pavan Deolasee,\n> > Álvaro Herrera)\n> >\n> > This could happen if some tuples were locked (but not deleted). While\n> > queries would still function correctly, vacuum would normally ignore\n> > such pages, with the long-term effect that the tuples were never frozen.\n> > In recent releases this would eventually result in errors such as \"found\n> > multixact nnnnn from before relminmxid nnnnn\".\n> >\n> > So basically, he just need to upgrade in order to fix it ? Or there is\n> > something else that need to be done?\n> >\n> >\n>\n> Hello,\n>\n> The fix prevent this error occur, but it doesn't fix tuples impacted by\n> this bug.\n>\n> Did you try this : psql -o /dev/null -c \"select * from table for update\"\n> database\n>\n>\n> As suggested by Alexandre Arruda :\n>\n> https://www.postgresql.org/message-id/CAGewt-ukbL6WL8cc-G%2BiN9AVvmMQkhA9i2TKP4-6wJr6YOQkzA%40mail.gmail.com\n>\n>\n>\n> Regards,\n>\n>\n\nI'm getting this issue when I try to connect to a specific db. Does it matters what table I specify ? Should I just choose a random table from the problematic db? If I'll dump the db and restore it it can help ?On Fri, Jan 25, 2019, 10:19 AM Adrien NAYRAT <[email protected] wrote:On 1/24/19 3:14 PM, Mariel Cherkassky wrote:\n> I'm checking the full version.\n> As you said I saw that in 9.6.9 there was a fix for the next bug :\n> \n> Avoid spuriously marking pages as all-visible (Dan Wood, Pavan Deolasee, \n> Álvaro Herrera)\n> \n> This could happen if some tuples were locked (but not deleted). While \n> queries would still function correctly, vacuum would normally ignore \n> such pages, with the long-term effect that the tuples were never frozen. \n> In recent releases this would eventually result in errors such as \"found \n> multixact nnnnn from before relminmxid nnnnn\".\n> \n> So basically, he just need to upgrade in order to fix it ? Or there is \n> something else that need to be done?\n> \n> \n\nHello,\n\nThe fix prevent this error occur, but it doesn't fix tuples impacted by \nthis bug.\n\nDid you try this : psql -o /dev/null -c \"select * from table for update\" \ndatabase\n\n\nAs suggested by Alexandre Arruda : \nhttps://www.postgresql.org/message-id/CAGewt-ukbL6WL8cc-G%2BiN9AVvmMQkhA9i2TKP4-6wJr6YOQkzA%40mail.gmail.com\n\n\n\nRegards,",
"msg_date": "Fri, 25 Jan 2019 19:20:30 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "On 1/25/19 6:20 PM, Mariel Cherkassky wrote:\n> I'm getting this issue when I try to connect to a specific db. Does it \n> matters what table I specify ? Should I just choose a random table from \n> the problematic db? If I'll dump the db and restore it it can help ?\n\nError message is on \"db1.public.table_1\", but maybe other tables are \nimpacted.\n\nIf you can dump and restore your database it should fix your issue. Be \ncareful to apply minor update (9.6.11 f you can).\n\n\n",
"msg_date": "Sat, 26 Jan 2019 11:48:49 +0100",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "Update to the minor version should be an easy solution - yum update\npostgresql . What did you mean by carful\n\nOn Sat, Jan 26, 2019, 12:48 PM Adrien NAYRAT <[email protected]\nwrote:\n\n> On 1/25/19 6:20 PM, Mariel Cherkassky wrote:\n> > I'm getting this issue when I try to connect to a specific db. Does it\n> > matters what table I specify ? Should I just choose a random table from\n> > the problematic db? If I'll dump the db and restore it it can help ?\n>\n> Error message is on \"db1.public.table_1\", but maybe other tables are\n> impacted.\n>\n> If you can dump and restore your database it should fix your issue. Be\n> careful to apply minor update (9.6.11 f you can).\n>\n>\n\nUpdate to the minor version should be an easy solution - yum update postgresql . What did you mean by carfulOn Sat, Jan 26, 2019, 12:48 PM Adrien NAYRAT <[email protected] wrote:On 1/25/19 6:20 PM, Mariel Cherkassky wrote:\n> I'm getting this issue when I try to connect to a specific db. Does it \n> matters what table I specify ? Should I just choose a random table from \n> the problematic db? If I'll dump the db and restore it it can help ?\n\nError message is on \"db1.public.table_1\", but maybe other tables are \nimpacted.\n\nIf you can dump and restore your database it should fix your issue. Be \ncareful to apply minor update (9.6.11 f you can).",
"msg_date": "Sat, 26 Jan 2019 12:56:08 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "On 1/26/19 11:56 AM, Mariel Cherkassky wrote:\n> Update to the minor version should be an easy solution - yum update \n> postgresql . What did you mean by carful\n\n\nSorry, I meant, do not forget to apply update to be sure same bug do not \nhappen again.\n\n",
"msg_date": "Sat, 26 Jan 2019 11:59:57 +0100",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "It seems that the version of the db is 9.6.10 :\n\npsql -U db -d db -c \"select version()\";\nPassword for user db:\nversion\n-----------------------------------------------------------------------------------------------------------\nPostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-23), 64-bit\n(1 row)\n\n\nand the error is still exist..\n\nבתאריך שבת, 26 בינו׳ 2019 ב-12:59 מאת Adrien NAYRAT <\[email protected]>:\n\n> On 1/26/19 11:56 AM, Mariel Cherkassky wrote:\n> > Update to the minor version should be an easy solution - yum update\n> > postgresql . What did you mean by carful\n>\n>\n> Sorry, I meant, do not forget to apply update to be sure same bug do not\n> happen again.\n>\n\nIt seems that the version of the db is 9.6.10 : psql -U db -d db -c \"select version()\";\nPassword for user db: \nversion \n-----------------------------------------------------------------------------------------------------------\nPostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit\n(1 row)and the error is still exist..בתאריך שבת, 26 בינו׳ 2019 ב-12:59 מאת Adrien NAYRAT <[email protected]>:On 1/26/19 11:56 AM, Mariel Cherkassky wrote:\n> Update to the minor version should be an easy solution - yum update \n> postgresql . What did you mean by carful\n\n\nSorry, I meant, do not forget to apply update to be sure same bug do not \nhappen again.",
"msg_date": "Wed, 30 Jan 2019 09:51:43 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "On 2019-Jan-30, Mariel Cherkassky wrote:\n\n> It seems that the version of the db is 9.6.10 :\n> \n> psql -U db -d db -c \"select version()\";\n> Password for user db:\n> version\n> -----------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-23), 64-bit\n> (1 row)\n> \n> \n> and the error is still exist..\n\nDid you apply the suggested SELECT .. FOR UPDATE to the problem table?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 30 Jan 2019 06:13:59 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "Hey,\nAs I said, I'm getting this error for all the objects in a specific db. I\ncant even connect to the database, I immediatly getting this error.\nThe bug was fixed in 9.6.10 but the db version is 9.6.10 so how can it\nhappen ? The db was installed in that version from the first place and *no\nupgrade was done*\n\nבתאריך יום ד׳, 30 בינו׳ 2019 ב-11:14 מאת Alvaro Herrera <\[email protected]>:\n\n> On 2019-Jan-30, Mariel Cherkassky wrote:\n>\n> > It seems that the version of the db is 9.6.10 :\n> >\n> > psql -U db -d db -c \"select version()\";\n> > Password for user db:\n> > version\n> >\n> -----------------------------------------------------------------------------------------------------------\n> > PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> > 20120313 (Red Hat 4.4.7-23), 64-bit\n> > (1 row)\n> >\n> >\n> > and the error is still exist..\n>\n> Did you apply the suggested SELECT .. FOR UPDATE to the problem table?\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nHey,As I said, I'm getting this error for all the objects in a specific db. I cant even connect to the database, I immediatly getting this error.The bug was fixed in 9.6.10 but the db version is 9.6.10 so how can it happen ? The db was installed in that version from the first place and no upgrade was doneבתאריך יום ד׳, 30 בינו׳ 2019 ב-11:14 מאת Alvaro Herrera <[email protected]>:On 2019-Jan-30, Mariel Cherkassky wrote:\n\n> It seems that the version of the db is 9.6.10 :\n> \n> psql -U db -d db -c \"select version()\";\n> Password for user db:\n> version\n> -----------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-23), 64-bit\n> (1 row)\n> \n> \n> and the error is still exist..\n\nDid you apply the suggested SELECT .. FOR UPDATE to the problem table?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 30 Jan 2019 12:35:56 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "dumping the table and then restoring it solved the case for me. select for\nupdate didnt help..\n\nthanks !\n\nבתאריך יום ד׳, 30 בינו׳ 2019 ב-12:35 מאת Mariel Cherkassky <\[email protected]>:\n\n> Hey,\n> As I said, I'm getting this error for all the objects in a specific db. I\n> cant even connect to the database, I immediatly getting this error.\n> The bug was fixed in 9.6.10 but the db version is 9.6.10 so how can it\n> happen ? The db was installed in that version from the first place and *no\n> upgrade was done*\n>\n> בתאריך יום ד׳, 30 בינו׳ 2019 ב-11:14 מאת Alvaro Herrera <\n> [email protected]>:\n>\n>> On 2019-Jan-30, Mariel Cherkassky wrote:\n>>\n>> > It seems that the version of the db is 9.6.10 :\n>> >\n>> > psql -U db -d db -c \"select version()\";\n>> > Password for user db:\n>> > version\n>> >\n>> -----------------------------------------------------------------------------------------------------------\n>> > PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n>> > 20120313 (Red Hat 4.4.7-23), 64-bit\n>> > (1 row)\n>> >\n>> >\n>> > and the error is still exist..\n>>\n>> Did you apply the suggested SELECT .. FOR UPDATE to the problem table?\n>>\n>> --\n>> Álvaro Herrera https://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\n\ndumping the table and then restoring it solved the case for me. select for update didnt help..thanks !בתאריך יום ד׳, 30 בינו׳ 2019 ב-12:35 מאת Mariel Cherkassky <[email protected]>:Hey,As I said, I'm getting this error for all the objects in a specific db. I cant even connect to the database, I immediatly getting this error.The bug was fixed in 9.6.10 but the db version is 9.6.10 so how can it happen ? The db was installed in that version from the first place and no upgrade was doneבתאריך יום ד׳, 30 בינו׳ 2019 ב-11:14 מאת Alvaro Herrera <[email protected]>:On 2019-Jan-30, Mariel Cherkassky wrote:\n\n> It seems that the version of the db is 9.6.10 :\n> \n> psql -U db -d db -c \"select version()\";\n> Password for user db:\n> version\n> -----------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-23), 64-bit\n> (1 row)\n> \n> \n> and the error is still exist..\n\nDid you apply the suggested SELECT .. FOR UPDATE to the problem table?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 4 Feb 2019 18:42:19 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "Hi All,\nApparently the issue appeared again in the same database but on different\ntable . In the last time dumping and restoring the table helped. However, I\ndont understand why another table hit the bug if it was fixed in 9.6.9\nwhile my db version is 9.6.10.\n\nAny idea ?\n\nבתאריך יום ב׳, 4 בפבר׳ 2019 ב-18:42 מאת Mariel Cherkassky <\[email protected]>:\n\n> dumping the table and then restoring it solved the case for me. select for\n> update didnt help..\n>\n> thanks !\n>\n> בתאריך יום ד׳, 30 בינו׳ 2019 ב-12:35 מאת Mariel Cherkassky <\n> [email protected]>:\n>\n>> Hey,\n>> As I said, I'm getting this error for all the objects in a specific db. I\n>> cant even connect to the database, I immediatly getting this error.\n>> The bug was fixed in 9.6.10 but the db version is 9.6.10 so how can it\n>> happen ? The db was installed in that version from the first place and *no\n>> upgrade was done*\n>>\n>> בתאריך יום ד׳, 30 בינו׳ 2019 ב-11:14 מאת Alvaro Herrera <\n>> [email protected]>:\n>>\n>>> On 2019-Jan-30, Mariel Cherkassky wrote:\n>>>\n>>> > It seems that the version of the db is 9.6.10 :\n>>> >\n>>> > psql -U db -d db -c \"select version()\";\n>>> > Password for user db:\n>>> > version\n>>> >\n>>> -----------------------------------------------------------------------------------------------------------\n>>> > PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n>>> > 20120313 (Red Hat 4.4.7-23), 64-bit\n>>> > (1 row)\n>>> >\n>>> >\n>>> > and the error is still exist..\n>>>\n>>> Did you apply the suggested SELECT .. FOR UPDATE to the problem table?\n>>>\n>>> --\n>>> Álvaro Herrera https://www.2ndQuadrant.com/\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>>\n>>\n\nHi All,Apparently the issue appeared again in the same database but on different table . In the last time dumping and restoring the table helped. However, I dont understand why another table hit the bug if it was fixed in 9.6.9 while my db version is 9.6.10.Any idea ?בתאריך יום ב׳, 4 בפבר׳ 2019 ב-18:42 מאת Mariel Cherkassky <[email protected]>:dumping the table and then restoring it solved the case for me. select for update didnt help..thanks !בתאריך יום ד׳, 30 בינו׳ 2019 ב-12:35 מאת Mariel Cherkassky <[email protected]>:Hey,As I said, I'm getting this error for all the objects in a specific db. I cant even connect to the database, I immediatly getting this error.The bug was fixed in 9.6.10 but the db version is 9.6.10 so how can it happen ? The db was installed in that version from the first place and no upgrade was doneבתאריך יום ד׳, 30 בינו׳ 2019 ב-11:14 מאת Alvaro Herrera <[email protected]>:On 2019-Jan-30, Mariel Cherkassky wrote:\n\n> It seems that the version of the db is 9.6.10 :\n> \n> psql -U db -d db -c \"select version()\";\n> Password for user db:\n> version\n> -----------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-23), 64-bit\n> (1 row)\n> \n> \n> and the error is still exist..\n\nDid you apply the suggested SELECT .. FOR UPDATE to the problem table?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 12 Mar 2019 09:58:32 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "On 3/12/19 8:58 AM, Mariel Cherkassky wrote:\n> Apparently the issue appeared again in the same database but on \n> different table . In the last time dumping and restoring the table \n> helped. However, I dont understand why another table hit the bug if it \n> was fixed in 9.6.9 while my db version is 9.6.10.\n\nHello,\n\nCould you provide more details (logs...) and remind how you perform \ndatabase dump/restore?\n\nThis will help community to help you ;)\n\nRegards,\n\n",
"msg_date": "Wed, 13 Mar 2019 13:24:02 +0100",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "Hey,\nThe logs are full of info that I cant share. However, it full of the next\nmessages :\nERROR: found xmin 16804535 from before relfrozenxid 90126924\nCONTEXT: automatic vacuum of table db1.public.table_1\"\n...\n\nWhat I'm trying to understand here is if the bug was fixed or not. In the\nfirst time it appeared the dump and the restore solved the issue. However,\nis happened the second time on a different table. So basically I'm trying\nto understand how to solve it permanently.\n\nthe dump command ; pg_dump -d db -U username -t table_name -f table.sql\nI dropped the old table and restored it :\ndrop table table_name;\npsql -d db -U username -f table.sql\n\nבתאריך יום ד׳, 13 במרץ 2019 ב-14:24 מאת Adrien NAYRAT <\[email protected]>:\n\n> On 3/12/19 8:58 AM, Mariel Cherkassky wrote:\n> > Apparently the issue appeared again in the same database but on\n> > different table . In the last time dumping and restoring the table\n> > helped. However, I dont understand why another table hit the bug if it\n> > was fixed in 9.6.9 while my db version is 9.6.10.\n>\n> Hello,\n>\n> Could you provide more details (logs...) and remind how you perform\n> database dump/restore?\n>\n> This will help community to help you ;)\n>\n> Regards,\n>\n\nHey,The logs are full of info that I cant share. However, it full of the next messages : ERROR: found xmin 16804535 from before relfrozenxid 90126924CONTEXT: automatic vacuum of table db1.public.table_1\"...What I'm trying to understand here is if the bug was fixed or not. In the first time it appeared the dump and the restore solved the issue. However, is happened the second time on a different table. So basically I'm trying to understand how to solve it permanently. the dump command ; pg_dump -d db -U username -t table_name -f table.sqlI dropped the old table and restored it :drop table table_name;psql -d db -U username -f table.sqlבתאריך יום ד׳, 13 במרץ 2019 ב-14:24 מאת Adrien NAYRAT <[email protected]>:On 3/12/19 8:58 AM, Mariel Cherkassky wrote:\n> Apparently the issue appeared again in the same database but on \n> different table . In the last time dumping and restoring the table \n> helped. However, I dont understand why another table hit the bug if it \n> was fixed in 9.6.9 while my db version is 9.6.10.\n\nHello,\n\nCould you provide more details (logs...) and remind how you perform \ndatabase dump/restore?\n\nThis will help community to help you ;)\n\nRegards,",
"msg_date": "Wed, 13 Mar 2019 14:29:18 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "To avoid a dump/restore, use this:\n\npsql -o /dev/null -c \"select * from table for update\" database\n\nUsing the last releases of the major versions solve the bug for me.\n\nBest regards\n\nEm qua, 13 de mar de 2019 às 09:29, Mariel Cherkassky <\[email protected]> escreveu:\n\n> Hey,\n> The logs are full of info that I cant share. However, it full of the next\n> messages :\n> ERROR: found xmin 16804535 from before relfrozenxid 90126924\n> CONTEXT: automatic vacuum of table db1.public.table_1\"\n> ...\n>\n> What I'm trying to understand here is if the bug was fixed or not. In the\n> first time it appeared the dump and the restore solved the issue. However,\n> is happened the second time on a different table. So basically I'm trying\n> to understand how to solve it permanently.\n>\n> the dump command ; pg_dump -d db -U username -t table_name -f table.sql\n> I dropped the old table and restored it :\n> drop table table_name;\n> psql -d db -U username -f table.sql\n>\n> בתאריך יום ד׳, 13 במרץ 2019 ב-14:24 מאת Adrien NAYRAT <\n> [email protected]>:\n>\n>> On 3/12/19 8:58 AM, Mariel Cherkassky wrote:\n>> > Apparently the issue appeared again in the same database but on\n>> > different table . In the last time dumping and restoring the table\n>> > helped. However, I dont understand why another table hit the bug if it\n>> > was fixed in 9.6.9 while my db version is 9.6.10.\n>>\n>> Hello,\n>>\n>> Could you provide more details (logs...) and remind how you perform\n>> database dump/restore?\n>>\n>> This will help community to help you ;)\n>>\n>> Regards,\n>>\n>\n\nTo avoid a dump/restore, use this:psql -o /dev/null -c \"select * from table for update\" database Using the last releases of the major versions solve the bug for me.Best regards Em qua, 13 de mar de 2019 às 09:29, Mariel Cherkassky <[email protected]> escreveu:Hey,The logs are full of info that I cant share. However, it full of the next messages : ERROR: found xmin 16804535 from before relfrozenxid 90126924CONTEXT: automatic vacuum of table db1.public.table_1\"...What I'm trying to understand here is if the bug was fixed or not. In the first time it appeared the dump and the restore solved the issue. However, is happened the second time on a different table. So basically I'm trying to understand how to solve it permanently. the dump command ; pg_dump -d db -U username -t table_name -f table.sqlI dropped the old table and restored it :drop table table_name;psql -d db -U username -f table.sqlבתאריך יום ד׳, 13 במרץ 2019 ב-14:24 מאת Adrien NAYRAT <[email protected]>:On 3/12/19 8:58 AM, Mariel Cherkassky wrote:\n> Apparently the issue appeared again in the same database but on \n> different table . In the last time dumping and restoring the table \n> helped. However, I dont understand why another table hit the bug if it \n> was fixed in 9.6.9 while my db version is 9.6.10.\n\nHello,\n\nCould you provide more details (logs...) and remind how you perform \ndatabase dump/restore?\n\nThis will help community to help you ;)\n\nRegards,",
"msg_date": "Wed, 13 Mar 2019 09:48:14 -0300",
"msg_from": "Alexandre Arruda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
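Since the error was reported for every object in one database, the row-rewrite workaround quoted above can be looped over all ordinary tables; a rough sketch, assuming a database named db, tables in the public schema, and that taking row locks on every table during a quiet window is acceptable:

    #!/usr/bin/env bash
    # Sketch: run the SELECT ... FOR UPDATE rewrite against every ordinary table
    # in the public schema of database "db" (names are placeholders).
    for t in $(psql -U db -d db -At -c "SELECT quote_ident(tablename) FROM pg_tables WHERE schemaname = 'public';"); do
        echo "rewriting $t"
        psql -U db -d db -o /dev/null -c "SELECT * FROM $t FOR UPDATE;"
    done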
{
"msg_contents": "Hey,\nThe query was the first thing that I tried, it didnt solve the issue.\nGuess I'll update to the latest version.\n\nבתאריך יום ד׳, 13 במרץ 2019 ב-14:48 מאת Alexandre Arruda <\[email protected]>:\n\n> To avoid a dump/restore, use this:\n>\n> psql -o /dev/null -c \"select * from table for update\" database\n>\n> Using the last releases of the major versions solve the bug for me.\n>\n> Best regards\n>\n> Em qua, 13 de mar de 2019 às 09:29, Mariel Cherkassky <\n> [email protected]> escreveu:\n>\n>> Hey,\n>> The logs are full of info that I cant share. However, it full of the next\n>> messages :\n>> ERROR: found xmin 16804535 from before relfrozenxid 90126924\n>> CONTEXT: automatic vacuum of table db1.public.table_1\"\n>> ...\n>>\n>> What I'm trying to understand here is if the bug was fixed or not. In the\n>> first time it appeared the dump and the restore solved the issue. However,\n>> is happened the second time on a different table. So basically I'm trying\n>> to understand how to solve it permanently.\n>>\n>> the dump command ; pg_dump -d db -U username -t table_name -f table.sql\n>> I dropped the old table and restored it :\n>> drop table table_name;\n>> psql -d db -U username -f table.sql\n>>\n>> בתאריך יום ד׳, 13 במרץ 2019 ב-14:24 מאת Adrien NAYRAT <\n>> [email protected]>:\n>>\n>>> On 3/12/19 8:58 AM, Mariel Cherkassky wrote:\n>>> > Apparently the issue appeared again in the same database but on\n>>> > different table . In the last time dumping and restoring the table\n>>> > helped. However, I dont understand why another table hit the bug if it\n>>> > was fixed in 9.6.9 while my db version is 9.6.10.\n>>>\n>>> Hello,\n>>>\n>>> Could you provide more details (logs...) and remind how you perform\n>>> database dump/restore?\n>>>\n>>> This will help community to help you ;)\n>>>\n>>> Regards,\n>>>\n>>\n\nHey,The query was the first thing that I tried, it didnt solve the issue.Guess I'll update to the latest version.בתאריך יום ד׳, 13 במרץ 2019 ב-14:48 מאת Alexandre Arruda <[email protected]>:To avoid a dump/restore, use this:psql -o /dev/null -c \"select * from table for update\" database Using the last releases of the major versions solve the bug for me.Best regards Em qua, 13 de mar de 2019 às 09:29, Mariel Cherkassky <[email protected]> escreveu:Hey,The logs are full of info that I cant share. However, it full of the next messages : ERROR: found xmin 16804535 from before relfrozenxid 90126924CONTEXT: automatic vacuum of table db1.public.table_1\"...What I'm trying to understand here is if the bug was fixed or not. In the first time it appeared the dump and the restore solved the issue. However, is happened the second time on a different table. So basically I'm trying to understand how to solve it permanently. the dump command ; pg_dump -d db -U username -t table_name -f table.sqlI dropped the old table and restored it :drop table table_name;psql -d db -U username -f table.sqlבתאריך יום ד׳, 13 במרץ 2019 ב-14:24 מאת Adrien NAYRAT <[email protected]>:On 3/12/19 8:58 AM, Mariel Cherkassky wrote:\n> Apparently the issue appeared again in the same database but on \n> different table . In the last time dumping and restoring the table \n> helped. However, I dont understand why another table hit the bug if it \n> was fixed in 9.6.9 while my db version is 9.6.10.\n\nHello,\n\nCould you provide more details (logs...) and remind how you perform \ndatabase dump/restore?\n\nThis will help community to help you ;)\n\nRegards,",
"msg_date": "Wed, 13 Mar 2019 14:59:23 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
},
{
"msg_contents": "On 3/13/19 1:59 PM, Mariel Cherkassky wrote:\n> Hey,\n> The query was the first thing that I tried, it didnt solve the issue.\n> Guess I'll update to the latest version.\n\nI read releases notes and I don't find any item that could be related to \nthe error you encounter. It could be either another bug in postgres or a \nstorage corruption.\n\nDid you enable checksum when your have restored your database? (In order \nto exclude possible storage corruption).\n\nYou don't have other error messages in logs?\n\n",
"msg_date": "Wed, 13 Mar 2019 16:30:49 +0100",
"msg_from": "Adrien NAYRAT <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: found xmin from before relfrozenxid"
}
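A quick way to answer the checksum question and look for other signs of trouble; a sketch only, since the log location depends on the installation (the path below is an assumption for a Red Hat style install):

    # "on" only if the cluster was created with initdb --data-checksums
    psql -U db -d db -c "SHOW data_checksums;"
    # Scan the server log for other corruption-like messages (log path is an assumption)
    grep -iE "invalid page|checksum|could not read block" /var/lib/pgsql/9.6/data/pg_log/*.log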
] |
[
{
"msg_contents": "Hi,\n\nPlease pardon me if this question is already answered in the documentation,\nWiki, or the mailing list archive. The problem is, that I don't know the\nexact term to search for - I've tried searching for \"linear scalability\"\nand \"concurrency vs performance\" but didn't find what I was looking for.\n\n## MAIN QUESTION\n\npgbench -c 1 achieves approx 80 TPS\npgbench -c 6 should achieve approx 480 TPS, but only achieves 360 TPS\npgbench -c 12, should achieve approx 960 TPS, but only achieves 610 TPS\n\nIf pgbench is being run on a 4c/8t machine and pg-server is being run on a\n6c/12t machine with 32GB RAM [1], and the two servers are connected with 1\nGbit/s connection, I don't think either pgbench or pg-server is being\nconstrained by hardware, right?\n\n*In that case why is it not possible to achieve linear scalability, at\nleast till 12 concurrent connections (i.e. the thread-count of pg-server)?*\nWhat is an easy way to identify the limiting factor? Is it network\nconnectivity? Disk IOPS? CPU load? Some config parameter?\n\n## SECONDARY QUESTION\n\n*At what level of concurrent connections should settings like\nshared_buffers, effective_cache_size, max_wal_size start making a\ndifference?* With my hardware [1], I'm seeing a difference only after 48\nconcurrent connections. And that too it's just a 15-30% improvement over\nthe default settings that ship with the Ubuntu 18.04 package. Is this\nexpected? Isn't this allocating too many resources for too little gain?\n\n## CONTEXT\n\nI am currently trying to benchmark PG 11 (via pgbench) to figure out the\nconfiguration parameters that deliver optimum performance for my hardware\n[1] and workload [2]\n\nBased on https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\nI've made the following relevant changes to the default PG config on Ubuntu\n18.04:\n\n max_connection=400\n work_mem=4MB\n maintenance_work_mem=64MB\n shared_buffers=12288MB\n temp_buffers=8MB\n effective_cache_size=16GB\n wal_buffers=-1\n wal_sync_method=fsync\n max_wal_size=5GB\n autovacuum=off # NOTE: Only for benchmarking\n\n[1] 32 GB RAM - 6 core/12 thread - 2x SSD in RAID1\n[2] SaaS webapp -- it's a mixed workload which looks a lot like TPC-B\n\nThanks,\nSaurabh.\n\nHi,Please pardon me if this question is already answered in the documentation, Wiki, or the mailing list archive. The problem is, that I don't know the exact term to search for - I've tried searching for \"linear scalability\" and \"concurrency vs performance\" but didn't find what I was looking for.## MAIN QUESTIONpgbench -c 1 achieves approx 80 TPSpgbench -c 6 should achieve approx 480 TPS, but only achieves 360 TPSpgbench -c 12, should achieve approx 960 TPS, but only achieves 610 TPSIf pgbench is being run on a 4c/8t machine and pg-server is being run on a 6c/12t machine with 32GB RAM [1], and the two servers are connected with 1 Gbit/s connection, I don't think either pgbench or pg-server is being constrained by hardware, right?In that case why is it not possible to achieve linear scalability, at least till 12 concurrent connections (i.e. the thread-count of pg-server)? What is an easy way to identify the limiting factor? Is it network connectivity? Disk IOPS? CPU load? Some config parameter?## SECONDARY QUESTIONAt what level of concurrent connections should settings like shared_buffers, effective_cache_size, max_wal_size start making a difference? With my hardware [1], I'm seeing a difference only after 48 concurrent connections. 
And that too it's just a 15-30% improvement over the default settings that ship with the Ubuntu 18.04 package. Is this expected? Isn't this allocating too many resources for too little gain?## CONTEXTI am currently trying to benchmark PG 11 (via pgbench) to figure out the configuration parameters that deliver optimum performance for my hardware [1] and workload [2]Based on https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server I've made the following relevant changes to the default PG config on Ubuntu 18.04: max_connection=400 work_mem=4MB maintenance_work_mem=64MB shared_buffers=12288MB temp_buffers=8MB effective_cache_size=16GB wal_buffers=-1 wal_sync_method=fsync max_wal_size=5GB autovacuum=off # NOTE: Only for benchmarking[1] 32 GB RAM - 6 core/12 thread - 2x SSD in RAID1[2] SaaS webapp -- it's a mixed workload which looks a lot like TPC-BThanks,Saurabh.",
"msg_date": "Thu, 24 Jan 2019 00:46:06 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
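For readers who want to reproduce this kind of test, a minimal sketch of a client-count sweep (scale factor, host name, database name and run length are illustrative assumptions, not the exact invocation used in this thread):

    # One-time initialisation of a pgbench database (scale 100 is an assumption)
    pgbench -i -s 100 -h pg-server -U postgres bench
    # Sweep client counts; keep -j (threads) roughly equal to -c, up to the client box's thread count
    for c in 1 6 12 24 48; do
        pgbench -c "$c" -j "$c" -T 300 -P 10 -h pg-server -U postgres bench
    done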
{
"msg_contents": "Is there any material on how to benchmark Postgres meaningfully? I'm\ngetting very frustrated with the numbers that `pgbench` is reporting:\n\n-- allocating more resources to Postgres seems to be randomly dropping\nperformance\n-- there seems to be no repeatability in the benchmarking numbers [1]\n-- there is no to figure out what is causing a bottleneck and which\nknob/setting is going to alleviate it.\n\nHow do the PG wizards figure all this out?\n\n[1]\nhttps://dba.stackexchange.com/questions/227790/pgbench-20-30-variation-in-benchmark-results-non-repeatable-benchmarks\n\n-- Saurabh.\n\nOn Thu, Jan 24, 2019 at 12:46 AM Saurabh Nanda <[email protected]>\nwrote:\n\n> Hi,\n>\n> Please pardon me if this question is already answered in the\n> documentation, Wiki, or the mailing list archive. The problem is, that I\n> don't know the exact term to search for - I've tried searching for \"linear\n> scalability\" and \"concurrency vs performance\" but didn't find what I was\n> looking for.\n>\n> ## MAIN QUESTION\n>\n> pgbench -c 1 achieves approx 80 TPS\n> pgbench -c 6 should achieve approx 480 TPS, but only achieves 360 TPS\n> pgbench -c 12, should achieve approx 960 TPS, but only achieves 610 TPS\n>\n> If pgbench is being run on a 4c/8t machine and pg-server is being run on a\n> 6c/12t machine with 32GB RAM [1], and the two servers are connected with 1\n> Gbit/s connection, I don't think either pgbench or pg-server is being\n> constrained by hardware, right?\n>\n> *In that case why is it not possible to achieve linear scalability, at\n> least till 12 concurrent connections (i.e. the thread-count of pg-server)?*\n> What is an easy way to identify the limiting factor? Is it network\n> connectivity? Disk IOPS? CPU load? Some config parameter?\n>\n> ## SECONDARY QUESTION\n>\n> *At what level of concurrent connections should settings like\n> shared_buffers, effective_cache_size, max_wal_size start making a\n> difference?* With my hardware [1], I'm seeing a difference only after 48\n> concurrent connections. And that too it's just a 15-30% improvement over\n> the default settings that ship with the Ubuntu 18.04 package. Is this\n> expected? Isn't this allocating too many resources for too little gain?\n>\n> ## CONTEXT\n>\n> I am currently trying to benchmark PG 11 (via pgbench) to figure out the\n> configuration parameters that deliver optimum performance for my hardware\n> [1] and workload [2]\n>\n> Based on https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n> I've made the following relevant changes to the default PG config on Ubuntu\n> 18.04:\n>\n> max_connection=400\n> work_mem=4MB\n> maintenance_work_mem=64MB\n> shared_buffers=12288MB\n> temp_buffers=8MB\n> effective_cache_size=16GB\n> wal_buffers=-1\n> wal_sync_method=fsync\n> max_wal_size=5GB\n> autovacuum=off # NOTE: Only for benchmarking\n>\n> [1] 32 GB RAM - 6 core/12 thread - 2x SSD in RAID1\n> [2] SaaS webapp -- it's a mixed workload which looks a lot like TPC-B\n>\n> Thanks,\n> Saurabh.\n>\n\n\n-- \nhttp://www.saurabhnanda.com\n\nIs there any material on how to benchmark Postgres meaningfully? 
I'm getting very frustrated with the numbers that `pgbench` is reporting:-- allocating more resources to Postgres seems to be randomly dropping performance-- there seems to be no repeatability in the benchmarking numbers [1]-- there is no to figure out what is causing a bottleneck and which knob/setting is going to alleviate it.How do the PG wizards figure all this out?[1] https://dba.stackexchange.com/questions/227790/pgbench-20-30-variation-in-benchmark-results-non-repeatable-benchmarks-- Saurabh.On Thu, Jan 24, 2019 at 12:46 AM Saurabh Nanda <[email protected]> wrote:Hi,Please pardon me if this question is already answered in the documentation, Wiki, or the mailing list archive. The problem is, that I don't know the exact term to search for - I've tried searching for \"linear scalability\" and \"concurrency vs performance\" but didn't find what I was looking for.## MAIN QUESTIONpgbench -c 1 achieves approx 80 TPSpgbench -c 6 should achieve approx 480 TPS, but only achieves 360 TPSpgbench -c 12, should achieve approx 960 TPS, but only achieves 610 TPSIf pgbench is being run on a 4c/8t machine and pg-server is being run on a 6c/12t machine with 32GB RAM [1], and the two servers are connected with 1 Gbit/s connection, I don't think either pgbench or pg-server is being constrained by hardware, right?In that case why is it not possible to achieve linear scalability, at least till 12 concurrent connections (i.e. the thread-count of pg-server)? What is an easy way to identify the limiting factor? Is it network connectivity? Disk IOPS? CPU load? Some config parameter?## SECONDARY QUESTIONAt what level of concurrent connections should settings like shared_buffers, effective_cache_size, max_wal_size start making a difference? With my hardware [1], I'm seeing a difference only after 48 concurrent connections. And that too it's just a 15-30% improvement over the default settings that ship with the Ubuntu 18.04 package. Is this expected? Isn't this allocating too many resources for too little gain?## CONTEXTI am currently trying to benchmark PG 11 (via pgbench) to figure out the configuration parameters that deliver optimum performance for my hardware [1] and workload [2]Based on https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server I've made the following relevant changes to the default PG config on Ubuntu 18.04: max_connection=400 work_mem=4MB maintenance_work_mem=64MB shared_buffers=12288MB temp_buffers=8MB effective_cache_size=16GB wal_buffers=-1 wal_sync_method=fsync max_wal_size=5GB autovacuum=off # NOTE: Only for benchmarking[1] 32 GB RAM - 6 core/12 thread - 2x SSD in RAID1[2] SaaS webapp -- it's a mixed workload which looks a lot like TPC-BThanks,Saurabh.\n-- http://www.saurabhnanda.com",
"msg_date": "Fri, 25 Jan 2019 12:47:31 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": "Hi Jeff,\n\nThank you for replying.\n\n\n\n> wal_sync_method=fsync\n>\n>\nWhy this change?\n\n\nActually, I re-checked and noticed that this config section was left to\nit's default values, which is the following. Since the commented line said\n`wal_sync_method = fsync`, I _assumed_ that's the default value. But it\nseems that Linux uses fdatasync, by default, and the output of\npg_test_fsync also shows that it is /probably/ the fastest method on my\nhardware.\n\n # wal_sync_method = fsync\n # the default is the first option supported by the operating system:\n # open_datasync\n # fdatasync (default on Linux)\n # fsync\n # fsync_writethrough\n # open_sync\n\n\nPGOPTIONS=\"-c synchronous_commit=off\" pgbench -T 3600 -P 10 ....\n\n\nI am currently running all my benchmarks with synchronous_commit=off and\nwill get back with my findings.\n\nYou could also try pg_test_fsync to get low-level information, to\n> supplement the high level you get from pgbench.\n\n\nThanks for pointing me to this tool. never knew pg_test_fsync existed! I've\nrun `pg_test_fsync -s 60` two times and this is the output -\nhttps://gist.github.com/saurabhnanda/b60e8cf69032b570c5b554eb50df64f8 I'm\nnot sure what to make of it? Shall I tweak the setting of `wal_sync_method`\nto something other than the default value?\n\nThe effects of max_wal_size are going to depend on how you have IO\n> configured, for example does pg_wal shared the same devices and controllers\n> as the base data? It is mostly about controlling disk usage and\n> crash-recovery performance, neither of which is of primary importance to\n> pgbench performance.\n\n\n The WAL and the data-directory reside on the same SSD disk -- is this a\nbad idea? I was under the impression that smaller values for max_wal_size\ncause pg-server to do \"maintenance work\" related to wal rotation, etc. more\nfrequently and would lead to lower pgbench performance.\n\nNot all SSD are created equal, so the details here matter, both for the\n> underlying drives and the raid controller.\n\n\nHere's the relevant output from lshw --\nhttps://gist.github.com/saurabhnanda/d7107d4ab1bb48e94e0a5e3ef96e7260 It\nseems I have Micron SSDs. I tried finding more information on RAID but\ncouldn't get anything in the lshw or lspci output except the following --\n`SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI\nmode] (rev 31)`. Moreover, the devices are showing up as /dev/md1,\n/dev/md2, etc. So, if my understanding is correct, I don't think I'm on\nhardware RAID, but software RAID, right?\n\nThese machines are from the EX-line of dedicated servers provided by\nHetzner, btw.\n\nPS: Cc-ing the list back again because I assume you didn't intend for your\nreply to be private, right?\n\n-- Saurabh.\n\nHi Jeff,Thank you for replying. wal_sync_method=fsyncWhy this change? Actually, I re-checked and noticed that this config section was left to it's default values, which is the following. Since the commented line said `wal_sync_method = fsync`, I _assumed_ that's the default value. But it seems that Linux uses fdatasync, by default, and the output of pg_test_fsync also shows that it is /probably/ the fastest method on my hardware. 
# wal_sync_method = fsync # the default is the first option supported by the operating system: # open_datasync # fdatasync (default on Linux) # fsync # fsync_writethrough # open_syncPGOPTIONS=\"-c synchronous_commit=off\" pgbench -T 3600 -P 10 ....I am currently running all my benchmarks with synchronous_commit=off and will get back with my findings. You could also try pg_test_fsync to get low-level information, to supplement the high level you get from pgbench.Thanks for pointing me to this tool. never knew pg_test_fsync existed! I've run `pg_test_fsync -s 60` two times and this is the output - https://gist.github.com/saurabhnanda/b60e8cf69032b570c5b554eb50df64f8 I'm not sure what to make of it? Shall I tweak the setting of `wal_sync_method` to something other than the default value?The effects of max_wal_size are going to depend on how you have IO configured, for example does pg_wal shared the same devices and controllers as the base data? It is mostly about controlling disk usage and crash-recovery performance, neither of which is of primary importance to pgbench performance. The WAL and the data-directory reside on the same SSD disk -- is this a bad idea? I was under the impression that smaller values for max_wal_size cause pg-server to do \"maintenance work\" related to wal rotation, etc. more frequently and would lead to lower pgbench performance.Not all SSD are created equal, so the details here matter, both for the underlying drives and the raid controller.Here's the relevant output from lshw -- https://gist.github.com/saurabhnanda/d7107d4ab1bb48e94e0a5e3ef96e7260 It seems I have Micron SSDs. I tried finding more information on RAID but couldn't get anything in the lshw or lspci output except the following -- `SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)`. Moreover, the devices are showing up as /dev/md1, /dev/md2, etc. So, if my understanding is correct, I don't think I'm on hardware RAID, but software RAID, right?These machines are from the EX-line of dedicated servers provided by Hetzner, btw.PS: Cc-ing the list back again because I assume you didn't intend for your reply to be private, right?-- Saurabh.",
"msg_date": "Sat, 26 Jan 2019 16:40:55 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
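A short sketch of the two checks discussed above; the data directory path is an assumption for a default Ubuntu/PGDG layout:

    # What the server is actually using for WAL flushes
    psql -h pg-server -U postgres -c "SHOW wal_sync_method;"
    # Low-level sync timings on the filesystem that holds pg_wal (10 seconds per test)
    pg_test_fsync -s 10 -f /var/lib/postgresql/11/main/pg_wal/pg_test_fsync.tmp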
{
"msg_contents": ">\n>\n> PGOPTIONS=\"-c synchronous_commit=off\" pgbench -T 3600 -P 10 ....\n>\n>\n> I am currently running all my benchmarks with synchronous_commit=off and\n> will get back with my findings.\n>\n\n\nIt seems that PGOPTIONS=\"-c synchronous_commit=off\" has a significant\nimpact. However, I still can not understand why the TPS for the optimised\ncase is LOWER than the default for higher concurrency levels!\n\n+--------+---------------------+------------------------+\n| client | Mostly defaults [1] | Optimised settings [2] |\n+--------+---------------------+------------------------+\n| 1 | 80-86 | 169-180 |\n+--------+---------------------+------------------------+\n| 6 | 350-376 | 1265-1397 |\n+--------+---------------------+------------------------+\n| 12 | 603-619 | 1746-2352 |\n+--------+---------------------+------------------------+\n| 24 | 947-1015 | 1869-2518 |\n+--------+---------------------+------------------------+\n| 48 | 1435-1512 | 1912-2818 |\n+--------+---------------------+------------------------+\n| 96 | 1769-1811 | 1546-1753 |\n+--------+---------------------+------------------------+\n| 192 | 1857-1992 | 1332-1508 |\n+--------+---------------------+------------------------+\n| 384 | 1667-1793 | 1356-1450 |\n+--------+---------------------+------------------------+\n\n\n[1] \"Mostly default\" settings are whatever ships with Ubuntu 18.04 + PG 11.\nA snippet of the relevant setts are given below:\n\n max_connection=400\n work_mem=4MB\n maintenance_work_mem=64MB\n shared_buffers=128MB\n temp_buffers=8MB\n effective_cache_size=4GB\n wal_buffers=-1\n wal_sync_method=fsync\n max_wal_size=1GB\n* autovacuum=off # Auto-vacuuming was disabled*\n\n\n[2] An optimised version of settings was obtained from\nhttps://pgtune.leopard.in.ua/#/ and along with that the benchmarks were\nrun with *PGOPTIONS=\"-c synchronous_commit=off\"*\n\n max_connections = 400\n shared_buffers = 8GB\n effective_cache_size = 24GB\n maintenance_work_mem = 2GB\n checkpoint_completion_target = 0.7\n wal_buffers = 16MB\n default_statistics_target = 100\n random_page_cost = 1.1\n effective_io_concurrency = 200\n work_mem = 3495kB\n min_wal_size = 1GB\n max_wal_size = 2GB\n max_worker_processes = 12\n max_parallel_workers_per_gather = 6\n max_parallel_workers = 12\n* autovacuum=off # Auto-vacuuming was disabled*\n\nPGOPTIONS=\"-c synchronous_commit=off\" pgbench -T 3600 -P 10 ....I am currently running all my benchmarks with synchronous_commit=off and will get back with my findings. It seems that PGOPTIONS=\"-c synchronous_commit=off\" has a significant impact. 
However, I still can not understand why the TPS for the optimised case is LOWER than the default for higher concurrency levels!+--------+---------------------+------------------------+\n| client | Mostly defaults [1] | Optimised settings [2] |\n+--------+---------------------+------------------------+\n| 1 | 80-86 | 169-180 |\n+--------+---------------------+------------------------+\n| 6 | 350-376 | 1265-1397 |\n+--------+---------------------+------------------------+\n| 12 | 603-619 | 1746-2352 |\n+--------+---------------------+------------------------+\n| 24 | 947-1015 | 1869-2518 |\n+--------+---------------------+------------------------+\n| 48 | 1435-1512 | 1912-2818 |\n+--------+---------------------+------------------------+\n| 96 | 1769-1811 | 1546-1753 |\n+--------+---------------------+------------------------+\n| 192 | 1857-1992 | 1332-1508 |\n+--------+---------------------+------------------------+\n| 384 | 1667-1793 | 1356-1450 |\n+--------+---------------------+------------------------+[1] \"Mostly default\" settings are whatever ships with Ubuntu 18.04 + PG 11. A snippet of the relevant setts are given below: max_connection=400 work_mem=4MB maintenance_work_mem=64MB shared_buffers=128MB temp_buffers=8MB effective_cache_size=4GB wal_buffers=-1 wal_sync_method=fsync max_wal_size=1GB autovacuum=off # Auto-vacuuming was disabled[2] An optimised version of settings was obtained from https://pgtune.leopard.in.ua/#/ and along with that the benchmarks were run with PGOPTIONS=\"-c synchronous_commit=off\" max_connections = 400 shared_buffers = 8GB effective_cache_size = 24GB maintenance_work_mem = 2GB checkpoint_completion_target = 0.7 wal_buffers = 16MB default_statistics_target = 100 random_page_cost = 1.1 effective_io_concurrency = 200 work_mem = 3495kB min_wal_size = 1GB max_wal_size = 2GB max_worker_processes = 12 max_parallel_workers_per_gather = 6 max_parallel_workers = 12 autovacuum=off # Auto-vacuuming was disabled",
"msg_date": "Sun, 27 Jan 2019 13:09:16 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
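For clarity, a sketch of the two ways synchronous_commit can be toggled for runs like the ones above (connection details are placeholders):

    # Per-session, affecting only the pgbench connections:
    PGOPTIONS="-c synchronous_commit=off" pgbench -c 12 -j 12 -T 300 -P 10 -h pg-server -U postgres bench
    # Or cluster-wide, written to postgresql.auto.conf until reset:
    psql -h pg-server -U postgres -c "ALTER SYSTEM SET synchronous_commit = off;"
    psql -h pg-server -U postgres -c "SELECT pg_reload_conf();"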
{
"msg_contents": "On Sun, Jan 27, 2019 at 01:09:16PM +0530, Saurabh Nanda wrote:\n> It seems that PGOPTIONS=\"-c synchronous_commit=off\" has a significant\n> impact. However, I still can not understand why the TPS for the optimised\n> case is LOWER than the default for higher concurrency levels!\n\nDo you know which of the settings is causing lower TPS ?\n\nI suggest to check shared_buffers.\n\nIf you haven't done it, disabling THP and KSM can resolve performance issues,\nesp. with large RAM like shared_buffers, at least with older kernels.\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\nJustin\n\n",
"msg_date": "Sun, 27 Jan 2019 10:04:06 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
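For reference, THP and KSM can be inspected and switched off at runtime through sysfs; a sketch (standard paths, and the changes do not survive a reboot unless made persistent):

    # Transparent Huge Pages: show the current mode, then disable it
    cat /sys/kernel/mm/transparent_hugepage/enabled
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
    # Kernel Samepage Merging: 1 = running, 0 = stopped
    cat /sys/kernel/mm/ksm/run
    echo 0 | sudo tee /sys/kernel/mm/ksm/run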
{
"msg_contents": "> Do you know which of the settings is causing lower TPS ?\n\n\n> I suggest to check shared_buffers.\n>\n\nI'm trying to find this, but it's taking a lot of time in re-running the\nbenchmarks changing one config setting at a time. Thanks for the tip\nrelated to shared_buffers.\n\n\n>\n> If you haven't done it, disabling THP and KSM can resolve performance\n> issues,\n> esp. with large RAM like shared_buffers, at least with older kernels.\n>\n> https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\n\nIs this a well-known performance \"hack\"? Is there any reason why it is not\nmentioned at https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n? Are the stability implications of fiddling with THP and KSM well-known?\nAlso, wrt KSM, my understand was that when a process forks the process'\nmemory is anyways \"copy on write\", right? What other kind of pages would\nend-up being de-duplicated by ksmd? (Caveat: This is the first time I'm\nhearing about KSM and my knowledge is based off a single reading of\nhttps://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html )\n\n-- Saurabh.\n\n \nDo you know which of the settings is causing lower TPS ? \n\nI suggest to check shared_buffers.I'm trying to find this, but it's taking a lot of time in re-running the benchmarks changing one config setting at a time. Thanks for the tip related to shared_buffers. \n\nIf you haven't done it, disabling THP and KSM can resolve performance issues,\nesp. with large RAM like shared_buffers, at least with older kernels.\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.comIs this a well-known performance \"hack\"? Is there any reason why it is not mentioned at https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ? Are the stability implications of fiddling with THP and KSM well-known? Also, wrt KSM, my understand was that when a process forks the process' memory is anyways \"copy on write\", right? What other kind of pages would end-up being de-duplicated by ksmd? (Caveat: This is the first time I'm hearing about KSM and my knowledge is based off a single reading of https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html )-- Saurabh.",
"msg_date": "Sun, 27 Jan 2019 22:08:55 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": ">\n>\n> You could also try pg_test_fsync to get low-level information, to\n>> supplement the high level you get from pgbench.\n>\n>\n> Thanks for pointing me to this tool. never knew pg_test_fsync existed!\n> I've run `pg_test_fsync -s 60` two times and this is the output -\n> https://gist.github.com/saurabhnanda/b60e8cf69032b570c5b554eb50df64f8 I'm\n> not sure what to make of it?\n>\n\nI don't know what to make of that either. I'd expect fdatasync using two\n8kB writes to be about the same throughput as using one 8kB write, but\ninstead it is 4 times slower. Also, I'd expect open_datasync to get slower\nby a factor of 2, not a factor of 8, when going from one to two 8kB writes\n(that is not directly relevant, as you aren't using open_datasync, but is\ncurious nonetheless). Is this reproducible with different run lengths? I\nwonder if your write cache (or something) gets \"tired\" during the first\npart of pg_test_fsync and thus degrades the subsequent parts of the test.\nI would say something in your IO stack is not optimal, maybe some component\nis \"consumer grade\" rather than \"server grade\". Maybe you can ask Hetzner\nabout that.\n\n\n> The effects of max_wal_size are going to depend on how you have IO\n>> configured, for example does pg_wal shared the same devices and controllers\n>> as the base data? It is mostly about controlling disk usage and\n>> crash-recovery performance, neither of which is of primary importance to\n>> pgbench performance.\n>\n>\n> The WAL and the data-directory reside on the same SSD disk -- is this a\n> bad idea?\n>\n\nIf you are trying to squeeze out every last bit of performance, then I\nthink it is bad idea. Or at least, something to try the alternative and\nsee. The flushing that occurs during checkpoints and the flushing that\noccurs for every commit can interfere with each other.\n\n\n> I was under the impression that smaller values for max_wal_size cause\n> pg-server to do \"maintenance work\" related to wal rotation, etc. more\n> frequently and would lead to lower pgbench performance.\n>\n\nIf you choose ridiculously small values it would. But once the value is\nsufficient, increasing it further wouldn't do much. Given your low level\nof throughput, I would think the default is already sufficient.\n\nThanks for including the storage info. Nothing about it stands out to me\nas either good or bad, but I'm not a hardware maven; hopefully one will be\nreading along and speak up.\n\n\n> PS: Cc-ing the list back again because I assume you didn't intend for your\n> reply to be private, right?\n>\n\nYes, I had intended to include the list but hit the wrong button, sorry.\n\nCheers,\n\nJeff\n\n>\n\nYou could also try pg_test_fsync to get low-level information, to supplement the high level you get from pgbench.Thanks for pointing me to this tool. never knew pg_test_fsync existed! I've run `pg_test_fsync -s 60` two times and this is the output - https://gist.github.com/saurabhnanda/b60e8cf69032b570c5b554eb50df64f8 I'm not sure what to make of it? I don't know what to make of that either. I'd expect fdatasync using two 8kB writes to be about the same throughput as using one 8kB write, but instead it is 4 times slower. Also, I'd expect open_datasync to get slower by a factor of 2, not a factor of 8, when going from one to two 8kB writes (that is not directly relevant, as you aren't using open_datasync, but is curious nonetheless). Is this reproducible with different run lengths? 
I wonder if your write cache (or something) gets \"tired\" during the first part of pg_test_fsync and thus degrades the subsequent parts of the test. I would say something in your IO stack is not optimal, maybe some component is \"consumer grade\" rather than \"server grade\". Maybe you can ask Hetzner about that.The effects of max_wal_size are going to depend on how you have IO configured, for example does pg_wal shared the same devices and controllers as the base data? It is mostly about controlling disk usage and crash-recovery performance, neither of which is of primary importance to pgbench performance. The WAL and the data-directory reside on the same SSD disk -- is this a bad idea? If you are trying to squeeze out every last bit of performance, then I think it is bad idea. Or at least, something to try the alternative and see. The flushing that occurs during checkpoints and the flushing that occurs for every commit can interfere with each other. I was under the impression that smaller values for max_wal_size cause pg-server to do \"maintenance work\" related to wal rotation, etc. more frequently and would lead to lower pgbench performance.If you choose ridiculously small values it would. But once the value is sufficient, increasing it further wouldn't do much. Given your low level of throughput, I would think the default is already sufficient. Thanks for including the storage info. Nothing about it stands out to me as either good or bad, but I'm not a hardware maven; hopefully one will be reading along and speak up.PS: Cc-ing the list back again because I assume you didn't intend for your reply to be private, right?Yes, I had intended to include the list but hit the wrong button, sorry.Cheers,Jeff",
"msg_date": "Sun, 27 Jan 2019 12:48:12 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
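If separating WAL from the data files is worth an experiment, one rough sketch (the data directory and the second SSD's mount point are assumptions; the server must be stopped first, and a backup is advisable):

    sudo systemctl stop postgresql
    # Move pg_wal to a filesystem on the other SSD and symlink it back
    sudo mv /var/lib/postgresql/11/main/pg_wal /mnt/ssd2/pg_wal
    sudo ln -s /mnt/ssd2/pg_wal /var/lib/postgresql/11/main/pg_wal
    sudo chown -h postgres:postgres /var/lib/postgresql/11/main/pg_wal
    sudo systemctl start postgresql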
{
"msg_contents": "On Sun, Jan 27, 2019 at 2:39 AM Saurabh Nanda <[email protected]>\nwrote:\n\n>\n>> PGOPTIONS=\"-c synchronous_commit=off\" pgbench -T 3600 -P 10 ....\n>>\n>>\n>> I am currently running all my benchmarks with synchronous_commit=off and\n>> will get back with my findings.\n>>\n>\n>\n> It seems that PGOPTIONS=\"-c synchronous_commit=off\" has a significant\n> impact.\n>\n\nIt is usually not acceptable to run applications with\nsynchronous_commit=off, so once you have identified that the bottleneck is\nin implementing synchronous_commit=on, you probably need to take a deep\ndive into your hardware to figure out why it isn't performing the way you\nneed/want/expect it to. Tuning the server under synchronous_commit=off\nwhen you don't intend to run your production server with that setting is\nunlikely to be fruitful.\n\n\n> However, I still can not understand why the TPS for the optimised case is\nLOWER than the default for higher concurrency levels!\n\nIn case you do intend to run with synchronous_commit=off, or if you are\njust curious: running with a very high number of active connections often\nreveals subtle bottlenecks and interactions, and is very dependent on your\nhardware. Unless you actually intend to run our server with\nsynchronous_commit=off and with a large number of active connections, it is\nprobably not worth investigating this. You can make a hobby of it, of\ncourse, but it is a time consuming hobby to have. If you do want to, I\nthink you should start out with your optimized settings and revert them one\nat a time to find the one the caused the performance regression.\n\nI'm more interested in the low end, you should do much better than those\nreported numbers when clients=1 and synchronous_commit=off with the data on\nSSD. I think you said that pgbench is running on a different machine than\nthe database, so perhaps it is just network overhead that is keeping this\nvalue down. What happens if you run them on the same machine?\n\n> +--------+---------------------+------------------------+\n> > | client | Mostly defaults [1] | Optimised settings [2] |\n> > +--------+---------------------+------------------------+\n> > | 1 | 80-86 | 169-180 |\n> > +--------+---------------------+------------------------+\n>\n\nCheers,\n\nJeff\n\nOn Sun, Jan 27, 2019 at 2:39 AM Saurabh Nanda <[email protected]> wrote:PGOPTIONS=\"-c synchronous_commit=off\" pgbench -T 3600 -P 10 ....I am currently running all my benchmarks with synchronous_commit=off and will get back with my findings. It seems that PGOPTIONS=\"-c synchronous_commit=off\" has a significant impact.It is usually not acceptable to run applications with synchronous_commit=off, so once you have identified that the bottleneck is in implementing synchronous_commit=on, you probably need to take a deep dive into your hardware to figure out why it isn't performing the way you need/want/expect it to. Tuning the server under synchronous_commit=off when you don't intend to run your production server with that setting is unlikely to be fruitful.> However, I still can not understand why the TPS for the optimised case is LOWER than the default for higher concurrency levels! In case you do intend to run with synchronous_commit=off, or if you are just curious: running with a very high number of active connections often reveals subtle bottlenecks and interactions, and is very dependent on your hardware. 
Unless you actually intend to run our server with synchronous_commit=off and with a large number of active connections, it is probably not worth investigating this. You can make a hobby of it, of course, but it is a time consuming hobby to have. If you do want to, I think you should start out with your optimized settings and revert them one at a time to find the one the caused the performance regression.I'm more interested in the low end, you should do much better than those reported numbers when clients=1 and synchronous_commit=off with the data on SSD. I think you said that pgbench is running on a different machine than the database, so perhaps it is just network overhead that is keeping this value down. What happens if you run them on the same machine?> +--------+---------------------+------------------------+> | client | Mostly defaults [1] | Optimised settings [2] |> +--------+---------------------+------------------------+> | 1 | 80-86 | 169-180 |> +--------+---------------------+------------------------+Cheers,Jeff",
"msg_date": "Sun, 27 Jan 2019 14:41:57 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
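A sketch of the local-versus-network comparison suggested above (host name, database name and durations are assumptions):

    # Over the 1 Gbit/s network (TCP)
    pgbench -c 1 -j 1 -T 300 -P 10 -h pg-server -U postgres bench
    # On the database host itself, via the local Unix socket (simply omit -h)
    pgbench -c 1 -j 1 -T 300 -P 10 -U postgres bench
    # Watch network utilisation during the TCP run
    sar -n DEV 5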
{
"msg_contents": ">\n> It is usually not acceptable to run applications with\n> synchronous_commit=off, so once you have identified that the bottleneck is\n> in implementing synchronous_commit=on, you probably need to take a deep\n> dive into your hardware to figure out why it isn't performing the way you\n> need/want/expect it to. Tuning the server under synchronous_commit=off\n> when you don't intend to run your production server with that setting is\n> unlikely to be fruitful.\n>\n\nI do not intend to run the server with synchronous_commit=off, but based on\nmy limited knowledge, I'm wondering if all these observations are somehow\nrelated and are caused by the same underlying bottleneck (or\nmisconfiguration):\n\n1) At higher concurrency levels, TPS for synchronous_commit=off is lower\nfor optimised settings when compared to default settings\n2) At ALL concurrency levels, TPS for synchronous_commit=on is lower for\noptimised settings (irrespective of shared_buffers value), compared to\ndefault settings\n3) At higher concurrency levels, optimised + synchronous_commit=on +\nshared_buffers=2G has HIGHER TPS than optimised + synchronous_commit=off +\nshared_buffers=8G\n\nHere are the (completely counter-intuitive) numbers for these observations:\n\n+--------+-----------------------------------------------------------------+------------------------+\n| | synchronous_commit=on\n | synchronous_commit=off |\n+--------+-----------------------------------------------------------------+------------------------+\n| client | Mostly defaults [1] | Optimised [2] | Optimised [2]\n | Optimised [2] |\n| | | + shared_buffers=2G | +\nshared_buffers=8G | + shared_buffers=8G |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 1 | 80-86 | 74-77 | 75-75\n | 169-180 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 6 | 350-376 | 301-304 | 295-300\n | 1265-1397 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 12 | 603-619 | 476-488 | 485-493\n | 1746-2352 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 24 | 947-1015 | 678-739 | 723-770\n | 1869-2518 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 48 | 1435-1512 | 950-1043 | 1029-1086\n | 1912-2818 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 96 | 1769-1811 | 3337-3459 | 1302-1346\n | 1546-1753 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 192 | 1857-1992 | 3613-3715 | 1269-1345\n | 1332-1508 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n| 384 | 1667-1793 | 3180-3300 | 1262-1364\n | 1356-1450 |\n+--------+---------------------+---------------------+---------------------+------------------------+\n\n\n\n>\n> In case you do intend to run with synchronous_commit=off, or if you are\n> just curious: running with a very high number of active connections often\n> reveals subtle bottlenecks and interactions, and is very dependent on your\n> hardware. Unless you actually intend to run our server with\n> synchronous_commit=off and with a large number of active connections, it is\n> probably not worth investigating this.\n>\n\nPlease see the table above. 
The reason why I'm digging deeper into this is\nbecause of observation (2) above, i.e. I am unable to come up with any\noptimised setting that performs better than the default settings for the\nconcurrency levels that I care about (100-150).\n\n\n> I'm more interested in the low end, you should do much better than those\n> reported numbers when clients=1 and synchronous_commit=off with the data on\n> SSD. I think you said that pgbench is running on a different machine than\n> the database, so perhaps it is just network overhead that is keeping this\n> value down. What happens if you run them on the same machine?\n>\n\nI'm currently running this, but the early numbers are surprising. For\nclient=1, the numbers for optimised settings + shared_buffers=2G are:\n\n-- pgbench run over a 1Gbps network: 74-77 tps\n-- pgbench run on the same machine: 152-153 tps (is this absolute number\ngood enough given my hardware?)\n\nIs the 1 Gbps network the bottleneck? Does it explain the three observations\ngiven above? I'll wait for the current set of benchmarks to finish and\nre-run the benchmarks over the network and monitor network utilisation.\n\n[1] \"Mostly default\" settings are whatever ships with Ubuntu 18.04 + PG 11.\nA snippet of the relevant settings is given below:\n\n    max_connection=400\n    work_mem=4MB\n    maintenance_work_mem=64MB\n    shared_buffers=128MB\n    temp_buffers=8MB\n    effective_cache_size=4GB\n    wal_buffers=-1\n    wal_sync_method=fsync\n    max_wal_size=1GB\n*   autovacuum=off    # Auto-vacuuming was disabled*\n\n\n[2] Optimized settings\n\n    max_connections = 400\n*   shared_buffers = 8GB    # or 2GB -- depending upon which\nscenario was being evaluated*\n    effective_cache_size = 24GB\n    maintenance_work_mem = 2GB\n    checkpoint_completion_target = 0.7\n    wal_buffers = 16MB\n    default_statistics_target = 100\n    random_page_cost = 1.1\n    effective_io_concurrency = 200\n    work_mem = 3495kB\n    min_wal_size = 1GB\n    max_wal_size = 2GB\n    max_worker_processes = 12\n    max_parallel_workers_per_gather = 6\n    max_parallel_workers = 12\n*   autovacuum=off    # Auto-vacuuming was disabled*",
"msg_date": "Mon, 28 Jan 2019 10:22:17 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
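One way to bisect which "optimised" setting causes the regression, without hand-editing postgresql.conf between runs, is to change one GUC at a time from the database host and repeat the identical workload; a sketch (shared_buffers shown as the example, values taken from the settings above):

    # Run on the database host; repeat the same pgbench workload after each change
    for val in 128MB 2GB 8GB; do
        psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '$val';"
        sudo systemctl restart postgresql      # shared_buffers only takes effect at restart
        pgbench -c 12 -j 12 -T 300 -P 10 -U postgres bench
    done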
{
"msg_contents": "Here's the previous table again -- trying to prevent the wrapping.\n\n+--------+------------------------------------------------+-----------------+\n| | synchronous_commit=on | sync_commit=off |\n+--------+------------------------------------------------+-----------------+\n| client | Defaults [1] | Optimised [2] | Optimised [2] | Optimised [2] |\n| | | (buffers=2G) | (buffers=8G) | (buffers=8G) |\n+--------+--------------+----------------+----------------+-----------------+\n| 1 | 80-86 | 74-77 | 75-75 | 169-180 |\n+--------+--------------+----------------+----------------+-----------------+\n| 6 | 350-376 | 301-304 | 295-300 | 1265-1397 |\n+--------+--------------+----------------+----------------+-----------------+\n| 12 | 603-619 | 476-488 | 485-493 | 1746-2352 |\n+--------+--------------+----------------+----------------+-----------------+\n| 24 | 947-1015 | 678-739 | 723-770 | 1869-2518 |\n+--------+--------------+----------------+----------------+-----------------+\n| 48 | 1435-1512 | 950-1043 | 1029-1086 | 1912-2818 |\n+--------+--------------+----------------+----------------+-----------------+\n| 96 | 1769-1811 | 3337-3459 | 1302-1346 | 1546-1753 |\n+--------+--------------+----------------+----------------+-----------------+\n| 192 | 1857-1992 | 3613-3715 | 1269-1345 | 1332-1508 |\n+--------+--------------+----------------+----------------+-----------------+\n| 384 | 1667-1793 | 3180-3300 | 1262-1364 | 1356-1450 |\n+--------+--------------+----------------+----------------+-----------------+\n\nHere's the previous table again -- trying to prevent the wrapping.+--------+------------------------------------------------+-----------------+\n| | synchronous_commit=on | sync_commit=off |\n+--------+------------------------------------------------+-----------------+\n| client | Defaults [1] | Optimised [2] | Optimised [2] | Optimised [2] |\n| | | (buffers=2G) | (buffers=8G) | (buffers=8G) |\n+--------+--------------+----------------+----------------+-----------------+\n| 1 | 80-86 | 74-77 | 75-75 | 169-180 |\n+--------+--------------+----------------+----------------+-----------------+\n| 6 | 350-376 | 301-304 | 295-300 | 1265-1397 |\n+--------+--------------+----------------+----------------+-----------------+\n| 12 | 603-619 | 476-488 | 485-493 | 1746-2352 |\n+--------+--------------+----------------+----------------+-----------------+\n| 24 | 947-1015 | 678-739 | 723-770 | 1869-2518 |\n+--------+--------------+----------------+----------------+-----------------+\n| 48 | 1435-1512 | 950-1043 | 1029-1086 | 1912-2818 |\n+--------+--------------+----------------+----------------+-----------------+\n| 96 | 1769-1811 | 3337-3459 | 1302-1346 | 1546-1753 |\n+--------+--------------+----------------+----------------+-----------------+\n| 192 | 1857-1992 | 3613-3715 | 1269-1345 | 1332-1508 |\n+--------+--------------+----------------+----------------+-----------------+\n| 384 | 1667-1793 | 3180-3300 | 1262-1364 | 1356-1450 |\n+--------+--------------+----------------+----------------+-----------------+",
"msg_date": "Mon, 28 Jan 2019 10:29:41 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": "All this benchmarking has led me to a philosophical question, why does PG\nneed shared_buffers in the first place? What's wrong with letting the OS do\nthe caching/buffering? Isn't it optimised for this kind of stuff?\n\nAll this benchmarking has led me to a philosophical question, why does PG need shared_buffers in the first place? What's wrong with letting the OS do the caching/buffering? Isn't it optimised for this kind of stuff?",
"msg_date": "Mon, 28 Jan 2019 10:33:21 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": "An update. It seems (to my untrained eye) that something is wrong with the\nsecond SSD in the RAID configuration. Here's my question on serverfault\nrelated to what I saw with iostat -\nhttps://serverfault.com/questions/951096/difference-in-utilisation-reported-by-iostat-for-two-identical-disks-in-raid1\n\nI've disabled RAID and rebooted the server to run the benchmarks with\nclient=1,4,8,12 with shared_buffers=8MB (default) vs shared_buffers=2GB\n(optimised?) and will report back.\n\n-- Saurabh.\n\nAn update. It seems (to my untrained eye) that something is wrong with the second SSD in the RAID configuration. Here's my question on serverfault related to what I saw with iostat - https://serverfault.com/questions/951096/difference-in-utilisation-reported-by-iostat-for-two-identical-disks-in-raid1I've disabled RAID and rebooted the server to run the benchmarks with client=1,4,8,12 with shared_buffers=8MB (default) vs shared_buffers=2GB (optimised?) and will report back.-- Saurabh.",
"msg_date": "Mon, 28 Jan 2019 19:33:31 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": "Le 28/01/2019 à 15:03, Saurabh Nanda a écrit :\n> An update. It seems (to my untrained eye) that something is wrong with \n> the second SSD in the RAID configuration. Here's my question on \n> serverfault related to what I saw with iostat - \n> https://serverfault.com/questions/951096/difference-in-utilisation-reported-by-iostat-for-two-identical-disks-in-raid1\n>\n> I've disabled RAID and rebooted the server to run the benchmarks with \n> client=1,4,8,12 with shared_buffers=8MB (default) vs \n> shared_buffers=2GB (optimised?) and will report back.\n>\n>\nYou should probably include the detailed hardware you are working on - \nespecially for the SSD, the model can have a big impact, as well as its \nwear.\n\n\nNicolas\n\n\n\n",
"msg_date": "Mon, 28 Jan 2019 15:22:19 +0100",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": ">\n> You should probably include the detailed hardware you are working on -\n> especially for the SSD, the model can have a big impact, as well as its\n> wear.\n>\n\nWhat's the best tool to get meaningful information for SSD drives?\n\n-- Saurabh.\n\nYou should probably include the detailed hardware you are working on - \nespecially for the SSD, the model can have a big impact, as well as its \nwear.What's the best tool to get meaningful information for SSD drives?-- Saurabh.",
"msg_date": "Mon, 28 Jan 2019 21:25:42 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": "Le 28/01/2019 à 16:55, Saurabh Nanda a écrit :\n>\n> You should probably include the detailed hardware you are working\n> on -\n> especially for the SSD, the model can have a big impact, as well\n> as its\n> wear.\n>\n>\n> What's the best tool to get meaningful information for SSD drives?\n>\nsmartctl is a good start\n\nNicolas\n\n\n\n\n\n\n\n Le 28/01/2019 à 16:55, Saurabh Nanda a écrit :\n\n\n\n\nYou should probably\n include the detailed hardware you are working on - \n especially for the SSD, the model can have a big impact, as\n well as its \n wear.\n\n\n\nWhat's the best tool to get meaningful information for\n SSD drives?\n\n\n\n\n\nsmartctl is a good start\nNicolas",
"msg_date": "Mon, 28 Jan 2019 17:06:14 +0100",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
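For the wear question specifically, a hedged follow-up to the smartctl suggestion above; attribute names differ by SSD vendor, so the grep pattern is only a guess at the usual wear/endurance counters.

    sudo smartctl -i /dev/sda        # model and firmware, to compare against spec sheets
    sudo smartctl -H /dev/sda        # overall health self-assessment
    sudo smartctl -A /dev/sda | grep -Ei 'wear|percent.*used|media|total.*(read|written)'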
{
"msg_contents": ">\n> smartctl is a good start\n>\nHere's the output of `smartctl --xall /dev/sda` --\nhttps://gist.github.com/saurabhnanda/ec3c95c1eb3896b3efe55181e7c78dde\n\nI've disabled RAID so /dev/sda is the only disk which is being currently\nused.\n\nI'm still seeing very weird numbers. There seems to be absolutely no\ndifference in performance if I increase shared_buffers from 8MB to 2GB. Is\nthere some other setting which needs to be changed to take advantage of the\nincrease in shared_buffers? Also, something even weirder is happening for\nclient=1.\n\nIs my benchmarking script alright -\nhttps://gist.github.com/saurabhnanda/78e8523cf079ce7b78704acb1aa8d9fc ?\n\n+--------+--------------+----------------+\n| client | Defaults [1] | buffers=2G [2] |\n+--------+--------------+----------------+\n| 1 | 348-475 (??) | 529-583 (??) |\n+--------+--------------+----------------+\n| 4 | 436-452 | 451-452 |\n+--------+--------------+----------------+\n| 8 | 862-869 | 859-861 |\n+--------+--------------+----------------+\n| 12 | 1210-1219 | 1220-1225 |\n+--------+--------------+----------------+\n\n\n[1] Default settings\n checkpoint_completion_target=0.5\n default_statistics_target=100\n effective_io_concurrency=1\n max_parallel_workers=8\n max_parallel_workers_per_gather=2\n max_wal_size=1024 MB\n max_worker_processes=20\n min_wal_size=80 MB\n random_page_cost=4\n* shared_buffers=1024 8kB*\n wal_buffers=32 8kB\n work_mem=4096 kB\n\n[2] Increased shared_buffers\n checkpoint_completion_target=0.5\n default_statistics_target=100\n effective_io_concurrency=1\n max_parallel_workers=8\n max_parallel_workers_per_gather=2\n max_wal_size=1024 MB\n max_worker_processes=20\n min_wal_size=80 MB\n random_page_cost=4\n* shared_buffers=262144 8kB*\n wal_buffers=2048 8kB\n work_mem=4096 kB\n\n-- Saurabh.\n\nsmartctl is a good startHere's the output of `smartctl --xall /dev/sda` -- https://gist.github.com/saurabhnanda/ec3c95c1eb3896b3efe55181e7c78ddeI've disabled RAID so /dev/sda is the only disk which is being currently used.I'm still seeing very weird numbers. There seems to be absolutely no difference in performance if I increase shared_buffers from 8MB to 2GB. Is there some other setting which needs to be changed to take advantage of the increase in shared_buffers? Also, something even weirder is happening for client=1.Is my benchmarking script alright - https://gist.github.com/saurabhnanda/78e8523cf079ce7b78704acb1aa8d9fc ?+--------+--------------+----------------+\n| client | Defaults [1] | buffers=2G [2] |\n+--------+--------------+----------------+\n| 1 | 348-475 (??) | 529-583 (??) |\n+--------+--------------+----------------+\n| 4 | 436-452 | 451-452 |\n+--------+--------------+----------------+\n| 8 | 862-869 | 859-861 |\n+--------+--------------+----------------+\n| 12 | 1210-1219 | 1220-1225 |\n+--------+--------------+----------------+[1] Default settings checkpoint_completion_target=0.5 default_statistics_target=100 effective_io_concurrency=1 max_parallel_workers=8 max_parallel_workers_per_gather=2 max_wal_size=1024 MB max_worker_processes=20 min_wal_size=80 MB random_page_cost=4 shared_buffers=1024 8kB wal_buffers=32 8kB work_mem=4096 kB[2] Increased shared_buffers checkpoint_completion_target=0.5 default_statistics_target=100 effective_io_concurrency=1 max_parallel_workers=8 max_parallel_workers_per_gather=2 max_wal_size=1024 MB max_worker_processes=20 min_wal_size=80 MB random_page_cost=4 shared_buffers=262144 8kB wal_buffers=2048 8kB work_mem=4096 kB-- Saurabh.",
"msg_date": "Mon, 28 Jan 2019 22:05:05 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
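For reference, the kind of loop the linked gist presumably implements looks roughly like the sketch below. It is only a sketch: the service name, run length and the awk pattern for pgbench's summary line are assumptions, and newer pgbench releases word that summary line differently.

    #!/usr/bin/env bash
    DB=pgbench
    for c in 1 4 8 12; do
      for run in 1 2 3; do
        sudo systemctl restart postgresql      # start each run from a comparable state
        sleep 10
        pgbench -c "$c" -j "$c" -T 300 -P 60 "$DB" \
          | awk -v c="$c" -v r="$run" '/tps.*excluding/ {print "clients=" c, "run=" r, "tps=" $3}'
      done
    done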
{
"msg_contents": "I've disabled transpare huge-pages and enabled huge_pages as given below.\nLet's see what happens. (I feel like a monkey pressing random buttons\ntrying to turn a light bulb on... and I'm sure the monkey would've had it\neasier!)\n\nAnonHugePages: 0 kB\nShmemHugePages: 0 kB\nHugePages_Total: 5000\nHugePages_Free: 4954\nHugePages_Rsvd: 1015\nHugePages_Surp: 0\nHugepagesize: 2048 kB\n\n-- Saurabh.\n\nI've disabled transpare huge-pages and enabled huge_pages as given below. Let's see what happens. (I feel like a monkey pressing random buttons trying to turn a light bulb on... and I'm sure the monkey would've had it easier!)AnonHugePages: 0 kBShmemHugePages: 0 kBHugePages_Total: 5000HugePages_Free: 4954HugePages_Rsvd: 1015HugePages_Surp: 0Hugepagesize: 2048 kB-- Saurabh.",
"msg_date": "Mon, 28 Jan 2019 22:26:49 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
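For completeness, a sketch of the huge-page setup being tried above. The 5000-page pool size is the figure from the message (5000 x 2 MB = roughly 10 GB); the sysctl file name is arbitrary.

    echo 'vm.nr_hugepages = 5000' | sudo tee /etc/sysctl.d/90-postgres-hugepages.conf
    sudo sysctl --system
    # postgresql.conf:  huge_pages = on    (with 'on' the server refuses to start if the
    # pool is too small; 'try' silently falls back to normal pages)
    sudo systemctl restart postgresql
    grep -i hugepages /proc/meminfo          # HugePages_Rsvd/Free should change after the restart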
{
"msg_contents": ">\n> Do you know which of the settings is causing lower TPS ?\n>\n> I suggest to check shared_buffers.\n>\n> If you haven't done it, disabling THP and KSM can resolve performance\n> issues,\n> esp. with large RAM like shared_buffers, at least with older kernels.\n>\n> https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\n\nI've tried reducing the number of variables to a bare minimum and have the\nfollowing three cases now with RAID disabled:\n\na) only default settings [1]\nb) default settings with shared_buffers=2G [2]\nc) default settings with shared_buffers=2G & huge_pages=on [3]\n\nThe numbers are still not making any sense whatsoever.\n\n+--------+--------------+----------------+---------------+\n| client | Defaults [1] | buffers=2G [2] | buffers=2G |\n| | | | huge_pages=on |\n+--------+--------------+----------------+---------------+\n| 1 | 348-475 (??) | 529-583 (??) | 155-290 |\n+--------+--------------+----------------+---------------+\n| 4 | 436-452 | 451-452 | 388-403 |\n+--------+--------------+----------------+---------------+\n| 8 | 862-869 | 859-861 | 778-781 |\n+--------+--------------+----------------+---------------+\n| 12 | 1210-1219 | 1220-1225 | 1110-1111 |\n+--------+--------------+----------------+---------------+\n\n\n\n[1] Default settings\n checkpoint_completion_target=0.5\n default_statistics_target=100\n effective_io_concurrency=1\n max_parallel_workers=8\n max_parallel_workers_per_gather=2\n max_wal_size=1024 MB\n max_worker_processes=20\n min_wal_size=80 MB\n random_page_cost=4\n* shared_buffers=1024 8kB*\n wal_buffers=32 8kB\n work_mem=4096 kB\n\n[2] Increased shared_buffers\n checkpoint_completion_target=0.5\n default_statistics_target=100\n effective_io_concurrency=1\n max_parallel_workers=8\n max_parallel_workers_per_gather=2\n max_wal_size=1024 MB\n max_worker_processes=20\n min_wal_size=80 MB\n random_page_cost=4\n* shared_buffers=262144 8kB*\n wal_buffers=2048 8kB\n work_mem=4096 kB\n\n[3] Same settings as [2] with huge_pages=on and the following changes:\n\n $ cat /sys/kernel/mm/transparent_hugepage/enabled\n always madvise [never]\n\n $ cat /proc/meminfo |grep -i huge\n AnonHugePages: 0 kB\n ShmemHugePages: 0 kB\n HugePages_Total: 5000\n HugePages_Free: 3940\n HugePages_Rsvd: 1\n HugePages_Surp: 0\n Hugepagesize: 2048 kB\n\n-- Saurabh.\n\nDo you know which of the settings is causing lower TPS ?\n\nI suggest to check shared_buffers.\n\nIf you haven't done it, disabling THP and KSM can resolve performance issues,\nesp. with large RAM like shared_buffers, at least with older kernels.\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.comI've tried reducing the number of variables to a bare minimum and have the following three cases now with RAID disabled:a) only default settings [1]b) default settings with shared_buffers=2G [2]c) default settings with shared_buffers=2G & huge_pages=on [3]The numbers are still not making any sense whatsoever.+--------+--------------+----------------+---------------+\n| client | Defaults [1] | buffers=2G [2] | buffers=2G |\n| | | | huge_pages=on |\n+--------+--------------+----------------+---------------+\n| 1 | 348-475 (??) | 529-583 (??) 
| 155-290 |\n+--------+--------------+----------------+---------------+\n| 4 | 436-452 | 451-452 | 388-403 |\n+--------+--------------+----------------+---------------+\n| 8 | 862-869 | 859-861 | 778-781 |\n+--------+--------------+----------------+---------------+\n| 12 | 1210-1219 | 1220-1225 | 1110-1111 |\n+--------+--------------+----------------+---------------+[1] Default settings checkpoint_completion_target=0.5 default_statistics_target=100 effective_io_concurrency=1 max_parallel_workers=8 max_parallel_workers_per_gather=2 max_wal_size=1024 MB max_worker_processes=20 min_wal_size=80 MB random_page_cost=4 shared_buffers=1024 8kB wal_buffers=32 8kB work_mem=4096 kB[2] Increased shared_buffers checkpoint_completion_target=0.5 default_statistics_target=100 effective_io_concurrency=1 max_parallel_workers=8 max_parallel_workers_per_gather=2 max_wal_size=1024 MB max_worker_processes=20 min_wal_size=80 MB random_page_cost=4 shared_buffers=262144 8kB wal_buffers=2048 8kB work_mem=4096 kB[3] Same settings as [2] with huge_pages=on and the following changes: $ cat /sys/kernel/mm/transparent_hugepage/enabled always madvise [never] $ cat /proc/meminfo |grep -i huge AnonHugePages: 0 kB ShmemHugePages: 0 kB HugePages_Total: 5000 HugePages_Free: 3940 HugePages_Rsvd: 1 HugePages_Surp: 0 Hugepagesize: 2048 kB-- Saurabh.",
"msg_date": "Tue, 29 Jan 2019 07:14:15 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
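Following the THP/KSM suggestion quoted above, a quick way to verify both settings; the sysfs paths are the usual locations on a stock Ubuntu 18.04 kernel, and this is a sketch rather than something posted in the thread.

    cat /sys/kernel/mm/transparent_hugepage/enabled   # want [never] (or at least [madvise])
    cat /sys/kernel/mm/transparent_hugepage/defrag
    cat /sys/kernel/mm/ksm/run                        # 0 means KSM is off
    # disable for the current boot:
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
    echo 0     | sudo tee /sys/kernel/mm/ksm/run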
{
"msg_contents": "Yet another update:\n\na) I've tried everything with me EX41-SSD server on Hetzner and nothing is\nincreasing the performance over & above the default configuration.\nb) I tried commissioning a new EX41-SSD server and was able to replicate\nthe same pathetic performance numbers.\nc) I tried another cloud hosting provider (E2E Networks) and just the raw\nperformance numbers (with default configuration) are blowing Hetzner out of\nthe water.\n\nThis leads me to believe that my assumption of the first hardware (or SSD)\nbeing faulty is incorrect. Something is wrong with either the EX41-SSD\nhardware or the out-of-box configuration. I'm commissioning something from\ntheir PX line (which is marked as \"Datacenter Edition\") and checking if\nthat makes things better.\n\n+--------+--------------+------------------+\n| client | Hetzner | E2E Networks |\n| | EX41-SSD [1] | Cloud Server [2] |\n+--------+--------------+------------------+\n| 1 | ~160 | ~400 |\n+--------+--------------+------------------+\n| 4 | ~460 | ~1450 |\n+--------+--------------+------------------+\n| 8 | ~850 | ~2600 |\n+--------+--------------+------------------+\n| 12 | ~1200 | ~4000 |\n+--------+--------------+------------------+\n\n\n[1] lshw output for Hetzner -\nhttps://gist.github.com/saurabhnanda/613813d0d58fe1a406a8ce9b62ad10a9\n[2] lshw output for E2E -\nhttps://gist.github.com/saurabhnanda/d276603990aa773269bad35f335344eb -\nsince this is a cloud server low-level hardware info is not available. It's\nadvertised as a 9vCPU + 30GB RAM + SSD cloud instance.\n\n-- Saurabh.\n\nYet another update:a) I've tried everything with me EX41-SSD server on Hetzner and nothing is increasing the performance over & above the default configuration.b) I tried commissioning a new EX41-SSD server and was able to replicate the same pathetic performance numbers. c) I tried another cloud hosting provider (E2E Networks) and just the raw performance numbers (with default configuration) are blowing Hetzner out of the water.This leads me to believe that my assumption of the first hardware (or SSD) being faulty is incorrect. Something is wrong with either the EX41-SSD hardware or the out-of-box configuration. I'm commissioning something from their PX line (which is marked as \"Datacenter Edition\") and checking if that makes things better.+--------+--------------+------------------+\n| client | Hetzner | E2E Networks |\n| | EX41-SSD [1] | Cloud Server [2] |\n+--------+--------------+------------------+\n| 1 | ~160 | ~400 |\n+--------+--------------+------------------+\n| 4 | ~460 | ~1450 |\n+--------+--------------+------------------+\n| 8 | ~850 | ~2600 |\n+--------+--------------+------------------+\n| 12 | ~1200 | ~4000 |\n+--------+--------------+------------------+[1] lshw output for Hetzner - https://gist.github.com/saurabhnanda/613813d0d58fe1a406a8ce9b62ad10a9[2] lshw output for E2E - https://gist.github.com/saurabhnanda/d276603990aa773269bad35f335344eb - since this is a cloud server low-level hardware info is not available. It's advertised as a 9vCPU + 30GB RAM + SSD cloud instance.-- Saurabh.",
"msg_date": "Tue, 29 Jan 2019 10:18:43 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
{
"msg_contents": "> c) I tried another cloud hosting provider (E2E Networks) and just the raw\n> performance numbers (with default configuration) are blowing Hetzner out of\n> the water.\n>\n\nI noticed that on E2E, the root filesystem is mounted with the following\noptions:\n\n /dev/xvda on / type ext4\n\n(rw,noatime,nodiratime,nobarrier,errors=remount-ro,stripe=512,data=ordered)\n\nwhereas on Hetzner, it is mounted with the following options:\n\n /dev/nvme0n1p3 on / type ext4\n (rw,relatime,data=ordered)\n\nHow much of a difference can this have on absolute TPS numbers?\n\n-- Saurabh.\n\nc) I tried another cloud hosting provider (E2E Networks) and just the raw performance numbers (with default configuration) are blowing Hetzner out of the water.I noticed that on E2E, the root filesystem is mounted with the following options: /dev/xvda on / type ext4 (rw,noatime,nodiratime,nobarrier,errors=remount-ro,stripe=512,data=ordered)whereas on Hetzner, it is mounted with the following options: /dev/nvme0n1p3 on / type ext4 (rw,relatime,data=ordered)How much of a difference can this have on absolute TPS numbers?-- Saurabh.",
"msg_date": "Tue, 29 Jan 2019 11:45:08 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
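To answer the mount-option question empirically, one can check which filesystem the data directory lives on and try the safe part of the E2E options (noatime) in isolation; nobarrier is deliberately left out here because of the data-loss caveat raised in the reply that follows. This is only a sketch, with the default Ubuntu data directory path assumed.

    findmnt -T /var/lib/postgresql -o SOURCE,FSTYPE,OPTIONS   # which fs PGDATA lives on, and its options
    sudo mount -o remount,noatime /                           # PGDATA sits on the root fs on these boxes
    # to persist, add noatime to the root entry in /etc/fstab and re-run the benchmark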
{
"msg_contents": "Le 29/01/2019 à 07:15, Saurabh Nanda a écrit :\n>\n> c) I tried another cloud hosting provider (E2E Networks) and just\n> the raw performance numbers (with default configuration) are\n> blowing Hetzner out of the water.\n>\n>\n> I noticed that on E2E, the root filesystem is mounted with the \n> following options:\n>\n> /dev/xvda on / type ext4\n> (rw,noatime,nodiratime,nobarrier,errors=remount-ro,stripe=512,data=ordered)\n>\n> whereas on Hetzner, it is mounted with the following options:\n>\n> /dev/nvme0n1p3 on / type ext4\n> (rw,relatime,data=ordered)\n>\n> How much of a difference can this have on absolute TPS numbers?\n>\n\nDifferences can be significative. noatime does not update inode access \ntime, while relatime updates the inode access time if the change time \nwas before access time (which can be often the case for a database)\n\nnobarrier disable block-level write barriers. Barriers ensure that data \nis effectively stored on system, The man command says: \"If disabled on \na device with a volatile (non-battery-backed) write-back cache, \nthe nobarrier option will lead to filesystem corruption on a system \ncrash or power loss.\"\n\nYou should probably consider noatime compared to relatime, and \nnobarriers depends if you have a battery or not\n\nAlso, this is an SSD, so you should TRIM it, either with preiodical \nfstrim, or using the discard option\n\n\nNicolas\n\n\n\n\n\n\n\n\n\n\n\nLe 29/01/2019 à 07:15, Saurabh Nanda a\n écrit :\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nc) I tried another cloud hosting provider\n (E2E Networks) and just the raw performance\n numbers (with default configuration) are\n blowing Hetzner out of the water.\n\n\n\n\n\n\n\n\nI noticed that on E2E, the root filesystem is mounted\n with the following options:\n\n\n\n /dev/xvda on / type ext4 \n \n(rw,noatime,nodiratime,nobarrier,errors=remount-ro,stripe=512,data=ordered)\n\n\n\nwhereas on Hetzner, it is mounted with the following\n options:\n\n\n\n /dev/nvme0n1p3 on / type ext4\n (rw,relatime,data=ordered)\n\n\n\nHow much of a difference can this have on absolute\n TPS numbers?\n\n\n\n\n\n\n\n\nDifferences can be significative. noatime does not update inode\n access time, while relatime updates the inode access time if the\n change time was before access time (which can be often the case\n for a database)\nnobarrier disable block-level write barriers. Barriers ensure\n that data is effectively stored on system, The man command says:\n \"If disabled on a device with a volatile \n (non-battery-backed) write-back cache, the nobarrier option\n will lead to filesystem corruption on a system crash or power\n loss.\"\nYou should probably consider noatime compared to relatime, and\n nobarriers depends if you have a battery or not\n\nAlso, this is an SSD, so you should TRIM it, either with\n preiodical fstrim, or using the discard option\n\n\nNicolas",
"msg_date": "Tue, 29 Jan 2019 10:14:13 +0100",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
},
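A sketch of the TRIM housekeeping mentioned above; a periodic fstrim is usually preferable to the discard mount option for write-heavy databases.

    lsblk -D                                   # non-zero DISC-GRAN/DISC-MAX = device supports TRIM
    sudo fstrim -v /                           # one-off trim of the root filesystem
    sudo systemctl enable --now fstrim.timer   # weekly trim on systemd-based distros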
{
"msg_contents": "On Mon, Jan 28, 2019 at 12:03 AM Saurabh Nanda <[email protected]>\nwrote:\n\n> All this benchmarking has led me to a philosophical question, why does PG\n> need shared_buffers in the first place?\n>\n\nPostgreSQL cannot let the OS get its hands on a dirty shared buffer until\nthe WAL record \"protecting\" that buffer has been flushed to disk. If a\ndirty shared buffer got written to disk, but then a crash happened before\nthe WAL record go flushed to disk, then the data could be corrupted when it\ncomes back up. So shared_buffers effectively serves as cooling pond where\ndirty buffers wait for their WAL to be flushed naturally so they can be\nwritten without instigating a performance-reducing flush just for them.\n\nAlso, concurrent clients needs to access the same disk pages at overlapping\ntimes without corrupting each other. Perhaps that could be implemented to\nhave just the buffer headers in shared memory to coordinate the locking,\nand not having the buffers themselves in shared memory. But that is not\nhow it is currently implemented.\n\n\n> What's wrong with letting the OS do the caching/buffering?\n>\n\nNothing, and that is what it does. Which is why the advice for\nshared_buffers is often to use a small fraction of RAM, leaving the rest\nfor the OS to do its thing. But PostgreSQL still needs a way to lock those\npages, both against concurrent access by its own clients, and against\ngetting flushed out of order by the OS. There is no performant way to\nrelease the dirty pages immediately to the OS while still constraining the\norder in which the OS flushes them to disk.\n\nFinally, while reading a page from the OS cache into shared_buffers is much\nfaster than reading it from disk, it is still much slower than finding it\nalready located in shared_buffers. So if your entire database fits in RAM,\nyou will get better performance if shared_buffers is large enough for the\nentire thing to fit in there, as well. This is an exception to the rule\nthat shared_buffers should be a small fraction of RAM.\n\n\n> Isn't it optimised for this kind of stuff?\n>\n\nMaybe. But you might be surprised at poorly optimized it is. It depends\non your OS and version of it, of course. If you have a high usage_count\nbuffer which is re-dirtied constantly, it will only get written and flushed\nto disk once per checkpoint if under PostgreSQL control. But I've seen\npages like that get written many times per second under kernel control.\nWhatever optimization it tried to do, it wasn't very good at. Also, if\nmany contiguous pages are dirtied in a close time-frame, but not dirtied in\ntheir physical order, the kernel should be able to re-order them into long\nsequential writes, correct? But empirically, it doesn't, at least back in\nthe late 2.* series kernels when I did the experiments. I don't know if it\ndidn't even try, or tried but failed. (Of course back then, PostgreSQL\ndidn't do a good job of it either)\n\nCheers,\n\nJeff\n\nOn Mon, Jan 28, 2019 at 12:03 AM Saurabh Nanda <[email protected]> wrote:All this benchmarking has led me to a philosophical question, why does PG need shared_buffers in the first place? PostgreSQL cannot let the OS get its hands on a dirty shared buffer until the WAL record \"protecting\" that buffer has been flushed to disk. If a dirty shared buffer got written to disk, but then a crash happened before the WAL record go flushed to disk, then the data could be corrupted when it comes back up. 
So shared_buffers effectively serves as cooling pond where dirty buffers wait for their WAL to be flushed naturally so they can be written without instigating a performance-reducing flush just for them.Also, concurrent clients needs to access the same disk pages at overlapping times without corrupting each other. Perhaps that could be implemented to have just the buffer headers in shared memory to coordinate the locking, and not having the buffers themselves in shared memory. But that is not how it is currently implemented. What's wrong with letting the OS do the caching/buffering? Nothing, and that is what it does. Which is why the advice for shared_buffers is often to use a small fraction of RAM, leaving the rest for the OS to do its thing. But PostgreSQL still needs a way to lock those pages, both against concurrent access by its own clients, and against getting flushed out of order by the OS. There is no performant way to release the dirty pages immediately to the OS while still constraining the order in which the OS flushes them to disk. Finally, while reading a page from the OS cache into shared_buffers is much faster than reading it from disk, it is still much slower than finding it already located in shared_buffers. So if your entire database fits in RAM, you will get better performance if shared_buffers is large enough for the entire thing to fit in there, as well. This is an exception to the rule that shared_buffers should be a small fraction of RAM. Isn't it optimised for this kind of stuff?Maybe. But you might be surprised at poorly optimized it is. It depends on your OS and version of it, of course. If you have a high usage_count buffer which is re-dirtied constantly, it will only get written and flushed to disk once per checkpoint if under PostgreSQL control. But I've seen pages like that get written many times per second under kernel control. Whatever optimization it tried to do, it wasn't very good at. Also, if many contiguous pages are dirtied in a close time-frame, but not dirtied in their physical order, the kernel should be able to re-order them into long sequential writes, correct? But empirically, it doesn't, at least back in the late 2.* series kernels when I did the experiments. I don't know if it didn't even try, or tried but failed. (Of course back then, PostgreSQL didn't do a good job of it either)Cheers,Jeff",
"msg_date": "Tue, 29 Jan 2019 10:42:12 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking: How to identify bottleneck (limiting factor) and\n achieve \"linear scalability\"?"
}
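The double-buffering trade-off described above can be observed directly with the contrib extension pg_buffercache; the query below is a standard inspection query (the database name is assumed), not something proposed in the thread.

    psql -d pgbench <<'SQL'
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    -- which relations occupy shared_buffers, and how much of that is dirty
    SELECT c.relname,
           count(*)                          AS buffers,
           count(*) FILTER (WHERE b.isdirty) AS dirty_buffers
    FROM   pg_buffercache b
    JOIN   pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE  b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
    GROUP  BY c.relname
    ORDER  BY buffers DESC
    LIMIT  10;
    SQL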
] |
[
{
"msg_contents": "Hello,\r\n\r\n\r\n\r\nWe have been stuck for the past week on a query that simply won’t “execute”. We have a table with 1.2B rows that took around 14h to load, but a simple select takes forever and after 10h, no records are coming through still.\r\n\r\n\r\n\r\nEnvironment:\r\n\r\n - Table tmp_outpatient_rev with 41 VARCHAR columns (desy_sort_key, claim_no, clm_line_num, clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …)\r\n\r\n - 1.2B rows (Billion with a ‘B’)\r\n\r\n - A single Unique Index on columns desy_sort_key, claim_no, clm_line_num\r\n\r\n - select pg_size_pretty(pg_relation_size('tmp_outpatient_rev')) --> 215GB\r\n\r\n - Database Server: 64GB, 8 cores/16 threads, HDDs 10K\r\n\r\n - Linux\r\n\r\n - PG 11.1\r\n\r\n\r\n\r\nQuery:\r\n\r\n select * from tmp_outpatient_rev order by desy_sort_key, claim_no\r\n\r\n\r\n\r\nPlan:\r\n\r\n Gather Merge (cost=61001461.16..216401602.29 rows=1242732290 width=250)\r\n\r\n Output: desy_sort_key, claim_no, clm_line_num, clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\r\n\r\n Workers Planned: 10\r\n\r\n -> Sort (cost=61000460.97..61311144.04 rows=124273229 width=250)\r\n\r\n Output: desy_sort_key, claim_no, clm_line_num, clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\r\n\r\n Sort Key: tmp_outpatient_rev.desy_sort_key, tmp_outpatient_rev.claim_no\r\n\r\n -> Parallel Seq Scan on public.tmp_outpatient_rev (cost=0.00..29425910.29 rows=124273229 width=250)\r\n\r\n Output: desy_sort_key, claim_no, clm_line_num, clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\r\n\r\n\r\n\r\nMethod of access:\r\n\r\n - Using Pentaho Kettle (an ETL tool written in Java and using JDBC), we simply issue the query and expect records to start streaming in ASAP.\r\n\r\n - Issue was replicated with really basic JDBC code in a Java test program.\r\n\r\n - The database doesn't have much other data and the table was loaded from a CSV data source with LOAD over something like 14h (average throughput of about 25K rows/s)\r\n\r\n - Settings:\r\n\r\n alter database \"CMS_TMP\" set seq_page_cost=1;\r\n\r\n alter database \"CMS_TMP\" set random_page_cost=4;\r\n\r\n alter database \"CMS_TMP\" set enable_seqscan=true;\r\n\r\n JDBC connection string with no extra params.\r\n\r\n Database has been generally configured properly.\r\n\r\n\r\n\r\nProblem:\r\n\r\n - The plan shows a full table scan followed by a sort, and then a gather merge. With 1.2B rows, that's crazy to try to sort that 😊\r\n\r\n - After 10h, the query is still \"silent\" and no record is streaming in. IO is very high (80-90% disk throughput utilization) on the machine (the sort…).\r\n\r\n - I have tried to hack the planner to force an index scan (which would avoid the sort/gather steps and should start streaming data right away), in particular, enable_seqscan=false or seq_page_cost=2. This had ZERO impact on the plan to my surprise.\r\n\r\n - I changed the “order by” to include all 3 columns from the index, or created a non-unique index with only the first 2 columns, all to no effect whatsoever either.\r\n\r\n - The table was written over almost 14h at about 25K row/s and it seems to me I should be able to read the data back at least as fast.\r\n\r\n\r\n\r\nWhy is a simple index scan not used? 
Why are all our efforts to try to force the use of the index failing?\r\n\r\n\r\n\r\nAny help is very much appreciated as we are really hitting a wall here with that table.\r\n\r\n\r\n\r\nThank you so much.\r\n\r\n\r\nLaurent Hasson",
"msg_date": "Fri, 25 Jan 2019 05:20:00 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Zero throughput on a query on a very large table."
},
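One low-cost experiment, sketched here rather than taken from the thread: run the same statement through an explicit cursor. DECLARE CURSOR makes the planner weight startup cost via cursor_tuple_fraction, so if a fast-start index-scan plan is available at all, the first FETCH should come back in seconds rather than after a full sort. The database name is the one shown in the settings above; the specific cursor_tuple_fraction value is only illustrative.

    psql -d CMS_TMP <<'SQL'
    BEGIN;
    SET LOCAL cursor_tuple_fraction = 0.001;   -- bias strongly toward fast-start plans
    DECLARE rev_cur NO SCROLL CURSOR FOR
        SELECT * FROM tmp_outpatient_rev ORDER BY desy_sort_key, claim_no;
    FETCH 100 FROM rev_cur;                    -- streams immediately if an index scan was chosen
    COMMIT;
    SQL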
{
"msg_contents": "\n\nAm 25.01.19 um 06:20 schrieb [email protected]:\n>\n> Hello,\n>\n> We have been stuck for the past week on a query that simply won’t \n> “execute”. We have a table with 1.2B rows that took around 14h to \n> load, but a simple select takes forever and after 10h, no records are \n> coming through still.\n>\n> Environment:\n>\n> - Table tmp_outpatient_rev with 41 VARCHAR columns \n> (desy_sort_key, claim_no, clm_line_num, clm_thru_dt, nch_clm_type_cd, \n> rev_cntr, rev_cntr_dt, …)\n>\n> - 1.2B rows (Billion with a ‘B’)\n>\n> - A single Unique Index on columns desy_sort_key, claim_no, \n> clm_line_num\n>\n> - select pg_size_pretty(pg_relation_size('tmp_outpatient_rev')) \n> --> 215GB\n>\n> - Database Server: 64GB, 8 cores/16 threads, HDDs 10K\n>\n> - Linux\n>\n> - PG 11.1\n>\n> Query:\n>\n> select * from tmp_outpatient_rev order by desy_sort_key, claim_no\n>\n> Plan:\n>\n> Gather Merge (cost=61001461.16..216401602.29 rows=1242732290 \n> width=250)\n>\n> Output: desy_sort_key, claim_no, clm_line_num, clm_thru_dt, \n> nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\n>\n> Workers Planned: 10\n>\n> -> Sort (cost=61000460.97..61311144.04 rows=124273229 width=250)\n>\n> Output: desy_sort_key, claim_no, clm_line_num, \n> clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\n>\n> Sort Key: tmp_outpatient_rev.desy_sort_key, \n> tmp_outpatient_rev.claim_no\n>\n> -> Parallel Seq Scan on public.tmp_outpatient_rev \n> (cost=0.00..29425910.29 rows=124273229 width=250)\n>\n> Output: desy_sort_key, claim_no, clm_line_num, \n> clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\n>\n> Method of access:\n>\n> - Using Pentaho Kettle (an ETL tool written in Java and using \n> JDBC), we simply issue the query and expect records to start streaming \n> in ASAP.\n>\n> - Issue was replicated with really basic JDBC code in a Java test \n> program.\n>\n> - The database doesn't have much other data and the table was \n> loaded from a CSV data source with LOAD over something like 14h \n> (average throughput of about 25K rows/s)\n>\n> - Settings:\n>\n> alter database \"CMS_TMP\" set seq_page_cost=1;\n>\n> alter database \"CMS_TMP\" set random_page_cost=4;\n>\n> alter database \"CMS_TMP\" set enable_seqscan=true;\n>\n> JDBC connection string with no extra params.\n>\n> Database has been generally configured properly.\n>\n> Problem:\n>\n> - The plan shows a full table scan followed by a sort, and then a \n> gather merge. With 1.2B rows, that's crazy to try to sort that 😊\n>\n> - After 10h, the query is still \"silent\" and no record is \n> streaming in. IO is very high (80-90% disk throughput utilization) on \n> the machine (the sort…).\n>\n> - I have tried to hack the planner to force an index scan (which \n> would avoid the sort/gather steps and should start streaming data \n> right away), in particular, enable_seqscan=false or seq_page_cost=2. \n> This had ZERO impact on the plan to my surprise.\n>\n> - I changed the “order by” to include all 3 columns from the index, \n> or created a non-unique index with only the first 2 columns, all to no \n> effect whatsoever either.\n>\n> - The table was written over almost 14h at about 25K row/s and it \n> seems to me I should be able to read the data back at least as fast.\n>\n> Why is a simple index scan not used? Why are all our efforts to try to \n> force the use of the index failing?\n>\n>\n\nthe query isn't that simple, there is no where condition, so PG has to \nread the whole table and the index is useless. 
Would it be enough to\nselect only the columns covered by the index?\n(run a vacuum on the table after loading the data, that can enable an \nindex-only scan in this case)\n\n\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n",
"msg_date": "Fri, 25 Jan 2019 06:54:39 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
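A sketch of the suggestion above, checking whether an index-only scan becomes available once the visibility map has been built; the column list matches the unique index described in the first message.

    psql -d CMS_TMP <<'SQL'
    VACUUM (VERBOSE) tmp_outpatient_rev;   -- builds the visibility map after the bulk load
    EXPLAIN
    SELECT desy_sort_key, claim_no, clm_line_num
    FROM   tmp_outpatient_rev
    ORDER  BY desy_sort_key, claim_no;
    SQL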
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> Query:\n> select * from tmp_outpatient_rev order by desy_sort_key, claim_no\n\n> Plan:\n> [ seqscan-and-sort ... parallelized, but still seqscan-and-sort ]\n\n> - I have tried to hack the planner to force an index scan (which would avoid the sort/gather steps and should start streaming data right away), in particular, enable_seqscan=false or seq_page_cost=2. This had ZERO impact on the plan to my surprise.\n\nIf you can't get an indexscan plan despite setting enable_seqscan=false,\nthat typically means that the planner thinks the index's sort order\ndoes not match what the query is asking for. I wonder whether you\ncreated the index with nondefault collation, or asc/desc ordering,\nor something like that. There's not enough detail here to diagnose\nthat.\n\nIt should also be noted that what enable_seqscan=false actually does\nis to add a cost penalty of 1e10 to seqscan plans. It's possible\nthat your table is so large and badly ordered that the estimated\ncost differential between seqscan and indexscan is more than 1e10,\nso that the planner goes for the seqscan anyway. You could probably\novercome that by aggressively decreasing random_page_cost (and by\n\"aggressive\" I don't mean 2, I mean 0.2, or maybe 0.00002, whatever\nit takes). However, if that's what's happening, I'm worried that\ngetting what you asked for may not really be the outcome you wanted.\nJust because you start to see some data streaming to your app right\naway doesn't mean the process is going to complete in less time than\nit would if you waited for the sort to happen.\n\nYou didn't mention what you have work_mem set to, but a small value\nof that would handicap the sort-based plan a lot. I wonder whether\njacking up work_mem to help the sorts run faster won't end up being\nthe better idea in the end.\n\n\t\t\tregards, tom lane\n\nPS: On the third hand, you mention having created new indexes on this\ntable with apparently not a lot of pain, which is a tad surprising\nif you don't have the patience to wait for a sort to finish. How\nlong did those index builds take?\n\n",
"msg_date": "Fri, 25 Jan 2019 01:24:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
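The two knobs Tom mentions can be tried per session before touching any database-level settings; the particular values below are only illustrative.

    psql -d CMS_TMP <<'SQL'
    SET random_page_cost = 0.2;      -- then 0.02, 0.002, ... if the seq scan still wins
    SET work_mem = '1GB';            -- helps the sort-based plan if it remains the winner
    EXPLAIN SELECT * FROM tmp_outpatient_rev ORDER BY desy_sort_key, claim_no;
    SQL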
{
"msg_contents": "On Fri, 25 Jan 2019 at 19:24, Tom Lane <[email protected]> wrote:\n> PS: On the third hand, you mention having created new indexes on this\n> table with apparently not a lot of pain, which is a tad surprising\n> if you don't have the patience to wait for a sort to finish. How\n> long did those index builds take?\n\nIt would certainly be good to look at psql's \\d tmp_outpatient_rev\noutput to ensure that the index is not marked as INVALID.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 25 Jan 2019 19:55:31 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
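Besides \d, the same check can be made against the catalog directly; a sketch, with the schema assumed to be public.

    psql -d CMS_TMP <<'SQL'
    SELECT c.relname AS index_name, i.indisvalid, i.indisready
    FROM   pg_index i
    JOIN   pg_class c ON c.oid = i.indexrelid
    WHERE  i.indrelid = 'public.tmp_outpatient_rev'::regclass;
    SQL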
{
"msg_contents": "> -----Original Message-----\r\n> From: Andreas Kretschmer <[email protected]>\r\n> Sent: Friday, January 25, 2019 00:55\r\n> To: [email protected]\r\n> Subject: Re: Zero throughput on a query on a very large table.\r\n> \r\n> \r\n> \r\n> Am 25.01.19 um 06:20 schrieb [email protected]:\r\n> >\r\n> > Hello,\r\n> >\r\n> > We have been stuck for the past week on a query that simply won’t\r\n> > “execute”. We have a table with 1.2B rows that took around 14h to\r\n> > load, but a simple select takes forever and after 10h, no records are\r\n> > coming through still.\r\n> >\r\n> > Environment:\r\n> >\r\n> > - Table tmp_outpatient_rev with 41 VARCHAR columns\r\n> > (desy_sort_key, claim_no, clm_line_num, clm_thru_dt, nch_clm_type_cd,\r\n> > rev_cntr, rev_cntr_dt, …)\r\n> >\r\n> > - 1.2B rows (Billion with a ‘B’)\r\n> >\r\n> > - A single Unique Index on columns desy_sort_key, claim_no,\r\n> > clm_line_num\r\n> >\r\n> > - select pg_size_pretty(pg_relation_size('tmp_outpatient_rev'))\r\n> > --> 215GB\r\n> >\r\n> > - Database Server: 64GB, 8 cores/16 threads, HDDs 10K\r\n> >\r\n> > - Linux\r\n> >\r\n> > - PG 11.1\r\n> >\r\n> > Query:\r\n> >\r\n> > select * from tmp_outpatient_rev order by desy_sort_key, claim_no\r\n> >\r\n> > Plan:\r\n> >\r\n> > Gather Merge (cost=61001461.16..216401602.29 rows=1242732290\r\n> > width=250)\r\n> >\r\n> > Output: desy_sort_key, claim_no, clm_line_num, clm_thru_dt,\r\n> > nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\r\n> >\r\n> > Workers Planned: 10\r\n> >\r\n> > -> Sort (cost=61000460.97..61311144.04 rows=124273229\r\n> > width=250)\r\n> >\r\n> > Output: desy_sort_key, claim_no, clm_line_num,\r\n> > clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\r\n> >\r\n> > Sort Key: tmp_outpatient_rev.desy_sort_key,\r\n> > tmp_outpatient_rev.claim_no\r\n> >\r\n> > -> Parallel Seq Scan on public.tmp_outpatient_rev\r\n> > (cost=0.00..29425910.29 rows=124273229 width=250)\r\n> >\r\n> > Output: desy_sort_key, claim_no, clm_line_num,\r\n> > clm_thru_dt, nch_clm_type_cd, rev_cntr, rev_cntr_dt, …\r\n> >\r\n> > Method of access:\r\n> >\r\n> > - Using Pentaho Kettle (an ETL tool written in Java and using\r\n> > JDBC), we simply issue the query and expect records to start streaming\r\n> > in ASAP.\r\n> >\r\n> > - Issue was replicated with really basic JDBC code in a Java test\r\n> > program.\r\n> >\r\n> > - The database doesn't have much other data and the table was\r\n> > loaded from a CSV data source with LOAD over something like 14h\r\n> > (average throughput of about 25K rows/s)\r\n> >\r\n> > - Settings:\r\n> >\r\n> > alter database \"CMS_TMP\" set seq_page_cost=1;\r\n> >\r\n> > alter database \"CMS_TMP\" set random_page_cost=4;\r\n> >\r\n> > alter database \"CMS_TMP\" set enable_seqscan=true;\r\n> >\r\n> > JDBC connection string with no extra params.\r\n> >\r\n> > Database has been generally configured properly.\r\n> >\r\n> > Problem:\r\n> >\r\n> > - The plan shows a full table scan followed by a sort, and then a\r\n> > gather merge. With 1.2B rows, that's crazy to try to sort that 😊\r\n> >\r\n> > - After 10h, the query is still \"silent\" and no record is\r\n> > streaming in. 
IO is very high (80-90% disk throughput utilization) on\r\n> > the machine (the sort…).\r\n> >\r\n> > - I have tried to hack the planner to force an index scan (which\r\n> > would avoid the sort/gather steps and should start streaming data\r\n> > right away), in particular, enable_seqscan=false or seq_page_cost=2.\r\n> > This had ZERO impact on the plan to my surprise.\r\n> >\r\n> > - I changed the “order by” to include all 3 columns from the index,\r\n> > or created a non-unique index with only the first 2 columns, all to no\r\n> > effect whatsoever either.\r\n> >\r\n> > - The table was written over almost 14h at about 25K row/s and it\r\n> > seems to me I should be able to read the data back at least as fast.\r\n> >\r\n> > Why is a simple index scan not used? Why are all our efforts to try to\r\n> > force the use of the index failing?\r\n> >\r\n> >\r\n> \r\n> the query isn't that simple, there is no where condition, so PG has to read the\r\n> whole table and the index is useless. Would it be enought to select only the\r\n> columns covered by the index?\r\n> (run a vacuum on the table after loading the data, that's can enable a index-\r\n> only-scan in this case)\r\n> \r\n> \r\n> \r\n> \r\n> Regards, Andreas\r\n> \r\n> --\r\n> 2ndQuadrant - The PostgreSQL Support Company.\r\n> www.2ndQuadrant.com\r\n> \r\n\r\nWell, even without a where clause, and a straight select with an order by on an index... The index may perform slightly more slowly, but stream data more rapidly... I guess what i am pointing out is that in ETL scenarios, enabling better continuous throughput would be better than total overall query performance?\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n \r\n\r\n",
"msg_date": "Fri, 25 Jan 2019 17:47:18 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Zero throughput on a query on a very large table."
},
{
"msg_contents": "Sorry, the web outlook client may be \"prepending\" this message instead of appending, as is the custom on this mailing list.\n\n\nThe indices are defined as:\n\nCREATE INDEX i_outprev_ptclaim\n ON public.tmp_outpatient_rev USING btree\n (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\")\n TABLESPACE pg_default;\n\nCREATE UNIQUE INDEX ui_outprev_ptclaimline\n ON public.tmp_outpatient_rev USING btree\n (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\", clm_line_num COLLATE pg_catalog.\"default\")\n TABLESPACE pg_default;\n\n\nI am using PGAdmin4 and the client times out, so i don't have the exact timing, but each one of those indices completed under 5h (started at lunch time and was done before the end of the afternoon). So when i ran the query and it didn't move for about 10h, i figured it might \"never end\" :).\n\n\nI'll try changing the random page cost and see. The work_men param is set to 128MB... So maybe that's something too? I'll try.\n\n\nAdditionally, do note that we have a second table, similar in structure, with 180M rows, select pg_size_pretty(pg_relation_size('tmp_inpatient_rev')) --> 18GB (so it's 10x smaller) but we get 40K rows/s read throughput on that with a similar query and index and the plan does chose an index scan and returns the first thousands of row almost immediately (a few secs).\n\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 1:24:45 AM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n\n\"[email protected]\" <[email protected]> writes:\n> Query:\n> select * from tmp_outpatient_rev order by desy_sort_key, claim_no\n\n> Plan:\n> [ seqscan-and-sort ... parallelized, but still seqscan-and-sort ]\n\n> - I have tried to hack the planner to force an index scan (which would avoid the sort/gather steps and should start streaming data right away), in particular, enable_seqscan=false or seq_page_cost=2. This had ZERO impact on the plan to my surprise.\n\nIf you can't get an indexscan plan despite setting enable_seqscan=false,\nthat typically means that the planner thinks the index's sort order\ndoes not match what the query is asking for. I wonder whether you\ncreated the index with nondefault collation, or asc/desc ordering,\nor something like that. There's not enough detail here to diagnose\nthat.\n\nIt should also be noted that what enable_seqscan=false actually does\nis to add a cost penalty of 1e10 to seqscan plans. It's possible\nthat your table is so large and badly ordered that the estimated\ncost differential between seqscan and indexscan is more than 1e10,\nso that the planner goes for the seqscan anyway. You could probably\novercome that by aggressively decreasing random_page_cost (and by\n\"aggressive\" I don't mean 2, I mean 0.2, or maybe 0.00002, whatever\nit takes). However, if that's what's happening, I'm worried that\ngetting what you asked for may not really be the outcome you wanted.\nJust because you start to see some data streaming to your app right\naway doesn't mean the process is going to complete in less time than\nit would if you waited for the sort to happen.\n\nYou didn't mention what you have work_mem set to, but a small value\nof that would handicap the sort-based plan a lot. 
I wonder whether\njacking up work_mem to help the sorts run faster won't end up being\nthe better idea in the end.\n\n regards, tom lane\n\nPS: On the third hand, you mention having created new indexes on this\ntable with apparently not a lot of pain, which is a tad surprising\nif you don't have the patience to wait for a sort to finish. How\nlong did those index builds take?",
"msg_date": "Fri, 25 Jan 2019 18:00:31 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
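One statistic worth comparing between the 215GB table and the 18GB table that streams fine is the physical ordering of the sort-key columns, which drives the planner's index-scan cost estimate. A sketch (it assumes both tables have been ANALYZEd):

    psql -d CMS_TMP <<'SQL'
    -- correlation near +/-1: heap is roughly in index order, an ordered index scan is cheap;
    -- near 0: roughly one random heap page per row when scanning in index order
    SELECT tablename, attname, correlation, n_distinct
    FROM   pg_stats
    WHERE  tablename IN ('tmp_outpatient_rev', 'tmp_inpatient_rev')
    AND    attname  IN ('desy_sort_key', 'claim_no');
    SQL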
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> The indices are defined as:\n\n> CREATE INDEX i_outprev_ptclaim\n> ON public.tmp_outpatient_rev USING btree\n> (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\")\n> TABLESPACE pg_default;\n\n> CREATE UNIQUE INDEX ui_outprev_ptclaimline\n> ON public.tmp_outpatient_rev USING btree\n> (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\", clm_line_num COLLATE pg_catalog.\"default\")\n> TABLESPACE pg_default;\n\nI'm a bit suspicious of those explicit COLLATE clauses; seems like maybe\nthey could be accounting for not matching to the query-requested order.\nPerhaps they're different from the collations specified on the underlying\ntable columns?\n\nAlso, it seems unlikely that it's worth the maintenance work to keep\nboth of these indexes, though that's not related to your immediate\nproblem.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 25 Jan 2019 13:10:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
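A direct way to check Tom's collation suspicion is to put the column collations and the index definitions side by side; a sketch, where a NULL collation_name simply means the database default.

    psql -d CMS_TMP <<'SQL'
    SELECT column_name, collation_name
    FROM   information_schema.columns
    WHERE  table_name = 'tmp_outpatient_rev'
    AND    column_name IN ('desy_sort_key', 'claim_no', 'clm_line_num');

    SELECT indexrelid::regclass AS index, pg_get_indexdef(indexrelid) AS definition
    FROM   pg_index
    WHERE  indrelid = 'tmp_outpatient_rev'::regclass;
    SQL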
{
"msg_contents": "Since the PGADmin4 client timed out when creating the index, you picked my interest here and i was wondering if the index creation itself had failed... but:\n\n\\d tmp_outpatient_rev\n\nIndexes:\n \"ui_outprev_ptclaimline\" UNIQUE, btree (desy_sort_key, claim_no, clm_line_num)\n \"i_outprev_ptclaim\" btree (desy_sort_key, claim_no)\n\nSo looks like the indices are file. I am pursuing some of the other recommendations you suggested before.\n\n\nThank you,\n\nLaurent.\n\n________________________________\nFrom: David Rowley <[email protected]>\nSent: Friday, January 25, 2019 1:55:31 AM\nTo: Tom Lane\nCc: [email protected]; [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n\nOn Fri, 25 Jan 2019 at 19:24, Tom Lane <[email protected]> wrote:\n> PS: On the third hand, you mention having created new indexes on this\n> table with apparently not a lot of pain, which is a tad surprising\n> if you don't have the patience to wait for a sort to finish. How\n> long did those index builds take?\n\nIt would certainly be good to look at psql's \\d tmp_outpatient_rev\noutput to ensure that the index is not marked as INVALID.\n\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\n\n\n\n\nSince the PGADmin4 client timed out when creating the index, you picked my interest here and i was wondering if the index creation itself had failed... but:\n\n\n\\d tmp_outpatient_rev\n\n\n\nIndexes:\n \"ui_outprev_ptclaimline\" UNIQUE, btree (desy_sort_key, claim_no, clm_line_num)\n \"i_outprev_ptclaim\" btree (desy_sort_key, claim_no)\n\n\nSo looks like the indices are file. I am pursuing some of the other recommendations you suggested before.\n\n\n\nThank you,\nLaurent.\n\n\n\nFrom: David Rowley <[email protected]>\nSent: Friday, January 25, 2019 1:55:31 AM\nTo: Tom Lane\nCc: [email protected]; [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\nOn Fri, 25 Jan 2019 at 19:24, Tom Lane <[email protected]> wrote:\n> PS: On the third hand, you mention having created new indexes on this\n> table with apparently not a lot of pain, which is a tad surprising\n> if you don't have the patience to wait for a sort to finish. How\n> long did those index builds take?\n\nIt would certainly be good to look at psql's \\d tmp_outpatient_rev\noutput to ensure that the index is not marked as INVALID.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 25 Jan 2019 18:20:53 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
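For anyone checking the same thing without psql, the INVALID state can also be read straight from the system catalogs. A minimal sketch, reusing the table name from this thread (purely illustrative):

    -- An index left unusable (e.g. by a failed CREATE INDEX CONCURRENTLY)
    -- shows indisvalid = false.
    SELECT indexrelid::regclass AS index_name, indisvalid, indisready
    FROM pg_index
    WHERE indrelid = 'public.tmp_outpatient_rev'::regclass;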
{
"msg_contents": "Agreed on the 2 indices. I only added the second non-unique index to test the hypothesis that i was doing an order-by col1, col2 when the original unique index was on col1, col2, col3...\n\n\nAlso, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices. Collation for the DB is \"en_US.UTF-8\" and that's used for the defaults i suspect?\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 1:10:55 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n\n\"[email protected]\" <[email protected]> writes:\n> The indices are defined as:\n\n> CREATE INDEX i_outprev_ptclaim\n> ON public.tmp_outpatient_rev USING btree\n> (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\")\n> TABLESPACE pg_default;\n\n> CREATE UNIQUE INDEX ui_outprev_ptclaimline\n> ON public.tmp_outpatient_rev USING btree\n> (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\", clm_line_num COLLATE pg_catalog.\"default\")\n> TABLESPACE pg_default;\n\nI'm a bit suspicious of those explicit COLLATE clauses; seems like maybe\nthey could be accounting for not matching to the query-requested order.\nPerhaps they're different from the collations specified on the underlying\ntable columns?\n\nAlso, it seems unlikely that it's worth the maintenance work to keep\nboth of these indexes, though that's not related to your immediate\nproblem.\n\n regards, tom lane\n\n\n\n\n\n\n\n\nAgreed on the 2 indices. I only added the second non-unique index to test the hypothesis that i was doing an order-by col1, col2 when the original unique index was on col1, col2, col3...\n\n\nAlso, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices. Collation for the DB is\n\"en_US.UTF-8\" and that's used for the defaults i suspect?\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 1:10:55 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\"[email protected]\" <[email protected]> writes:\n> The indices are defined as:\n\n> CREATE INDEX i_outprev_ptclaim\n> ON public.tmp_outpatient_rev USING btree\n> (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\")\n> TABLESPACE pg_default;\n\n> CREATE UNIQUE INDEX ui_outprev_ptclaimline\n> ON public.tmp_outpatient_rev USING btree\n> (desy_sort_key COLLATE pg_catalog.\"default\", claim_no COLLATE pg_catalog.\"default\", clm_line_num COLLATE pg_catalog.\"default\")\n> TABLESPACE pg_default;\n\nI'm a bit suspicious of those explicit COLLATE clauses; seems like maybe\nthey could be accounting for not matching to the query-requested order.\nPerhaps they're different from the collations specified on the underlying\ntable columns?\n\nAlso, it seems unlikely that it's worth the maintenance work to keep\nboth of these indexes, though that's not related to your immediate\nproblem.\n\n regards, tom lane",
"msg_date": "Fri, 25 Jan 2019 18:29:12 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
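If anyone wants to verify Tom's hunch directly, the collation recorded for each column can be compared against the index definition. A rough sketch against the catalogs, again with the table name from this thread (the query itself is illustrative):

    -- attcollation is 0 for non-collatable types, hence the LEFT JOIN
    SELECT a.attname, co.collname AS column_collation
    FROM pg_attribute a
    LEFT JOIN pg_collation co ON co.oid = a.attcollation
    WHERE a.attrelid = 'public.tmp_outpatient_rev'::regclass
      AND a.attnum > 0
    ORDER BY a.attnum;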
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> Also, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices.\n\n[ squint... ] What do you mean exactly by \"Postgres gives that\"?\nI don't see any redundant COLLATE clauses in e.g. psql \\d.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 25 Jan 2019 13:34:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
{
"msg_contents": "Sorry :) When i look at the \"SQL\" tab in PGAdmin when i select the index in the schema browser. But you are right that /d doesn't show that.\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 1:34:01 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n\n\"[email protected]\" <[email protected]> writes:\n> Also, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices.\n\n[ squint... ] What do you mean exactly by \"Postgres gives that\"?\nI don't see any redundant COLLATE clauses in e.g. psql \\d.\n\n regards, tom lane\n\n\n\n\n\n\n\n\nSorry :) When i look at the \"SQL\" tab in PGAdmin when i select the index in the schema browser. But you are right that /d doesn't show that.\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 1:34:01 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\"[email protected]\" <[email protected]> writes:\n> Also, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices.\n\n[ squint... ] What do you mean exactly by \"Postgres gives that\"?\nI don't see any redundant COLLATE clauses in e.g. psql \\d.\n\n regards, tom lane",
"msg_date": "Fri, 25 Jan 2019 18:36:21 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
{
"msg_contents": "OK... I think we may have cracked this.\r\n\r\n\r\nFirst, do you think that 128MB work_mem is ok? We have a 64GB machine and expecting fewer than 100 connections. This is really an ETL workload environment at this time.\r\n\r\n\r\nSecond, here is what i found and what messed us up.\r\n\r\n select current_setting('random_page_cost'); --> 4\r\n\r\n alter database \"CMS_TMP\" set random_page_cost=0.00000001;\r\n select current_setting('random_page_cost'); --> 4 ????\r\n\r\nI also tried:\r\n select current_setting('random_page_cost'); --> 4\r\n select set_config('random_page_cost', '0.000001', true);\r\n select current_setting('random_page_cost'); --> 4 ????\r\n\r\n\r\nIs there something that is happening that is causing those settings to not stick? I then tried:\r\n\r\n\r\n select current_setting('random_page_cost'); --> 4\r\n select set_config('random_page_cost', '0.000001', false); -- false now, i.e., global\r\n select current_setting('random_page_cost'); --> 0.000001 !!!!\r\n\r\n\r\nSo i think we just spent 4 days on that issue. I then did\r\n\r\n select set_config('enable_seqscan', 'off', false);\r\nAnd the plan is now using an index scan, and we are getting 12K rows/s in throughput immediately!!! 😊\r\n\r\nSo i guess my final question is that i really want to only affect that one query executing, and i seem to not be able to change the settings used by the planner just for that one transaction. I have to change it globally which i would prefer not to do. Any help here?\r\n\r\n\r\nThanks,\r\n\r\nLaurent.\r\n\r\n________________________________\r\nFrom: [email protected] <[email protected]>\r\nSent: Friday, January 25, 2019 1:36:21 PM\r\nTo: Tom Lane\r\nCc: [email protected]\r\nSubject: Re: Zero throughput on a query on a very large table.\r\n\r\n\r\nSorry :) When i look at the \"SQL\" tab in PGAdmin when i select the index in the schema browser. But you are right that /d doesn't show that.\r\n\r\n________________________________\r\nFrom: Tom Lane <[email protected]>\r\nSent: Friday, January 25, 2019 1:34:01 PM\r\nTo: [email protected]\r\nCc: [email protected]\r\nSubject: Re: Zero throughput on a query on a very large table.\r\n\r\n\"[email protected]\" <[email protected]> writes:\r\n> Also, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices.\r\n\r\n[ squint... ] What do you mean exactly by \"Postgres gives that\"?\r\nI don't see any redundant COLLATE clauses in e.g. psql \\d.\r\n\r\n regards, tom lane\r\n\n\n\n\n\n\n\n\nOK... I think we may have cracked this.\n\n\nFirst, do you think that 128MB work_mem is ok? We have a 64GB machine and expecting fewer than 100 connections. This is really an ETL workload environment at this time.\n\n\nSecond, here is what i found and what messed us up.\n select current_setting('random_page_cost'); --> 4\n\n\n alter database \"CMS_TMP\" set random_page_cost=0.00000001;\n select current_setting('random_page_cost'); --> 4 ????\n\n\nI also tried:\n\n select current_setting('random_page_cost'); --> 4\n select set_config('random_page_cost', '0.000001', true);\n\n select current_setting('random_page_cost'); --> 4 ????\n\n\n\n\nIs there something that is happening that is causing those settings to not stick? 
I then tried:\n\n\n\n select current_setting('random_page_cost'); --> 4\n select set_config('random_page_cost', '0.000001', false); -- false now, i.e., global\n select current_setting('random_page_cost'); --> 0.000001 !!!!\n\n\n\nSo i think we just spent 4 days on that issue. I then did\n\n select set_config('enable_seqscan', 'off', false);\r\nAnd the plan is now using an index scan, and we are getting 12K rows/s in throughput immediately!!!\r\n😊\n\n\nSo i guess my final question is that i really want to only affect that one query executing, and i seem to not be able to change the settings used by the planner just for that one transaction. I have to change it globally which i would prefer not to do.\r\n Any help here?\n\n\n\nThanks,\nLaurent.\n\n\n\nFrom: [email protected] <[email protected]>\nSent: Friday, January 25, 2019 1:36:21 PM\nTo: Tom Lane\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\n\nSorry :) When i look at the \"SQL\" tab in PGAdmin when i select the index in the schema browser. But you are right that /d doesn't show that.\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 1:34:01 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\"[email protected]\" <[email protected]> writes:\r\n> Also, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices.\n\r\n[ squint... ] What do you mean exactly by \"Postgres gives that\"?\r\nI don't see any redundant COLLATE clauses in e.g. psql \\d.\n\r\n regards, tom lane",
"msg_date": "Fri, 25 Jan 2019 19:06:54 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
{
"msg_contents": "Just a correction from my previous message regarding the throughput we get.\r\n\r\n\r\nOn that one table with 1.2B row, the plan through the index scan delivers actually 50K rows/s in read speed to the application, almost immediately. It would go through the entire table in under 7h vs the other approach which still didn't deliver any data after 10h.\r\n\r\n\r\nWe do additional joins and logic and out final throughput is about 12K/s (what i quoted previously), but this is a case where clearly the index_scan plan delivers vastly better performance than the table_seq_scan+sort plan.\r\n\r\n\r\nAny insight here?\r\n\r\n\r\nThank you,\r\n\r\nLaurent.\r\n\r\n________________________________\r\nFrom: [email protected] <[email protected]>\r\nSent: Friday, January 25, 2019 2:06:54 PM\r\nTo: Tom Lane\r\nCc: [email protected]\r\nSubject: Re: Zero throughput on a query on a very large table.\r\n\r\n\r\nOK... I think we may have cracked this.\r\n\r\n\r\nFirst, do you think that 128MB work_mem is ok? We have a 64GB machine and expecting fewer than 100 connections. This is really an ETL workload environment at this time.\r\n\r\n\r\nSecond, here is what i found and what messed us up.\r\n\r\n select current_setting('random_page_cost'); --> 4\r\n\r\n alter database \"CMS_TMP\" set random_page_cost=0.00000001;\r\n select current_setting('random_page_cost'); --> 4 ????\r\n\r\nI also tried:\r\n select current_setting('random_page_cost'); --> 4\r\n select set_config('random_page_cost', '0.000001', true);\r\n select current_setting('random_page_cost'); --> 4 ????\r\n\r\n\r\nIs there something that is happening that is causing those settings to not stick? I then tried:\r\n\r\n\r\n select current_setting('random_page_cost'); --> 4\r\n select set_config('random_page_cost', '0.000001', false); -- false now, i.e., global\r\n select current_setting('random_page_cost'); --> 0.000001 !!!!\r\n\r\n\r\nSo i think we just spent 4 days on that issue. I then did\r\n\r\n select set_config('enable_seqscan', 'off', false);\r\nAnd the plan is now using an index scan, and we are getting 12K rows/s in throughput immediately!!! 😊\r\n\r\nSo i guess my final question is that i really want to only affect that one query executing, and i seem to not be able to change the settings used by the planner just for that one transaction. I have to change it globally which i would prefer not to do. Any help here?\r\n\r\n\r\nThanks,\r\n\r\nLaurent.\r\n\r\n________________________________\r\nFrom: [email protected] <[email protected]>\r\nSent: Friday, January 25, 2019 1:36:21 PM\r\nTo: Tom Lane\r\nCc: [email protected]\r\nSubject: Re: Zero throughput on a query on a very large table.\r\n\r\n\r\nSorry :) When i look at the \"SQL\" tab in PGAdmin when i select the index in the schema browser. But you are right that /d doesn't show that.\r\n\r\n________________________________\r\nFrom: Tom Lane <[email protected]>\r\nSent: Friday, January 25, 2019 1:34:01 PM\r\nTo: [email protected]\r\nCc: [email protected]\r\nSubject: Re: Zero throughput on a query on a very large table.\r\n\r\n\"[email protected]\" <[email protected]> writes:\r\n> Also, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices.\r\n\r\n[ squint... ] What do you mean exactly by \"Postgres gives that\"?\r\nI don't see any redundant COLLATE clauses in e.g. 
psql \\d.\r\n\r\n regards, tom lane\r\n\n\n\n\n\n\n\n\nJust a correction from my previous message regarding the throughput we get.\n\n\nOn that one table with 1.2B row, the plan through the index scan delivers actually 50K rows/s in read speed to the application, almost immediately. It would go through the entire table in under 7h vs the other approach\r\n which still didn't deliver any data after 10h.\n\n\nWe do additional joins and logic and out final throughput is about 12K/s (what i quoted previously), but this is a case where clearly the index_scan plan delivers vastly better performance than the table_seq_scan+sort\r\n plan.\n\n\nAny insight here?\n\n\nThank you,\nLaurent.\n\n\n\nFrom: [email protected] <[email protected]>\nSent: Friday, January 25, 2019 2:06:54 PM\nTo: Tom Lane\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\n\nOK... I think we may have cracked this.\n\n\nFirst, do you think that 128MB work_mem is ok? We have a 64GB machine and expecting fewer than 100 connections. This is really an ETL workload environment at this time.\n\n\nSecond, here is what i found and what messed us up.\n select current_setting('random_page_cost'); --> 4\n\n\n alter database \"CMS_TMP\" set random_page_cost=0.00000001;\n select current_setting('random_page_cost'); --> 4 ????\n\n\nI also tried:\n\n select current_setting('random_page_cost'); --> 4\n select set_config('random_page_cost', '0.000001', true);\n\n select current_setting('random_page_cost'); --> 4 ????\n\n\n\n\nIs there something that is happening that is causing those settings to not stick? I then tried:\n\n\n\n select current_setting('random_page_cost'); --> 4\n select set_config('random_page_cost', '0.000001', false); -- false now, i.e., global\n select current_setting('random_page_cost'); --> 0.000001 !!!!\n\n\n\nSo i think we just spent 4 days on that issue. I then did\n\n select set_config('enable_seqscan', 'off', false);\r\nAnd the plan is now using an index scan, and we are getting 12K rows/s in throughput immediately!!!\r\n😊\n\n\nSo i guess my final question is that i really want to only affect that one query executing, and i seem to not be able to change the settings used by the planner just for that one transaction. I have to change it globally which i would prefer not to do.\r\n Any help here?\n\n\n\nThanks,\nLaurent.\n\n\n\nFrom: [email protected] <[email protected]>\nSent: Friday, January 25, 2019 1:36:21 PM\nTo: Tom Lane\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\n\nSorry :) When i look at the \"SQL\" tab in PGAdmin when i select the index in the schema browser. But you are right that /d doesn't show that.\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 1:34:01 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\"[email protected]\" <[email protected]> writes:\r\n> Also, the original statement i implemented did not have all of that. This is the normalized SQL that Postgres now gives when looking at the indices.\n\r\n[ squint... ] What do you mean exactly by \"Postgres gives that\"?\r\nI don't see any redundant COLLATE clauses in e.g. psql \\d.\n\r\n regards, tom lane",
"msg_date": "Fri, 25 Jan 2019 19:24:30 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> Second, here is what i found and what messed us up.\n\n> select current_setting('random_page_cost'); --> 4\n> alter database \"CMS_TMP\" set random_page_cost=0.00000001;\n> select current_setting('random_page_cost'); --> 4 ????\n\nALTER DATABASE only affects subsequently-started sessions.\n\n> I also tried:\n> select current_setting('random_page_cost'); --> 4\n> select set_config('random_page_cost', '0.000001', true);\n> select current_setting('random_page_cost'); --> 4 ????\n\nThat \"true\" means \"local to the current transaction\", which is\njust the one statement if you don't have a BEGIN.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 25 Jan 2019 15:04:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Zero throughput on a query on a very large table."
},
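To make Tom's point concrete, here is a minimal sketch of scoping planner settings to a single transaction with SET LOCAL (the values and query are purely illustrative, not recommendations):

    BEGIN;
    -- SET LOCAL lasts only until COMMIT/ROLLBACK; plain SET lasts for the whole session
    SET LOCAL random_page_cost = 1.1;
    SET LOCAL enable_seqscan = off;
    EXPLAIN SELECT * FROM tmp_outpatient_rev ORDER BY desy_sort_key, claim_no;
    COMMIT;  -- the settings revert here

set_config('...', '...', true) is the functional equivalent of SET LOCAL, so it behaves the same way: outside an explicit transaction block it evaporates as soon as its own statement finishes.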
{
"msg_contents": "Correct, but in the Java code, it's multiple statements in a single transaction, so it should stick. Not sure if something else stupid is going on.\n\n\nGood to know about the ALTER DATABASE effect. I didn't realize that.\n\n\nThanks a billion.\n\n\nLaurent.\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 3:04:37 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n\n\"[email protected]\" <[email protected]> writes:\n> Second, here is what i found and what messed us up.\n\n> select current_setting('random_page_cost'); --> 4\n> alter database \"CMS_TMP\" set random_page_cost=0.00000001;\n> select current_setting('random_page_cost'); --> 4 ????\n\nALTER DATABASE only affects subsequently-started sessions.\n\n> I also tried:\n> select current_setting('random_page_cost'); --> 4\n> select set_config('random_page_cost', '0.000001', true);\n> select current_setting('random_page_cost'); --> 4 ????\n\nThat \"true\" means \"local to the current transaction\", which is\njust the one statement if you don't have a BEGIN.\n\n regards, tom lane\n\n\n\n\n\n\n\n\nCorrect, but in the Java code, it's multiple statements in a single transaction, so it should stick. Not sure if something else stupid is going on.\n\n\nGood to know about the ALTER DATABASE effect. I didn't realize that.\n\n\nThanks a billion.\n\n\nLaurent.\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Friday, January 25, 2019 3:04:37 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Zero throughput on a query on a very large table.\n \n\n\n\"[email protected]\" <[email protected]> writes:\n> Second, here is what i found and what messed us up.\n\n> select current_setting('random_page_cost'); --> 4\n> alter database \"CMS_TMP\" set random_page_cost=0.00000001;\n> select current_setting('random_page_cost'); --> 4 ????\n\nALTER DATABASE only affects subsequently-started sessions.\n\n> I also tried:\n> select current_setting('random_page_cost'); --> 4\n> select set_config('random_page_cost', '0.000001', true);\n> select current_setting('random_page_cost'); --> 4 ????\n\nThat \"true\" means \"local to the current transaction\", which is\njust the one statement if you don't have a BEGIN.\n\n regards, tom lane",
"msg_date": "Fri, 25 Jan 2019 20:31:21 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Zero throughput on a query on a very large table."
}
] |
[
{
"msg_contents": "Hi Team, I've few Questions on SQL perf tuning.\n\n\n1) Is there any SQL monitoring report that's available in Oracle. Highlight of the report is it tells the % of time spent on CPU & IO. And which step took how much % in overall execution.\n\n2) Is there anyway to know the historical execution plan details of a particular SQL ? Per my understanding so far since there is no concept of shared pool unlike Oracle every execution demands a new hard parse. However wanted to check with experts to know if any extension available on this?\n\n\nThanks!\n-Kaushik\n\n----------------------------------------------------------------------\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient, you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, notify the sender immediately by return email and delete the message and any attachments from your system.\n\n\n\n\n\n\n\n\n\nHi Team, I’ve few Questions on SQL perf tuning.\n \n1) \nIs there any SQL monitoring report that’s available in Oracle. Highlight of the report is it tells the % of time spent on CPU & IO. And which step took how much % in overall execution.\n\n2) \nIs there anyway to know the historical execution plan details of a particular SQL ? Per my understanding so far since there is no concept of shared pool unlike Oracle every execution demands a new hard parse. However wanted to check\n with experts to know if any extension available on this?\n \n \nThanks!\n-Kaushik\n\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient, you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, notify the sender immediately by return email and delete the message and any attachments from your system.",
"msg_date": "Sun, 27 Jan 2019 08:43:15 +0000",
"msg_from": "\"Bhupathi, Kaushik (CORP)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Q on SQL Performance tuning"
},
{
"msg_contents": "Hi,\n\nThere are many tools:\n- (core) extension pg_stat_statements will give you informations of SQL\nexecutions,\n- extension pgsentinel https://github.com/pgsentinel/pgsentinel\n gives the same results as Oracle ASH view\n- java front end PASH viewer https://github.com/dbacvetkov/PASH-Viewer \n gives a nice view of CPU IO per query\n- extension pg_stat_sql_plans (alpha) gives all of pg_stat_statements and\nmuch more\n (parsing time, planid, plan text, ...)\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Sun, 27 Jan 2019 04:28:59 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Q on SQL Performance tuning"
},
{
"msg_contents": "On Sun, Jan 27, 2019 at 08:43:15AM +0000, Bhupathi, Kaushik (CORP) wrote:\n> 2) Is there anyway to know the historical execution plan details of a particular SQL ? Per my understanding so far since there is no concept of shared pool unlike Oracle every execution demands a new hard parse. However wanted to check with experts to know if any extension available on this?\n\nThere's also autoexplain, althought I think that's typically configured to only\noutput plans for queries which longer than a minimum duration.\n\nJustin\n\n",
"msg_date": "Sun, 27 Jan 2019 08:31:08 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Q on SQL Performance tuning"
},
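As a rough sketch of how the two core tools mentioned above are typically wired up (the values are illustrative, not tuning advice; column names are as of PostgreSQL 10/11 and were renamed in later releases):

    -- postgresql.conf (shared_preload_libraries requires a restart):
    --   shared_preload_libraries = 'pg_stat_statements,auto_explain'
    --   auto_explain.log_min_duration = '1s'   -- log plans of statements slower than 1s
    --   auto_explain.log_analyze = off         -- 'on' adds per-node timing overhead

    -- Then, in the database:
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Top statements by cumulative execution time
    SELECT query, calls, total_time, mean_time, rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;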
{
"msg_contents": "On Sun, 27 Jan 2019 at 06:29, legrand legrand\n<[email protected]> wrote:\n>\n> Hi,\n>\n> There are many tools:\n> - (core) extension pg_stat_statements will give you informations of SQL\n> executions,\n\nI've had enormous success using pg_stat_statements and gathering the\ndata over time in Prometheus. That let me build a dashboard in Grafana\nthat can dive into specific queries and see when their executions rate\nsuddenly spiked or the resource usage for the query suddenly changed.\n\n> - extension pg_stat_sql_plans (alpha) gives all of pg_stat_statements and\nmuch more\n\nExtending pg_stat_statements to track statistics per-plan would be a\nhuge advance. And being able to link the metrics with data dumped in\nthe log from things like log_min_duration and pg_auto_explain would\nmake them both more useful.\n\n-- \ngreg\n\n",
"msg_date": "Thu, 14 Feb 2019 15:29:56 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Q on SQL Performance tuning"
}
] |
[
{
"msg_contents": "Hi,\nI'm planning our db upgrade from 9.6. Basically I wanted to check how\nstable is pg11 version. I'm considering upgrading from 9.6 to 10 and then\nto 11 immediatly. Is there a way to upgrade directly to 11 and jump on 10.\n\nThanks.\n\nHi,I'm planning our db upgrade from 9.6. Basically I wanted to check how stable is pg11 version. I'm considering upgrading from 9.6 to 10 and then to 11 immediatly. Is there a way to upgrade directly to 11 and jump on 10.Thanks.",
"msg_date": "Mon, 28 Jan 2019 12:04:37 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "upgrade from 9.6 to 10/11"
},
{
"msg_contents": "Mariel Cherkassky wrote:\n> I'm planning our db upgrade from 9.6. Basically I wanted to check how stable\n> is pg11 version. I'm considering upgrading from 9.6 to 10 and then to 11 immediatly.\n> Is there a way to upgrade directly to 11 and jump on 10.\n\nv11 is stable, else the PGDG would not release it.\n\nThere is no need to upgrade via v10, I recommend that you upgrade from 9.6\nto v11 directly, either via dump/restore or with pg_upgrade.\n\nhttps://www.postgresql.org/docs/current/upgrading.html\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 28 Jan 2019 11:20:28 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: upgrade from 9.6 to 10/11"
},
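A minimal sketch of the direct 9.6 -> 11 pg_upgrade path Laurenz describes; the paths are placeholders for a Debian/Ubuntu-style layout, so adjust them for your packaging, and take a backup first:

    # Run as the postgres OS user, with both clusters installed and stopped.
    pg_upgrade \
      --old-bindir=/usr/lib/postgresql/9.6/bin \
      --new-bindir=/usr/lib/postgresql/11/bin \
      --old-datadir=/var/lib/postgresql/9.6/main \
      --new-datadir=/var/lib/postgresql/11/main \
      --check    # dry run: reports incompatibilities without changing anything

    # If the check passes, rerun without --check (optionally with --link to avoid
    # copying data files), then run the analyze script pg_upgrade generates.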
{
"msg_contents": "On Mon, Jan 28, 2019 at 11:20:28AM +0100, Laurenz Albe wrote:\n> Mariel Cherkassky wrote:\n> > I'm planning our db upgrade from 9.6. Basically I wanted to check how stable\n> > is pg11 version. I'm considering upgrading from 9.6 to 10 and then to 11 immediatly.\n> > Is there a way to upgrade directly to 11 and jump on 10.\n> \n> v11 is stable, else the PGDG would not release it.\n> \n> There is no need to upgrade via v10, I recommend that you upgrade from 9.6\n> to v11 directly, either via dump/restore or with pg_upgrade.\n\nKeep in mind that v11.2 will be released in 2 weeks, you should plan to install\nthe minor upgrade to it shortly after it's released, or perhaps defer upgrading\nto 11 until then.\n\nhttps://www.postgresql.org/developer/roadmap/\n\nJustin\n\n",
"msg_date": "Mon, 28 Jan 2019 09:43:46 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: upgrade from 9.6 to 10/11"
}
] |
[
{
"msg_contents": "Hey,\nI noticed that pg_locks has an addition row for every transaction that is\ncreated with a locktype \"virtualxid\". Tried to search it online but I didnt\nfind an explanation for this behavior. Does anyone can explain why it\nhappens ?\n\nHey,I noticed that pg_locks has an addition row for every transaction that is created with a locktype \"virtualxid\". Tried to search it online but I didnt find an explanation for this behavior. Does anyone can explain why it happens ?",
"msg_date": "Tue, 29 Jan 2019 10:57:11 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_locks - what is a virtualxid locktype"
},
{
"msg_contents": "The virtualxid lock is special. It’s a exclusive lock on the transaction’s\nown virtual transaction ID that every transaction always holds. No other\ntransaction can ever acquire it while the transaction is running.\nThe purpose of this is to allow one transaction to wait until another\ntransaction commits or rolls back using PostgreSQL’s locking mechanism, and\nit’s used internally.\n\nThanks & Regards,\n*Shreeyansh DBA Team*\nwww.shreeyansh.com\n\n\nOn Tue, Jan 29, 2019 at 2:27 PM Mariel Cherkassky <\[email protected]> wrote:\n\n> Hey,\n> I noticed that pg_locks has an addition row for every transaction that is\n> created with a locktype \"virtualxid\". Tried to search it online but I didnt\n> find an explanation for this behavior. Does anyone can explain why it\n> happens ?\n>\n\nThe virtualxid lock is special. It’s a exclusive lock on the transaction’s own virtual transaction ID that every transaction always holds. No other transaction can ever acquire it while the transaction is running. The purpose of this is to allow one transaction to wait until another transaction commits or rolls back using PostgreSQL’s locking mechanism, and it’s used internally.Thanks & Regards,Shreeyansh DBA Teamwww.shreeyansh.comOn Tue, Jan 29, 2019 at 2:27 PM Mariel Cherkassky <[email protected]> wrote:Hey,I noticed that pg_locks has an addition row for every transaction that is created with a locktype \"virtualxid\". Tried to search it online but I didnt find an explanation for this behavior. Does anyone can explain why it happens ?",
"msg_date": "Tue, 29 Jan 2019 15:03:17 +0530",
"msg_from": "Shreeyansh Dba <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_locks - what is a virtualxid locktype"
},
{
"msg_contents": "On 2019-Jan-29, Shreeyansh Dba wrote:\n\n> The virtualxid lock is special. It’s a exclusive lock on the transaction’s\n> own virtual transaction ID that every transaction always holds. No other\n> transaction can ever acquire it while the transaction is running.\n> The purpose of this is to allow one transaction to wait until another\n> transaction commits or rolls back using PostgreSQL’s locking mechanism, and\n> it’s used internally.\n\nA little more specific than that: it's used when some process (such as\nCREATE INDEX CONCURRENTLY) needs to wait even on sessions that might be\nread-only. Such transactions don't have transaction-ids that write\ntransactions have, which is why the only way is to wait on the virtual\ntransaction-id.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 29 Jan 2019 12:12:18 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_locks - what is a virtualxid locktype"
}
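To see these locks for yourself, the sketch below lists the virtualxid entries in pg_locks; run it while at least one other session has an open transaction (the query is illustrative):

    SELECT locktype, virtualxid, virtualtransaction, pid, mode, granted
    FROM pg_locks
    WHERE locktype = 'virtualxid';
    -- Each open transaction holds ExclusiveLock on its own virtualxid; a process
    -- waiting on it (e.g. CREATE INDEX CONCURRENTLY) appears as a ShareLock
    -- request with granted = false.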
] |
[
{
"msg_contents": "Hi,\n\nI'm going crazy trying to optimise my Postgres config for a production\nsetting [1] Once I realised random changes weren't getting my anywhere, I\nfinally purchased PostgreSQL 10 - Higher Performance [2] and understood the\nimpact of shared_buffers.\n\nIIUC, shared_buffers won't have any significant impact in the following\nscenario, right?\n\n-- DB size = 30GB\n-- shared_buffers = 2GB\n-- workload = tpcb-like\n\nThis is because the tpcb-like workload selects & updates random rows from\nthe DB [3]. Therefore, with a 2GB shared buffer, there is only a 6-7%\nchance (did I get my probability correct?) that the required data will be\nin the shared_buffer. Did I understand this correctly?\n\nIf nothing else becomes the bottleneck (eg. periodically writing dirty\npages to disk), increasing the shared_buffers to 15GB+ should have a\nsignificant impact, for this DB-size and workload, right? (The system has\n64 GB RAM)\n\n[1] Related thread at\nhttps://www.postgresql.org/message-id/flat/CAPz%3D2oGdmvirLNX5kys%2BuiY7LKzCP4sTiXXob39qq6eDkEuk2Q%40mail.gmail.com\n[2]\nhttps://www.packtpub.com/big-data-and-business-intelligence/postgresql-10-high-performance\n[3] https://www.postgresql.org/docs/11/pgbench.html#id-1.9.4.10.7.2\n\n-- Saurabh.\n\nHi,I'm going crazy trying to optimise my Postgres config for a production setting [1] Once I realised random changes weren't getting my anywhere, I finally purchased PostgreSQL 10 - Higher Performance [2] and understood the impact of shared_buffers.IIUC, shared_buffers won't have any significant impact in the following scenario, right?-- DB size = 30GB-- shared_buffers = 2GB-- workload = tpcb-likeThis is because the tpcb-like workload selects & updates random rows from the DB [3]. Therefore, with a 2GB shared buffer, there is only a 6-7% chance (did I get my probability correct?) that the required data will be in the shared_buffer. Did I understand this correctly?If nothing else becomes the bottleneck (eg. periodically writing dirty pages to disk), increasing the shared_buffers to 15GB+ should have a significant impact, for this DB-size and workload, right? (The system has 64 GB RAM)[1] Related thread at https://www.postgresql.org/message-id/flat/CAPz%3D2oGdmvirLNX5kys%2BuiY7LKzCP4sTiXXob39qq6eDkEuk2Q%40mail.gmail.com[2] https://www.packtpub.com/big-data-and-business-intelligence/postgresql-10-high-performance[3] https://www.postgresql.org/docs/11/pgbench.html#id-1.9.4.10.7.2-- Saurabh.",
"msg_date": "Tue, 29 Jan 2019 17:09:14 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Will higher shared_buffers improve tpcb-like benchmarks?"
},
{
"msg_contents": "Please remove me from this list Serv. I do not use this db anymore and\nfills my alerts daily.\n\n\nOn Tue, Jan 29, 2019 at 06:39 Saurabh Nanda <[email protected]> wrote:\n\n> Hi,\n>\n> I'm going crazy trying to optimise my Postgres config for a production\n> setting [1] Once I realised random changes weren't getting my anywhere, I\n> finally purchased PostgreSQL 10 - Higher Performance [2] and understood the\n> impact of shared_buffers.\n>\n> IIUC, shared_buffers won't have any significant impact in the following\n> scenario, right?\n>\n> -- DB size = 30GB\n> -- shared_buffers = 2GB\n> -- workload = tpcb-like\n>\n> This is because the tpcb-like workload selects & updates random rows from\n> the DB [3]. Therefore, with a 2GB shared buffer, there is only a 6-7%\n> chance (did I get my probability correct?) that the required data will be\n> in the shared_buffer. Did I understand this correctly?\n>\n> If nothing else becomes the bottleneck (eg. periodically writing dirty\n> pages to disk), increasing the shared_buffers to 15GB+ should have a\n> significant impact, for this DB-size and workload, right? (The system has\n> 64 GB RAM)\n>\n> [1] Related thread at\n> https://www.postgresql.org/message-id/flat/CAPz%3D2oGdmvirLNX5kys%2BuiY7LKzCP4sTiXXob39qq6eDkEuk2Q%40mail.gmail.com\n> [2]\n> https://www.packtpub.com/big-data-and-business-intelligence/postgresql-10-high-performance\n> [3] https://www.postgresql.org/docs/11/pgbench.html#id-1.9.4.10.7.2\n>\n> -- Saurabh.\n>\n-- \nEthical axioms are found and tested not very differently from the axioms of\nscience. Truth is what stands the the test if experience.\n\nPlease remove me from this list Serv. I do not use this db anymore and fills my alerts daily. On Tue, Jan 29, 2019 at 06:39 Saurabh Nanda <[email protected]> wrote:Hi,I'm going crazy trying to optimise my Postgres config for a production setting [1] Once I realised random changes weren't getting my anywhere, I finally purchased PostgreSQL 10 - Higher Performance [2] and understood the impact of shared_buffers.IIUC, shared_buffers won't have any significant impact in the following scenario, right?-- DB size = 30GB-- shared_buffers = 2GB-- workload = tpcb-likeThis is because the tpcb-like workload selects & updates random rows from the DB [3]. Therefore, with a 2GB shared buffer, there is only a 6-7% chance (did I get my probability correct?) that the required data will be in the shared_buffer. Did I understand this correctly?If nothing else becomes the bottleneck (eg. periodically writing dirty pages to disk), increasing the shared_buffers to 15GB+ should have a significant impact, for this DB-size and workload, right? (The system has 64 GB RAM)[1] Related thread at https://www.postgresql.org/message-id/flat/CAPz%3D2oGdmvirLNX5kys%2BuiY7LKzCP4sTiXXob39qq6eDkEuk2Q%40mail.gmail.com[2] https://www.packtpub.com/big-data-and-business-intelligence/postgresql-10-high-performance[3] https://www.postgresql.org/docs/11/pgbench.html#id-1.9.4.10.7.2-- Saurabh.\n-- Ethical axioms are found and tested not very differently from the axioms of science. Truth is what stands the the test if experience.",
"msg_date": "Tue, 29 Jan 2019 07:12:38 -0500",
"msg_from": "Joe Mirabal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Will higher shared_buffers improve tpcb-like benchmarks?"
},
{
"msg_contents": "On Tue, Jan 29, 2019 at 6:39 AM Saurabh Nanda <[email protected]>\nwrote:\n\n> Hi,\n>\n> I'm going crazy trying to optimise my Postgres config for a production\n> setting [1] Once I realised random changes weren't getting my anywhere, I\n> finally purchased PostgreSQL 10 - Higher Performance [2] and understood the\n> impact of shared_buffers.\n>\n> IIUC, shared_buffers won't have any significant impact in the following\n> scenario, right?\n>\n> -- DB size = 30GB\n> -- shared_buffers = 2GB\n> -- workload = tpcb-like\n>\n> This is because the tpcb-like workload selects & updates random rows from\n> the DB [3]. Therefore, with a 2GB shared buffer, there is only a 6-7%\n> chance (did I get my probability correct?) that the required data will be\n> in the shared_buffer. Did I understand this correctly?\n>\n\nThat is likely correct, but the data will likely be stored in the OS file\ncache, so reading it from there will still be pretty fast.\n\n\n>\n> If nothing else becomes the bottleneck (eg. periodically writing dirty\n> pages to disk), increasing the shared_buffers to 15GB+ should have a\n> significant impact, for this DB-size and workload, right? (The system has\n> 64 GB RAM)\n>\n\nAbout the only way to know for sure that writing dirty data is not the\nbottleneck is to use a read only benchmark, such as the -S flag for\npgbench. And at that point, the IPC overhead between pgbench and the\nbackend, even when both are running on the same machine, is likely to be\nthe bottleneck. And after that, the bottleneck might shift to opening and\nclosing transactions and taking and releasing locks[1].\n\nIf you overcome that, then you might reliably see a difference between 2GB\nand 15GB of shared buffers, because at 2GB each query to pgbench_accounts\nis likely to fetch 2 pages into shared_buffers from the OS cache: the index\nleaf page for pgbench_accounts_pkey, and the table page for\npgbench_accounts. At 15GB, the entire index should be reliably in\nshared_buffers (after enough warm-up time), so you would only need to fetch\n1 page, and often not even that.\n\nCheers,\n\nJeff\n\n[1] I have a very old patch to pgbench that introduces a new query to\novercome this,\nhttps://www.postgresql.org/message-id/BANLkTi%3DQBYOM%2Bzj%3DReQeiEKDyVpKUtHm6Q%40mail.gmail.com\n. I don't know how much work it would be to get it to compile against\nnewer versions--I stopped maintaining it because it became too much work to\nrebase it past conflicting work, and because I lost interest in this line\nof research.\n\nOn Tue, Jan 29, 2019 at 6:39 AM Saurabh Nanda <[email protected]> wrote:Hi,I'm going crazy trying to optimise my Postgres config for a production setting [1] Once I realised random changes weren't getting my anywhere, I finally purchased PostgreSQL 10 - Higher Performance [2] and understood the impact of shared_buffers.IIUC, shared_buffers won't have any significant impact in the following scenario, right?-- DB size = 30GB-- shared_buffers = 2GB-- workload = tpcb-likeThis is because the tpcb-like workload selects & updates random rows from the DB [3]. Therefore, with a 2GB shared buffer, there is only a 6-7% chance (did I get my probability correct?) that the required data will be in the shared_buffer. Did I understand this correctly?That is likely correct, but the data will likely be stored in the OS file cache, so reading it from there will still be pretty fast. If nothing else becomes the bottleneck (eg. 
periodically writing dirty pages to disk), increasing the shared_buffers to 15GB+ should have a significant impact, for this DB-size and workload, right? (The system has 64 GB RAM)About the only way to know for sure that writing dirty data is not the bottleneck is to use a read only benchmark, such as the -S flag for pgbench. And at that point, the IPC overhead between pgbench and the backend, even when both are running on the same machine, is likely to be the bottleneck. And after that, the bottleneck might shift to opening and closing transactions and taking and releasing locks[1].If you overcome that, then you might reliably see a difference between 2GB and 15GB of shared buffers, because at 2GB each query to pgbench_accounts is likely to fetch 2 pages into shared_buffers from the OS cache: the index leaf page for pgbench_accounts_pkey, and the table page for pgbench_accounts. At 15GB, the entire index should be reliably in shared_buffers (after enough warm-up time), so you would only need to fetch 1 page, and often not even that.Cheers,Jeff[1] I have a very old patch to pgbench that introduces a new query to overcome this, https://www.postgresql.org/message-id/BANLkTi%3DQBYOM%2Bzj%3DReQeiEKDyVpKUtHm6Q%40mail.gmail.com . I don't know how much work it would be to get it to compile against newer versions--I stopped maintaining it because it became too much work to rebase it past conflicting work, and because I lost interest in this line of research.",
"msg_date": "Tue, 29 Jan 2019 13:00:18 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Will higher shared_buffers improve tpcb-like benchmarks?"
},
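A sketch of the read-only test Jeff suggests, using pgbench's built-in select-only script; the scale, client counts, duration and database name are placeholders:

    # Initialize a scale-2000 pgbench database (roughly 30 GB) once:
    pgbench -i -s 2000 bench_db

    # Read-only run (-S = built-in select-only script), 8 clients, 8 threads, 5 minutes:
    pgbench -S -c 8 -j 8 -T 300 bench_db

    # For comparison, the default tpcb-like read/write workload:
    pgbench -c 8 -j 8 -T 300 bench_db

Comparing the two runs at different shared_buffers settings separates the read-path effect from the dirty-page writing that the tpcb-like workload adds.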
{
"msg_contents": ">\n> That is likely correct, but the data will likely be stored in the OS file\n> cache, so reading it from there will still be pretty fast.\n>\n\nRight -- but increasing shared_buffers won't increase my TPS, right? Btw, I\njust realised that irrespective of shared_buffers, my entire DB is already\nin memory (DB size=30GB, RAM=64GB). I think the following output from iotop\nconfirms this. All throughout the benchmarking (client=1,4,8,12,24,48,96),\nthe *disk read* values remain zero!\n\n Total DISK READ : 0.00 B/s | Total DISK WRITE : 73.93 M/s\n Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 43.69 M/s\n\n\n\nCould this explain why my TPS numbers are not changing no matter how much I\nfiddle with the Postgres configuration?\n\nIf my hypothesis is correct, increasing the pgbench scale to get a 200GB\ndatabase would immediately show different results, right?\n\n-- Saurabh.\n\nThat is likely correct, but the data will likely be stored in the OS file cache, so reading it from there will still be pretty fast.Right -- but increasing shared_buffers won't increase my TPS, right? Btw, I just realised that irrespective of shared_buffers, my entire DB is already in memory (DB size=30GB, RAM=64GB). I think the following output from iotop confirms this. All throughout the benchmarking (client=1,4,8,12,24,48,96), the disk read values remain zero! Total DISK READ : 0.00 B/s | Total DISK WRITE : 73.93 M/s Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 43.69 M/s Could this explain why my TPS numbers are not changing no matter how much I fiddle with the Postgres configuration?If my hypothesis is correct, increasing the pgbench scale to get a 200GB database would immediately show different results, right?-- Saurabh.",
"msg_date": "Tue, 29 Jan 2019 23:40:53 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Will higher shared_buffers improve tpcb-like benchmarks?"
},
{
"msg_contents": "I did one final test of increasing the shared_buffers=32GB. It seems to be\nhaving no impact on TPS (in fact, if I look closely there is a 10-15%\n**negative** impact on the TPS compared to shared_buffers=2G)\n\nI can confirm that **almost** the entire DB has been cached in the\nshared_buffers:\n\nrelname | buffered | buffers_percent |\npercent_of_relation\n-------------------------+------------+-----------------+---------------------\npgbench_accounts | 24 GB | 74.5 |\n93.9\npgbench_accounts_pkey | 4284 MB | 13.1 |\n 100.0\npgbench_history | 134 MB | 0.4 |\n95.8\npg_aggregate | 8192 bytes | 0.0 |\n50.0\npg_amproc | 32 kB | 0.0 |\n 100.0\npg_cast | 16 kB | 0.0 |\n 100.0\npg_amop | 48 kB | 0.0 |\n85.7\npg_depend | 96 kB | 0.0 |\n18.8\npg_index | 40 kB | 0.0 |\n 125.0\npg_namespace | 8192 bytes | 0.0 |\n 100.0\npg_opclass | 24 kB | 0.0 |\n 100.0\npg_operator | 96 kB | 0.0 |\n75.0\npg_rewrite | 24 kB | 0.0 |\n25.0\npg_statistic | 176 kB | 0.0 |\n75.9\npg_aggregate_fnoid_index | 16 kB | 0.0 |\n 100.0\npg_trigger | 40 kB | 0.0 |\n 500.0\npg_amop_fam_strat_index | 24 kB | 0.0 |\n60.0\npg_amop_opr_fam_index | 32 kB | 0.0 |\n80.0\npg_amproc_fam_proc_index | 24 kB | 0.0 |\n75.0\npg_constraint | 24 kB | 0.0 |\n 150.0\n\nAnd I think now I give up. I don't think I understand how PG perf tuning\nworks and what impact shared_buffers has on perf. I'll just run my DB in\nproduction with default settings and hope no one complains about the system\nbeing slow!\n\n-- Saurabh.\n\n\nOn Tue, Jan 29, 2019 at 11:40 PM Saurabh Nanda <[email protected]>\nwrote:\n\n> That is likely correct, but the data will likely be stored in the OS file\n>> cache, so reading it from there will still be pretty fast.\n>>\n>\n> Right -- but increasing shared_buffers won't increase my TPS, right? Btw,\n> I just realised that irrespective of shared_buffers, my entire DB is\n> already in memory (DB size=30GB, RAM=64GB). I think the following output\n> from iotop confirms this. All throughout the benchmarking\n> (client=1,4,8,12,24,48,96), the *disk read* values remain zero!\n>\n> Total DISK READ : 0.00 B/s | Total DISK WRITE : 73.93 M/s\n> Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 43.69 M/s\n>\n>\n>\n> Could this explain why my TPS numbers are not changing no matter how much\n> I fiddle with the Postgres configuration?\n>\n> If my hypothesis is correct, increasing the pgbench scale to get a 200GB\n> database would immediately show different results, right?\n>\n> -- Saurabh.\n>\n\n\n-- \nhttp://www.saurabhnanda.com\n\nI did one final test of increasing the shared_buffers=32GB. 
It seems to be having no impact on TPS (in fact, if I look closely there is a 10-15% **negative** impact on the TPS compared to shared_buffers=2G)I can confirm that **almost** the entire DB has been cached in the shared_buffers:relname | buffered | buffers_percent | percent_of_relation-------------------------+------------+-----------------+---------------------pgbench_accounts | 24 GB | 74.5 | 93.9pgbench_accounts_pkey | 4284 MB | 13.1 | 100.0pgbench_history | 134 MB | 0.4 | 95.8pg_aggregate | 8192 bytes | 0.0 | 50.0pg_amproc | 32 kB | 0.0 | 100.0pg_cast | 16 kB | 0.0 | 100.0pg_amop | 48 kB | 0.0 | 85.7pg_depend | 96 kB | 0.0 | 18.8pg_index | 40 kB | 0.0 | 125.0pg_namespace | 8192 bytes | 0.0 | 100.0pg_opclass | 24 kB | 0.0 | 100.0pg_operator | 96 kB | 0.0 | 75.0pg_rewrite | 24 kB | 0.0 | 25.0pg_statistic | 176 kB | 0.0 | 75.9pg_aggregate_fnoid_index | 16 kB | 0.0 | 100.0pg_trigger | 40 kB | 0.0 | 500.0pg_amop_fam_strat_index | 24 kB | 0.0 | 60.0pg_amop_opr_fam_index | 32 kB | 0.0 | 80.0pg_amproc_fam_proc_index | 24 kB | 0.0 | 75.0pg_constraint | 24 kB | 0.0 | 150.0And I think now I give up. I don't think I understand how PG perf tuning works and what impact shared_buffers has on perf. I'll just run my DB in production with default settings and hope no one complains about the system being slow!-- Saurabh.On Tue, Jan 29, 2019 at 11:40 PM Saurabh Nanda <[email protected]> wrote:That is likely correct, but the data will likely be stored in the OS file cache, so reading it from there will still be pretty fast.Right -- but increasing shared_buffers won't increase my TPS, right? Btw, I just realised that irrespective of shared_buffers, my entire DB is already in memory (DB size=30GB, RAM=64GB). I think the following output from iotop confirms this. All throughout the benchmarking (client=1,4,8,12,24,48,96), the disk read values remain zero! Total DISK READ : 0.00 B/s | Total DISK WRITE : 73.93 M/s Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 43.69 M/s Could this explain why my TPS numbers are not changing no matter how much I fiddle with the Postgres configuration?If my hypothesis is correct, increasing the pgbench scale to get a 200GB database would immediately show different results, right?-- Saurabh.\n-- http://www.saurabhnanda.com",
"msg_date": "Wed, 30 Jan 2019 09:20:40 +0530",
"msg_from": "Saurabh Nanda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Will higher shared_buffers improve tpcb-like benchmarks?"
}
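The per-relation buffer breakdown shown above is the kind of output produced with the pg_buffercache extension. A sketch of one common form of that query (not necessarily the exact one used here; it assumes the default 8 kB block size):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname,
           pg_size_pretty(count(*) * 8192) AS buffered,
           round(100.0 * count(*) / (SELECT setting::int FROM pg_settings
                                     WHERE name = 'shared_buffers'), 1) AS buffers_percent,
           round(100.0 * count(*) * 8192 / pg_relation_size(c.oid), 1) AS percent_of_relation
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.oid, c.relname
    ORDER BY count(*) DESC
    LIMIT 20;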
] |