[ { "msg_contents": "I have a slow query (I think) that doesn't appear to be using an index for some reason. I've tried writing the query in various ways, but have so far not had any luck. Interestingly, the query plans are almost identical even when trying different variations. It appears to spend half the time scanning aw_benchmark_record_item. Is this query really slow at all?\n\nIn any case, this is only the first half of the query. I have several more joins to complete the results and ultimately perform a crosstab to determine % correct for a given set of standards associated to the benchmark.\n\nThanks!\nMichael\n\nPostgresql Version: \nPostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nOS:\nUbuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-virtual x86_64)\n\nConfig:\nshared_buffers = 1500MB\neffective_cache_size = 3725MB\nwork_mem = 10MB\n\n\n++++ Query ++++\nexplain (analyze, buffers) select\n\tbm.rid,\n\tt.rid,\n\tr.fk_user,\n\tround((avg(ri.points_received) * (100)::numeric), 0) AS percent_correct\nFROM\n\taw_benchmark bm\n\tjoin aw_benchmark_test t on (t.fk_benchmark=bm.rid)\n\tjoin aw_benchmark_item bi on (bi.fk_benchmark_test=t.rid)\n\tjoin aw_benchmark_record r on (r.fk_benchmark_test=t.rid)\n\tjoin aw_benchmark_record_item ri on (ri.fk_benchmark_test=t.rid AND ri.fk_user=r.fk_user AND ri.fk_benchmark_item=bi.rid)\nWHERE\n\tbm.rid=11\nGROUP BY 1,2,3\n\n++++ Query Plan ++++\n\nhttp://explain.depesz.com/s/kVp\n\n HashAggregate (cost=10683.67..10683.84 rows=10 width=16) (actual time=1470.199..1475.375 rows=2542 loops=1)\n Buffers: shared hit=95000\n -> Nested Loop (cost=69.26..10683.57 rows=10 width=16) (actual time=5.101..1431.242 rows=30326 loops=1)\n Buffers: shared hit=95000\n -> Seq Scan on aw_benchmark bm (cost=0.00..1.81 rows=1 width=4) (actual time=0.034..0.040 rows=1 loops=1)\n Filter: (rid = 11)\n Buffers: shared hit=1\n -> Nested Loop (cost=69.26..10681.66 rows=10 width=16) (actual time=5.056..1264.901 rows=30326 loops=1)\n Buffers: shared hit=94999\n -> Hash Join (cost=69.26..9481.38 rows=177 width=24) (actual time=4.951..981.338 rows=30326 loops=1)\n Hash Cond: ((t.rid = bi.fk_benchmark_test) AND (ri.fk_benchmark_item = bi.rid))\n Buffers: shared hit=3807\n -> Hash Join (cost=2.21..9247.15 rows=16540 width=24) (actual time=0.722..920.115 rows=30326 loops=1)\n Hash Cond: (ri.fk_benchmark_test = t.rid)\n Buffers: shared hit=3793\n -> Seq Scan on aw_benchmark_record_item ri (cost=0.00..7637.48 rows=384548 width=16) (actual time=0.061..407.944 rows=384601 loops=1)\n Buffers: shared hit=3792\n -> Hash (cost=2.16..2.16 rows=4 width=8) (actual time=0.035..0.035 rows=4 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n Buffers: shared hit=1\n -> Seq Scan on aw_benchmark_test t (cost=0.00..2.16 rows=4 width=8) (actual time=0.016..0.027 rows=4 loops=1)\n Filter: (fk_benchmark = 11)\n Buffers: shared hit=1\n -> Hash (cost=35.22..35.22 rows=2122 width=8) (actual time=4.202..4.202 rows=2122 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 83kB\n Buffers: shared hit=14\n -> Seq Scan on aw_benchmark_item bi (cost=0.00..35.22 rows=2122 width=8) (actual time=0.059..2.050 rows=2122 loops=1)\n Buffers: shared hit=14\n -> Index Scan using aw_benchmark_record_pkey on aw_benchmark_record r (cost=0.00..6.77 rows=1 width=8) (actual time=0.003..0.004 rows=1 loops=30326)\n Index Cond: ((fk_benchmark_test = t.rid) AND (fk_user = ri.fk_user))\n Buffers: shared hit=91192\n Total runtime: 1477.611 ms\n \n++++ Row Counts 
++++\n\naw_benchmark\n66 rows\n\naw_benchmark_item\n2122 rows\n\naw_benchmark_test\n93 rows\n\naw_benchmark_record\n60100 rows\n\naw_benchmark_record_item\n383670 rows\n\n++++ Table Definitions ++++\n\n-- 66 rows --\nCREATE TABLE \"public\".\"aw_benchmark\" (\n\t\"rid\" int4 NOT NULL DEFAULT nextval('aw_benchmark_rid_seq'::regclass),\n\t\"name\" varchar(100) NOT NULL,\n\t\"fk_owner\" int4 NOT NULL,\n\t\"year\" int4 NOT NULL,\n\tCONSTRAINT \"aw_benchmark_pkey\" PRIMARY KEY (\"rid\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_owner\" FOREIGN KEY (\"fk_owner\") REFERENCES \"public\".\"aw_user\" (\"rid\") ON UPDATE CASCADE ON DELETE RESTRICT NOT DEFERRABLE INITIALLY IMMEDIATE\n);\n\nCREATE TABLE \"public\".\"aw_benchmark_item\" (\n\t\"rid\" int4 NOT NULL DEFAULT nextval('aw_benchmark_item_rid_seq'::regclass),\n\t\"fk_benchmark_test\" int4 NOT NULL,\n\t\"fk_test_item\" int4 NOT NULL,\n\t\"ordering\" int4 NOT NULL,\n\tCONSTRAINT \"aw_benchmark_item_pkey\" PRIMARY KEY (\"rid\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_benchmark_test\" FOREIGN KEY (\"fk_benchmark_test\") REFERENCES \"public\".\"aw_benchmark_test\" (\"rid\") ON UPDATE CASCADE ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_test_item\" FOREIGN KEY (\"fk_test_item\") REFERENCES \"public\".\"aw_test_item\" (\"rid\") ON UPDATE CASCADE ON DELETE RESTRICT NOT DEFERRABLE INITIALLY IMMEDIATE\n);\n\n-- 93 rows --\nCREATE TABLE \"public\".\"aw_benchmark_test\" (\n\t\"rid\" int4 NOT NULL DEFAULT nextval('aw_benchmark_test_rid_seq'::regclass),\n\t\"fk_benchmark\" int4 NOT NULL,\n\t\"name\" varchar(100) NOT NULL,\n\t\"ordering\" int4 NOT NULL,\n\t\"is_assigned\" bool NOT NULL DEFAULT false,\n\tCONSTRAINT \"aw_benchmark_test_pkey\" PRIMARY KEY (\"rid\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_benchmark\" FOREIGN KEY (\"fk_benchmark\") REFERENCES \"public\".\"aw_benchmark\" (\"rid\") ON UPDATE CASCADE ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE\n);\n\n-- 60100 rows --\nCREATE TABLE \"public\".\"aw_benchmark_record\" (\n\t\"fk_benchmark_test\" int4 NOT NULL,\n\t\"fk_user\" int4 NOT NULL,\n\t\"assigned\" bool NOT NULL DEFAULT false,\n\t\"status\" int4 NOT NULL DEFAULT 0,\n\t\"points_possible\" int4 NOT NULL DEFAULT 0,\n\t\"points_received\" int4 NOT NULL DEFAULT 0,\n\t\"randomize\" bool NOT NULL DEFAULT true,\n\t\"time_started\" timestamp(6) WITH TIME ZONE,\n\t\"time_completed\" timestamp(6) WITH TIME ZONE,\n\t\"bubbled\" bool NOT NULL DEFAULT false,\n\t\"reset_by\" text,\n\t\"allow_review\" bool NOT NULL DEFAULT false,\n\t\"reset_count\" int2 NOT NULL DEFAULT 0,\n\t\"rescored\" bool NOT NULL DEFAULT false,\n\t\"revised_points\" bool DEFAULT false,\n\tCONSTRAINT \"aw_benchmark_record_pkey\" PRIMARY KEY (\"fk_benchmark_test\", \"fk_user\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_benchmark\" FOREIGN KEY (\"fk_benchmark_test\") REFERENCES \"public\".\"aw_benchmark_test\" (\"rid\") ON UPDATE CASCADE ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_user\" FOREIGN KEY (\"fk_user\") REFERENCES \"public\".\"aw_user\" (\"rid\") ON UPDATE CASCADE ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE\n);\nCREATE INDEX \"fk_benchmark_test_fk_user_idx\" ON \"public\".\"aw_benchmark_record\" USING btree(fk_benchmark_test ASC NULLS LAST, fk_user ASC NULLS LAST);\nCREATE INDEX \"fk_benchmark_test_idx\" ON \"public\".\"aw_benchmark_record\" USING btree(fk_benchmark_test ASC NULLS LAST);\nCREATE INDEX \"fk_benchmark_test_status_idx\" ON 
\"public\".\"aw_benchmark_record\" USING btree(fk_benchmark_test ASC NULLS LAST, status ASC NULLS LAST);\n\n-- 383670 rows --\nCREATE TABLE \"public\".\"aw_benchmark_record_item\" (\n\t\"fk_benchmark_test\" int4 NOT NULL,\n\t\"fk_user\" int4 NOT NULL,\n\t\"fk_benchmark_item\" int4 NOT NULL,\n\t\"status\" int4 NOT NULL DEFAULT 0,\n\t\"points_possible\" int4 NOT NULL DEFAULT 1,\n\t\"points_received\" int4 NOT NULL DEFAULT 0,\n\t\"seconds\" int4,\n\t\"ordering\" int2 NOT NULL DEFAULT 0,\n\t\"answer\" varchar(4096),\n\t\"is_valid\" bool NOT NULL DEFAULT true,\n\tCONSTRAINT \"aw_benchmark_record_item_pkey\" PRIMARY KEY (\"fk_benchmark_item\", \"fk_user\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_benchmark\" FOREIGN KEY (\"fk_benchmark_test\") REFERENCES \"public\".\"aw_benchmark_test\" (\"rid\") ON UPDATE CASCADE ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_user\" FOREIGN KEY (\"fk_user\") REFERENCES \"public\".\"aw_user\" (\"rid\") ON UPDATE CASCADE ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"to_benchmark_item\" FOREIGN KEY (\"fk_benchmark_item\") REFERENCES \"public\".\"aw_benchmark_item\" (\"rid\") ON UPDATE CASCADE ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE\n);\nCREATE INDEX \"all_idx\" ON \"public\".\"aw_benchmark_record_item\" USING btree(fk_benchmark_test ASC NULLS LAST, fk_user ASC NULLS LAST, fk_benchmark_item ASC NULLS LAST);\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Dec 2013 09:41:07 -0600", "msg_from": "Michael Sacket <[email protected]>", "msg_from_op": true, "msg_subject": "When is a query slow?" } ]
[ { "msg_contents": "Hi,\n \nHow can I use this ORDER BY using index feature presented in this implementation.\nIt doesn't seem to be in use, when I have a look in my query plan.\nIt still does an cost intensive Bitmap Heap Scan and a Bitmap Index scan.\nI also can't find the \"><\" operator in any introduction of the tsearch2 extension.\nIs it just an idea?\n \nThanks for your help!\nJanek Sendrowski\n", "msg_date": "Wed, 11 Dec 2013 23:29:30 +0100 (CET)", "msg_from": "\"Janek Sendrowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "ORDER BY using index, tsearch2" }, { "msg_contents": "[Sorry, this previous mail was HTML-foramted]\n\nHi,\n \nHow can I use this ORDER BY using index feature presented in this implementation.\nIt doesn't seem to be in use, when I have a look in my query plan.\nIt still does an cost intensive Bitmap Heap Scan and a Bitmap Index scan.\nI also can't find the \"><\" operator in any introduction of the tsearch2 extension.\nIs it just an idea?\n \nThanks for your help!\nJanek Sendrowski\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Dec 2013 23:31:48 +0100 (CET)", "msg_from": "\"Janek Sendrowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "ORDER BY using index, tsearch2 [READ THIS!]" }, { "msg_contents": "\"Janek Sendrowski\" <[email protected]> writes:\n> How can I use this ORDER BY using index feature presented in this implementation.\n> It doesn't seem to be in use, when I have a look in my query plan.\n> It still does an cost intensive Bitmap Heap Scan and a Bitmap Index scan.\n> I also can't find the \"><\" operator in any introduction of the tsearch2 extension.\n> Is it just an idea?\n\nWe're not in the habit of documenting nonexistent features, if that's what\nyou mean. However, you've not provided nearly enough information for\nanyone to help you; at minimum, the index definitions you have, the query\nyou gave, the plan you got, and the exact PG version would be critical\ninformation. More information about asking answerable questions can be\nfound here:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nAlso, I'm a bit troubled by your reference to tsearch2, because that\ncontrib module is obsolete, and has been since well before any PG version\nthat has a feature like what I think you're asking about. So I wonder if\nyou are reading documentation not applicable to the version you're working\nwith.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Dec 2013 18:33:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY using index, tsearch2 [READ THIS!]" }, { "msg_contents": "On Wed, Dec 11, 2013 at 2:29 PM, Janek Sendrowski <[email protected]> wrote:\n\n> Hi,\n>\n> How can I use this ORDER BY using index feature presented in this\n> implementation.\n> It doesn't seem to be in use, when I have a look in my query plan.\n> It still does an cost intensive Bitmap Heap Scan and a Bitmap Index scan.\n> I also can't find the \"><\" operator in any introduction of the tsearch2\n> extension.\n> Is it just an idea?\n>\n\nA GIST is a tree, but there's no notion of \">\" or \"<\", only yes/no at each\ntree branch. In this regard a GIST index is more like a hash table. 
You\ncan't use a hash table to sort. It doesn't make sense.\n\nCraig\n\n\n> Thanks for your help!\n> Janek Sendrowski\n>\n", "msg_date": "Wed, 11 Dec 2013 15:38:09 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY using index, tsearch2" }, { "msg_contents": "Craig James <[email protected]> writes:\n> A GIST is a tree, but there's no notion of \">\" or \"<\", only yes/no at each\n> tree branch. In this regard a GIST index is more like a hash table. You\n> can't use a hash table to sort. It doesn't make sense.\n\nRecent versions of PG do allow GIST indexes to be used to satisfy\nK-nearest-neighbor queries, if the operator class supports that.\n(This requires that the tree partitioning be done on some notion of\ndistance, and even then there'll be some traversal of irrelevant index\nentries; but it way beats a full-table scan, or even full-index scan.)\n\nBut I'm not entirely sure if that's what the OP is asking about.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Dec 2013 18:55:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY using index, tsearch2" }, { "msg_contents": "Sorry, I still wanted to add following link: http://www.sai.msu.su/~megera/postgres/talks/Full-text%20search%20in%20PostgreSQL%20in%20milliseconds-extended-version.pdf\nOn page 6 you can see the first example:\n\n\"postgres=# explain analyze\nSELECT docid, ts_rank(text_vector, to_tsquery('english', 'title')) AS rank\nFROM ti2\nWHERE text_vector @@ to_tsquery('english', 'title')\nORDER BY text_vector>< plainto_tsquery('english','title')\nLIMIT 3;\"\n\n\"Limit (cost=20.00..21.65 rows=3 width=282) (actual time=18.376..18.427 rows=3 loops=1)\n-> Index Scan using ti2_index on ti2 (cost=20.00..26256.30 rows=47692 width=282)\n(actual time=18.375..18.425 rows=3 loops=1)\nIndex Cond: (text_vector @@ '''titl'''::tsquery)\nOrder By: (text_vector >< '''titl'''::tsquery)\"\n\nMy PG-version is 9.3.\nI was wondering about this feature, because I haven't seen it yet and it's a huge speed up.\n\nSorry, I thought the name was still tsearch, because the function names are roughly the same, but now I've noticed that this name is obsolete.\n\nJanek Sendrowski\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Dec 2013 02:00:38 +0100 (CET)", "msg_from": "\"Janek Sendrowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY using index, tsearch2" }, { "msg_contents": "\"Janek Sendrowski\" <[email protected]> writes:\n> Sorry, I still wanted to add 
following link: http://www.sai.msu.su/~megera/postgres/talks/Full-text%20search%20in%20PostgreSQL%20in%20milliseconds-extended-version.pdf\n\nOh ... well, that's not Postgres documentation; that's Oleg and Alexander\ngiving a paper about some research work that they're doing. Which is\nstill unfinished as far as I know; it certainly hasn't been committed\nto community source code. (I'm not sure if the GIN improvements being\nworked on in the current release cycle are the same thing described in\nthis paper, but in any case they're not committed yet.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 11 Dec 2013 22:57:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY using index, tsearch2" }, { "msg_contents": "Okay thanks.\nThat's what I wanted to know.\n\nJanek Sendrowski\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Dec 2013 14:16:26 +0100 (CET)", "msg_from": "\"Janek Sendrowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY using index, tsearch2" } ]
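[Editor's note: the committed KNN-GiST support Tom Lane refers to is easiest to see with the pg_trgm contrib module, which does ship a real distance operator (<->) usable in ORDER BY ... LIMIT. A minimal sketch, with made-up table and column names:

CREATE EXTENSION pg_trgm;
CREATE TABLE docs (id serial PRIMARY KEY, title text);
CREATE INDEX docs_title_trgm_idx ON docs USING gist (title gist_trgm_ops);
-- The GiST index drives the ordering, so only the nearest entries are visited:
SELECT id, title FROM docs ORDER BY title <-> 'postgres' LIMIT 10;

The >< tsvector ranking operator in the slides Janek cites was, as Tom says, unfinished research work rather than anything shipped in core.]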
[ { "msg_contents": "Hello,_\n_\nI've got a very simple table with a very simple SELECT query, but it \ntakes longer on the initial run than I'd like, so I want to see if there \nis a strategy to optimize this.\n\nTable rt_h_nbbo contains several hundred million rows. All rows for a \ngiven entry_date are appended to this table in an overnight process \nevery night - on the order of several million rows per day.\n\nThe objective is to select all of the rows for a given product_id on a \ngiven entry_date.\n\nThere is a b-tree index on (product_id, entry_date). The index appears \nto be used correctly. I'm seeing that if the data pages are not in \nmemory, nearly all of the time is spent on disk I/O. The first time, \nthe query takes 21 sec. If I run this query a second time, it completes \nin approx 1-2 ms.\n\nI perceive an inefficiency here and I'd like your input as to how to \ndeal with it: The end result of the query is 1631 rows which is on the \norder of about a couple hundred Kb of data. Compare that to the amount \nof I/O that was done: 1634 buffers were loaded, 16Mb per page - that's \nabout 24 Gb of data! Query completed in 21 sec. I'd like to be able to \nphysically re-organize the data on disk so that the data for a given \nproduct_id on a entry_date is concentrated on a few pages instead of \nbeing scattered like I see here.\n\n_First question_ is: Does loading 24Gb of data in 21 sec seem \"about \nright\" (hardware specs at bottom of email)?\n\n_Second question_: Is it possible to tell postgres to physically store \nthe data in such a way that it parallels an index? I recall you can do \nthis in Sybase with a CLUSTERED index. The answer for Postgresql seems \nto be \"yes, use the CLUSTER command\". But this command does a one-time \nclustering and requires periodic re-clustering. Is this the best \napproach? 
Are there considerations with respect to the type of index \n(B-tree, GIST, SP-GIST) being used for CLUSTER ?\n\nThanks\n\n-Sev\n\n\n_Table (this is a fairly large table - hundreds of millions of rows):_\n\nCREATE TABLE rt_h_nbbo\n(\n product_id integer NOT NULL,\n bid_price double precision NOT NULL DEFAULT 0.0,\n bid_size integer NOT NULL DEFAULT 0,\n ask_price double precision NOT NULL DEFAULT 0.0,\n ask_size integer NOT NULL DEFAULT 0,\n last_price double precision NOT NULL DEFAULT 0.0,\n entry_date date NOT NULL,\n entry_time time without time zone NOT NULL,\n event_time time without time zone NOT NULL,\n day_volume bigint NOT NULL DEFAULT 0,\n day_trade_ct integer,\n entry_id bigint NOT NULL,\n CONSTRAINT rt_h_nbbo_pkey PRIMARY KEY (entry_id),\n CONSTRAINT rt_h_nbbo_pfkey FOREIGN KEY (product_id)\n REFERENCES product (product_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE rt_h_nbbo\n OWNER TO postgres;\n\n_Index_:\n\nCREATE INDEX rt_h_nbbo_idx\n ON rt_h_nbbo\n USING btree\n (product_id, entry_date DESC);\n\n_Test_:\n\nSET track_io_timing = on;\nEXPLAIN (ANALYZE,BUFFERS,VERBOSE,COSTS,TIMING) select * from rt_h_nbbo \nwhere product_id=6508 and entry_date='2013-11-26';\n\n_Output_:\n\n\"Index Scan using rt_h_nbbo_idx on public.rt_h_nbbo (cost=0.00..12768.21 \nrows=3165 width=76) (actual time=12.549..21654.547 rows=1631 loops=1)\"\n\" Output: product_id, bid_price, bid_size, ask_price, ask_size, \nlast_price, entry_date, entry_time, event_time, day_volume, \nday_trade_ct, entry_id\"\n\" Index Cond: ((rt_h_nbbo.product_id = 6508) AND (rt_h_nbbo.entry_date \n= '2013-11-26'::date))\"\n\" Buffers: shared hit=4 read=1634\"\n\" I/O Timings: read=21645.468\"\n\"Total runtime: 21655.002 ms\"\n\n_Hardware_\n\nTop of the line HP DL380 G7 server with 32 Gb Ram, P410i RAID, 10K SAS \ndrives in Raid-1 config. Wal on separate Raid-1 volume with 15K SAS drives.\nThe only unusual thing here is that I'm running on Windows Server 2008 R2.\n\n\n\n\n\n\n Hello,\n\n I've got a very simple table with a very simple SELECT query, but it\n takes longer on the initial run than I'd like, so I want to see if\n there is a strategy to optimize this.\n\n Table rt_h_nbbo contains several hundred million rows.  All rows for\n a given entry_date are appended to this table in an overnight\n process every night - on the order of several million rows per day.\n\n The objective is to select all of the rows for a given product_id on\n a given entry_date.\n\n There is a b-tree index on (product_id, entry_date). The index\n appears to be used correctly.  I'm seeing that if the data pages are\n not in memory, nearly all of the time is spent on disk I/O.  The\n first time, the query takes 21 sec.  If I run this query a second\n time, it completes in approx 1-2 ms.\n\n I perceive an inefficiency here and I'd like your input as to how to\n deal with it: The end result of the query is 1631 rows which is on\n the order of about a couple hundred Kb of data.  Compare that to the\n amount of I/O that was done: 1634 buffers were loaded, 16Mb per page\n - that's about 24 Gb of data!  Query completed in 21 sec.  I'd like\n to be able to physically re-organize the data on disk so that the\n data for a given product_id on a entry_date is concentrated on a few\n pages instead of being scattered like I see here. 
\n\nFirst question is: Does loading 24Gb of data in 21 sec seem\n \"about right\" (hardware specs at bottom of email)?\n\nSecond question: Is it possible to tell postgres to\n physically store the data in such a way that it parallels an index? \n I recall you can do this in Sybase with a CLUSTERED index.  The\n answer for Postgresql seems to be \"yes, use the CLUSTER command\". \n But this command does a one-time clustering and requires periodic\n re-clustering.  Is this the best approach?  Are there considerations\n with respect to the type of index (B-tree, GIST, SP-GIST) being used\n for CLUSTER ?\n\n Thanks\n\n -Sev\n\n\nTable (this is a fairly large table - hundreds of millions of\n rows):\n\n CREATE TABLE rt_h_nbbo\n (\n   product_id integer NOT NULL,\n   bid_price double precision NOT NULL DEFAULT 0.0,\n   bid_size integer NOT NULL DEFAULT 0,\n   ask_price double precision NOT NULL DEFAULT 0.0,\n   ask_size integer NOT NULL DEFAULT 0,\n   last_price double precision NOT NULL DEFAULT 0.0,\n   entry_date date NOT NULL,\n   entry_time time without time zone NOT NULL,\n   event_time time without time zone NOT NULL,\n   day_volume bigint NOT NULL DEFAULT 0,\n   day_trade_ct integer,\n   entry_id bigint NOT NULL,\n   CONSTRAINT rt_h_nbbo_pkey PRIMARY KEY (entry_id),\n   CONSTRAINT rt_h_nbbo_pfkey FOREIGN KEY (product_id)\n       REFERENCES product (product_id) MATCH SIMPLE\n       ON UPDATE NO ACTION ON DELETE NO ACTION\n )\n WITH (\n   OIDS=FALSE\n );\n ALTER TABLE rt_h_nbbo\n   OWNER TO postgres;\n\nIndex:\n\n CREATE INDEX rt_h_nbbo_idx\n   ON rt_h_nbbo\n   USING btree\n   (product_id, entry_date DESC);\n\nTest:\n\n SET track_io_timing = on; \n EXPLAIN (ANALYZE,BUFFERS,VERBOSE,COSTS,TIMING) select * from\n rt_h_nbbo where product_id=6508 and entry_date='2013-11-26';\n\nOutput:\n\n \"Index Scan using rt_h_nbbo_idx on public.rt_h_nbbo \n (cost=0.00..12768.21 rows=3165 width=76) (actual\n time=12.549..21654.547 rows=1631 loops=1)\"\n \"  Output: product_id, bid_price, bid_size, ask_price, ask_size,\n last_price, entry_date, entry_time, event_time, day_volume,\n day_trade_ct, entry_id\"\n \"  Index Cond: ((rt_h_nbbo.product_id = 6508) AND\n (rt_h_nbbo.entry_date = '2013-11-26'::date))\"\n \"  Buffers: shared hit=4 read=1634\"\n \"  I/O Timings: read=21645.468\"\n \"Total runtime: 21655.002 ms\"\n\nHardware\n\n Top of the line HP DL380 G7 server with 32 Gb Ram,  P410i RAID, 10K\n SAS drives in Raid-1 config.  Wal on separate Raid-1 volume with 15K\n SAS drives.\n The only unusual thing here is that I'm running on Windows Server\n 2008 R2.", "msg_date": "Thu, 12 Dec 2013 12:30:10 -0500", "msg_from": "Sev Zaslavsky <[email protected]>", "msg_from_op": true, "msg_subject": "slow query - will CLUSTER help?" }, { "msg_contents": "Hello,\n__\nI've got a very simple table with a very simple SELECT query, but it \ntakes longer on the initial run than I'd like, so I want to see if there \nis a strategy to optimize this.\n\nTable rt_h_nbbo contains several hundred million rows. All rows for a \ngiven entry_date are appended to this table in an overnight process \nevery night - on the order of several million rows each night.\n\nThe objective is to select all of the rows for a given product_id on a \ngiven entry_date.\n\nThere is a b-tree index on (product_id, entry_date) called \nrt_h_nbbo_idx. The index appears to be used correctly. I'm seeing that \nif the data pages are not in memory, nearly all of the time is spent on \ndisk I/O. The _first _time, the query takes 21 sec. 
If I run this \nquery a _second_ time, it completes in approx 1-2 ms.\n\nRunning select pg_relation_size( 'rt_h_nbbo') / reltuples FROM \npg_class WHERE relname = 'rt_h_nbbo'; yields roughly 135 bytes/row.\n\nI perceive an inefficiency here and I'd like your input as to how to \ndeal with it: The end result of the query is 1631 rows which is approx \n220 kb of data (at 135 bytes/row). Compare that to the amount of I/O \nthat was done: 1634 buffers were loaded, 8 kb per buffer - that's about \n13Mb of data! Query completed in 21 sec.\n\nSo 13Mb of data was read from disk, but only 220Kb was useful - about \n1.7%. I'd like to make this work faster and hopefully more efficiently.\n\n_First question_ is: Does loading 13Mb of data in 21 sec seem kinda slow \nor about right (hardware specs at bottom of email)?\n\n_Second question_: Perhaps I can reduce the number of pages that contain \nthe data I want by physically storing the data in such a way that it \nparallels the rt_h_nbbo_idx index? I recall you can do this in Sybase \nwith a CLUSTERED index. The answer for Postgresql seems to be \"yes, use \nthe CLUSTER command\". But this command does a one-time clustering and \nrequires periodic re-clustering. Is this the best approach? Are there \nconsiderations with respect to the type of index (B-tree, GIST, SP-GIST) \nbeing used for CLUSTER ?\n\nThanks\n\n-Sev\n\n\n_Table (this is a fairly large table - hundreds of millions of rows):_\n\nCREATE TABLE rt_h_nbbo\n(\n product_id integer NOT NULL,\n bid_price double precision NOT NULL DEFAULT 0.0,\n bid_size integer NOT NULL DEFAULT 0,\n ask_price double precision NOT NULL DEFAULT 0.0,\n ask_size integer NOT NULL DEFAULT 0,\n last_price double precision NOT NULL DEFAULT 0.0,\n entry_date date NOT NULL,\n entry_time time without time zone NOT NULL,\n event_time time without time zone NOT NULL,\n day_volume bigint NOT NULL DEFAULT 0,\n day_trade_ct integer,\n entry_id bigint NOT NULL,\n CONSTRAINT rt_h_nbbo_pkey PRIMARY KEY (entry_id),\n CONSTRAINT rt_h_nbbo_pfkey FOREIGN KEY (product_id)\n REFERENCES product (product_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE rt_h_nbbo\n OWNER TO postgres;\n\n_Index_:\n\nCREATE INDEX rt_h_nbbo_idx\n ON rt_h_nbbo\n USING btree\n (product_id, entry_date DESC);\n\n_Test_:\n\nSET track_io_timing = on;\nEXPLAIN (ANALYZE,BUFFERS,VERBOSE,COSTS,TIMING) select * from rt_h_nbbo \nwhere product_id=6508 and entry_date='2013-11-26';\n\n_Output_:\n\n\"Index Scan using rt_h_nbbo_idx on public.rt_h_nbbo (cost=0.00..12768.21 \nrows=3165 width=76) (actual time=12.549..21654.547 rows=1631 loops=1)\"\n\" Output: product_id, bid_price, bid_size, ask_price, ask_size, \nlast_price, entry_date, entry_time, event_time, day_volume, \nday_trade_ct, entry_id\"\n\" Index Cond: ((rt_h_nbbo.product_id = 6508) AND (rt_h_nbbo.entry_date \n= '2013-11-26'::date))\"\n\" Buffers: shared hit=4 read=1634\"\n\" I/O Timings: read=21645.468\"\n\"Total runtime: 21655.002 ms\"\n\n_Hardware_\n\nTop of the line HP DL380 G7 server with 32 Gb Ram, P410i RAID, 10K SAS \ndrives in Raid-1 config. Wal on separate Raid-1 volume with 15K SAS drives.\nThe only unusual thing here is that I'm running on Windows Server 2008 R2.\n", "msg_date": "Thu, 12 Dec 2013 16:08:48 -0500", "msg_from": "Sev Zaslavsky <[email protected]>", "msg_from_op": true, "msg_subject": "slow loading of pages for SELECT query - will CLUSTER help?" }, { "msg_contents": "On 12/12/2013 11:30 AM, Sev Zaslavsky wrote:\n\n> _First question_ is: Does loading 24Gb of data in 21 sec seem \"about\n> right\" (hardware specs at bottom of email)?\n\nThat's actually pretty good. 24GB is a lot of data to process.\n\n> _Second question_: Is it possible to tell postgres to physically store\n> the data in such a way that it parallels an index?\n\nYes and no. Unlike Sybase or SQL Server, CLUSTERed indexes in PostgreSQL \nare not maintained in the index pages. When you CLUSTER a table by a \nparticular index, it's only sorted in that order initially. New inserts \nand updates no longer honor that ordering.\n\nHowever, since you said you're inserting data by date, your data should \nalready be naturally sorted. Your query plan also looked right to me. \nYou may have some excess expectations for your hardware, though.\n\nA RAID-1 of 15K drives can deliver, at most, 1000 reads per second \ndepending on your drives and the controller cache. That's a very \noptimistic assumption. The plan said it fetched 1631 rows from the \nindex. In order to weed out dead pages, it verifies data by checking the \ndata pages, which is another 1631 fetches at minimum. All by itself, \nthat's about three seconds of IO from a cold cache.\n\nI agree that 21 seconds is rather high for this workload, but Windows \nhandles data caching and data elevator algorithms much differently than \nLinux, so I can't say if anything else is going on.\n\n> Top of the line HP DL380 G7 server with 32 Gb Ram, P410i RAID, 10K\n> SAS drives in Raid-1 config. Wal on separate Raid-1 volume with 15K\n> SAS drives.The only unusual thing here is that I'm running on Windows\n> Server 2008 R2.\n\nIn any case, you should really consider upgrading both your hardware, \nand switching your DB server to Linux. If you are handling millions of \nrows on a regular basis, 32GB will not be sufficient for longer than a \nfew months. Eventually your data will no longer fit in memory, and \nyou'll see more and more disk-related delays.\n\nFurther, a RAID1 is not good enough for that kind of data volume. If you \ncan't afford a RAID1+0 consisting of several spindles, NVRAM-based \nsolution (SSD or PCIe card), or a SAN, you simply do not have enough \nIOPS to supply a fast database of any description.\n\nI only suggest Linux as your OS because that's the primary use case. \nMost testing, development, and users have that setup. You're much more \nlikely to get meaningful feedback if you follow the herd. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 12:19:18 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" }, { "msg_contents": "Sev Zaslavsky <[email protected]> wrote:\n\nI want to agree with everything Shaun said and add a tiny bit.\n\n> Does loading 24Gb of data in 21 sec seem \"about right\"?\n\nIt's a little on the slow side.  You said 1634 page reads.  At 9 ms\nper read that would be 14.7 seconds.  But I'm basing the 9 ms per\npage read on my Linux experience, and I remember benchmarking the\nsame application hitting PostgreSQL on the same hardware as about\n30% faster on Linux than on Windows, so that *almost* makes up for\nthe difference.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 12:14:16 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" }, { "msg_contents": "On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky <[email protected]> wrote:\n\n> Hello,\n>\n> I've got a very simple table with a very simple SELECT query, but it takes\n> longer on the initial run than I'd like, so I want to see if there is a\n> strategy to optimize this.\n>\n> Table rt_h_nbbo contains several hundred million rows. All rows for a\n> given entry_date are appended to this table in an overnight process every\n> night - on the order of several million rows per day.\n>\n> The objective is to select all of the rows for a given product_id on a\n> given entry_date.\n>\n> There is a b-tree index on (product_id, entry_date). The index appears to\n> be used correctly. I'm seeing that if the data pages are not in memory,\n> nearly all of the time is spent on disk I/O. The first time, the query\n> takes 21 sec. If I run this query a second time, it completes in approx\n> 1-2 ms.\n>\n> I perceive an inefficiency here and I'd like your input as to how to deal\n> with it: The end result of the query is 1631 rows which is on the order of\n> about a couple hundred Kb of data. Compare that to the amount of I/O that\n> was done: 1634 buffers were loaded, 16Mb per page - that's about 24 Gb of\n> data!\n>\n\nA page is usually 8KB, not 16MB (nor 16Mb).\n\n\n> Query completed in 21 sec. I'd like to be able to physically\n> re-organize the data on disk so that the data for a given product_id on a\n> entry_date is concentrated on a few pages instead of being scattered like I\n> see here.\n>\n\nIf you load the data in daily batches, it is probably already fairly well\nclustered by entry_date. 
If you sort the batch by product_id before bulk\nloading it, then it should stay pretty well clustered on (entry_date,\nproduct_id).\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 19 Dec 2013 12:33:13 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" }, { "msg_contents": "On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky <[email protected]> wrote:\n[...]\n\n> Table rt_h_nbbo contains several hundred million rows. All rows for a given\n> entry_date are appended to this table in an overnight process every night -\n> on the order of several million rows per day.\n\n[...]\n\n> I perceive an inefficiency here and I'd like your input as to how to deal\n> with it: The end result of the query is 1631 rows which is on the order of\n> about a couple hundred Kb of data. Compare that to the amount of I/O that\n> was done: 1634 buffers were loaded, 16Mb per page - that's about 24 Gb of\n> data! Query completed in 21 sec. I'd like to be able to physically\n> re-organize the data on disk so that the data for a given product_id on a\n> entry_date is concentrated on a few pages instead of being scattered like I\n> see here.\n\nDo you perform a regular cleaning of the table with DELETEs or maybe\nyou use UPDATEs for some other reason?\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 12:34:11 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" 
}, { "msg_contents": "On Thu, Dec 19, 2013 at 12:54 PM, Sev Zaslavsky <[email protected]> wrote:\n> On 12/19/2013 3:34 PM, Sergey Konoplev wrote:\n>> On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky <[email protected]> wrote:\n>>> Table rt_h_nbbo contains several hundred million rows. All rows for a\n>>> given\n>>> entry_date are appended to this table in an overnight process every night\n>>> -\n>>> on the order of several million rows per day.\n>>\n>> Do you perform a regular cleaning of the table with DELETEs or may be\n>> you use UPDATEs for some another reason?\n>\n> At this point we're neither deleting nor updating the data once written to\n> the db.\n\nThan I can see two reasons of the problem:\n\n1. The indexed data is too big and index search is getting worth day by day\n\nI would try to create a partial index for one day and repeat the\nEXPLAIN ANALYZE with this day. If there will be some significant\nimprovements then I would start creating partial indexes for every new\nday before it starts and drop them after some time when they became\nobsolete.\n\n2. You are limited with IO\n\nI would also suggest you to upgrade your storage in this case.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 13:24:26 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" }, { "msg_contents": "On 12/19/2013 03:24 PM, Sergey Konoplev wrote:\n\n> 2. You are limited with IO\n> I would also suggest you to upgrade your storage in this case.\n\nI think this is the case. If I recall correctly, his setup includes a \nsingle RAID-1 for everything, and he only has 32GB of RAM. In fact, the \nWAL traffic from those inserts alone are likely saturating the write IO, \nespecially if it starts a checkpoint while the load is still going on. I \nwouldn't want to be around for that.\n\nEven with a fairly selective index, just the fetches necessary to \nidentify the rows and verify the data pages will choke a RAID-1 with \nalmost every query. Any table with several hundred million rows is also \ntoo big to fit in cache if any significant portion of it is fetched on a \nregular basis. The cache turnover is probably extremely high, too.\n\nThat workload is just too high for a system of that description. It \nwould be fine for a prototype, development, or possibly a QA system, but \nif that's intended to be a production resource, it needs more memory and IO.\n\nAlso since I can't see part of this conversation and it doesn't seem \nanyone else mentioned it, the WAL directory *must* be moved to a \nseparate set of disks for a workload of this volume. The amount of \nwrites here will constantly degrade read IO and further increase fetch \ntimes.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 08:42:22 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" }, { "msg_contents": "On 12/20/2013 09:57 AM, Sev Zaslavsky wrote:\n\n> There is a separate RAID-1 for WAL, another for tablespace and another\n> for operating system.\n\nI tend to stick to DB-size / 10 as a minimum, but I also have an OLTP \nsystem. For a more OLAP-type, the ratio is negotiable.\n\nThe easiest way to tell is to monitor your disk IO stats. If you're \nseeing a READ-based utilization percentage over 50% consistently, you \nneed more RAM. On our system, we average 10% through the day except for \nmaintenance and loading phases.\n\nOf course, that's only for the current DB size. A good trick is to \nmonitor your DB size changes on a daily basis, plot the growth \npercentage for a week, and apply compounding growth to estimate the size \nin three years. Most companies I've seen are on a 3-year replacement \ncycle, so that gives you how much you'll have to buy in order to avoid \nanother spend until the next iteration.\n\nFor example, say you have a 800GB database, and it grows at 10GB per \nweek, so that's 40GB per month. In three years, you could need up to:\n\n800 * (1 + 40/800)^36 = 4632GB of space, which translates to roughly \n480-512 GB of RAM. You can probably find a comfortable middle ground \nwith 240GB.\n\nOf course, don't forget to buy modules in multiples of four, otherwise \nyou're not taking advantage of all the CPU's memory channels. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 10:11:39 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" }, { "msg_contents": "On 21/12/13 05:11, Shaun Thomas wrote:\n[...]\n> .\n>\n> Of course, don't forget to buy modules in multiples of four, otherwise \n> you're not taking advantage of all the CPU's memory channels. :)\n>\nNote some processors have 3 (three) memory channels! And I know of some \nwith 4 memory channels. So it is important to check your processor & \nmother board.\n\nThe desktop I got when I joined a university on contract had 12GB about \n2 years ago.\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 21 Dec 2013 10:19:21 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" }, { "msg_contents": "> What are your thoughts on the right way to use SSDs in a RAID to\n> enhance postgres I/O performance? 
In an earlier reply, you\n> indicated one of a \"RAID1+0 consisting of several spindles,\n> NVRAM-based solution (SSD or PCIe card), or a SAN\"\n\nWell, it's a tiered approach. If you can identify your tables with the most critical OLTP needs, you can create a separate tablespace specifically for SSD storage to give them the performance they need. After that, you might consider partitioning even those tables, as older data won't be accessed as often, so won't need those kind of IOPS long-term. Older partitions could be slated toward the RAID.\n\nRegarding what kind of SSD, just make sure the drives themselves are capacitor-backed. Current SSDs have only a few microseconds of write delay, but it's enough of a race condition to lead to corruption in power outages without some assurance in-transit data is committed.\n\nIf you have the money, stuff like FusionIO PCIe cards are extremely fast, on the order of 10x faster than a standard SSD. I'd personally reserve these for performance-critical things like online trading platforms, since they're so costly.\n\nThen of course, SANs can mix the world of RAID and SSD, because they often have internal mechanisms to deliver requested IOPS by spreading storage allocations along installed components necessary to match them. This is probably the most expensive overall approach, but many larger companies either already have SANs, or will eventually need one anyway.\n\nThat's just a bird's eye view of everything. There's obviously more involved. :)\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 23 Dec 2013 16:41:19 +0000", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query - will CLUSTER help?" } ]
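[Editor's note: pulling the thread's suggestions together as a hedged sketch - the staging table here is hypothetical. A one-time CLUSTER reorders the existing rows, and Jeff Janes's advice of pre-sorting each nightly batch keeps new rows clustered without re-running it:

-- One-time physical reorder; takes an exclusive lock for the duration.
CLUSTER rt_h_nbbo USING rt_h_nbbo_idx;
ANALYZE rt_h_nbbo;

-- Nightly load, pre-sorted so each product's rows land on adjacent pages:
INSERT INTO rt_h_nbbo
SELECT * FROM rt_h_nbbo_staging
ORDER BY product_id, entry_date;

Note that not every index type can be clustered on (hash indexes, for example, cannot), which bears on the poster's B-tree/GiST/SP-GiST question.]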
[ { "msg_contents": "Hi,\n\nI'm seeing a slow running query. After some optimization with indexes, \nit appears that the query plan is correct, it's just slow. Running the \nquery twice, not surprisingly, is very fast, due to OS caching or \nshared_buffers caching. If the parameters for the query are different, \nhowever, the query is slow until run a second time. In our usage \npattern, this query must run fast the first time it runs.\n\nA couple of general stats: this is a linode machine with a single 3GB DB \nwith 4GBs of ram. Shared buffers = 1024mb, effective_cache_size=2048MB. \nWe are running with postgres 9.1. The machine is otherwise dormant when \nthis query runs. Here is the schema:\n\n Table \"public.premiseaccount\"\n Column | Type \n| Modifiers\n------------------------+--------------------------+------------------------------------------------------------------------- \n\n id | integer | not null default \nnextval('premiseaccount_id_seq'::regclass)\n created | timestamp with time zone | not null default \n'2013-09-25 07:00:00+00'::timestamp with time zone\n modified | timestamp with time zone | not null default \n'2013-09-25 07:00:00+00'::timestamp with time zone\n account_id | integer | not null\n premise_id | integer | not null\n bucket | character varying(255) |\nIndexes:\n \"premiseaccount_pkey\" PRIMARY KEY, btree (id)\n \"premiseaccount_account_id\" btree (account_id)\n \"premiseaccount_bucket\" btree (bucket)\n \"premiseaccount_bucket_58c70392619aa36f\" btree (bucket, id)\n \"premiseaccount_bucket_like\" btree (bucket varchar_pattern_ops)\n \"premiseaccount_premise_id\" btree (premise_id)\nForeign-key constraints:\n \"account_id_refs_id_529631edfff28022\" FOREIGN KEY (account_id) \nREFERENCES utilityaccount(id) DEFERRABLE INITIALLY DEFERRED\n \"premise_id_refs_id_5ecea2842007328b\" FOREIGN KEY (premise_id) \nREFERENCES premise(id) DEFERRABLE INITIALLY DEFERRED\n\n Table \"public.electricusage\"\n Column | Type \n| Modifiers\n--------------------+--------------------------+------------------------------------------------------------------------ \n\n id | integer | not null default \nnextval('electricusage_id_seq'::regclass)\n created | timestamp with time zone | not null default \n'2013-09-25 07:00:00+00'::timestamp with time zone\n modified | timestamp with time zone | not null default \n'2013-09-25 07:00:00+00'::timestamp with time zone\n from_date | timestamp with time zone | not null\n to_date | timestamp with time zone | not null\n usage | numeric(9,2) |\n demand | numeric(9,2) |\n bill_amount | numeric(9,2) |\n premise_account_id | integer | not null\nIndexes:\n \"electricusage_pkey\" PRIMARY KEY, btree (id)\n \"electricusage_premise_account_id\" btree (premise_account_id)\n \"electricusage_covered_id_from_date_usage\" btree \n(premise_account_id, from_date, usage)\nForeign-key constraints:\n \"premise_account_id_refs_id_4c39e54406369128\" FOREIGN KEY \n(premise_account_id) REFERENCES premiseaccount(id) DEFERRABLE INITIALLY \nDEFERRED\n\n\nFor reference, premiseaccount has about 1 million rows, one for each \naccount, grouped in buckets with an average of 5000 accounts per bucket. 
\nelectricusage has 10 million rows, about 10 rows per premiseaccount.\n\nHere is the query:\n\nexplain analyze\nSELECT premiseaccount.id, SUM(electricusage.usage) AS total_kwh\nFROM premiseaccount\nLEFT OUTER JOIN electricusage\n ON premiseaccount.id = electricusage.premise_account_id\nWHERE premiseaccount.bucket = 'XXX' AND electricusage.from_date >= \n'2012-11-20 00:00:00'\nGROUP BY premiseaccount.id\nHAVING SUM(electricusage.usage) BETWEEN 3284 and 3769\nLIMIT 50;\n\n\n Limit (cost=0.00..1987.24 rows=50 width=8) (actual \ntime=931.524..203631.435 rows=50 loops=1)\n -> GroupAggregate (cost=0.00..179487.78 rows=4516 width=8) (actual \ntime=931.519..203631.275 rows=50 loops=1)\n Filter: ((sum(electricusage.usage) >= 3284::numeric) AND \n(sum(electricusage.usage) <= 3769::numeric))\n -> Nested Loop (cost=0.00..179056.87 rows=36317 width=8) \n(actual time=101.934..203450.761 rows=30711 loops=1)\n -> Index Scan using \npremiseaccount_bucket_58c70392619aa36f on premiseaccount premiseaccount \n(cost=0.00..14882.30 rows=4516 width=4) (actual time=77.620..7199.527 \nrows=3437 loops=1)\n Index Cond: ((bucket)::text = 'XXX'::text)\n -> Index Scan using \nelectricusage_premise_account_id_36bc8999ced10059 on electricusage \nelectricusage (cost=0.00..36.24 rows=9 width=8) (actual \ntime=8.607..57.055 rows=9 loops=3437)\n Index Cond: ((premise_account_id = \npremiseaccount.id) AND (from_date >= '2012-11-20 00:00:00+00'::timestamp \nwith time zone))\n Total runtime: 203631.666 ms\n\n (see online at: http://explain.depesz.com/s/zeC)\n\nWhat am I missing here? It seems like this is a relatively \nstraightforward foreign key join, using the correct indexes, that is \njust slow. Warming the OS cache seems like the only way to make this \nactually perform reasonably, but it seems like it's masking the \nunderlying problem. I could probably do some denormalization, like \nputting the bucket field on the electricusage table, but that seems like \na shoddy solution.\n\nBefore running my query, I clear the os and postgres cache. I know that \nin theory some subset of the data will be in the os cache and postgres \ncache. But in the real world, what we see is that a number of people get \ntimeouts waiting for this query to finish, because it's not in the \ncache. e.g., running the query with where bucket='xxx' will be fast a \nsecond time, but where bucket='yyy' will not.\n\n\nHere are things I've tried:\n+ Clustering premiseaccount on premiseaccount_bucket_58c70392619aa36f \nand electricusage on electricusage_covered_id_from_date_usage. This \nprovided modest improvement\n+ Fit the whole DB in memory. Since this query is fast if the data is \ncached, try to fit everything into the cache. Bumping the ram to 8GBs, \nand warming up the os cache by taring the postgres data directory. On \nthe 4GB machine, I still run into slow performance, but at 8GBs, the \nentire DB can fit in the OS cache. This made a huge improvement, but \nwill only work for a little while as the database grows.\n+ Tried on other linode servers - same results.\n+ Removing the from_date part of the query. It uses a different index \nbut same performance characteristics.\n+ Looked at kern, sys, and postgres logs. Nothing interesting.\n\n\nI would really appreciate any help I could get! Thanks!\n\nBryce
Thanks!\n\nBryce", "msg_date": "Thu, 12 Dec 2013 12:10:30 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query due to slow I/O" }, { "msg_contents": "On Thu, Dec 12, 2013 at 5:10 PM, Bryce Covert\n<[email protected]> wrote:\n> Hi,\n>\n> I'm seeing a slow running query. After some optimization with indexes, it\n> appears that the query plan is correct, it's just slow. Running the query\n> twice, not surprisingly, is very fast, due to OS caching or shared_buffers\n> caching. If the parameters for the query are different, however, the query\n> is slow until run a second time. In our usage pattern, this query must run\n> fast the first time it runs.\n>\n> A couple of general stats: this is a linode machine with a single 3GB DB\n> with 4GBs of ram. Shared buffers = 1024mb, effective_cache_size=2048MB. We\n> are running with postgres 9.1. The machine is otherwise dormant when this\n> query runs. Here is the schema:\n\n\nFor this kind of diagnostic, you need to include hardware details.\n\nOS? Disks? RAID?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 12 Dec 2013 18:15:51 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query due to slow I/O" }, { "msg_contents": "I don't have much info on disks, since this is a virtual server on \nlinode. It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume \n(noatime). Hopefully that's helpful.\n\nBryce\n> Claudio Freire <mailto:[email protected]>\n> December 12, 2013 12:15 PM\n> On Thu, Dec 12, 2013 at 5:10 PM, Bryce Covert\n>\n>\n> For this kind of diagnostic, you need to include hardware details.\n>\n> OS? Disks? RAID?\n> Bryce Covert <mailto:[email protected]>\n> December 12, 2013 12:10 PM\n>\n> Hi,\n>\n> I'm seeing a slow running query. After some optimization with indexes, \n> it appears that the query plan is correct, it's just slow. Running the \n> query twice, not surprisingly, is very fast, due to OS caching or \n> shared_buffers caching. If the parameters for the query are different, \n> however, the query is slow until run a second time. In our usage \n> pattern, this query must run fast the first time it runs.\n>\n> A couple of general stats: this is a linode machine with a single 3GB \n> DB with 4GBs of ram. Shared buffers = 1024mb, \n> effective_cache_size=2048MB. We are running with postgres 9.1. The \n> machine is otherwise dormant when this query runs. 
Here is the schema:\n>\n> Table \"public.premiseaccount\"\n> Column | Type \n> | Modifiers\n> ------------------------+--------------------------+------------------------------------------------------------------------- \n>\n> id | integer | not null default \n> nextval('premiseaccount_id_seq'::regclass)\n> created | timestamp with time zone | not null default \n> '2013-09-25 07:00:00+00'::timestamp with time zone\n> modified | timestamp with time zone | not null default \n> '2013-09-25 07:00:00+00'::timestamp with time zone\n> account_id | integer | not null\n> premise_id | integer | not null\n> bucket | character varying(255) |\n> Indexes:\n> \"premiseaccount_pkey\" PRIMARY KEY, btree (id)\n> \"premiseaccount_account_id\" btree (account_id)\n> \"premiseaccount_bucket\" btree (bucket)\n> \"premiseaccount_bucket_58c70392619aa36f\" btree (bucket, id)\n> \"premiseaccount_bucket_like\" btree (bucket varchar_pattern_ops)\n> \"premiseaccount_premise_id\" btree (premise_id)\n> Foreign-key constraints:\n> \"account_id_refs_id_529631edfff28022\" FOREIGN KEY (account_id) \n> REFERENCES utilityaccount(id) DEFERRABLE INITIALLY DEFERRED\n> \"premise_id_refs_id_5ecea2842007328b\" FOREIGN KEY (premise_id) \n> REFERENCES premise(id) DEFERRABLE INITIALLY DEFERRED\n>\n> Table \"public.electricusage\"\n> Column | Type \n> | Modifiers\n> --------------------+--------------------------+------------------------------------------------------------------------ \n>\n> id | integer | not null default \n> nextval('electricusage_id_seq'::regclass)\n> created | timestamp with time zone | not null default \n> '2013-09-25 07:00:00+00'::timestamp with time zone\n> modified | timestamp with time zone | not null default \n> '2013-09-25 07:00:00+00'::timestamp with time zone\n> from_date | timestamp with time zone | not null\n> to_date | timestamp with time zone | not null\n> usage | numeric(9,2) |\n> demand | numeric(9,2) |\n> bill_amount | numeric(9,2) |\n> premise_account_id | integer | not null\n> Indexes:\n> \"electricusage_pkey\" PRIMARY KEY, btree (id)\n> \"electricusage_premise_account_id\" btree (premise_account_id)\n> \"electricusage_covered_id_from_date_usage\" btree \n> (premise_account_id, from_date, usage)\n> Foreign-key constraints:\n> \"premise_account_id_refs_id_4c39e54406369128\" FOREIGN KEY \n> (premise_account_id) REFERENCES premiseaccount(id) DEFERRABLE \n> INITIALLY DEFERRED\n>\n>\n> For reference, premiseaccount has about 1 million rows, one for each \n> account, grouped in buckets with an average of 5000 accounts per \n> bucket. 
electricusage has 10 million rows, about 10 rows per \n> premiseaccount.\n>\n> Here is the query:\n>\n> explain analyze\n> SELECT premiseaccount.id, SUM(electricusage.usage) AS total_kwh\n> FROM premiseaccount\n> LEFT OUTER JOIN electricusage\n> ON premiseaccount.id = electricusage.premise_account_id\n> WHERE premiseaccount.bucket = 'XXX' AND electricusage.from_date >= \n> '2012-11-20 00:00:00'\n> GROUP BY premiseaccount.id\n> HAVING SUM(electricusage.usage) BETWEEN 3284 and 3769\n> LIMIT 50;\n>\n>\n> Limit (cost=0.00..1987.24 rows=50 width=8) (actual \n> time=931.524..203631.435 rows=50 loops=1)\n> -> GroupAggregate (cost=0.00..179487.78 rows=4516 width=8) \n> (actual time=931.519..203631.275 rows=50 loops=1)\n> Filter: ((sum(electricusage.usage) >= 3284::numeric) AND \n> (sum(electricusage.usage) <= 3769::numeric))\n> -> Nested Loop (cost=0.00..179056.87 rows=36317 width=8) \n> (actual time=101.934..203450.761 rows=30711 loops=1)\n> -> Index Scan using \n> premiseaccount_bucket_58c70392619aa36f on premiseaccount \n> premiseaccount (cost=0.00..14882.30 rows=4516 width=4) (actual \n> time=77.620..7199.527 rows=3437 loops=1)\n> Index Cond: ((bucket)::text = 'XXX'::text)\n> -> Index Scan using \n> electricusage_premise_account_id_36bc8999ced10059 on electricusage \n> electricusage (cost=0.00..36.24 rows=9 width=8) (actual \n> time=8.607..57.055 rows=9 loops=3437)\n> Index Cond: ((premise_account_id = \n> premiseaccount.id) AND (from_date >= '2012-11-20 \n> 00:00:00+00'::timestamp with time zone))\n> Total runtime: 203631.666 ms\n>\n> (see online at: http://explain.depesz.com/s/zeC)\n>\n> What am I missing here? It seems like this is a relatively \n> straightforward foreign key join, using the correct indexes, that is \n> just slow. Warming the OS cache seems like the only way to make this \n> actually perform reasonably, but it seems like it's masking the \n> underlying problem. I could probably do some denormalization, like \n> putting the bucket field on the electricusage table, but that seems \n> like a shoddy solution.\n>\n> Before running my query, I clear the os and postgres cache. I know \n> that in theory some subset of the data will be in the os cache and \n> postgres cache. But in the real world, what we see is that a number of \n> people get timeouts waiting for this query to finish, because it's not \n> in the cache. e.g., running the query with where bucket='xxx' will be \n> fast a second time, but where bucket='yyy' will not.\n>\n>\n> Here are things I've tried:\n> + Clustering premiseaccount on premiseaccount_bucket_58c70392619aa36f \n> and electricusage on electricusage_covered_id_from_date_usage. This \n> provided modest improvement\n> + Fit the whole DB in memory. Since this query is fast if the data is \n> cached, try to fit everything into the cache. Bumping the ram to 8GBs, \n> and warming up the os cache by taring the postgres data directory. On \n> the 4GB machine, I still run into slow performance, but at 8GBs, the \n> entire DB can fit in the OS cache. This made a huge improvement, but \n> will only work for a little while as the database grows.\n> + Tried on other linode servers - same results.\n> + Removing the from_date part of the query. It uses a different index \n> but same performance characteristics.\n> + Looked at kern, sys, and postgres logs. Nothing interesting.\n>\n>\n> I would really appreciate any help I could get! 
Thanks!\n>\n> Bryce\n>\n>", "msg_date": "Thu, 12 Dec 2013 12:20:58 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query due to slow I/O" }, { "msg_contents": "On Thu, Dec 12, 2013 at 5:20 PM, Bryce Covert <\[email protected]> wrote:\n\n> I don't have much info on disks, since this is a virtual server on linode.\n> It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume (noatime).\n> Hopefully that's helpful.\n>\n> Bryce\n>\n\n\nWell, did you run benchmarks? How many IOPS do you get from the volumes?\n\nTry running \"iostat -x -m -d 10\" while the slow query is running and\npasting the results (or a relevant sample of them).\n\nAlso, do run \"explain (analyze, buffers)\" instead of plain \"explain\nanalyze\".\n\nOn Thu, Dec 12, 2013 at 5:20 PM, Bryce Covert <[email protected]> wrote:\nI don't have much info on disks, since this is a virtual server on \nlinode. It is running ubuntu 12.04, 8cpus, 4GB ram, 95GB ext3 volume \n(noatime). Hopefully that's helpful. \n\nBryceWell, did you run benchmarks? How many IOPS do you get from the volumes?Try running \"iostat -x -m -d 10\" while the slow query is running and pasting the results (or a relevant sample of them).\nAlso, do run \"explain (analyze, buffers)\" instead of plain \"explain analyze\".", "msg_date": "Thu, 12 Dec 2013 18:48:23 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query due to slow I/O" }, { "msg_contents": "Hey Claudio,\n\nThanks a lot for the help. I'm not familiar with explain buffers, but \nhere's the results:\n\n Limit (cost=0.00..648.71 rows=50 width=8) (actual \ntime=653.681..52328.707 rows=50 loops=1)\n Buffers: shared hit=7875 read=9870\n -> GroupAggregate (cost=0.00..55672.36 rows=4291 width=8) (actual \ntime=653.671..52328.480 rows=50 loops=1)\n Filter: ((sum(electricusage.usage) >= 3284::numeric) AND \n(sum(electricusage.usage) <= 3769::numeric))\n Buffers: shared hit=7875 read=9870\n -> Nested Loop (cost=0.00..55262.93 rows=34506 width=8) \n(actual time=432.129..52200.465 rows=30711 loops=1)\n Buffers: shared hit=7875 read=9870\n -> Index Scan using \npremiseaccount_bucket_58c70392619aa36f on premiseaccount premiseaccount \n(cost=0.00..15433.71 rows=4291 width=4) (actual time=338.160..10014.780 \nrows=3437 loops=1)\n Index Cond: ((bucket)::text = \n'85349_single-family'::text)\n Buffers: shared hit=744 read=2692\n -> Index Scan using electricusage_premise_account_id on \nelectricusage electricusage (cost=0.00..9.17 rows=9 width=8) (actual \ntime=11.430..12.235 rows=9 loops=3437)\n Index Cond: (premise_account_id = premiseaccount.id)\n Filter: (from_date >= '2012-11-20 \n00:00:00+00'::timestamp with time zone)\n Buffers: shared hit=7131 read=7178\n Total runtime: 52329.028 ms\n(15 rows)\n\nand the iostat results...\nLinux 3.11.6-x86_64-linode35 (preview-aps-new) 12/12/2013 \n_x86_64_ (8 CPU)\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 6.94 65.68 12.16 7.64 0.36 0.29 \n67.06 0.44 22.11 3.56 51.66 1.15 2.28\nxvdb 0.00 0.00 0.00 0.00 0.00 0.00 \n10.61 0.00 1.82 1.82 0.00 1.82 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 0.10 47.10 152.40 7.80 2.20 0.21 \n30.92 1.36 8.50 8.03 17.53 6.14 98.33\nxvdb 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 0.20 2.30 
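\n\nFor example, reusing the query from upthread (the bucket literal is just whatever value you're testing; this is the same statement with the buffers option turned on):\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT premiseaccount.id, SUM(electricusage.usage) AS total_kwh\nFROM premiseaccount\nLEFT OUTER JOIN electricusage\n    ON premiseaccount.id = electricusage.premise_account_id\nWHERE premiseaccount.bucket = 'XXX' AND electricusage.from_date >= '2012-11-20 00:00:00'\nGROUP BY premiseaccount.id\nHAVING SUM(electricusage.usage) BETWEEN 3284 AND 3769\nLIMIT 50;\n\nThe \"Buffers:\" lines will show how many pages came from shared_buffers (hit) versus the OS or disk (read).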
", "msg_date": "Thu, 12 Dec 2013 18:48:23 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query due to slow I/O" },
{ "msg_contents": "Hey Claudio,\n\nThanks a lot for the help. I'm not familiar with explain buffers, but here's the results:\n\n Limit (cost=0.00..648.71 rows=50 width=8) (actual time=653.681..52328.707 rows=50 loops=1)\n   Buffers: shared hit=7875 read=9870\n   -> GroupAggregate (cost=0.00..55672.36 rows=4291 width=8) (actual time=653.671..52328.480 rows=50 loops=1)\n         Filter: ((sum(electricusage.usage) >= 3284::numeric) AND (sum(electricusage.usage) <= 3769::numeric))\n         Buffers: shared hit=7875 read=9870\n         -> Nested Loop (cost=0.00..55262.93 rows=34506 width=8) (actual time=432.129..52200.465 rows=30711 loops=1)\n               Buffers: shared hit=7875 read=9870\n               -> Index Scan using premiseaccount_bucket_58c70392619aa36f on premiseaccount premiseaccount (cost=0.00..15433.71 rows=4291 width=4) (actual time=338.160..10014.780 rows=3437 loops=1)\n                     Index Cond: ((bucket)::text = '85349_single-family'::text)\n                     Buffers: shared hit=744 read=2692\n               -> Index Scan using electricusage_premise_account_id on electricusage electricusage (cost=0.00..9.17 rows=9 width=8) (actual time=11.430..12.235 rows=9 loops=3437)\n                     Index Cond: (premise_account_id = premiseaccount.id)\n                     Filter: (from_date >= '2012-11-20 00:00:00+00'::timestamp with time zone)\n                     Buffers: shared hit=7131 read=7178\n Total runtime: 52329.028 ms\n(15 rows)\n\nand the iostat results...\nLinux 3.11.6-x86_64-linode35 (preview-aps-new) 12/12/2013 _x86_64_ (8 CPU)\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 6.94 65.68 12.16 7.64 0.36 0.29 67.06 0.44 22.11 3.56 51.66 1.15 2.28\nxvdb 0.00 0.00 0.00 0.00 0.00 0.00 10.61 0.00 1.82 1.82 0.00 1.82 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 0.10 47.10 152.40 7.80 2.20 0.21 30.92 1.36 8.50 8.03 17.53 6.14 98.33\nxvdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 0.20 2.30 212.10 0.70 3.22 0.01 31.09 1.47 6.88 6.90 2.86 4.63 98.62\nxvdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 0.20 8.30 183.20 5.10 2.46 0.05 27.31 1.68 8.85 6.68 86.78 5.24 98.70\nxvdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nxvda 0.10 0.00 165.70 0.00 2.36 0.00 29.20 1.46 8.86 8.86 0.00 5.95 98.63\nxvdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nI'm not sure how to process this except that 2.5MB/s seems really slow, and it looks like it is using postgres' cache quite a bit.\n\nThanks,\nBryce", "msg_date": "Thu, 12 Dec 2013 13:16:26 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query due to slow I/O" },
{ "msg_contents": "Also, I was reading this: http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2, and I realized that index-only scans weren't introduced until 9.2. I tried creating a covered index for this, but I don't think it helps in this situation.
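\n\n(For reference, the covered index matches the \"electricusage_covered_id_from_date_usage\" definition in the schema upthread; the sketch below just restates it as DDL.)\n\nCREATE INDEX electricusage_covered_id_from_date_usage\n    ON electricusage (premise_account_id, from_date, usage);\n\n-- On 9.1 this can narrow the index scan, but every matching row still\n-- visits the heap; index-only scans that skip the table are 9.2+ only.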
\n\nBryce", "msg_date": "Thu, 12 Dec 2013 13:24:46 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query due to slow I/O" },
{ "msg_contents": "On Thu, Dec 12, 2013 at 6:16 PM, Bryce Covert <[email protected]> wrote:\n\n> Thanks a lot for the help. I'm not familiar with explain buffers, but\n> here's the results:\n>\n> Limit (cost=0.00..648.71 rows=50 width=8) (actual time=653.681..52328.707 rows=50 loops=1)\n>   Buffers: shared hit=7875 read=9870\n> ...\n> Total runtime: 52329.028 ms\n>\n> and the iostat results...\n> ...\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\n> xvda 0.10 47.10 152.40 7.80 2.20 0.21 30.92 1.36 8.50 8.03 17.53 6.14 98.33\n> xvdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\n\nThis means it's doing random I/O, and that your disk is a single 7200RPM drive (152 r/s being typical for that hardware).\n\nYou can improve this by either:\n\n1 - Turning that random I/O pattern into sequential, or\n2 - Getting better I/O.\n\nI'll assume 2 isn't available to you on linode, so for 1, you could try lowering effective_cache_size substantially. It seems you're not getting nearly as much caching as you think (ie 2GB). However, I doubt there's a plan that can get you significantly better performance given your hardware.\n\nYou may shave a few seconds, though, if you increase work_mem. It seems it should have used a bitmap index scan for at least one of the index scans there, and a low work_mem could be what's limiting the planner's possibilities. What are your settings in that area?
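\n\nYou can test both per-session before touching postgresql.conf; the values here are only illustrative starting points, not recommendations:\n\nSET effective_cache_size = '512MB';  -- assume less caching than the config promises\nSET work_mem = '32MB';\n-- then re-run the EXPLAIN (ANALYZE, BUFFERS) from before and compare plans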
", "msg_date": "Thu, 12 Dec 2013 19:35:51 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query due to slow I/O" },
{ "msg_contents": "It's strange that it isn't sequential at least for the electric usage, as I've clustered using the index that it's using.\n\nI had work_mem set to 128mb. I tried bumping it to 1024mb, and I don't think I see a difference in the query plan.\n\nWould you think upgrading to 9.2 would help much here? Using a covered index, I imagine reads would be limited quite a bit.\n\nThanks again,\nBryce", "msg_date": "Thu, 12 Dec 2013 13:49:37 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query due to slow I/O" },
{ "msg_contents": "Not sure if this is helpful, but I tried upgrading to 9.2, and here's what I got:\n\n---------\n Limit (cost=0.00..535.78 rows=50 width=8) (actual time=1037.376..135043.945 rows=50 loops=1)\n   Output: premiseaccount.id, (sum(electricusage.usage))\n   Buffers: shared hit=4851 read=18718\n   -> GroupAggregate (cost=0.00..198012.28 rows=18479 width=8) (actual time=1037.369..135043.700 rows=50 loops=1)\n         Output: premiseaccount.id, sum(electricusage.usage)\n         Filter: ((sum(electricusage.usage) >= 3284::numeric) AND (sum(electricusage.usage) <= 3769::numeric))\n         Rows Removed by Filter: 1476\n         Buffers: shared hit=4851 read=18718\n         -> Nested Loop (cost=0.00..196247.46 rows=148764 width=8) (actual time=107.092..134845.231 rows=15188 loops=1)\n               Output: premiseaccount.id, electricusage.usage\n               Buffers: shared hit=4851 read=18718\n               -> Index Only Scan using premiseaccount_bucket_58c70392619aa36f on public.premiseaccount premiseaccount (cost=0.00..43135.13 rows=18479 width=4) (actual time=45.368..137.340 rows=1527 loops=1)\n                     Output: premiseaccount.bucket, premiseaccount.id\n                     Index Cond: (premiseaccount.bucket = '85375_single-family'::text)\n                     Heap Fetches: 1527\n                     Buffers: shared hit=1 read=685\n               -> Index Scan using electricusage_premise_account_id on public.electricusage electricusage (cost=0.00..8.20 rows=9 width=8) (actual time=22.306..88.136 rows=10 loops=1527)\n                     Output: electricusage.id, electricusage.created, electricusage.modified, electricusage.from_date, electricusage.to_date, electricusage.usage, electricusage.demand, electricusage.bill_amount, electricusage.premise_account_id\n                     Index Cond: (electricusage.premise_account_id = premiseaccount.id)\n                     Filter: (electricusage.from_date >= '2012-11-20 00:00:00+00'::timestamp with time zone)\n                     Rows Removed by Filter: 2\n                     Buffers: shared hit=4850 read=18033\n Total runtime: 135044.256 ms\n(23 rows)\n\n\nLooks like it is doing an index only scan for the first table, but not for the second. I tried creating two indexes that theoretically should make it not have to go to the physical table:\n    \"electricusage_premise_account_id_36bc8999ced10059\" btree (premise_account_id, from_date, usage)\n    \"ix_covered_2\" btree (premise_account_id, from_date DESC, usage, id)\n\nAny idea why it's not using that?\n\nThanks!\nBryce
", "msg_date": "Thu, 12 Dec 2013 15:04:13 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query due to slow I/O" },
{ "msg_contents": "On Thu, Dec 12, 2013 at 6:49 PM, Bryce Covert <[email protected]> wrote:\n\n> It's strange that it isn't sequential at least for the electric usage, as\n> I've clustered using the index that it's using.\n>\n\nelectricusage is the inner part of the nested loop, which means it will do ~3000 small scans. That's not sequential no matter how much you cluster. And the join order cannot be reversed (because you're filtering on premiseaccount).\n\n> I had work_mem set to 128mb. I tried bumping it to 1024mb, and I don't\n> think I see a difference in the query plan.\n>\n\n128mb already is abusive enough. If anything, you'd have to lower it.\n\n\nOn Thu, Dec 12, 2013 at 8:04 PM, Bryce Covert <[email protected]> wrote:\n\n> Looks like it is doing an index only scan for the first table, but not for\n> the second. I tried creating two indexes that theoretically should make it\n> not have to go to the physical table:\n>    \"electricusage_premise_account_id_36bc8999ced10059\" btree (premise_account_id, from_date, usage)\n>    \"ix_covered_2\" btree (premise_account_id, from_date DESC, usage, id)\n>\n> Any idea why it's not using that?\n>\n\n\nIndex-only scans not only need the covering index, they also need fully visible pages. That takes time to build up.
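\n\nA vacuum pass fills in the visibility map, which is what lets the executor skip the heap check; a minimal sketch (plain VACUUM ANALYZE, nothing exotic):\n\nVACUUM ANALYZE premiseaccount;\nVACUUM ANALYZE electricusage;\n-- then re-run the EXPLAIN and watch whether \"Heap Fetches\" drops toward 0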
Hopefully that's helpful.\n>\n> Bryce\n> Claudio Freire <mailto:[email protected]>\n> December 12, 2013 12:15 PM\n> On Thu, Dec 12, 2013 at 5:10 PM, Bryce Covert\n>\n>\n> For this kind of diagnostic, you need to include hardware details.\n>\n> OS? Disks? RAID?\n>\n>", "msg_date": "Thu, 12 Dec 2013 15:04:13 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query due to slow I/O" }, { "msg_contents": "On Thu, Dec 12, 2013 at 6:49 PM, Bryce Covert <\[email protected]> wrote:\n\n> It's strange that it isn't sequential at least for the electric usage, as\n> i've clustered using the index that it's using..\n>\n\nelectricusage is the inner part of the nested loop, which means it will do\n~3000 small scans. That's not sequential no matter how much you cluster.\nAnd the join order cannot be reversed (because you're filtering on\npremiseaccount).\n\nI had work_mem set to 128mb. I tried bumping it to 1024mb, and I don't\n> think I see a in the query plan.\n>\n\n128mb already is abusive enough. If anything, you'd have to lower it.\n\n\nOn Thu, Dec 12, 2013 at 8:04 PM, Bryce Covert <\[email protected]> wrote:\n\n> Looks like it is doing an index only scan for the first table, but not for\n> the second. I tried creating two indexes that theoretically should make it\n> not have to go to the physical table.:\n> \"electricusage_premise_account_id_36bc8999ced10059\" btree\n> (premise_account_id, from_date, usage)\n> \"ix_covered_2\" btree (premise_account_id, from_date DESC, usage, id)\n>\n> Any idea why it's not using that?\n>\n\n\nIndex-only scans not only need the covering index, they also need fully\nvisible pages. That takes time to build up.\n\nIf after that happens you're still getting poor performance, at that point,\nI guess you just have a lousy schema. You're trying to process way too\nscattered data too fast.\n\nSee, your query processes 15k rows, and reads 18k pages. That's as\nscattered as it gets.\n\nThe biggest table you've got there (from the looks of this query) is by far\nelectricusage. You need to cluster that by bucket (since that's your\nquerying criteria), but your schema doesn't allow that. I'm not sure\nwhether it's viable, but if it were, I'd normalize bucket in premiseaccount\nand de-normalize electricusage to also refer to that bucket directly. That\nway, you can filter on electricusage, get a bitmap index scan, and live\nhappily ever after.\n\nFailing that, create materialized views, assuming your write patterns allow\nit.\n\nAnd failing that, add more hardware. If linode doesn't provide it, move\nsomewhere else.", "msg_date": "Thu, 12 Dec 2013 21:39:32 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query due to slow I/O" }, { "msg_contents": "On Thu, Dec 12, 2013 at 3:04 PM, Bryce Covert <\[email protected]> wrote:\n\n> Not sure if this is helpful, but I tried upgrading to 9.2, and here's what\n> I got:\n>\n> ---------\n> Limit (cost=0.00..535.78 rows=50 width=8) (actual\n> time=1037.376..135043.945 rows=50 loops=1)\n> Output: premiseaccount.id, (sum(electricusage.usage))\n> Buffers: shared hit=4851 read=18718\n> -> GroupAggregate (cost=0.00..198012.28 rows=18479 width=8) (actual\n> time=1037.369..135043.700 rows=50 loops=1)\n> Output: premiseaccount.id, sum(electricusage.usage)\n>\n> Filter: ((sum(electricusage.usage) >= 3284::numeric) AND\n> (sum(electricusage.usage) <= 3769::numeric))\n> Rows Removed by Filter: 1476\n> Buffers: shared hit=4851 read=18718\n> -> Nested Loop (cost=0.00..196247.46 rows=148764 width=8)\n> (actual time=107.092..134845.231 rows=15188 loops=1)\n> Output: premiseaccount.id, electricusage.usage\n> Buffers: shared hit=4851 read=18718\n> -> Index Only Scan using\n> premiseaccount_bucket_58c70392619aa36f on public.premiseaccount\n> premiseaccount (cost=0.00..43135.13 rows=18479 width=4) (actual\n> time=45.368..137.340 rows=1527 loops=1)\n> Output: premiseaccount.bucket, premiseaccount.id\n> Index Cond: (premiseaccount.bucket =\n> '85375_single-family'::text)\n> Heap Fetches: 1527\n>\n\nYou had to hit the heap for every row, meaning the index-only feature was\nuseless. Are you vacuuming enough? How fast does this table change? 
What\nis relallvisible from pg_class for these tables?\n\n\n\n> -> Index Scan using electricusage_premise_account_id on\n> public.electricusage electricusage (cost=0.00..8.20 rows=9 width=8)\n> (actual time=22.306..88.136 rows=10 loops=1527)\n> Output: electricusage.id, electricusage.created,\n> electricusage.modified, electricusage.from_date, electricusage.to_date,\n> electricusage.usage, electricusage.demand, electricusage.bill_amount,\n> electricusage.premise_account_id\n> Index Cond: (electricusage.premise_account_id =\n> premiseaccount.id)\n> Filter: (electricusage.from_date >= '2012-11-20\n> 00:00:00+00'::timestamp with time zone)\n> Rows Removed by Filter: 2\n> Buffers: shared hit=4850 read=18033\n> Total runtime: 135044.256 ms\n> (23 rows)\n>\n>\n> Looks like it is doing an index only scan for the first table, but not for\n> the second. I tried creating two indexes that theoretically should make it\n> not have to go to the physical table.:\n> \"electricusage_premise_account_id_36bc8999ced10059\" btree\n> (premise_account_id, from_date, usage)\n> \"ix_covered_2\" btree (premise_account_id, from_date DESC, usage, id)\n>\n> Any idea why it's not using that?\n>\n\nIf the other IOS in this plan is anything to go by, then your table doesn't\nhave enough all-visible pages to make it worthwhile. So it chooses the\nsmaller index, instead of the bigger one that could theoretically support\nan IOS.\n\nCheers,\n\nJeff", "msg_date": "Thu, 12 Dec 2013 16:01:54 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query due to slow I/O" }, { "msg_contents": "It looks like you guys were right. I think vacuum analyzing this made it \ndo an IOS. It seems like materialized views are going to be the best \nbet. I see how that would allow sequential reading. Thanks!\n\nBryce\n\n> Jeff Janes <mailto:[email protected]>\n> December 12, 2013 4:01 PM\n> On Thu, Dec 12, 2013 at 3:04 PM, Bryce Covert \n> <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Not sure if this is helpful, but I tried upgrading to 9.2, and\n> here's what I got:\n>\n> ---------\n> Limit (cost=0.00..535.78 rows=50 width=8) (actual\n> time=1037.376..135043.945 rows=50 loops=1)\n> Output: premiseaccount.id <http://premiseaccount.id>,\n> (sum(electricusage.usage))\n> Buffers: shared hit=4851 read=18718\n> -> GroupAggregate (cost=0.00..198012.28 rows=18479 width=8)\n> (actual time=1037.369..135043.700 rows=50 loops=1)\n> Output: premiseaccount.id <http://premiseaccount.id>,\n> sum(electricusage.usage)\n>\n> Filter: ((sum(electricusage.usage) >= 3284::numeric) AND\n> (sum(electricusage.usage) <= 3769::numeric))\n> Rows Removed by Filter: 1476\n> Buffers: shared hit=4851 read=18718\n> -> Nested Loop (cost=0.00..196247.46 rows=148764\n> width=8) (actual time=107.092..134845.231 rows=15188 loops=1)\n> Output: premiseaccount.id\n> <http://premiseaccount.id>, electricusage.usage\n> Buffers: shared hit=4851 read=18718\n> -> Index Only Scan using\n> premiseaccount_bucket_58c70392619aa36f on public.premiseaccount\n> premiseaccount (cost=0.00..43135.13 rows=18479 width=4) (actual\n> time=45.368..137.340 rows=1527 loops=1)\n> Output: premiseaccount.bucket,\n> premiseaccount.id <http://premiseaccount.id>\n> Index Cond: (premiseaccount.bucket =\n> '85375_single-family'::text)\n> Heap Fetches: 1527\n>\n> You had to hit the heap for every row, meaning the index-only feature \n> was useless. Are you vacuuming enough? 
How fast does this table \n> change? What is relallvisible from pg_class for these tables?\n>\n> -> Index Scan using\n> electricusage_premise_account_id on public.electricusage\n> electricusage (cost=0.00..8.20 rows=9 width=8) (actual\n> time=22.306..88.136 rows=10 loops=1527)\n> Output: electricusage.id\n> <http://electricusage.id>, electricusage.created,\n> electricusage.modified, electricusage.from_date,\n> electricusage.to_date, electricusage.usage, electricusage.demand,\n> electricusage.bill_amount, electricusage.premise_account_id\n> Index Cond: (electricusage.premise_account_id\n> = premiseaccount.id <http://premiseaccount.id>)\n> Filter: (electricusage.from_date >=\n> '2012-11-20 00:00:00+00'::timestamp with time zone)\n> Rows Removed by Filter: 2\n> Buffers: shared hit=4850 read=18033\n> Total runtime: 135044.256 ms\n> (23 rows)\n>\n>\n> Looks like it is doing an index only scan for the first table, but\n> not for the second. I tried creating two indexes that\n> theoretically should make it not have to go to the physical table.:\n> \"electricusage_premise_account_id_36bc8999ced10059\" btree\n> (premise_account_id, from_date, usage)\n> \"ix_covered_2\" btree (premise_account_id, from_date DESC,\n> usage, id)\n>\n> Any idea why it's not using that?\n>\n>\n> If the other IOS in this plan is anything to go by, then your table \n> doesn't have enough all-visible pages to make it worthwhile. So it \n> chooses the smaller index, instead of the bigger one that could \n> theoretically support an IOS.\n>\n> Cheers,\n>\n> Jeff\n> Bryce Covert <mailto:[email protected]>\n> December 12, 2013 3:04 PM\n> Not sure if this is helpful, but I tried upgrading to 9.2, and here's \n> what I got:\n>\n> ---------\n> Limit (cost=0.00..535.78 rows=50 width=8) (actual \n> time=1037.376..135043.945 rows=50 loops=1)\n> Output: premiseaccount.id, (sum(electricusage.usage))\n> Buffers: shared hit=4851 read=18718\n> -> GroupAggregate (cost=0.00..198012.28 rows=18479 width=8) \n> (actual time=1037.369..135043.700 rows=50 loops=1)\n> Output: premiseaccount.id, sum(electricusage.usage)\n> Filter: ((sum(electricusage.usage) >= 3284::numeric) AND \n> (sum(electricusage.usage) <= 3769::numeric))\n> Rows Removed by Filter: 1476\n> Buffers: shared hit=4851 read=18718\n> -> Nested Loop (cost=0.00..196247.46 rows=148764 width=8) \n> (actual time=107.092..134845.231 rows=15188 loops=1)\n> Output: premiseaccount.id, electricusage.usage\n> Buffers: shared hit=4851 read=18718\n> -> Index Only Scan using \n> premiseaccount_bucket_58c70392619aa36f on public.premiseaccount \n> premiseaccount (cost=0.00..43135.13 rows=18479 width=4) (actual \n> time=45.368..137.340 rows=1527 loops=1)\n> Output: premiseaccount.bucket, premiseaccount.id\n> Index Cond: (premiseaccount.bucket = \n> '85375_single-family'::text)\n> Heap Fetches: 1527\n> Buffers: shared hit=1 read=685\n> -> Index Scan using electricusage_premise_account_id \n> on public.electricusage electricusage (cost=0.00..8.20 rows=9 \n> width=8) (actual time=22.306..88.136 rows=10 loops=1527)\n> Output: electricusage.id, electricusage.created, \n> electricusage.modified, electricusage.from_date, \n> electricusage.to_date, electricusage.usage, electricusage.demand, \n> electricusage.bill_amount, electricusage.premise_account_id\n> Index Cond: (electricusage.premise_account_id = \n> premiseaccount.id)\n> Filter: (electricusage.from_date >= '2012-11-20 \n> 00:00:00+00'::timestamp with time zone)\n> Rows Removed by Filter: 2\n> Buffers: shared hit=4850 read=18033\n> Total runtime: 
135044.256 ms\n> (23 rows)\n>\n>\n> Looks like it is doing an index only scan for the first table, but not \n> for the second. I tried creating two indexes that theoretically \n> should make it not have to go to the physical table.:\n> \"electricusage_premise_account_id_36bc8999ced10059\" btree \n> (premise_account_id, from_date, usage)\n> \"ix_covered_2\" btree (premise_account_id, from_date DESC, usage, id)\n>\n> Any idea why it's not using that?\n>\n> Thanks!\n> Bryce\n>\n> Claudio Freire <mailto:[email protected]>\n> December 12, 2013 1:35 PM\n>\n> On Thu, Dec 12, 2013 at 6:16 PM, Bryce Covert \n> <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n> Thanks a lot for the help. I'm not familiar with explain buffers,\n> but here's the results:\n>\n> Limit (cost=0.00..648.71 rows=50 width=8) (actual\n> time=653.681..52328.707 rows=50 loops=1)\n> Buffers: shared hit=7875 read=9870\n> -> GroupAggregate (cost=0.00..55672.36 rows=4291 width=8)\n> (actual time=653.671..52328.480 rows=50 loops=1)\n>\n> Filter: ((sum(electricusage.usage) >= 3284::numeric) AND\n> (sum(electricusage.usage) <= 3769::numeric))\n> Buffers: shared hit=7875 read=9870\n> -> Nested Loop (cost=0.00..55262.93 rows=34506 width=8)\n> (actual time=432.129..52200.465 rows=30711 loops=1)\n> Buffers: shared hit=7875 read=9870\n> -> Index Scan using\n> premiseaccount_bucket_58c70392619aa36f on premiseaccount\n> premiseaccount (cost=0.00..15433.71 rows=4291 width=4) (actual\n> time=338.160..10014.780 rows=3437 loops=1)\n> Index Cond: ((bucket)::text =\n> '85349_single-family'::text)\n> Buffers: shared hit=744 read=2692\n> -> Index Scan using\n> electricusage_premise_account_id on electricusage electricusage \n> (cost=0.00..9.17 rows=9 width=8) (actual time=11.430..12.235\n> rows=9 loops=3437)\n> Index Cond: (premise_account_id =\n> premiseaccount.id <http://premiseaccount.id>)\n> Filter: (from_date >= '2012-11-20\n> 00:00:00+00'::timestamp with time zone)\n> Buffers: shared hit=7131 read=7178\n> Total runtime: 52329.028 ms\n> (15 rows)\n>\n> and the iostat results...\n> Linux 3.11.6-x86_64-linode35 (preview-aps-new) 12/12/2013 \n> _x86_64_ (8 CPU)\n>\n> ...\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n> avgrq-sz avgqu-sz await r_await w_await svctm %util\n> xvda 0.10 47.10 152.40 7.80 2.20 \n> 0.21 30.92 1.36 8.50 8.03 17.53 6.14 98.33\n> xvdb 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>\n>\n>\n> This means it's doing random I/O, and that your disk is a single \n> 7200RPM drive (152 r/s being typical for that hardware).\n>\n> You can improve this by either:\n>\n> 1 - Turning that random I/O pattern into sequential, or\n> 2 - Getting better I/O.\n>\n> I'll assume 2 isn't available to you on linode, so for 1, you could \n> try lowering effective_cache_size substantially. It seems you're not \n> getting nearly as much caching as you think (ie 2GB). However, I doubt \n> there's a plan that can get you significantly better performance given \n> your hardware.\n>\n> You may shave a few seconds, though, if you increase work_mem. It \n> seems it should have used a bitmap index scan for at least one of the \n> index scans there, and a low work_mem could be what's limiting the \n> planner's possibilities. What are your settings in that area?\n>\n>\n> Bryce Covert <mailto:[email protected]>\n> December 12, 2013 1:16 PM\n> Hey Claudio,\n>\n> Thanks a lot for the help. 
I'm not familiar with explain buffers, but \n> here's the results:\n>\n> Limit (cost=0.00..648.71 rows=50 width=8) (actual \n> time=653.681..52328.707 rows=50 loops=1)\n> Buffers: shared hit=7875 read=9870\n> -> GroupAggregate (cost=0.00..55672.36 rows=4291 width=8) (actual \n> time=653.671..52328.480 rows=50 loops=1)\n> Filter: ((sum(electricusage.usage) >= 3284::numeric) AND \n> (sum(electricusage.usage) <= 3769::numeric))\n> Buffers: shared hit=7875 read=9870\n> -> Nested Loop (cost=0.00..55262.93 rows=34506 width=8) \n> (actual time=432.129..52200.465 rows=30711 loops=1)\n> Buffers: shared hit=7875 read=9870\n> -> Index Scan using \n> premiseaccount_bucket_58c70392619aa36f on premiseaccount \n> premiseaccount (cost=0.00..15433.71 rows=4291 width=4) (actual \n> time=338.160..10014.780 rows=3437 loops=1)\n> Index Cond: ((bucket)::text = \n> '85349_single-family'::text)\n> Buffers: shared hit=744 read=2692\n> -> Index Scan using electricusage_premise_account_id \n> on electricusage electricusage (cost=0.00..9.17 rows=9 width=8) \n> (actual time=11.430..12.235 rows=9 loops=3437)\n> Index Cond: (premise_account_id = premiseaccount.id)\n> Filter: (from_date >= '2012-11-20 \n> 00:00:00+00'::timestamp with time zone)\n> Buffers: shared hit=7131 read=7178\n> Total runtime: 52329.028 ms\n> (15 rows)\n>\n> and the iostat results...\n> Linux 3.11.6-x86_64-linode35 (preview-aps-new) 12/12/2013 \n> _x86_64_ (8 CPU)\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \n> avgrq-sz avgqu-sz await r_await w_await svctm %util\n> xvda 6.94 65.68 12.16 7.64 0.36 0.29 \n> 67.06 0.44 22.11 3.56 51.66 1.15 2.28\n> xvdb 0.00 0.00 0.00 0.00 0.00 0.00 \n> 10.61 0.00 1.82 1.82 0.00 1.82 0.00\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \n> avgrq-sz avgqu-sz await r_await w_await svctm %util\n> xvda 0.10 47.10 152.40 7.80 2.20 0.21 \n> 30.92 1.36 8.50 8.03 17.53 6.14 98.33\n> xvdb 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \n> avgrq-sz avgqu-sz await r_await w_await svctm %util\n> xvda 0.20 2.30 212.10 0.70 3.22 0.01 \n> 31.09 1.47 6.88 6.90 2.86 4.63 98.62\n> xvdb 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \n> avgrq-sz avgqu-sz await r_await w_await svctm %util\n> xvda 0.20 8.30 183.20 5.10 2.46 0.05 \n> 27.31 1.68 8.85 6.68 86.78 5.24 98.70\n> xvdb 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \n> avgrq-sz avgqu-sz await r_await w_await svctm %util\n> xvda 0.10 0.00 165.70 0.00 2.36 0.00 \n> 29.20 1.46 8.86 8.86 0.00 5.95 98.63\n> xvdb 0.00 0.00 0.00 0.00 0.00 0.00 \n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> I'm not sure how to process this except that 2.5MB/s seems really \n> slow, and it looks like it is using postgres' cache quite a bit.\n>\n> Thanks,\n> Bryce\n>\n>\n> Claudio Freire <mailto:[email protected]>\n> December 12, 2013 12:48 PM\n>\n>\n>\n> Well, did you run benchmarks? How many IOPS do you get from the volumes?\n>\n> Try running \"iostat -x -m -d 10\" while the slow query is running and \n> pasting the results (or a relevant sample of them).\n>\n> Also, do run \"explain (analyze, buffers)\" instead of plain \"explain \n> analyze\".\n>", "msg_date": "Thu, 12 Dec 2013 16:03:09 -0800", "msg_from": "Bryce Covert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query due to slow I/O" } ]
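
A minimal sketch of the pre-aggregation idea suggested in this thread, for reference. The table and column names (premiseaccount, electricusage, bucket, usage, from_date, premise_account_id) and the filter values are taken from the thread; the view name and the index are illustrative assumptions. CREATE MATERIALIZED VIEW requires PostgreSQL 9.3 or later, so on the 9.1/9.2 servers discussed here the same effect would need an ordinary summary table rebuilt periodically (e.g. by cron):

    -- Pre-aggregate usage per premise account, keyed by bucket, so the
    -- filtering and summing happen once, sequentially, instead of via
    -- thousands of scattered index probes at query time.
    CREATE MATERIALIZED VIEW premise_usage_summary AS
    SELECT pa.bucket,
           pa.id AS premise_account_id,
           sum(eu.usage) AS total_usage
    FROM premiseaccount pa
    JOIN electricusage eu ON eu.premise_account_id = pa.id
    WHERE eu.from_date >= '2012-11-20 00:00:00+00'
    GROUP BY pa.bucket, pa.id;

    CREATE INDEX ON premise_usage_summary (bucket, total_usage);

    -- The original query then reduces to a small index range scan:
    SELECT premise_account_id, total_usage
    FROM premise_usage_summary
    WHERE bucket = '85349_single-family'
      AND total_usage BETWEEN 3284 AND 3769
    LIMIT 50;

    -- Rebuild as often as the write pattern allows:
    REFRESH MATERIALIZED VIEW premise_usage_summary;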
[ { "msg_contents": "Dear ALL,\nI am running PL/pgsql procedure with sql statements that taking a long time. I able to see them in the log just after their completion. How can I see currently running SQL statement? I am able to see in pg_stat_activity only my call to function. Many thanks in advance.\n\nSincerely yours,\n\n[Description: Celltick logo_highres]\nYuri Levinsky, DBA\nCelltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel\nMobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222", "msg_date": "Sun, 15 Dec 2013 16:18:18 +0000", "msg_from": "Yuri Levinsky <[email protected]>", "msg_from_op": true, "msg_subject": "Current query of the PL/pgsql procedure." }, { "msg_contents": "On Sun, Dec 15, 2013 at 8:18 AM, Yuri Levinsky <[email protected]> wrote:\n\n> Dear ALL,\n>\n> I am running PL/pgsql procedure with sql statements that taking a long\n> time. I able to see them in the log just after their completion. How can I\n> see currently running SQL statement? I am able to see in pg_stat_activity\n> only my call to function. Many thanks in advance.\n>\n\npg_stat_activity is the right table, but you have to be the super-user to\nsee queries by others. Here's what I use:\n\n$ psql -U postgres\npostgres=# select procpid, datname, usename, current_query from\npg_stat_activity where current_query !~ '<IDLE>';\n\n Craig\n\n\n>\n> *Sincerely yours*,\n>\n>\n>\n> [image: Description: Celltick logo_highres]\n>\n> Yuri Levinsky, DBA\n>\n> Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel\n>\n> Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222\n>\n>\n>", "msg_date": "Sun, 15 Dec 2013 10:05:10 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current query of the PL/pgsql procedure." }, { "msg_contents": "On Sun, Dec 15, 2013 at 04:18:18PM +0000, Yuri Levinsky wrote:\n> Dear ALL,\n> I am running PL/pgsql procedure with sql statements that taking a long\n> time. I able to see them in the log just after their completion. How\n> can I see currently running SQL statement? I am able to see in\n> pg_stat_activity only my call to function. Many thanks in advance.\n\npg_stat_activity and pg logs, can't see what your function does\ninternally.\n\nWhat you can do, though, is to add some \"RAISE LOG\" to the function, so\nthat it will log its progress.\n\nCheck this for example:\nhttp://www.depesz.com/2010/03/18/profiling-stored-proceduresfunctions/\n\nBest regards,\n\ndepesz", "msg_date": "Mon, 16 Dec 2013 11:26:19 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current query of the PL/pgsql procedure." }, { "msg_contents": " Dear Depesz,\r\nThis is very problematic solution: I have to change whole!!! my code to put appropriate comment with query text before any query execution. In addition I would like to know current execution plan, that seems to be impossible. This is very hard limitation let's say. In case of production issue I'll just unable to do it: the issue already happening, I can't stop procedure and start code change.\r\nJames,\r\nI saw your reply: I see the function is running, it's just not clear that exactly and how this function doing. 
\r\n\r\nSincerely yours,\r\n\r\n\r\nYuri Levinsky, DBA\r\nCelltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel\r\nMobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] \r\nSent: Monday, December 16, 2013 12:26 PM\r\nTo: Yuri Levinsky\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Current query of the PL/pgsql procedure.\r\n\r\nOn Sun, Dec 15, 2013 at 04:18:18PM +0000, Yuri Levinsky wrote:\r\n> Dear ALL,\r\n> I am running a PL/pgsql procedure with SQL statements that take a long \r\n> time. I am able to see them in the log just after their completion. How \r\n> can I see the currently running SQL statement? I am able to see in \r\n> pg_stat_activity only my call to the function. Many thanks in advance.\r\n\r\npg_stat_activity and pg logs, can't see what your function does internally.\r\n\r\nWhat you can do, though, is to add some \"RAISE LOG\" to the function, so that it will log its progress.\r\n\r\nCheck this for example:\r\nhttp://www.depesz.com/2010/03/18/profiling-stored-proceduresfunctions/\r\n\r\nBest regards,\r\n\r\ndepesz\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 Dec 2013 11:42:30 +0000", "msg_from": "Yuri Levinsky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Current query of the PL/pgsql procedure." }, { "msg_contents": "On Sun, Dec 15, 2013 at 2:18 PM, Yuri Levinsky <[email protected]> wrote:\n\n> Dear ALL,\n>\n> I am running a PL/pgsql procedure with SQL statements that take a long\n> time. I am able to see them in the log just after their completion. How can I\n> see the currently running SQL statement? I am able to see in pg_stat_activity\n> only my call to the function. Many thanks in advance.\n>\n>\n>\n\n\n\nAs noticed, pg_stat_activity will only be able to see the call to that\nfunction, not the queries being executed inside the function itself. The\nsame will happen with the logs (configuring GUCs like log_statements or\nlog_min_duration_statement). A solution I have used to solve this issue is\neither the contrib auto_explain [1] or pg_stat_statements [2]. Both will be\nable to get the queries executed inside the functions. For that, you will\nhave to configure them (by default they will not track the queries inside):\n\n* for auto_explain: `auto_explain.log_nested_statements = on`\n* for pg_stat_statements: `pg_stat_statements.track = all`\n\nThe problem you stated about the logs, that it only logs after the\nexecution, not during or before, will still remain. Both will \"get the\nquery\" right after the execution. In your use case auto_explain seems\nbetter suited for tracking, as it can grow with no limit (you will have to\ncontrol your log file size and auto_explain.log_min_duration to avoid a log\nflood, though).\n\n[1] http://www.postgresql.org/docs/current/static/auto-explain.html\n[2] http://www.postgresql.org/docs/current/static/pgstatstatements.html\n\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres", "msg_date": "Mon, 16 Dec 2013 10:47:52 -0200", "msg_from": "Matheus de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current query of the PL/pgsql procedure." }, { "msg_contents": "\nOn 12/16/2013 05:26 AM, hubert depesz lubaczewski wrote:\n> On Sun, Dec 15, 2013 at 04:18:18PM +0000, Yuri Levinsky wrote:\n>> Dear ALL,\n>> I am running a PL/pgsql procedure with SQL statements that take a long\n>> time. I am able to see them in the log just after their completion. How\n>> can I see the currently running SQL statement? I am able to see in\n>> pg_stat_activity only my call to the function. Many thanks in advance.\n> pg_stat_activity and pg logs, can't see what your function does\n> internally.\n>\n> What you can do, though, is to add some \"RAISE LOG\" to the function, so\n> that it will log its progress.\n>\n> Check this for example:\n> http://www.depesz.com/2010/03/18/profiling-stored-proceduresfunctions/\n>\n\nAlso, the auto-explain module can peer inside functions. See \n<http://www.postgresql.org/docs/current/static/auto-explain.html>\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 Dec 2013 08:29:27 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current query of the PL/pgsql procedure." }, { "msg_contents": "On Mon, 2013-12-16 at 11:42 +0000, Yuri Levinsky wrote:\n> Dear Depesz,\n> This is a very problematic solution: I have to change my whole!!! code to put an appropriate comment with the query text before any query execution. In addition I would like to know the current execution plan, which seems to be impossible. This is a very hard limitation, let's say. In case of a production issue I'll just be unable to do it: the issue is already happening, and I can't stop the procedure and start changing code.\n> James,\n> I saw your reply: I see the function is running, it's just not clear what exactly this function is doing and how. 
\n> \n\nThis blog post\n(http://blog.guillaume.lelarge.info/index.php/post/2012/03/31/Profiling-PL/pgsql-functions) can probably help you profiling your PL/pgsql functions without modifying them.\n\nI'm interested in any comments you can have on the log_functions hook\nfunction.\n\nRegards.\n\n\n-- \nGuillaume\nhttp://blog.guillaume.lelarge.info\nhttp://www.dalibo.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 18 Dec 2013 09:30:27 +0100", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Current query of the PL/pgsql procedure." } ]
[ { "msg_contents": "Hi,\n\nI have a long query that returns an extremely large result set. In my application, I would like to report the results as they come in, so I am creating a cursor and fetching 1000 rows at a time. After I declare the cursor (declare C cursor for), I call \"fetch 1000 from C\" over and over. Usually, the result for the \"fetch\" query comes back very quickly (less than 100 milliseconds), but sometimes, however, it takes far longer for the result to come back (18 seconds, 27 seconds, 30 seconds, etc.).\n\nI am trying to figure out why I get this intermittent slowness, and if there is anything I can do about it.\n\nI'm running \"PostgreSQL 9.2.4, compiled by Visual C++ build 1600, 64-bit\" on a Windows 7, 64-bit computer, 8 gb of ram.\n\nMy postgresql.conf file:\nport = 53641\nwal_level = minimal\narchive_mode = off\nmax_wal_senders = 0\ncheckpoint_segments = 100\nmaintenance_work_mem = 807MB\nwork_mem = 81MB\nshared_buffers = 2018MB\neffective_cache_size = 6054MB\ncursor_tuple_fraction = 1.0\n\nHere is the query without a cursor. I ran this in the pgAdmin III application:\nEXPLAIN (ANALYZE, BUFFERS) select POLYGON.ID,POLYGON.LAYER_ID,ST_AsBinary(POLYGON.GEOM),POLYGON.INDICES,POLYGON.PORTINSTANCE_ID\nfrom POLYGON\nwhere LAYER_ID = 1 and (ST_MakeEnvelope(-2732043.5012135925, -4077481.9752427186, 5956407.5012135925, 822435.9752427186, 0) && GEOM);\n\n\"Bitmap Heap Scan on polygon (cost=31524.04..700106.82 rows=1683816 width=235) (actual time=117.066..1237.018 rows=1691961 loops=1)\"\n\" Recheck Cond: (layer_id = 1)\"\n\" Filter: ('010300000001000000050000005AC427C005D844C1DFC0D4FCD41B4FC15AC427C005D844C17C0353F3471929412DE213E0CDB856417C0353F3471929412DE213E0CDB85641DFC0D4FCD41B4FC15AC427C005D844C1DFC0D4FCD41B4FC1'::geometry && geom)\"\n\" Buffers: shared hit=84071\"\n\" -> Bitmap Index Scan on polygon_layer_id_idx (cost=0.00..31103.09 rows=1683816 width=0) (actual time=103.354..103.354 rows=1691961 loops=1)\"\n\" Index Cond: (layer_id = 1)\"\n\" Buffers: shared hit=4629\"\n\"Total runtime: 1273.132 ms\"\n\nHere is the polygon table and the related indexes:\nCREATE TABLE public.polygon\n(\n id bigint NOT NULL DEFAULT nextval('polygon_id_seq'::regclass),\n layer_id bigint NOT NULL,\n geom geometry(Polygon) NOT NULL,\n indices bytea NOT NULL,\n portinstance_id bigint,\n CONSTRAINT polygon_pkey PRIMARY KEY (id),\n CONSTRAINT polygon_layer_id_fkey FOREIGN KEY (layer_id)\n REFERENCES public.layerrow (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT polygon_portinstance_id_fkey FOREIGN KEY (portinstance_id)\n REFERENCES public.portinstance (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE public.polygon\n OWNER TO postgres;\nCREATE INDEX polygon_layer_id_geom_idx\n ON public.polygon\n USING gist\n (layer_id, geom);\nCREATE INDEX polygon_layer_id_idx\n ON public.polygon\n USING btree\n (layer_id);\nCREATE INDEX polygon_portinstance_id_idx\n ON public.polygon\n USING btree\n (portinstance_id);\n\nThe polygon table has about 20 million rows.\n\nHere are the queries that my application is calling:\ndeclare C cursor for select POLYGON.ID,POLYGON.LAYER_ID,ST_AsBinary(POLYGON.GEOM),POLYGON.INDICES,POLYGON.PORTINSTANCE_ID\nfrom POLYGON where LAYER_ID = 1 and (ST_MakeEnvelope(-2732043.5012135925, -4077481.9752427186, 5956407.5012135925, 822435.9752427186, 0) && GEOM);\nfetch 1000 from C;\nfetch 1000 from C;\nfetch 1000 from C;\n...and so forth.\n\nFor example, in one trial, my application called \"fetch 1000 from 
C\" 1,659 times, with each result coming back in less than 100 ms. Then I get these response times for the fetches on the next few \"fetch 1000 from C\" calls:\n1,142 ms\n22,295 ms\n6,551 ms\n935 ms\n809 ms\n... and so forth.\n\nBy the way, my application is written in Java. I am using JDBC to communicate with the server. If there is any other information I could give you that would be helpful, please let me know.\n\nRegards,\nDrew Jetter\nSenior Software Engineer\nMicroNet Solutions, Inc\n10501 Research RD SE, Suite C\nAlbuquerque, NM 87123\n505-765-2490\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI have a long query that returns an extremely large result set. In my application, I would like to report the results as they come in, so I am creating a cursor and fetching 1000 rows at a time. After I declare the cursor (declare C cursor\n for), I call “fetch 1000 from C” over and over. Usually, the result for the “fetch” query comes back very quickly (less than 100 milliseconds), but sometimes, however, it takes far longer for the result to come back (18 seconds, 27 seconds, 30 seconds, etc.).\n \nI am trying to figure out why I get this intermittent slowness, and if there is anything I can do about it.\n \nI’m running \"PostgreSQL 9.2.4, compiled by Visual C++ build 1600, 64-bit\" on a Windows 7, 64-bit computer, 8 gb of ram.\n \nMy postgresql.conf file:\nport = 53641\nwal_level = minimal\narchive_mode = off\nmax_wal_senders = 0\ncheckpoint_segments = 100\nmaintenance_work_mem = 807MB\nwork_mem = 81MB\nshared_buffers = 2018MB\neffective_cache_size = 6054MB\ncursor_tuple_fraction = 1.0\n \nHere is the query without a cursor. I ran this in the pgAdmin III application:\nEXPLAIN (ANALYZE, BUFFERS) select POLYGON.ID,POLYGON.LAYER_ID,ST_AsBinary(POLYGON.GEOM),POLYGON.INDICES,POLYGON.PORTINSTANCE_ID\n\nfrom POLYGON \nwhere LAYER_ID = 1 and (ST_MakeEnvelope(-2732043.5012135925, -4077481.9752427186, 5956407.5012135925, 822435.9752427186, 0) && GEOM);\n \n\"Bitmap Heap Scan on polygon  (cost=31524.04..700106.82 rows=1683816 width=235) (actual time=117.066..1237.018 rows=1691961 loops=1)\"\n\"  Recheck Cond: (layer_id = 1)\"\n\"  Filter: ('010300000001000000050000005AC427C005D844C1DFC0D4FCD41B4FC15AC427C005D844C17C0353F3471929412DE213E0CDB856417C0353F3471929412DE213E0CDB85641DFC0D4FCD41B4FC15AC427C005D844C1DFC0D4FCD41B4FC1'::geometry && geom)\"\n\"  Buffers: shared hit=84071\"\n\"  ->  Bitmap Index Scan on polygon_layer_id_idx  (cost=0.00..31103.09 rows=1683816 width=0) (actual time=103.354..103.354 rows=1691961 loops=1)\"\n\"        Index Cond: (layer_id = 1)\"\n\"        Buffers: shared hit=4629\"\n\"Total runtime: 1273.132 ms\"\n \nHere is the polygon table and the related indexes:\nCREATE TABLE public.polygon\n(\n  id bigint NOT NULL DEFAULT nextval('polygon_id_seq'::regclass),\n  layer_id bigint NOT NULL,\n  geom geometry(Polygon) NOT NULL,\n  indices bytea NOT NULL,\n  portinstance_id bigint,\n  CONSTRAINT polygon_pkey PRIMARY KEY (id),\n  CONSTRAINT polygon_layer_id_fkey FOREIGN KEY (layer_id)\n      REFERENCES public.layerrow (id) MATCH SIMPLE\n      ON UPDATE NO ACTION ON DELETE CASCADE,\n  CONSTRAINT polygon_portinstance_id_fkey FOREIGN KEY (portinstance_id)\n      REFERENCES public.portinstance (id) MATCH SIMPLE\n      ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (\n  OIDS=FALSE\n);\nALTER TABLE public.polygon\n  OWNER TO postgres;\nCREATE INDEX polygon_layer_id_geom_idx\n  ON public.polygon\n  USING gist\n  (layer_id, geom);\nCREATE INDEX polygon_layer_id_idx\n  ON public.polygon\n  USING btree\n  
(layer_id);\nCREATE INDEX polygon_portinstance_id_idx\n  ON public.polygon\n  USING btree\n  (portinstance_id);\n \nThe polygon table has about 20 million rows.\n \nHere are the queries that my application is calling:\ndeclare C cursor for select POLYGON.ID,POLYGON.LAYER_ID,ST_AsBinary(POLYGON.GEOM),POLYGON.INDICES,POLYGON.PORTINSTANCE_ID\n\nfrom POLYGON where LAYER_ID = 1 and (ST_MakeEnvelope(-2732043.5012135925, -4077481.9752427186, 5956407.5012135925, 822435.9752427186, 0) && GEOM);\nfetch 1000 from C;\nfetch 1000 from C;\nfetch 1000 from C;\n…and so forth.\n \nFor example, in one trial, my application called “fetch 1000 from C” 1,659 times, with each result coming back in less than 100 ms. Then I get these response times for the fetches on the next few “fetch 1000 from C” calls:\n1,142 ms\n22,295 ms\n6,551 ms\n935 ms\n809 ms\n… and so forth.\n \nBy the way, my application is written in Java. I am using JDBC to communicate with the server. If there is any other information I could give you that would be helpful, please let me know.\n \nRegards,\nDrew Jetter\nSenior Software Engineer\nMicroNet Solutions, Inc\n10501 Research RD SE, Suite C\nAlbuquerque, NM 87123\n505-765-2490", "msg_date": "Mon, 16 Dec 2013 20:54:43 +0000", "msg_from": "Drew Jetter <[email protected]>", "msg_from_op": true, "msg_subject": "Help with cursor query that is intermittently slow" } ]
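
The thread ends without a reply, but two things are worth sketching here; both are suggestions inferred from the question, not fixes confirmed by the thread. First, the plan the cursor actually runs can be inspected directly (it can differ from the plain query's plan depending on cursor_tuple_fraction, which this setup pins to 1.0). Second, keyset pagination on the primary key replaces the single long-lived cursor with independent batches, so a slow batch can be timed and EXPLAINed on its own; :last_id is a placeholder bound by the application (for example a JDBC parameter, starting at 0):

    -- Inspect the plan the cursor will use:
    BEGIN;
    EXPLAIN DECLARE c CURSOR FOR
        SELECT id, layer_id, ST_AsBinary(geom), indices, portinstance_id
        FROM polygon
        WHERE layer_id = 1
          AND ST_MakeEnvelope(-2732043.5012135925, -4077481.9752427186,
                              5956407.5012135925, 822435.9752427186, 0) && geom;
    ROLLBACK;

    -- Keyset pagination: each call returns the next batch of up to 1000
    -- rows after the last id the application has already seen:
    SELECT id, layer_id, ST_AsBinary(geom), indices, portinstance_id
    FROM polygon
    WHERE layer_id = 1
      AND ST_MakeEnvelope(-2732043.5012135925, -4077481.9752427186,
                          5956407.5012135925, 822435.9752427186, 0) && geom
      AND id > :last_id
    ORDER BY id
    LIMIT 1000;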
[ { "msg_contents": "The actual query selects columns from each of those tables.\n\nIf I remove the join on order_shipping_addresses, it's very fast. Likewise,\nif I remove the join on skus, base_skus, or products, it's also very fast.\n\nI'm pretty sure I have all the necessary indexes.\n\nThe below is also at\nhttps://gist.github.com/joevandyk/88624f7c23790200cccd/raw/gistfile1.txt\n\nPostgres appears to use the number of joins to determine which plan to use?\nIf I go over that by one, then it seems to switch to a very different/slow\nplan. Is there a way I can speed this up?\n\n-- This is really slow\nexplain analyze\n select\n pl.uuid as packing_list_id\n from orders o\n join order_shipping_addresses osa on osa.order_id = o.id\n join line_items li on li.order_id = o.id\n join skus on skus.id = li.sku_id\n join base_skus bs using (base_sku_id)\n join products p on p.id = li.product_id\n left join packed_line_items plis on plis.line_item_id = li.id\n left join packing_list_items pli using (packed_line_item_id)\n left join packing_lists pl on pl.id = pli.packing_list_id\nwhere pl.uuid = '58995488567';\n\n Hash Join (cost=529945.66..1169006.25 rows=1 width=8) (actual\ntime=16994.025..18442.838 rows=1 loops=1)\n Hash Cond: (pli.packing_list_id = pl.id)\n -> Hash Join (cost=529937.20..1156754.36 rows=3264913 width=8)\n(actual time=6394.260..18186.960 rows=3373977 loops=1)\n Hash Cond: (plis.packed_line_item_id = pli.packed_line_item_id)\n -> Hash Join (cost=389265.00..911373.32 rows=3264913\nwidth=16) (actual time=5260.162..13971.003 rows=3373977 loops=1)\n Hash Cond: (li.sku_id = skus.id)\n -> Hash Join (cost=379645.45..836455.51 rows=3264913\nwidth=20) (actual time=5130.797..12370.225 rows=3373977 loops=1)\n Hash Cond: (li.order_id = osa.order_id)\n -> Hash Join (cost=7256.32..353371.98\nrows=3265060 width=24) (actual time=29.692..3674.827 rows=3373977\nloops=1)\n Hash Cond: (li.product_id = p.id)\n -> Merge Join (cost=16.25..284912.04\nrows=3265060 width=28) (actual time=0.093..2659.779 rows=3373977\nloops=1)\n Merge Cond: (li.id = plis.line_item_id)\n -> Index Only Scan using\nline_items_id_product_id_order_id_sku_id_idx on line_items li\n(cost=0.43..116593.45 rows=3240868 width=16) (actual\ntime=0.073..531.457 rows=3606900 loops=1)\n Heap Fetches: 14\n -> Index Scan using\npacked_line_items_line_item_id_idx on packed_line_items plis\n(cost=0.43..119180.75 rows=3373974 width=20) (actual\ntime=0.014..1052.544 rows=3373977 loops=1)\n -> Hash (cost=6683.92..6683.92 rows=44492\nwidth=4) (actual time=29.561..29.561 rows=44492 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 1565kB\n -> Seq Scan on products p\n(cost=0.00..6683.92 rows=44492 width=4) (actual time=0.006..23.023\nrows=44492 loops=1)\n -> Hash (cost=325301.79..325301.79 rows=2870027\nwidth=8) (actual time=5097.168..5097.168 rows=2870028 loops=1)\n Buckets: 65536 Batches: 8 Memory Usage: 14039kB\n -> Hash Join (cost=111732.51..325301.79\nrows=2870027 width=8) (actual time=828.796..4582.395 rows=2870028\nloops=1)\n Hash Cond: (o.id = osa.order_id)\n -> Seq Scan on orders o\n(cost=0.00..126120.27 rows=2870027 width=4) (actual\ntime=0.009..636.423 rows=2870028 loops=1)\n -> Hash (cost=64643.56..64643.56\nrows=2870156 width=4) (actual time=827.832..827.832 rows=2870028\nloops=1)\n Buckets: 65536 Batches: 8\nMemory Usage: 12636kB\n -> Seq Scan on\norder_shipping_addresses osa (cost=0.00..64643.56 rows=2870156\nwidth=4) (actual time=0.008..419.783 rows=2870028 loops=1)\n -> Hash (cost=8324.48..8324.48 rows=103606 width=4)\n(actual 
time=129.271..129.271 rows=103606 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 3643kB\n -> Hash Join (cost=3389.30..8324.48 rows=103606\nwidth=4) (actual time=28.641..113.012 rows=103606 loops=1)\n Hash Cond: (skus.base_sku_id = bs.base_sku_id)\n -> Seq Scan on skus (cost=0.00..2863.06\nrows=103606 width=20) (actual time=0.014..13.836 rows=103606 loops=1)\n -> Hash (cost=2098.02..2098.02\nrows=103302 width=16) (actual time=28.549..28.549 rows=103302 loops=1)\n Buckets: 16384 Batches: 1 Memory\nUsage: 4843kB\n -> Seq Scan on base_skus bs\n(cost=0.00..2098.02 rows=103302 width=16) (actual time=0.013..13.572\nrows=103302 loops=1)\n -> Hash (cost=78727.09..78727.09 rows=3374009 width=24)\n(actual time=1132.577..1132.577 rows=3373977 loops=1)\n Buckets: 65536 Batches: 16 Memory Usage: 11596kB\n -> Seq Scan on packing_list_items pli\n(cost=0.00..78727.09 rows=3374009 width=24) (actual\ntime=0.007..562.361 rows=3373977 loops=1)\n -> Hash (cost=8.45..8.45 rows=1 width=12) (actual\ntime=0.037..0.037 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using packing_lists_uuid_key on packing_lists\npl (cost=0.43..8.45 rows=1 width=12) (actual time=0.036..0.036 rows=1\nloops=1)\n Index Cond: (uuid = 58995488567::bigint)\n Total runtime: 18453.209 ms\n\n\n\n -- This is way faster. Only thing I did was remove one of the joins.\nexplain analyze\n select\n pl.uuid as packing_list_id\n from orders o\n -- join order_shipping_addresses osa on osa.order_id = o.id\n join line_items li on li.order_id = o.id\n join skus on skus.id = li.sku_id\n join base_skus bs using (base_sku_id)\n join products p on p.id = li.product_id\n left join packed_line_items plis on plis.line_item_id = li.id\n left join packing_list_items pli using (packed_line_item_id)\n left join packing_lists pl on pl.id = pli.packing_list_id\nwhere pl.uuid = '58995488567';\n\n Nested Loop (cost=3.15..23.08 rows=1 width=8) (actual\ntime=0.256..0.260 rows=1 loops=1)\n -> Nested Loop (cost=2.86..22.76 rows=1 width=12) (actual\ntime=0.223..0.226 rows=1 loops=1)\n -> Nested Loop (cost=2.44..22.25 rows=1 width=28) (actual\ntime=0.197..0.200 rows=1 loops=1)\n -> Nested Loop (cost=2.15..21.93 rows=1 width=16)\n(actual time=0.177..0.179 rows=1 loops=1)\n -> Nested Loop (cost=1.72..21.34 rows=1\nwidth=20) (actual time=0.146..0.148 rows=1 loops=1)\n -> Nested Loop (cost=1.29..20.86 rows=1\nwidth=12) (actual time=0.098..0.099 rows=1 loops=1)\n -> Nested Loop (cost=0.86..20.34\nrows=1 width=24) (actual time=0.082..0.082 rows=1 loops=1)\n -> Index Scan using\npacking_lists_uuid_key on packing_lists pl (cost=0.43..8.45 rows=1\nwidth=12) (actual time=0.039..0.039 rows=1 loops=1)\n Index Cond: (uuid =\n58995488567::bigint)\n -> Index Scan using\npacking_list_items_packing_list_id_idx on packing_list_items pli\n(cost=0.43..11.88 rows=2 width=24) (actual time=0.040..0.040 rows=1\nloops=1)\n Index Cond:\n(packing_list_id = pl.id)\n -> Index Scan using\npacked_line_items_packed_line_item_id_key on packed_line_items plis\n(cost=0.43..0.50 rows=1 width=20) (actual time=0.015..0.016 rows=1\nloops=1)\n Index Cond:\n(packed_line_item_id = pli.packed_line_item_id)\n -> Index Only Scan using\nline_items_id_product_id_order_id_sku_id_idx on line_items li\n(cost=0.43..0.47 rows=1 width=16) (actual time=0.044..0.044 rows=1\nloops=1)\n Index Cond: (id = plis.line_item_id)\n Heap Fetches: 0\n -> Index Only Scan using orders_pkey on orders o\n (cost=0.43..0.58 rows=1 width=4) (actual time=0.028..0.028 rows=1\nloops=1)\n Index Cond: (id = 
li.order_id)\n             Heap Fetches: 1\n       -> Index Scan using skus_id_product_id_idx on skus\n(cost=0.29..0.32 rows=1 width=20) (actual time=0.019..0.020 rows=1\nloops=1)\n             Index Cond: (id = li.sku_id)\n -> Index Only Scan using base_skus_pkey on base_skus bs\n(cost=0.42..0.49 rows=1 width=16) (actual time=0.024..0.024 rows=1\nloops=1)\n       Index Cond: (base_sku_id = skus.base_sku_id)\n       Heap Fetches: 1\n -> Index Only Scan using products_pkey on products p\n(cost=0.29..0.32 rows=1 width=4) (actual time=0.027..0.028 rows=1\nloops=1)\n       Index Cond: (id = li.product_id)\n       Heap Fetches: 1\n Total runtime: 0.434 ms", "msg_date": "Mon, 16 Dec 2013 13:52:29 -0800", "msg_from": "Joe Van Dyk <[email protected]>", "msg_from_op": true, "msg_subject": "Adding an additional join causes very different/slow query plan" }, { "msg_contents": "Hm, setting set join_collapse_limit = 9 seemed to fix the problem. 
Is that\nmy best/only option?\n\n\nOn Mon, Dec 16, 2013 at 1:52 PM, Joe Van Dyk <[email protected]> wrote:\n\n> The actual query selects columns from each of those tables.\n>\n> If I remove the join on order_shipping_addresses, it's very fast.\n> Likewise, if I remove the join on skus, base_skus, or products, it's also\n> very fast.\n>\n> I'm pretty sure I have all the necessary indexes.\n>\n> The below is also at\n> https://gist.github.com/joevandyk/88624f7c23790200cccd/raw/gistfile1.txt\n>\n> Postgres appears to use the number of joins to determine which plan to\n> use? If I go over that by one, then it seems to switch to a very\n> different/slow plan. Is there a way I can speed this up?\n>\n> -- This is really slow\n> explain analyze\n> select\n> pl.uuid as packing_list_id\n> from orders o\n> join order_shipping_addresses osa on osa.order_id = o.id\n> join line_items li on li.order_id = o.id\n> join skus on skus.id = li.sku_id\n> join base_skus bs using (base_sku_id)\n> join products p on p.id = li.product_id\n> left join packed_line_items plis on plis.line_item_id = li.id\n> left join packing_list_items pli using (packed_line_item_id)\n> left join packing_lists pl on pl.id = pli.packing_list_id\n> where pl.uuid = '58995488567';\n>\n> Hash Join (cost=529945.66..1169006.25 rows=1 width=8) (actual time=16994.025..18442.838 rows=1 loops=1)\n> Hash Cond: (pli.packing_list_id = pl.id)\n> -> Hash Join (cost=529937.20..1156754.36 rows=3264913 width=8) (actual time=6394.260..18186.960 rows=3373977 loops=1)\n> Hash Cond: (plis.packed_line_item_id = pli.packed_line_item_id)\n> -> Hash Join (cost=389265.00..911373.32 rows=3264913 width=16) (actual time=5260.162..13971.003 rows=3373977 loops=1)\n> Hash Cond: (li.sku_id = skus.id)\n> -> Hash Join (cost=379645.45..836455.51 rows=3264913 width=20) (actual time=5130.797..12370.225 rows=3373977 loops=1)\n> Hash Cond: (li.order_id = osa.order_id)\n> -> Hash Join (cost=7256.32..353371.98 rows=3265060 width=24) (actual time=29.692..3674.827 rows=3373977 loops=1)\n> Hash Cond: (li.product_id = p.id)\n> -> Merge Join (cost=16.25..284912.04 rows=3265060 width=28) (actual time=0.093..2659.779 rows=3373977 loops=1)\n> Merge Cond: (li.id = plis.line_item_id)\n> -> Index Only Scan using line_items_id_product_id_order_id_sku_id_idx on line_items li (cost=0.43..116593.45 rows=3240868 width=16) (actual time=0.073..531.457 rows=3606900 loops=1)\n> Heap Fetches: 14\n> -> Index Scan using packed_line_items_line_item_id_idx on packed_line_items plis (cost=0.43..119180.75 rows=3373974 width=20) (actual time=0.014..1052.544 rows=3373977 loops=1)\n> -> Hash (cost=6683.92..6683.92 rows=44492 width=4) (actual time=29.561..29.561 rows=44492 loops=1)\n> Buckets: 8192 Batches: 1 Memory Usage: 1565kB\n> -> Seq Scan on products p (cost=0.00..6683.92 rows=44492 width=4) (actual time=0.006..23.023 rows=44492 loops=1)\n> -> Hash (cost=325301.79..325301.79 rows=2870027 width=8) (actual time=5097.168..5097.168 rows=2870028 loops=1)\n> Buckets: 65536 Batches: 8 Memory Usage: 14039kB\n> -> Hash Join (cost=111732.51..325301.79 rows=2870027 width=8) (actual time=828.796..4582.395 rows=2870028 loops=1)\n> Hash Cond: (o.id = osa.order_id)\n> -> Seq Scan on orders o (cost=0.00..126120.27 rows=2870027 width=4) (actual time=0.009..636.423 rows=2870028 loops=1)\n> -> Hash (cost=64643.56..64643.56 rows=2870156 width=4) (actual time=827.832..827.832 rows=2870028 loops=1)\n> Buckets: 65536 Batches: 8 Memory Usage: 12636kB\n> -> Seq Scan on order_shipping_addresses osa (cost=0.00..64643.56 
rows=2870156 width=4) (actual time=0.008..419.783 rows=2870028 loops=1)\n> -> Hash (cost=8324.48..8324.48 rows=103606 width=4) (actual time=129.271..129.271 rows=103606 loops=1)\n> Buckets: 16384 Batches: 1 Memory Usage: 3643kB\n> -> Hash Join (cost=3389.30..8324.48 rows=103606 width=4) (actual time=28.641..113.012 rows=103606 loops=1)\n> Hash Cond: (skus.base_sku_id = bs.base_sku_id)\n> -> Seq Scan on skus (cost=0.00..2863.06 rows=103606 width=20) (actual time=0.014..13.836 rows=103606 loops=1)\n> -> Hash (cost=2098.02..2098.02 rows=103302 width=16) (actual time=28.549..28.549 rows=103302 loops=1)\n> Buckets: 16384 Batches: 1 Memory Usage: 4843kB\n> -> Seq Scan on base_skus bs (cost=0.00..2098.02 rows=103302 width=16) (actual time=0.013..13.572 rows=103302 loops=1)\n> -> Hash (cost=78727.09..78727.09 rows=3374009 width=24) (actual time=1132.577..1132.577 rows=3373977 loops=1)\n> Buckets: 65536 Batches: 16 Memory Usage: 11596kB\n> -> Seq Scan on packing_list_items pli (cost=0.00..78727.09 rows=3374009 width=24) (actual time=0.007..562.361 rows=3373977 loops=1)\n> -> Hash (cost=8.45..8.45 rows=1 width=12) (actual time=0.037..0.037 rows=1 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n> -> Index Scan using packing_lists_uuid_key on packing_lists pl (cost=0.43..8.45 rows=1 width=12) (actual time=0.036..0.036 rows=1 loops=1)\n> Index Cond: (uuid = 58995488567::bigint)\n> Total runtime: 18453.209 ms\n>\n>\n>\n> -- This is way faster. Only thing I did was remove one of the joins.\n> explain analyze\n> select\n> pl.uuid as packing_list_id\n> from orders o\n> -- join order_shipping_addresses osa on osa.order_id = o.id\n> join line_items li on li.order_id = o.id\n> join skus on skus.id = li.sku_id\n> join base_skus bs using (base_sku_id)\n> join products p on p.id = li.product_id\n> left join packed_line_items plis on plis.line_item_id = li.id\n> left join packing_list_items pli using (packed_line_item_id)\n> left join packing_lists pl on pl.id = pli.packing_list_id\n> where pl.uuid = '58995488567';\n>\n> Nested Loop (cost=3.15..23.08 rows=1 width=8) (actual time=0.256..0.260 rows=1 loops=1)\n> -> Nested Loop (cost=2.86..22.76 rows=1 width=12) (actual time=0.223..0.226 rows=1 loops=1)\n> -> Nested Loop (cost=2.44..22.25 rows=1 width=28) (actual time=0.197..0.200 rows=1 loops=1)\n> -> Nested Loop (cost=2.15..21.93 rows=1 width=16) (actual time=0.177..0.179 rows=1 loops=1)\n> -> Nested Loop (cost=1.72..21.34 rows=1 width=20) (actual time=0.146..0.148 rows=1 loops=1)\n> -> Nested Loop (cost=1.29..20.86 rows=1 width=12) (actual time=0.098..0.099 rows=1 loops=1)\n> -> Nested Loop (cost=0.86..20.34 rows=1 width=24) (actual time=0.082..0.082 rows=1 loops=1)\n> -> Index Scan using packing_lists_uuid_key on packing_lists pl (cost=0.43..8.45 rows=1 width=12) (actual time=0.039..0.039 rows=1 loops=1)\n> Index Cond: (uuid = 58995488567::bigint)\n> -> Index Scan using packing_list_items_packing_list_id_idx on packing_list_items pli (cost=0.43..11.88 rows=2 width=24) (actual time=0.040..0.040 rows=1 loops=1)\n> Index Cond: (packing_list_id = pl.id)\n> -> Index Scan using packed_line_items_packed_line_item_id_key on packed_line_items plis (cost=0.43..0.50 rows=1 width=20) (actual time=0.015..0.016 rows=1 loops=1)\n> Index Cond: (packed_line_item_id = pli.packed_line_item_id)\n> -> Index Only Scan using line_items_id_product_id_order_id_sku_id_idx on line_items li (cost=0.43..0.47 rows=1 width=16) (actual time=0.044..0.044 rows=1 loops=1)\n> Index Cond: (id = plis.line_item_id)\n> Heap Fetches: 0\n> -> 
Index Only Scan using orders_pkey on orders o (cost=0.43..0.58 rows=1 width=4) (actual time=0.028..0.028 rows=1 loops=1)\n>             Index Cond: (id = li.order_id)\n>             Heap Fetches: 1\n>   ->  Index Scan using skus_id_product_id_idx on skus (cost=0.29..0.32 rows=1 width=20) (actual time=0.019..0.020 rows=1 loops=1)\n>         Index Cond: (id = li.sku_id)\n>   ->  Index Only Scan using base_skus_pkey on base_skus bs (cost=0.42..0.49 rows=1 width=16) (actual time=0.024..0.024 rows=1 loops=1)\n>         Index Cond: (base_sku_id = skus.base_sku_id)\n>         Heap Fetches: 1\n>   ->  Index Only Scan using products_pkey on products p (cost=0.29..0.32 rows=1 width=4) (actual time=0.027..0.028 rows=1 loops=1)\n>         Index Cond: (id = li.product_id)\n>         Heap Fetches: 1\n>  Total runtime: 0.434 ms\n>\n>\n", "msg_date": "Mon, 16 Dec 2013 13:59:35 -0800", "msg_from": "Joe Van Dyk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding an additional join causes very different/slow query plan" }, { "msg_contents": "Joe Van Dyk <[email protected]> writes:\n> Hm, setting set join_collapse_limit = 9 seemed to fix the problem. Is that\n> my best/only option?\n\nYup, that's what I was just about to suggest. 
You might want to use\n10 or 12 in case some of your queries are a bit more complex than\nthis one --- but don't go overboard, or you may find yourself with\nunreasonable planning time.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 16 Dec 2013 19:14:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding an additional join causes very different/slow query plan" }, { "msg_contents": "On Mon, Dec 16, 2013 at 4:14 PM, Tom Lane <[email protected]> wrote:\n\n> Joe Van Dyk <[email protected]> writes:\n> > Hm, setting set join_collapse_limit = 9 seemed to fix the problem. Is\n> that\n> > my best/only option?\n>\n> Yup, that's what I was just about to suggest. You might want to use\n> 10 or 12 in case some of your queries are a bit more complex than\n> this one --- but don't go overboard, or you may find yourself with\n> unreasonable planning time.\n>\n\nIs there a way to measure the planning time? It's not reported in 'explain\nanalyze' or 'explain analyze verbose', right?\n\nJoe", "msg_date": "Mon, 16 Dec 2013 16:37:03 -0800", "msg_from": "Joe Van Dyk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding an additional join causes very different/slow\n query plan" }, { "msg_contents": "* Joe Van Dyk ([email protected]) wrote:\n> On Mon, Dec 16, 2013 at 4:14 PM, Tom Lane <[email protected]> wrote:\n> > Yup, that's what I was just about to suggest. You might want to use\n> > 10 or 12 in case some of your queries are a bit more complex than\n> > this one --- but don't go overboard, or you may find yourself with\n> > unreasonable planning time.\n> >\n> \n> Is there a way to measure the planning time? It's not reported in 'explain\n> analyze' or 'explain analyze verbose', right?\n\nYou can just run 'explain' and that'll more-or-less get you there (turn\non \\\timing in psql). When reading this thread, I was thinking it might\nbe useful to add plan time somewhere in explain/explain analyze output\nthough..\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 17 Dec 2013 09:01:28 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding an additional join causes very different/slow\n query plan" } ]
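A minimal sketch that puts the two suggestions from this thread together. The limit value follows Tom Lane's advice, the query is the one posted above, and SET is session-local, so the global default stays untouched. Plain EXPLAIN plans the query without executing it, so with \timing enabled its elapsed time is a rough planning-time measurement, per Stephen Frost's note:

    SET join_collapse_limit = 10;   -- session-local; the default is 8
    \timing on                      -- psql meta-command: report elapsed time per statement
    EXPLAIN
    select pl.uuid as packing_list_id
    from orders o
    join order_shipping_addresses osa on osa.order_id = o.id
    join line_items li on li.order_id = o.id
    join skus on skus.id = li.sku_id
    join base_skus bs using (base_sku_id)
    join products p on p.id = li.product_id
    left join packed_line_items plis on plis.line_item_id = li.id
    left join packing_list_items pli using (packed_line_item_id)
    left join packing_lists pl on pl.id = pli.packing_list_id
    where pl.uuid = '58995488567';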
[ { "msg_contents": "Hi,\n\nI'm new to PostgreSQL and trying to run this query:\n\nSELECT *\nFROM \"Log\"\nLEFT JOIN \"NewsArticle\" ON \"NewsArticle\".id = \"Log\".\"targetId\" AND\n\"Log\".\"targetType\" = 'NewsArticle'\nORDER BY \"Log\".\"createdAt\" DESC\nLIMIT 10\n\nBasically I'm finding the last 10 log entries, which point (targetType) to\nnews articles.\n\nThe explain analyze is this:\n\nhttp://d.pr/i/mZhl (I didn't know how to copy from the pgAdmin, without\nhaving a huge mess)\n\nI have this index on Log:\n\nCREATE INDEX \"Log_targetId_targetType_idx\"\n ON \"Log\"\n USING btree\n (\"targetId\", \"targetType\" COLLATE pg_catalog.\"default\");\n\nI have ran Vacuum and Analyze on both tables.\n\nWhat am I missing here?\n\n\n--\nYours sincerely,\nKai Sellgren\n\nHi,\nI'm new to PostgreSQL and trying to run this query:\nSELECT *FROM \"Log\"\nLEFT JOIN \"NewsArticle\" ON \"NewsArticle\".id = \"Log\".\"targetId\" AND \"Log\".\"targetType\" = 'NewsArticle'\nORDER BY \"Log\".\"createdAt\" DESCLIMIT 10\nBasically I'm finding the last 10 log entries, which point (targetType) to news articles.\nThe explain analyze is this:\nhttp://d.pr/i/mZhl (I didn't know how to copy from the pgAdmin, without having a huge mess)\nI have this index on Log:CREATE INDEX \"Log_targetId_targetType_idx\"\n  ON \"Log\"  USING btree  (\"targetId\", \"targetType\" COLLATE pg_catalog.\"default\");\n\nI have ran Vacuum and Analyze on both tables.What am I missing here?\n--Yours sincerely,\n\nKai Sellgren", "msg_date": "Wed, 18 Dec 2013 04:48:10 +0200", "msg_from": "Kai Sellgren <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing a query" }, { "msg_contents": "On 12/17/2013 08:48 PM, Kai Sellgren wrote:\n\nThis is your select:\n\n> SELECT *\n> FROM \"Log\"\n> LEFT JOIN \"NewsArticle\" ON \"NewsArticle\".id = \"Log\".\"targetId\" AND\n> \"Log\".\"targetType\" = 'NewsArticle'\n> ORDER BY \"Log\".\"createdAt\" DESC\n> LIMIT 10\n\nThis is your index:\n\n> CREATE INDEX \"Log_targetId_targetType_idx\"\n> ON \"Log\"\n> USING btree\n> (\"targetId\", \"targetType\" COLLATE pg_catalog.\"default\");\n\nUnfortunately, this won't help you. You are not matching on any IDs you \nindexed, aside from joining against the article table. You have no WHERE \nclause to restrict the data set, so it absolutely must read the entire \ntable to find the most recent records. Without an index on \"createdAt\", \nhow is it supposed to know what the ten most recent records are?\n\nAdd an index to the createdAt column:\n\nCREATE INDEX idx_log_createdat ON \"Log\" (createdAt DESC);\n\nUsing that, it should get the ten most recent Log records almost \nimmediately, including associated article content.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 11:53:55 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a query" }, { "msg_contents": "On 20/12/13 06:53, Shaun Thomas wrote:\n> On 12/17/2013 08:48 PM, Kai Sellgren wrote:\n>\n> This is your select:\n>\n>> SELECT *\n>> FROM \"Log\"\n>> LEFT JOIN \"NewsArticle\" ON \"NewsArticle\".id = \"Log\".\"targetId\" AND\n>> \"Log\".\"targetType\" = 'NewsArticle'\n>> ORDER BY \"Log\".\"createdAt\" DESC\n>> LIMIT 10\n>\n> This is your index:\n>\n>> CREATE INDEX \"Log_targetId_targetType_idx\"\n>> ON \"Log\"\n>> USING btree\n>> (\"targetId\", \"targetType\" COLLATE pg_catalog.\"default\");\n>\n> Unfortunately, this won't help you. You are not matching on any IDs you\n> indexed, aside from joining against the article table. You have no WHERE\n> clause to restrict the data set, so it absolutely must read the entire\n> table to find the most recent records. Without an index on \"createdAt\",\n> how is it supposed to know what the ten most recent records are?\n>\n> Add an index to the createdAt column:\n>\n> CREATE INDEX idx_log_createdat ON \"Log\" (createdAt DESC);\n>\n> Using that, it should get the ten most recent Log records almost\n> immediately, including associated article content.\n>\n\nAlso, might be worth creating an index on NewsArticle(id) so that the \njoin to this table does not require a full table scan:\n\nCREATE INDEX newsarticle_id_idx ON \"NewsArticle\" (id);\n\n(probably not a problem when you only have a few articles - but will be \nas the volume increases over time).\n\nRegards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 Dec 2013 13:32:19 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a query" } ]
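A sketch combining the two fixes suggested in this thread. One caveat not spelled out above: if the column was created as mixed-case "createdAt" (as the query's quoting suggests), it needs double quotes in the index definition too, since an unquoted createdAt would fold to createdat:

    CREATE INDEX idx_log_createdat ON "Log" ("createdAt" DESC);  -- Shaun's suggestion
    CREATE INDEX newsarticle_id_idx ON "NewsArticle" (id);       -- Mark's suggestion

With both in place, the planner can read the ten newest "Log" rows straight off the first index (no full scan and sort) and probe the second one once per row for the joined article.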
[ { "msg_contents": "Could you tell me each and every hardware parameters and OS parameters the\nperformance depends on.\nI need the complete list of all the required parameters and how to extract\nthem on Linux through system calls and files.\nPlease it will be highly great full of you to do so.\nThank you and regards.\n\nCould you tell me each and every hardware parameters and OS parameters the performance depends on.I need the complete list of all the required parameters and how to extract them on Linux through system calls and files.\nPlease it will be highly great full of you to do so.Thank you and regards.", "msg_date": "Wed, 18 Dec 2013 15:12:20 -0500", "msg_from": "prashant Pandey <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding Hardware Tuning" }, { "msg_contents": "On 12/18/2013 12:12 PM, prashant Pandey wrote:\n> Could you tell me each and every hardware parameters and OS parameters \n> the performance depends on.\n> I need the complete list of all the required parameters and how to \n> extract them on Linux through system calls and files.\n> Please it will be highly great full of you to do so.\n> Thank you and regards.\n\nThis is not even possible given the variety of hardware, OS variants, \nwork-loads, database sizes, etc. The answer is really \"all of them.\" \nThose who specialize in tuning constantly run test cases to tease out \nclues to performance tuning and even they most likely couldn't answer this.\n\nThe closest you are likely to come is to read and reread \"PostgreSQL \nHigh Performance\" which is an invaluable resource.\n\nCheers,\nSteve\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 15:40:25 -0800", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Hardware Tuning" }, { "msg_contents": "On Thu, Dec 19, 2013 at 4:40 PM, Steve Crawford\n<[email protected]> wrote:\n> On 12/18/2013 12:12 PM, prashant Pandey wrote:\n>>\n>> Could you tell me each and every hardware parameters and OS parameters the\n>> performance depends on.\n>> I need the complete list of all the required parameters and how to extract\n>> them on Linux through system calls and files.\n>> Please it will be highly great full of you to do so.\n>> Thank you and regards.\n>\n>\n> This is not even possible given the variety of hardware, OS variants,\n> work-loads, database sizes, etc. The answer is really \"all of them.\" Those\n> who specialize in tuning constantly run test cases to tease out clues to\n> performance tuning and even they most likely couldn't answer this.\n>\n> The closest you are likely to come is to read and reread \"PostgreSQL High\n> Performance\" which is an invaluable resource.\n\nThe ebook edition is on sale for $5.00 which is a STEAL.\n\nhttp://www.packtpub.com/postgresql-90-high-performance/book\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 17:37:19 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Hardware Tuning" }, { "msg_contents": "On 12/19/2013 06:37 PM, Scott Marlowe wrote:\n\n> The ebook edition is on sale for $5.00 which is a STEAL.\n\nWow, I guess I should pay better attention to all those annoying emails \nPackt sends me. 
That'll make a good portable copy since I tend to keep \nthe real version on my bookshelf at home. :)\n\nThis is good advice, by the way. Greg's book is great, especially for \nnewly minted DBAs who might have trouble deciding on where to start. \nThough from what I've been seeing on the list recently, they need to \nhave him update it for 9.2 and 9.3 with all of the changes in the last \ncouple versions. There are also a ton of considerations regarding new \nLinux kernel settings.\n\nGreg, go tell Packt they need to pay you to write the second edition. ;)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 08:29:38 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Hardware Tuning" } ]
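There is no finite list of hardware parameters to extract, as the replies say, but the server-side settings that the recommended book spends most of its tuning chapters on can at least be inspected in one place. A sketch, run from any psql session:

    -- pg_settings exposes every server setting with its current value, unit
    -- and source; these are the ones most directly tied to RAM and disks.
    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem',
                   'maintenance_work_mem', 'random_page_cost', 'seq_page_cost');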
[ { "msg_contents": "I would appreciate some help optimising the following query:\n\nwith\nsubject_journals as(\n select A.sq\n from isi.rissue A,\n isi.rsc_joern_link C\n WHERE\n C.sc_id in\n ('d0963875-e438-4923-b3fa-f462e8975221',\n '04e14284-09c8-421a-b1ad-c8238051601a',\n '04e2189f-cd2a-44f0-b98d-52f6bb5dcd78',\n 'f5521c65-ec49-408a-9a42-8a69d47703cd',\n '2e47ae2f-2c4d-433e-8bdf-9983eeeafc42',\n '5d3639b1-04c2-4d94-a99a-5323277fd2b7')\n AND\n C.rj_id = A.uuid),\nsubject_articles as (\n SELECT B.article_id as art_id\n FROM\n isi.isi_l1_publication B,\n subject_journals A,\n isi.ritem C\n\n\n WHERE\n A.sq = B.journal_id\n AND\n B.publication_year <= '2012'\n AND\n B.publication_year >= '2000'\n AND\n C.ut = B.article_id\n AND\n C.dt in ('@ Article','Review')\n ),\ncountry_articles as (\n SELECT A.art_id\n FROM isi.art_country_link A\n WHERE\n A.countrycode = 'ZA')\n\nselect art_id from subject_articles\nINTERSECT\nselect art_id from country_articles\n\nAnalyze explains shows that it is not using the indexes on both\nisi.isi_l1_publication and isi.ritem (both tables with more than 43 million\nrecords).:\n\n\"HashSetOp Intersect (cost=10778065.50..11227099.44 rows=200 width=48)\n(actual time=263120.868..263279.467 rows=4000 loops=1)\"\n\" Output: \"*SELECT* 1\".art_id, (0)\"\n\" Buffers: shared hit=627401 read=4347235, temp read=234498 written=234492\"\n\" CTE subject_journals\"\n\" -> Hash Join (cost=12846.55..17503.27 rows=28818 width=8) (actual\ntime=99.762..142.439 rows=30291 loops=1)\"\n\" Output: a.sq\"\n\" Hash Cond: ((c.rj_id)::text = (a.uuid)::text)\"\n\" Buffers: shared hit=12232\"\n\" -> Bitmap Heap Scan on isi.rsc_joern_link c\n(cost=1020.92..5029.23 rows=28818 width=37) (actual time=4.238..15.806\nrows=30291 loops=1)\"\n\" Output: c.id, c.rj_id, c.sc_id\"\n\" Recheck Cond: ((c.sc_id)::text = ANY\n('{d0963875-e438-4923-b3fa-f462e8975221,04e14284-09c8-421a-b1ad-c8238051601a,04e2189f-cd2a-44f0-b98d-52f6bb5dcd78,f5521c65-ec49-408a-9a42-8a69d47703cd,2e47ae2f-2c4d-433e-8bdf-9983eeeafc42,5d3639b1-04c2-4\n(...)\"\n\" Buffers: shared hit=3516\"\n\" -> Bitmap Index Scan on rsc_joern_link_sc_id_idx\n(cost=0.00..1013.72 rows=28818 width=0) (actual time=3.722..3.722\nrows=30291 loops=1)\"\n\" Index Cond: ((c.sc_id)::text = ANY\n('{d0963875-e438-4923-b3fa-f462e8975221,04e14284-09c8-421a-b1ad-c8238051601a,04e2189f-cd2a-44f0-b98d-52f6bb5dcd78,f5521c65-ec49-408a-9a42-8a69d47703cd,2e47ae2f-2c4d-433e-8bdf-9983eeeafc42,5d3639b1-04\n(...)\"\n\" Buffers: shared hit=237\"\n\" -> Hash (cost=10098.06..10098.06 rows=138206 width=45) (actual\ntime=95.495..95.495 rows=138206 loops=1)\"\n\" Output: a.sq, a.uuid\"\n\" Buckets: 16384 Batches: 1 Memory Usage: 10393kB\"\n\" Buffers: shared hit=8716\"\n\" -> Seq Scan on isi.rissue a (cost=0.00..10098.06\nrows=138206 width=45) (actual time=0.005..58.225 rows=138206 loops=1)\"\n\" Output: a.sq, a.uuid\"\n\" Buffers: shared hit=8716\"\n\" CTE subject_articles\"\n\" -> Merge Join (cost=9660996.21..9896868.27 rows=13571895 width=16)\n(actual time=229449.020..259557.073 rows=2513896 loops=1)\"\n\" Output: b.article_id\"\n\" Merge Cond: ((a.sq)::text = (b.journal_id)::text)\"\n\" Buffers: shared hit=519891 read=4347235, temp read=234498\nwritten=234492\"\n\" -> Sort (cost=2711.01..2783.05 rows=28818 width=32) (actual\ntime=224.901..230.615 rows=30288 loops=1)\"\n\" Output: a.sq\"\n\" Sort Key: a.sq\"\n\" Sort Method: quicksort Memory: 2188kB\"\n\" Buffers: shared hit=12232\"\n\" -> CTE Scan on subject_journals a (cost=0.00..576.36\nrows=28818 width=32) (actual 
time=99.764..152.459 rows=30291 loops=1)\"\n\" Output: a.sq\"\n\" Buffers: shared hit=12232\"\n\" -> Materialize (cost=9658285.21..9722584.29 rows=12859816\nwidth=24) (actual time=229223.851..253191.308 rows=14664245 loops=1)\"\n\" Output: b.article_id, b.journal_id\"\n\" Buffers: shared hit=507659 read=4347235, temp read=234498\nwritten=234492\"\n\" -> Sort (cost=9658285.21..9690434.75 rows=12859816\nwidth=24) (actual time=229223.846..251142.167 rows=14072645 loops=1)\"\n\" Output: b.article_id, b.journal_id\"\n\" Sort Key: b.journal_id\"\n\" Sort Method: external merge Disk: 467704kB\"\n\" Buffers: shared hit=507659 read=4347235, temp\nread=234498 written=234492\"\n\" -> Hash Join (cost=1474828.02..7876046.06\nrows=12859816 width=24) (actual time=27181.734..91781.942 rows=14072645\nloops=1)\"\n\" Output: b.article_id, b.journal_id\"\n\" Hash Cond: ((c.ut)::text =\n(b.article_id)::text)\"\n\" Buffers: shared hit=507659 read=4347235, temp\nread=176031 written=176025\"\n\" -> Seq Scan on isi.ritem c\n(cost=0.00..5071936.72 rows=29104515 width=16) (actual\ntime=0.012..25529.577 rows=29182778 loops=1)\"\n\" Output: c.ut\"\n\" Filter: (((c.dt)::text = '@\nArticle'::text) OR ((c.dt)::text = 'Review'::text))\"\n\" Buffers: shared hit=52128 read=4347235\"\n\" -> Hash (cost=1111096.04..1111096.04\nrows=19811758 width=24) (actual time=27176.450..27176.450 rows=19820997\nloops=1)\"\n\" Output: b.article_id, b.journal_id\"\n\" Buckets: 1048576 Batches: 4 Memory\nUsage: 271177kB\"\n\" Buffers: shared hit=455531, temp\nwritten=79848\"\n\" -> Seq Scan on isi.isi_l1_publication\nb (cost=0.00..1111096.04 rows=19811758 width=24) (actual\ntime=152.219..21215.614 rows=19820997 loops=1)\"\n\" Output: b.article_id, b.journal_id\"\n\" Filter:\n(((b.publication_year)::text < '2012'::text) AND\n((b.publication_year)::text > '1999'::text))\"\n\" Buffers: shared hit=455531\"\n\" CTE country_articles\"\n\" -> Bitmap Heap Scan on isi.art_country_link a\n(cost=6427.92..863693.95 rows=244534 width=16) (actual time=65.053..256.632\nrows=205195 loops=1)\"\n\" Output: a.art_id\"\n\" Recheck Cond: ((a.countrycode)::text = 'ZA'::text)\"\n\" Buffers: shared hit=107510\"\n\" -> Bitmap Index Scan on art_country_link_countrycode_idx\n(cost=0.00..6366.79 rows=244534 width=0) (actual time=36.481..36.481\nrows=205195 loops=1)\"\n\" Index Cond: ((a.countrycode)::text = 'ZA'::text)\"\n\" Buffers: shared hit=603\"\n\" -> Append (cost=0.00..414492.87 rows=13816429 width=48) (actual\ntime=229449.025..261565.050 rows=2719091 loops=1)\"\n\" Buffers: shared hit=627401 read=4347235, temp read=234498\nwritten=234492\"\n\" -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..407156.85\nrows=13571895 width=48) (actual time=229449.025..260892.314 rows=2513896\nloops=1)\"\n\" Output: \"*SELECT* 1\".art_id, 0\"\n\" Buffers: shared hit=519891 read=4347235, temp read=234498\nwritten=234492\"\n\" -> CTE Scan on subject_articles (cost=0.00..271437.90\nrows=13571895 width=48) (actual time=229449.024..260423.294 rows=2513896\nloops=1)\"\n\" Output: subject_articles.art_id\"\n\" Buffers: shared hit=519891 read=4347235, temp\nread=234498 written=234492\"\n\" -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..7336.02 rows=244534\nwidth=48) (actual time=65.059..353.671 rows=205195 loops=1)\"\n\" Output: \"*SELECT* 2\".art_id, 1\"\n\" Buffers: shared hit=107510\"\n\" -> CTE Scan on country_articles (cost=0.00..4890.68\nrows=244534 width=48) (actual time=65.057..320.444 rows=205195 loops=1)\"\n\" Output: country_articles.art_id\"\n\" Buffers: shared 
hit=107510\"\n\"Total runtime: 263466.781 ms\"\n\nThe indexes for those fields:\n\nCREATE INDEX isi_l1_publication_publication_year_idx\n  ON isi.isi_l1_publication\n  USING btree\n  (publication_year COLLATE pg_catalog.\"default\");\n\nCREATE INDEX ritem_dt_idx\n  ON isi.ritem\n  USING btree\n  (dt COLLATE pg_catalog.\"default\");\n\n\n\n\nRegards\nJohann\n\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)", "msg_date": "Thu, 19 Dec 2013 15:06:21 +0200", "msg_from": "Johann Spies <[email protected]>", "msg_from_op": true, "msg_subject": "query not using index" }, { "msg_contents": "Johann Spies <[email protected]> writes:\n> I would appreciate some help optimising the following query:\n\nIt's a mistake to imagine that indexes are going to help much with\na join of this size. Hash or merge join is going to be a lot better\nthan nestloop. What you need to do is make sure those will perform\nas well as possible, and to that end, it'd likely help to raise\nwork_mem. I'm not sure if you can sanely put it high enough to\nmake the query operate totally in memory --- it looks like you'd\nneed work_mem of 500MB or more to prevent any of the sorts or\nhashes from spilling to disk, and keep in mind that this query\nis going to use several times work_mem because there are multiple\nsorts/hashes going on. 
But if you can transiently dedicate a lot\nof RAM to this query, that should help some. I'd suggest increasing\nwork_mem via a SET command in the particular session running this\nquery --- you don't want such a high value to be the global default.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 09:48:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query not using index" }, { "msg_contents": "On 19 December 2013 16:48, Tom Lane <[email protected]> wrote:\n\n> Johann Spies <[email protected]> writes:\n> > I would appreciate some help optimising the following query:\n>\n> It's a mistake to imagine that indexes are going to help much with\n> a join of this size. Hash or merge join is going to be a lot better\n> than nestloop. What you need to do is make sure those will perform\n> as well as possible, and to that end, it'd likely help to raise\n> work_mem. I'm not sure if you can sanely put it high enough to\n> make the query operate totally in memory --- it looks like you'd\n> need work_mem of 500MB or more to prevent any of the sorts or\n> hashes from spilling to disk, and keep in mind that this query\n> is going to use several times work_mem because there are multiple\n> sorts/hashes going on. But if you can transiently dedicate a lot\n> of RAM to this query, that should help some. I'd suggest increasing\n> work_mem via a SET command in the particular session running this\n> query --- you don't want such a high value to be the global default.\n>\n\nThanks Tom. Raising work_mem from 384MB to 512MB made a significant\ndifference.\n\nYou said \"hash or merge join is going to be a lot better than nestloop\".\nIs that purely in the hands of the query planner, or what can I do to get\nthe planner to use those options apart from raising the work_mem?\n\nRegards\nJohann\n\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)", "msg_date": "Mon, 23 Dec 2013 10:58:59 +0200", "msg_from": "Johann Spies <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query not using index" }, { "msg_contents": "On 23/12/13 21:58, Johann Spies wrote:\n>\n>\n>\n> On 19 December 2013 16:48, Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n>     Johann Spies <[email protected]\n>     <mailto:[email protected]>> writes:\n>     > I would appreciate some help optimising the following query:\n>\n>     It's a mistake to imagine that indexes are going to help much with\n>     a join of this size.  Hash or merge join is going to be a lot better\n>     than nestloop.  What you need to do is make sure those will perform\n>     as well as possible, and to that end, it'd likely help to raise\n>     work_mem.  I'm not sure if you can sanely put it high enough to\n>     make the query operate totally in memory --- it looks like you'd\n>     need work_mem of 500MB or more to prevent any of the sorts or\n>     hashes from spilling to disk, and keep in mind that this query\n>     is going to use several times work_mem because there are multiple\n>     sorts/hashes going on.  But if you can transiently dedicate a lot\n>     of RAM to this query, that should help some.  I'd suggest increasing\n>     work_mem via a SET command in the particular session running this\n>     query --- you don't want such a high value to be the global default.\n>\n>\n> Thanks Tom. Raising work_mem from 384MB to 512MB made a significant\n> difference.\n>\n> You said \"hash or merge join is going to be a lot better than\n> nestloop\". Is that purely in the hands of the query planner, or what can\n> I do to get the planner to use those options apart from raising the work_mem?\n>\n>\n\nYou can disable the hash and merge join options by doing:\n\nSET enable_hashjoin=off;\nSET enable_mergejoin=off;\n\nbefore running the query again. Timing it (or EXPLAIN ANALYZE) should \ndemonstrate whether the planner made the right call by choosing hash or \nmerge in the first place.\n\nregards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 28 Dec 2013 15:31:11 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query not using index" } ]
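A sketch of the session-local experiment Tom and Mark describe. SET LOCAL (instead of plain SET) scopes the settings to one transaction, and the final SELECT is a placeholder standing in for Johann's full CTE query, which would be pasted in its place:

    BEGIN;
    SET LOCAL work_mem = '512MB';      -- the value that helped Johann
    SET LOCAL enable_hashjoin = off;   -- Mark's toggles: force the planner away
    SET LOCAL enable_mergejoin = off;  -- from hash/merge joins for comparison
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT A.art_id
    FROM isi.art_country_link A
    WHERE A.countrycode = 'ZA';        -- placeholder: one leg of the real query
    COMMIT;

Timing this against a second run with the two enable_* settings left on shows whether the planner's original hash/merge choice was the right one.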
[ { "msg_contents": "Hi,\n\nI'm having something I feel is a bit of a limitation of the optimizer (or something I don't understand :) ).\n\nSorry, this is a rather long mail.\n\nI have a workaround for the problem below, but I don't like cheating the optimizer for no good reason.\n\nFirst a little bit of context, because the example below may feel dumb if you read it with no explanation:\n\nI'm working on the bacula project, trying to further optimize data insertion into the database (we've been working on this subject for a few years now, as backup speed is a core functionality of any backup software).\n\nBacula is a backup software: we are storing all of the backup's metadata into the database. It means all backed up files�\n\nThe two very large tables are file and path:\n\n * file contains metadata about ALL files that have been backed up and are still available in the database. It sometimes contain several billions records.\n\n * path contains the full path of a file. It is stored out of file to save (lots) of space, as most path are identical in files. It still usually contains 20 to 50 million records, and can be as big as 20GB.\n\nSo we have something like this:\n\n create table file (fileid bigserial, filename text, pathid bigint);\n alter table file add primary key (fileid);\n\n create table path (pathid bigserial, path text);\n alter table path add primary key (pathid);\n create unique index idx_path on path(path);\n\nI removed some columns from the file table: we store stat(), a checksum, etc. They're not needed for this example.\n\nIn order to insert data efficiently, backups first store data into temp tables which look like this:\n\n create table batch (path text, filename text);\n\n(once again I removed all metadata columns)\n\nIf you want to create a batch table with useful data to replicate what I'm going to show, you can try something like this:\n\n find /home -printf '%h\\t%f\\n' | psql -c \"COPY batch FROM STDIN\" bacula\n\nWe analyze the batch table: in the real code, it is a temp table, so it is compulsory:\n\n analyze batch;\n\nThen we insert missing paths. This is one of the plans that fail, but we'll focus on the second one: as we are starting from an empty path table, this query won't be realistic.\n\n insert into path (path) select path from batch where not exists (select 1 from path where path.path=batch.path) group by path;\n\nWe analyze:\n\n analyze path;\n\n\nSo now we insert into the file table.\n\ninsert into file (pathid,filename) select pathid, filename from batch join path using (path);\n\nHere is the plan:\n\nbacula=# explain select pathid, filename from batch join path using (path);\n QUERY PLAN \n-----------------------------------------------------------------------\n Hash Join (cost=822.25..22129.85 rows=479020 width=26)\n Hash Cond: (batch.path = path.path)\n -> Seq Scan on batch (cost=0.00..11727.20 rows=479020 width=85)\n -> Hash (cost=539.89..539.89 rows=22589 width=81)\n -> Seq Scan on path (cost=0.00..539.89 rows=22589 width=81)\n(5 lignes)\n\n\nI have this plan almost all the time. 
Lets add a bunch of useless records in path to be more realistic and make things worse: usually, bacula inserts data in 500,000 batches, and path is very large (millions of records), and is bigger than work_mem (it would have to be 20GB in most extreme cases), so it may trash disks heavily (many servers can be inserting at the same time).\n\nLet's simulate this (remove indexes before, put them back afterwards :) )\n\n insert into path (path) select generate_series(1000000,50000000); #Create a realistic path table\n analyze path;\n\nHere is the plan:\n\nbacula=# explain (analyze,buffers,verbose) select pathid, filename from batch join path using (path);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1646119.66..1945643.59 rows=479020 width=26) (actual time=27275.240..36745.904 rows=479020 loops=1)\n Output: path.pathid, batch.filename\n Hash Cond: (batch.path = path.path)\n Buffers: shared hit=130760 read=179917 written=1823, temp read=224130 written=223876\n -> Seq Scan on public.batch (cost=0.00..11727.20 rows=479020 width=85) (actual time=0.259..176.031 rows=479020 loops=1)\n Output: batch.filename, batch.path\n Buffers: shared read=6937 written=1823\n -> Hash (cost=793966.96..793966.96 rows=49022696 width=16) (actual time=27274.725..27274.725 rows=49022590 loops=1)\n Output: path.pathid, path.path\n Buckets: 131072 Batches: 128 Memory Usage: 18329kB\n Buffers: shared hit=130760 read=172980, temp written=218711\n -> Seq Scan on public.path (cost=0.00..793966.96 rows=49022696 width=16) (actual time=0.231..9650.452 rows=49022590 loops=1)\n Output: path.pathid, path.path\n Buffers: shared hit=130760 read=172980\n Total runtime: 36781.919 ms\n(15 lignes)\n\n\nIt seems like a logical choice (and it's working quite well, but we only have 1 of these running, and my path table is still very small compared to real use cases)\n\nLet's force a nested loop (we don't do it that way usually, we lower the *_page_cost)\n\n set enable_hashjoin TO off; set enable_mergejoin TO off;\n\nbacula=# explain (analyze,buffers,verbose) select pathid, filename from batch join path using (path);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.56..4001768.10 rows=479020 width=26) (actual time=2.303..15371.237 rows=479020 loops=1)\n Output: path.pathid, batch.filename\n Buffers: shared hit=2403958 read=7539\n -> Seq Scan on public.batch (cost=0.00..11727.20 rows=479020 width=85) (actual time=0.340..160.142 rows=479020 loops=1)\n Output: batch.path, batch.filename\n Buffers: shared read=6937\n -> Index Scan using idx_path on public.path (cost=0.56..8.32 rows=1 width=16) (actual time=0.030..0.031 rows=1 loops=479020)\n Output: path.pathid, path.path\n Index Cond: (path.path = batch.path)\n Buffers: shared hit=2403958 read=602\n Total runtime: 15439.043 ms\n\n\nAs you can see, more than twice as fast, and a very high hit ratio on the path table, even if we start from a cold cache (I did, here, both PostgreSQL and OS). 
We have an excellent hit ratio because the batch table contains few different path (several files in a directory), and is already quite clustered, as it comes from a backup, which is of course performed directory by directory.\n\nWhat I think is the cause of the problem is that the planner doesn't take into account that we are going to fetch the exact same values all the time in the path table, so we'll have a very good hit ratio. Maybe the n_distinct from batch.path could be used to refine the caching effect on the index scan ? The correlation is almost 0, but that's normal, the directories aren't sorted alphabetically, but all records for a given path are grouped together.\n\nFor now, we work around this by using very low values for seq_page_cost and random_page_cost for these 2 queries. I just feel that maybe PostgreSQL could do a bit better here, so I wanted to submit this use case for discussion.\n\nRegards\n\nMarc\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 16:33:24 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "query plan not optimal" }, { "msg_contents": ">\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.56..4001768.10 rows=479020 width=26) (actual\n> time=2.303..15371.237 rows=479020 loops=1)\n> Output: path.pathid, batch.filename\n> Buffers: shared hit=2403958 read=7539\n> -> Seq Scan on public.batch (cost=0.00..11727.20 rows=479020\n> width=85) (actual time=0.340..160.142 rows=479020 loops=1)\n> Output: batch.path, batch.filename\n> Buffers: shared read=6937\n> -> Index Scan using idx_path on public.path (cost=0.56..8.32 rows=1\n> width=16) (actual time=0.030..0.031 rows=1 loops=479020)\n> Output: path.pathid, path.path\n> Index Cond: (path.path = batch.path)\n> Buffers: shared hit=2403958 read=602\n> Total runtime: 15439.043 ms\n>\n>\n> As you can see, more than twice as fast, and a very high hit ratio on the\n> path table, even if we start from a cold cache (I did, here, both\n> PostgreSQL and OS). 
We have an excellent hit ratio because the batch table\n> contains few different path (several files in a directory), and is already\n> quite clustered, as it comes from a backup, which is of course performed\n> directory by directory.\n>\n\nWhat is your effective_cache_size set to?\n\nCheers,\n\nJeff\n\n                                                            QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.56..4001768.10 rows=479020 width=26) (actual time=2.303..15371.237 rows=479020 loops=1)\n   Output: path.pathid, batch.filename\n   Buffers: shared hit=2403958 read=7539\n   ->  Seq Scan on public.batch  (cost=0.00..11727.20 rows=479020 width=85) (actual time=0.340..160.142 rows=479020 loops=1)\n         Output: batch.path, batch.filename\n         Buffers: shared read=6937\n   ->  Index Scan using idx_path on public.path  (cost=0.56..8.32 rows=1 width=16) (actual time=0.030..0.031 rows=1 loops=479020)\n         Output: path.pathid, path.path\n         Index Cond: (path.path = batch.path)\n         Buffers: shared hit=2403958 read=602\n Total runtime: 15439.043 ms\n\n\nAs you can see, more than twice as fast, and a very high hit ratio on the path table, even if we start from a cold cache (I did, here, both PostgreSQL and OS). We have an excellent hit ratio because the batch table contains few different path (several files in a directory), and is already quite clustered, as it comes from a backup, which is of course performed directory by directory.\nWhat is your effective_cache_size set to?Cheers,Jeff", "msg_date": "Thu, 19 Dec 2013 10:33:04 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan not optimal" }, { "msg_contents": "\n\nOn 19/12/2013 19:33, Jeff Janes wrote:\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.56..4001768.10 rows=479020 width=26) (actual\n> time=2.303..15371.237 rows=479020 loops=1)\n> Output: path.pathid, batch.filename\n> Buffers: shared hit=2403958 read=7539\n> -> Seq Scan on public.batch (cost=0.00..11727.20 rows=479020\n> width=85) (actual time=0.340..160.142 rows=479020 loops=1)\n> Output: batch.path, batch.filename\n> Buffers: shared read=6937\n> -> Index Scan using idx_path on public.path (cost=0.56..8.32\n> rows=1 width=16) (actual time=0.030..0.031 rows=1 loops=479020)\n> Output: path.pathid, path.path\n> Index Cond: (path.path = batch.path)\n> Buffers: shared hit=2403958 read=602\n> Total runtime: 15439.043 ms\n> \n> \n> As you can see, more than twice as fast, and a very high hit ratio\n> on the path table, even if we start from a cold cache (I did, here,\n> both PostgreSQL and OS). We have an excellent hit ratio because the\n> batch table contains few different path (several files in a\n> directory), and is already quite clustered, as it comes from a\n> backup, which is of course performed directory by directory.\n> \n> \n> What is your effective_cache_size set to?\n> \n> Cheers,\n> \n> Jeff\nYeah, I had forgotten to set it up correctly on this test environment\n(its value is correctly set in production environments). 
Putting it to a\nfew gigabytes here gives me this cost:\n\nbacula=# explain select pathid, filename from batch join path using (path);\n QUERY PLAN\n----------------------------------------------------------------------------\n Nested Loop (cost=0.56..2083904.10 rows=479020 width=26)\n -> Seq Scan on batch (cost=0.00..11727.20 rows=479020 width=85)\n -> Index Scan using idx_path on path (cost=0.56..4.32 rows=1 width=16)\n Index Cond: (path = batch.path)\n(4 lignes)\n\nIt still chooses the hash join though, but by a smaller margin.\n\nAnd it still only will access a very small part of path (always the same\n5000 records) during the query, which isn't accounted for in the cost if\nI understand correctly ?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 20:00:16 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan not optimal" }, { "msg_contents": "Marc Cousin <[email protected]> wrote:\n\n> Then we insert missing paths. This is one of the plans that fail\n\n> insert into path (path)\n>   select path from batch\n>     where not exists\n>           (select 1 from path where path.path=batch.path)\n>     group by path;\n\nI know you said you wanted to focus on a different query, but this\none can easily be optimized.  Right now it is checking for an\nexisting row in path for each row in batch; and it only needs to\ncheck once for each path.  One way to write it would be:\n\ninsert into path (path)\n  select path from (select distinct path from batch) b\n    where not exists\n          (select 1 from path p where p.path = b.path);\n\n> So now we insert into the file table.\n>\n> insert into file (pathid,filename)\n>   select pathid, filename from batch join path using (path);\n\n> What I think is the cause of the problem is that the planner\n> doesn't take into account that we are going to fetch the exact\n> same values all the time in the path table, so we'll have a very\n> good hit ratio.\n\nIt kinda takes that into account for the index part of things via\nthe effective_cache_size setting.  That should normally be set to\n50% to 75% of machine RAM.\n\n> Maybe the n_distinct from batch.path could be used to refine the\n> caching effect on the index scan ?\n\nInteresting idea.\n\n> For now, we work around this by using very low values for\n> seq_page_cost and random_page_cost for these 2 queries.\nIf you are not already doing so, you might want to try setting\ncpu_tuple_cost to something in the 0.03 to 0.05 range.  I have\nfound that the default is too low compared to other cpu cost\nfactors, and raising it makes the exact settings for page costs\nless sensitive -- that is, you get good plans over a wider range of\npage cost settings.  I have sometimes been unable to get a good\nplan for a query without boosting this, regardless of what I do\nwith other settings.\n\nRunning with a development build on my 16GB development PC, I got\nyour fast plan with your \"big data\" test case by making only this\none adjustment from the postgresql.conf defaults:\n\nset effective_cache_size = '2GB';\n\nI also got the fast plan if I left effective_cache_size at the\ndefault and only changed:\n\nset cpu_tuple_cost = 0.03;\n\nI know that there have been adjustments to cost calculations for\nuse of large indexes in both minor and major releases.  
If a little\nsensible tuning of cost factors to better match reality doesn't do\nit for you, you might want to consider an upgrade.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 12:36:19 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan not optimal" }, { "msg_contents": "On 19/12/2013 21:36, Kevin Grittner wrote:\n> Marc Cousin <[email protected]> wrote:\n>\n>> Then we insert missing paths. This is one of the plans that fail\n>> insert into path (path)\n>> select path from batch\n>> where not exists\n>> (select 1 from path where path.path=batch.path)\n>> group by path;\n> I know you said you wanted to focus on a different query, but this\n> one can easily be optimized. Right now it is checking for an\n> existing row in path for each row in batch; and it only needs to\n> check once for each path. One way to write it would be:\n>\n> insert into path (path)\n> select path from (select distinct path from batch) b\n> where not exists\n> (select 1 from path p where p.path = b.path);\nYeah, I know, that's why I said I didn't want to focus on this one... we\nalready do this optimization :)\n\nThanks anyway :)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 07:05:35 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan not optimal" }, { "msg_contents": "On Thursday, December 19, 2013, Marc Cousin wrote:\n\n>\n>\n> Yeah, I had forgotten to set it up correctly on this test environment\n> (its value is correctly set in production environments). Putting it to a\n> few gigabytes here gives me this cost:\n>\n> bacula=# explain select pathid, filename from batch join path using (path);\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------\n> Nested Loop (cost=0.56..2083904.10 rows=479020 width=26)\n> -> Seq Scan on batch (cost=0.00..11727.20 rows=479020 width=85)\n> -> Index Scan using idx_path on path (cost=0.56..4.32 rows=1 width=16)\n> Index Cond: (path = batch.path)\n> (4 rows)\n>\n> It still chooses the hash join though, but by a smaller margin.\n>\n\nThis is still a tangent from your original point, but if I create an index on\npath (path, pathid), then I can get an index-only scan. This actually is\nnot much faster when everything is already cached, but the planner thinks\nit will be about 2x faster because it assumes the vm block accesses are\nfree. So this might be enough to tip it over for you.\n\n\n> And it still only will access a very small part of path (always the same\n> 5000 records) during the query, which isn't accounted for in the cost if\n> I understand correctly ?\n>\n\nI think you are correct, that it doesn't take account of ndistinct being 10\nto 100 fold less than ntuples on the outer loop, which theoretically could\npropagate down to the table size used in connection with\neffective_cache_size.\n\nIt seems to me the cost of the hash join is being greatly underestimated,\nwhich I think is more important than the nested loop being overestimated.\n(And in my hands, the merge join is actually the winner both in the\nplanner and in reality, but I suspect this is because all of your fake\npaths are lexically greater than all of the real paths)\n\nAlso, you talked earlier about cheating the planner by lowering\nrandom_page_cost. But why is that cheating? If caching means the cost is\ntruly lower...\n\nCheers,\n\nJeff\n", "msg_date": "Sun, 29 Dec 2013 10:51:46 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "query plan not optimal" }, { "msg_contents": "On 29/12/2013 19:51, Jeff Janes wrote:\n> On Thursday, December 19, 2013, Marc Cousin wrote:\n> \n> \n> \n> Yeah, I had forgotten to set it up correctly on this test environment\n> (its value is correctly set in production environments). 
Putting it to a\n> few gigabytes here gives me this cost:\n> \n> bacula=# explain select pathid, filename from batch join path using\n> (path);\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> Nested Loop (cost=0.56..2083904.10 rows=479020 width=26)\n> -> Seq Scan on batch (cost=0.00..11727.20 rows=479020 width=85)\n> -> Index Scan using idx_path on path (cost=0.56..4.32 rows=1\n> width=16)\n> Index Cond: (path = batch.path)\n> (4 lignes)\n> \n> It still chooses the hash join though, but by a smaller margin.\n> \n> \n> This is still a tangent from your original point, but if I create index\n> on path (path, pathid), then I can get an index only scan. This\n> actually is not much faster when everything is already cached, but the\n> planner thinks it will be about 2x faster because it assumes the vm\n> block accesses are free. So this might be enough to tip it over for you.\n\nYeah, still a tangent :)\n\nMany bacula users don't have index only scans (the one I was having\ntrouble with for example), as they are still using an older than 9.2\nPostgreSQL version.\n\n> \n> \n> And it still only will access a very small part of path (always the same\n> 5000 records) during the query, which isn't accounted for in the cost if\n> I understand correctly ?\n> \n> \n> I think you are correct, that it doesn't take account of ndistinct being\n> 10 to 100 fold less than ntuples on the outer loop, which theoretically\n> could propagate down to the table size used in connection with\n> effecitve_cache_size.\n> \n> It seems to me the cost of the hash join is being greatly\n> underestimated, which I think is more important than the nested loop\n> being overestimated. (And in my hands, the merge join is actually the\n> winner both in the planner and in reality, but I suspect this is because\n> all of your fake paths are lexically greater than all of the real paths)\nYes, probably.\n\n> \n> Also, you talked earlier about cheating the planner by lowering\n> random_page_cost. But why is that cheating? If caching means the cost\n> is truly lower... \nIt feels like cheating, as I feel I'm compensating for what looks like a\n\"bad\" estimate of the cost: the nested loop is very fast, even if\nnothing is cached at the beginning. We could put the *_page_cost\nhardcoded to low values in bacula's code for this query, but it is not\nthat good to put it in postgresql.conf as we currently do, as some other\nqueries are suffering from those very low costs. Anyway, it would be\neven better if it wasn't needed at all, hence this post :)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 06 Jan 2014 15:41:35 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query plan not optimal" } ]
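The workaround Marc describes (lowering seq_page_cost and random_page_cost for just these queries) can be scoped safely with SET LOCAL, which only lasts until the end of the enclosing transaction. A minimal sketch, assuming the batch/path/file tables from the thread; the cost values are chosen for illustration only:

    BEGIN;
    SET LOCAL seq_page_cost = 1.0;     -- only visible inside this transaction
    SET LOCAL random_page_cost = 1.1;  -- makes the repeated index scan on path look cheap
    INSERT INTO file (pathid, filename)
        SELECT pathid, filename FROM batch JOIN path USING (path);
    COMMIT;

Other sessions, and later statements in the same session, keep the default cost settings, so the override cannot leak into unrelated plans.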
[ { "msg_contents": "I'm working on setting up a large database (or at least what I consider to\nbe a large one with several tables having 10-20 million records inserted\nper day), and I've been using pgbench to verify that the hardware and\ndatabase are configured in an optimal manner.\n\nWhen I run pgbench in \"SELECT only\" after doing \"-i -s 2000\" I get what\nappears to be good performance (60k-70k tps) but if I initialize a new\ndatabase with \"-i -s 4000\" the tps drops to 4k-7k. Is this order of\nmagnitude drop expected? Or is there something wrong with my hardware or\ndatabase configuration that is causing this issue?\n\nThe one option that I've been considering is partitioning and I've been\nasking about it here:\nhttp://www.postgresql.org/message-id/CAAcYxUcb0NFfMDsMOCL5scNRrUL7=9hKxjz021JMQp0r7f5sCQ@mail.gmail.com\n\nI'm working on setting up a large database (or at least what I consider to be a large one with several tables having 10-20 million records inserted per day), and I've been using pgbench to verify that the hardware and database are configured in an optimal manner.\nWhen I run pgbench in \"SELECT only\" after doing \"-i -s 2000\" I get what appears to be good performance (60k-70k tps) but if I initialize a new database with \"-i -s 4000\" the tps drops to 4k-7k. Is this order of magnitude drop expected? Or is there something wrong with my hardware or database configuration that is causing this issue?\nThe one option that I've been considering is partitioning and I've been asking about it here:http://www.postgresql.org/message-id/CAAcYxUcb0NFfMDsMOCL5scNRrUL7=9hKxjz021JMQp0r7f5sCQ@mail.gmail.com", "msg_date": "Thu, 19 Dec 2013 10:00:38 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Unexpected pgbench result" }, { "msg_contents": "On 12/19/2013 11:00 AM, Dave Johansen wrote:\n\n> When I run pgbench in \"SELECT only\" after doing \"-i -s 2000\" I get what\n> appears to be good performance (60k-70k tps) but if I initialize a new\n> database with \"-i -s 4000\" the tps drops to 4k-7k. Is this order of\n> magnitude drop expected? Or is there something wrong with my hardware or\n> database configuration that is causing this issue?\n\nWhen you increase the size of the initialized pgbench tables, you \nincrease the size on disk. My guess is that you doubled it so that the \ndata no longer fits in memory. You can verify this yourself:\n\nSELECT pg_size_pretty(sum(pg_database_size(oid))::bigint)\n from pg_database;\n\nAny amount of memory you have that is smaller than that, will affect \nselect performance. I can guarantee you will not get 60k-70k tps from \nanything short of an array of SSD devices or a PCIe NVRAM solution. Your \n'-s 2000' test was probably running mostly from memory, while the '-s \n4000' did not.\n\nWhat you're seeing is the speed your records are being supplied from \ndisk, plus whatever cache effects are there when records are read before \nthey are flushed in favor of more recent data.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 19 Dec 2013 11:44:09 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected pgbench result" }, { "msg_contents": "On Thu, Dec 19, 2013 at 10:44 AM, Shaun Thomas <[email protected]>wrote:\n\n> On 12/19/2013 11:00 AM, Dave Johansen wrote:\n>\n> When I run pgbench in \"SELECT only\" after doing \"-i -s 2000\" I get what\n>> appears to be good performance (60k-70k tps) but if I initialize a new\n>> database with \"-i -s 4000\" the tps drops to 4k-7k. Is this order of\n>> magnitude drop expected? Or is there something wrong with my hardware or\n>> database configuration that is causing this issue?\n>>\n>\n> When you increase the size of the initialized pgbench tables, you increase\n> the size on disk. My guess is that you doubled it so that the data no\n> longer fits in memory. You can verify this yourself:\n>\n> SELECT pg_size_pretty(sum(pg_database_size(oid))::bigint)\n> from pg_database;\n>\n> Any amount of memory you have that is smaller than that, will affect\n> select performance. I can guarantee you will not get 60k-70k tps from\n> anything short of an array of SSD devices or a PCIe NVRAM solution. Your\n> '-s 2000' test was probably running mostly from memory, while the '-s 4000'\n> did not.\n>\n> What you're seeing is the speed your records are being supplied from disk,\n> plus whatever cache effects are there when records are read before they are\n> flushed in favor of more recent data.\n>\n\nYep, that was exactly it and that definitely makes sense now that you point\nit out.\n\nRight now, we're running a RAID 1 for pg_clog, pg_log and pg_xlog and then\na RAID 1+0 with 12 disks for the data. Would there be any benefit to\nrunning a separate RAID 1+0 with a tablespace for the indexes? Or is\nreading the indexes and data a serial process where separating them like\nthat won't have any big benefit?\n\nThanks,\nDave\n\nOn Thu, Dec 19, 2013 at 10:44 AM, Shaun Thomas <[email protected]> wrote:\nOn 12/19/2013 11:00 AM, Dave Johansen wrote:\n\n\nWhen I run pgbench in \"SELECT only\" after doing \"-i -s 2000\" I get what\nappears to be good performance (60k-70k tps) but if I initialize a new\ndatabase with \"-i -s 4000\" the tps drops to 4k-7k. Is this order of\nmagnitude drop expected? Or is there something wrong with my hardware or\ndatabase configuration that is causing this issue?\n\n\nWhen you increase the size of the initialized pgbench tables, you increase the size on disk. My guess is that you doubled it so that the data no longer fits in memory. You can verify this yourself:\n\nSELECT pg_size_pretty(sum(pg_database_size(oid))::bigint)\n  from pg_database;\n\nAny amount of memory you have that is smaller than that, will affect select performance. I can guarantee you will not get 60k-70k tps from anything short of an array of SSD devices or a PCIe NVRAM solution. 
Your '-s 2000' test was probably running mostly from memory, while the '-s 4000' did not.\n\nWhat you're seeing is the speed your records are being supplied from disk, plus whatever cache effects are there when records are read before they are flushed in favor of more recent data.\nYep, that was exactly it and that definitely makes sense now that you point it out.Right now, we're running a RAID 1 for pg_clog, pg_log and pg_xlog and then a RAID 1+0 with 12 disks for the data. Would there be any benefit to running a separate RAID 1+0 with a tablespace for the indexes? Or is reading the indexes and data a serial process where separating them like that won't have any big benefit?\nThanks,Dave", "msg_date": "Thu, 19 Dec 2013 15:06:18 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected pgbench result" }, { "msg_contents": "On 12/19/2013 04:06 PM, Dave Johansen wrote:\n\n> Right now, we're running a RAID 1 for pg_clog, pg_log and pg_xlog and\n> then a RAID 1+0 with 12 disks for the data. Would there be any benefit\n> to running a separate RAID 1+0 with a tablespace for the indexes?\n\nNot really. PostgreSQL doesn't currently support parallel backend \nfetches, aggregation, or really anything. It's looking like 9.4 will get \nus a lot closer to that, but right now, everything is serial.\n\nSerial or not, separate backends will have separate read concerns, and \nPostgreSQL 9.2 and above *do* support index only scans. So \ntheoretically, you might actually see some benefit there. If it were me \nand I had spindles available, I would just increase the overall size of \nthe pool. It's a lot easier than managing multiple tablespaces.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 08:10:32 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected pgbench result" }, { "msg_contents": "On Fri, Dec 20, 2013 at 7:10 AM, Shaun Thomas <[email protected]>wrote:\n\n> On 12/19/2013 04:06 PM, Dave Johansen wrote:\n>\n> Right now, we're running a RAID 1 for pg_clog, pg_log and pg_xlog and\n>> then a RAID 1+0 with 12 disks for the data. Would there be any benefit\n>> to running a separate RAID 1+0 with a tablespace for the indexes?\n>>\n>\n> Not really. PostgreSQL doesn't currently support parallel backend fetches,\n> aggregation, or really anything. It's looking like 9.4 will get us a lot\n> closer to that, but right now, everything is serial.\n>\n> Serial or not, separate backends will have separate read concerns, and\n> PostgreSQL 9.2 and above *do* support index only scans. So theoretically,\n> you might actually see some benefit there. If it were me and I had spindles\n> available, I would just increase the overall size of the pool. It's a lot\n> easier than managing multiple tablespaces.\n>\n\nOk, that makes sense. Is there a benefit to having the WAL and logs on the\nseparate RAID 1? 
Or is just having them be part of the larger RAID 1+0 just\nas good?\n\nOn Fri, Dec 20, 2013 at 7:10 AM, Shaun Thomas <[email protected]> wrote:\nOn 12/19/2013 04:06 PM, Dave Johansen wrote:\n\n\nRight now, we're running a RAID 1 for pg_clog, pg_log and pg_xlog and\nthen a RAID 1+0 with 12 disks for the data. Would there be any benefit\nto running a separate RAID 1+0 with a tablespace for the indexes?\n\n\nNot really. PostgreSQL doesn't currently support parallel backend fetches, aggregation, or really anything. It's looking like 9.4 will get us a lot closer to that, but right now, everything is serial.\n\nSerial or not, separate backends will have separate read concerns, and PostgreSQL 9.2 and above *do* support index only scans. So theoretically, you might actually see some benefit there. If it were me and I had spindles available, I would just increase the overall size of the pool. It's a lot easier than managing multiple tablespaces.\nOk, that makes sense. Is there a benefit to having the WAL and logs on the separate RAID 1? Or is just having them be part of the larger RAID 1+0 just as good?", "msg_date": "Fri, 20 Dec 2013 08:02:48 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected pgbench result" }, { "msg_contents": "Dave Johansen <[email protected]> wrote:\n\n> Is there a benefit to having the WAL and logs on the separate\n> RAID 1? Or is just having them be part of the larger RAID 1+0\n> just as good?\n\nI once accidentally left the pg_xlog directory on the 40-spindle\nRAID with most of the data instead of moving it.  Results with\ngraph here:\n\nhttp://www.postgresql.org/message-id/[email protected]\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 20 Dec 2013 07:22:10 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected pgbench result" }, { "msg_contents": "On Fri, Dec 20, 2013 at 8:22 AM, Kevin Grittner <[email protected]> wrote:\n\n> Dave Johansen <[email protected]> wrote:\n>\n> > Is there a benefit to having the WAL and logs on the separate\n> > RAID 1? Or is just having them be part of the larger RAID 1+0\n> > just as good?\n>\n> I once accidentally left the pg_xlog directory on the 40-spindle\n> RAID with most of the data instead of moving it. Results with\n> graph here:\n>\n>\n> http://www.postgresql.org/message-id/[email protected]\n>\n\nThat's very helpful information. Thanks for sharing it,\nDave\n\nOn Fri, Dec 20, 2013 at 8:22 AM, Kevin Grittner <[email protected]> wrote:\nDave Johansen <[email protected]> wrote:\n\n> Is there a benefit to having the WAL and logs on the separate\n> RAID 1? Or is just having them be part of the larger RAID 1+0\n> just as good?\n\nI once accidentally left the pg_xlog directory on the 40-spindle\nRAID with most of the data instead of moving it.  Results with\ngraph here:\n\nhttp://www.postgresql.org/message-id/[email protected]\nThat's very helpful information. Thanks for sharing it,Dave", "msg_date": "Fri, 20 Dec 2013 08:40:35 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected pgbench result" } ]
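A quick way to check Shaun's "does it still fit in memory" explanation for any pgbench scale factor is to compare the database size against the memory-related settings. A small sketch; current_setting() simply reports the configured values as text:

    SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size,
           current_setting('shared_buffers')       AS shared_buffers,
           current_setting('effective_cache_size') AS effective_cache_size;

If db_size is well above effective_cache_size, a SELECT-only run can be expected to be disk-bound rather than memory-bound, which matches the tps drop observed between -s 2000 and -s 4000.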
[ { "msg_contents": "I just ran into an interesting issue on Postgres 8.4. I have a database\nwith about 3 months of data and when I do following query:\nSELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY\ntime_t;\n\nEXPLAIN shows that it's doing a sort and then a GroupAggregate. There will\nonly be ~90 outputs, so is there a way I can hint/force the planner to just\ndo a HashAggregate?\n\nJust to see if it would change the plan, I tried increasing the work_mem up\nto 1GB and it still did the same plan.\n\nThanks,\nDave\n\nI just ran into an interesting issue on Postgres 8.4. I have a database with about 3 months of data and when I do following query:SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY time_t;\nEXPLAIN shows that it's doing a sort and then a GroupAggregate. There will only be ~90 outputs, so is there a way I can hint/force the planner to just do a HashAggregate?Just to see if it would change the plan, I tried increasing the work_mem up to 1GB and it still did the same plan.\nThanks,Dave", "msg_date": "Thu, 19 Dec 2013 17:35:11 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "DATE_TRUNC() and GROUP BY?" }, { "msg_contents": "On Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]>wrote:\n\n> I just ran into an interesting issue on Postgres 8.4. I have a database\n> with about 3 months of data and when I do following query:\n> SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY\n> time_t;\n>\n> EXPLAIN shows that it's doing a sort and then a GroupAggregate. There will\n> only be ~90 outputs, so is there a way I can hint/force the planner to just\n> do a HashAggregate?\n>\n> Just to see if it would change the plan, I tried increasing the work_mem\n> up to 1GB and it still did the same plan.\n>\n>\nPostgreSQL does not really have any stats on the selectivity of\ndate_trunc('day', time) so my guess is that it can only assume that it has\nthe same selectivity as the time column by itself... Which is very untrue\nin this case.\nThe group aggregate plan is chosen here as PostgreSQL thinks the the hash\ntable is going to end up pretty big and decides that the group aggregate\nwill be the cheaper option.\n\nI mocked up your data and on 9.4 I can get the hash aggregate plan to run\nif I set the n_distinct value to 90 then analyze the table again.. Even if\nyou could do this on 8.4 I'd not recommend it as it will probably cause\nhavoc with other plans around the time column. I did also get the hash\naggregate plan to run if I created a functional index on date_trunc('day',\ntime) then ran analyze again. I don't have a copy of 8.4 around to see if\nthe planner will make use of the index in the same way.\n\nWhat would be really nice is if we could create our own statistics on what\nwe want, something like:\n\nCREATE STATISTICS name ON table (date_trunc('day', time));\n\nThat way postgres could have a better idea of the selectivity in this\nsituation.\n\nI'd give creating the function index a try, but keep in mind the overhead\nthat it will cause with inserts, updates and deletes.\n\nRegards\n\nDavid Rowley\n\n\n> Thanks,\n> Dave\n>\n\nOn Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]> wrote:\nI just ran into an interesting issue on Postgres 8.4. 
I have a database with about 3 months of data and when I do following query:\nSELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY time_t;\nEXPLAIN shows that it's doing a sort and then a GroupAggregate. There will only be ~90 outputs, so is there a way I can hint/force the planner to just do a HashAggregate?Just to see if it would change the plan, I tried increasing the work_mem up to 1GB and it still did the same plan.\nPostgreSQL does not really have any stats on the selectivity of date_trunc('day', time) so my guess is that it can only assume that it has the same selectivity as the time column by itself... Which is very untrue in this case.\nThe group aggregate plan is chosen here as PostgreSQL thinks the the hash table is going to end up pretty big and decides that the group aggregate will be the cheaper option.I mocked up your data and on 9.4 I can get the hash aggregate plan to run if I set the n_distinct value to 90 then analyze the table again.. Even if you could do this on 8.4 I'd not recommend it as it will probably cause havoc with other plans around the time column. I did also get the hash aggregate plan to run if I created a functional index on date_trunc('day', time) then ran analyze again. I don't have a copy of 8.4 around to see if the planner will make use of the index in the same way.\nWhat would be really nice is if we could create our own statistics on what we want, something like:CREATE STATISTICS name ON table (date_trunc('day', time));\nThat way postgres could have a better idea of the selectivity in this situation.I'd give creating the function index a try, but keep in mind the overhead that it will cause with inserts, updates and deletes.\nRegardsDavid Rowley Thanks,Dave", "msg_date": "Sat, 21 Dec 2013 18:46:22 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DATE_TRUNC() and GROUP BY?" }, { "msg_contents": "On Fri, Dec 20, 2013 at 10:46 PM, David Rowley <[email protected]> wrote:\n\n> On Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]>wrote:\n>\n>> I just ran into an interesting issue on Postgres 8.4. I have a database\n>> with about 3 months of data and when I do following query:\n>> SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY\n>> time_t;\n>>\n>> EXPLAIN shows that it's doing a sort and then a GroupAggregate. There\n>> will only be ~90 outputs, so is there a way I can hint/force the planner to\n>> just do a HashAggregate?\n>>\n>> Just to see if it would change the plan, I tried increasing the work_mem\n>> up to 1GB and it still did the same plan.\n>>\n>>\n> PostgreSQL does not really have any stats on the selectivity of\n> date_trunc('day', time) so my guess is that it can only assume that it has\n> the same selectivity as the time column by itself... Which is very untrue\n> in this case.\n> The group aggregate plan is chosen here as PostgreSQL thinks the the hash\n> table is going to end up pretty big and decides that the group aggregate\n> will be the cheaper option.\n>\n> I mocked up your data and on 9.4 I can get the hash aggregate plan to run\n> if I set the n_distinct value to 90 then analyze the table again.. Even if\n> you could do this on 8.4 I'd not recommend it as it will probably cause\n> havoc with other plans around the time column. I did also get the hash\n> aggregate plan to run if I created a functional index on date_trunc('day',\n> time) then ran analyze again. 
I don't have a copy of 8.4 around to see if\n> the planner will make use of the index in the same way.\n>\n> What would be really nice is if we could create our own statistics on what\n> we want, something like:\n>\n> CREATE STATISTICS name ON table (date_trunc('day', time));\n>\n> That way postgres could have a better idea of the selectivity in this\n> situation.\n>\n> I'd give creating the function index a try, but keep in mind the overhead\n> that it will cause with inserts, updates and deletes.\n>\n\nThanks for the advice and I'll give the index a try. Are there any other\ntricks that I could try? Like maybe a custom aggregate or data type\nconversion (truncated date in seconds since an epoch) that would make the\nplanner do the right thing? Or will those two ideas just run into the same\nplanner problem?\n\nThanks again,\nDave\n\nOn Fri, Dec 20, 2013 at 10:46 PM, David Rowley <[email protected]> wrote:\nOn Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]> wrote:\nI just ran into an interesting issue on Postgres 8.4. I have a database with about 3 months of data and when I do following query:\n\nSELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY time_t;\nEXPLAIN shows that it's doing a sort and then a GroupAggregate. There will only be ~90 outputs, so is there a way I can hint/force the planner to just do a HashAggregate?Just to see if it would change the plan, I tried increasing the work_mem up to 1GB and it still did the same plan.\nPostgreSQL does not really have any stats on the selectivity of date_trunc('day', time) so my guess is that it can only assume that it has the same selectivity as the time column by itself... Which is very untrue in this case.\nThe group aggregate plan is chosen here as PostgreSQL thinks the the hash table is going to end up pretty big and decides that the group aggregate will be the cheaper option.I mocked up your data and on 9.4 I can get the hash aggregate plan to run if I set the n_distinct value to 90 then analyze the table again.. Even if you could do this on 8.4 I'd not recommend it as it will probably cause havoc with other plans around the time column. I did also get the hash aggregate plan to run if I created a functional index on date_trunc('day', time) then ran analyze again. I don't have a copy of 8.4 around to see if the planner will make use of the index in the same way.\nWhat would be really nice is if we could create our own statistics on what we want, something like:CREATE STATISTICS name ON table (date_trunc('day', time));\nThat way postgres could have a better idea of the selectivity in this situation.I'd give creating the function index a try, but keep in mind the overhead that it will cause with inserts, updates and deletes.\nThanks for the advice and I'll give the index a try. Are there any other tricks that I could try? Like maybe a custom aggregate or data type conversion (truncated date in seconds since an epoch) that would make the planner do the right thing? Or will those two ideas just run into the same planner problem?\nThanks again,Dave", "msg_date": "Thu, 2 Jan 2014 11:39:53 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DATE_TRUNC() and GROUP BY?" 
}, { "msg_contents": "On Thu, Jan 2, 2014 at 11:39 AM, Dave Johansen <[email protected]>wrote:\n\n> On Fri, Dec 20, 2013 at 10:46 PM, David Rowley <[email protected]>wrote:\n>\n>> On Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]>wrote:\n>>\n>>> I just ran into an interesting issue on Postgres 8.4. I have a database\n>>> with about 3 months of data and when I do following query:\n>>> SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY\n>>> time_t;\n>>>\n>>> EXPLAIN shows that it's doing a sort and then a GroupAggregate. There\n>>> will only be ~90 outputs, so is there a way I can hint/force the planner to\n>>> just do a HashAggregate?\n>>>\n>>> Just to see if it would change the plan, I tried increasing the work_mem\n>>> up to 1GB and it still did the same plan.\n>>>\n>>>\n>> PostgreSQL does not really have any stats on the selectivity of\n>> date_trunc('day', time) so my guess is that it can only assume that it has\n>> the same selectivity as the time column by itself... Which is very untrue\n>> in this case.\n>> The group aggregate plan is chosen here as PostgreSQL thinks the the hash\n>> table is going to end up pretty big and decides that the group aggregate\n>> will be the cheaper option.\n>>\n>> I mocked up your data and on 9.4 I can get the hash aggregate plan to run\n>> if I set the n_distinct value to 90 then analyze the table again.. Even if\n>> you could do this on 8.4 I'd not recommend it as it will probably cause\n>> havoc with other plans around the time column. I did also get the hash\n>> aggregate plan to run if I created a functional index on date_trunc('day',\n>> time) then ran analyze again. I don't have a copy of 8.4 around to see if\n>> the planner will make use of the index in the same way.\n>>\n>\nI just tried this on 8.4 and it won't create the index because DATE_TRUNC()\nis not immutable. The exact error is:\nERROR: function in index expression must be marked IMMUTABLE\n\nAny suggestions or other ideas?\n\nThanks,\nDave\n\nOn Thu, Jan 2, 2014 at 11:39 AM, Dave Johansen <[email protected]> wrote:\nOn Fri, Dec 20, 2013 at 10:46 PM, David Rowley <[email protected]> wrote:\nOn Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]> wrote:\nI just ran into an interesting issue on Postgres 8.4. I have a database with about 3 months of data and when I do following query:\n\n\nSELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY time_t;\nEXPLAIN shows that it's doing a sort and then a GroupAggregate. There will only be ~90 outputs, so is there a way I can hint/force the planner to just do a HashAggregate?Just to see if it would change the plan, I tried increasing the work_mem up to 1GB and it still did the same plan.\nPostgreSQL does not really have any stats on the selectivity of date_trunc('day', time) so my guess is that it can only assume that it has the same selectivity as the time column by itself... Which is very untrue in this case.\nThe group aggregate plan is chosen here as PostgreSQL thinks the the hash table is going to end up pretty big and decides that the group aggregate will be the cheaper option.I mocked up your data and on 9.4 I can get the hash aggregate plan to run if I set the n_distinct value to 90 then analyze the table again.. Even if you could do this on 8.4 I'd not recommend it as it will probably cause havoc with other plans around the time column. I did also get the hash aggregate plan to run if I created a functional index on date_trunc('day', time) then ran analyze again. 
I don't have a copy of 8.4 around to see if the planner will make use of the index in the same way.\nI just tried this on 8.4 and it won't create the index because DATE_TRUNC() is not immutable. The exact error is:\nERROR:  function in index expression must be marked IMMUTABLEAny suggestions or other ideas?Thanks,Dave", "msg_date": "Thu, 2 Jan 2014 12:36:34 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DATE_TRUNC() and GROUP BY?" }, { "msg_contents": "On Thu, Jan 2, 2014 at 12:36 PM, Dave Johansen <[email protected]>wrote:\n\n> On Thu, Jan 2, 2014 at 11:39 AM, Dave Johansen <[email protected]>wrote:\n>\n>> On Fri, Dec 20, 2013 at 10:46 PM, David Rowley <[email protected]>wrote:\n>>\n>>> On Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]>wrote:\n>>>\n>>>> I just ran into an interesting issue on Postgres 8.4. I have a database\n>>>> with about 3 months of data and when I do following query:\n>>>> SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP\n>>>> BY time_t;\n>>>>\n>>>> EXPLAIN shows that it's doing a sort and then a GroupAggregate. There\n>>>> will only be ~90 outputs, so is there a way I can hint/force the planner to\n>>>> just do a HashAggregate?\n>>>>\n>>>> Just to see if it would change the plan, I tried increasing the\n>>>> work_mem up to 1GB and it still did the same plan.\n>>>>\n>>>>\n>>> PostgreSQL does not really have any stats on the selectivity of\n>>> date_trunc('day', time) so my guess is that it can only assume that it has\n>>> the same selectivity as the time column by itself... Which is very untrue\n>>> in this case.\n>>> The group aggregate plan is chosen here as PostgreSQL thinks the the\n>>> hash table is going to end up pretty big and decides that the group\n>>> aggregate will be the cheaper option.\n>>>\n>>> I mocked up your data and on 9.4 I can get the hash aggregate plan to\n>>> run if I set the n_distinct value to 90 then analyze the table again.. Even\n>>> if you could do this on 8.4 I'd not recommend it as it will probably cause\n>>> havoc with other plans around the time column. I did also get the hash\n>>> aggregate plan to run if I created a functional index on date_trunc('day',\n>>> time) then ran analyze again. I don't have a copy of 8.4 around to see if\n>>> the planner will make use of the index in the same way.\n>>>\n>>\n> I just tried this on 8.4 and it won't create the index because\n> DATE_TRUNC() is not immutable. The exact error is:\n> ERROR: function in index expression must be marked IMMUTABLE\n>\n> Any suggestions or other ideas?\n>\n\nI apologize for the multiple emails, but I just looked at the definition of\nDATE_TRUNC() and for TIMESTAMP WITHOUT TIME ZONE it's IMMUTABLE, so I will\nlook into switching to that and see if using the index speeds up the\nqueries.\n\nOn Thu, Jan 2, 2014 at 12:36 PM, Dave Johansen <[email protected]> wrote:\nOn Thu, Jan 2, 2014 at 11:39 AM, Dave Johansen <[email protected]> wrote:\nOn Fri, Dec 20, 2013 at 10:46 PM, David Rowley <[email protected]> wrote:\nOn Fri, Dec 20, 2013 at 1:35 PM, Dave Johansen <[email protected]> wrote:\nI just ran into an interesting issue on Postgres 8.4. I have a database with about 3 months of data and when I do following query:\n\n\n\nSELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY time_t;\nEXPLAIN shows that it's doing a sort and then a GroupAggregate. 
There will only be ~90 outputs, so is there a way I can hint/force the planner to just do a HashAggregate?Just to see if it would change the plan, I tried increasing the work_mem up to 1GB and it still did the same plan.\nPostgreSQL does not really have any stats on the selectivity of date_trunc('day', time) so my guess is that it can only assume that it has the same selectivity as the time column by itself... Which is very untrue in this case.\nThe group aggregate plan is chosen here as PostgreSQL thinks the the hash table is going to end up pretty big and decides that the group aggregate will be the cheaper option.I mocked up your data and on 9.4 I can get the hash aggregate plan to run if I set the n_distinct value to 90 then analyze the table again.. Even if you could do this on 8.4 I'd not recommend it as it will probably cause havoc with other plans around the time column. I did also get the hash aggregate plan to run if I created a functional index on date_trunc('day', time) then ran analyze again. I don't have a copy of 8.4 around to see if the planner will make use of the index in the same way.\nI just tried this on 8.4 and it won't create the index because DATE_TRUNC() is not immutable. The exact error is:\n\nERROR:  function in index expression must be marked IMMUTABLEAny suggestions or other ideas?\nI apologize for the multiple emails, but I just looked at the definition of DATE_TRUNC() and for TIMESTAMP WITHOUT TIME ZONE it's IMMUTABLE, so I will look into switching to that and see if using the index speeds up the queries.", "msg_date": "Thu, 2 Jan 2014 12:47:42 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DATE_TRUNC() and GROUP BY?" } ]
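The conclusion the thread arrives at, sketched out: date_trunc('day', ...) is IMMUTABLE when applied to TIMESTAMP WITHOUT TIME ZONE (it is only STABLE for TIMESTAMP WITH TIME ZONE, because the result then depends on the session time zone), so the expression index is possible once the column is a plain timestamp. The table and column names below come from the thread; the index name is made up:

    -- assumes mytable."time" is TIMESTAMP WITHOUT TIME ZONE
    CREATE INDEX mytable_time_day_idx ON mytable (date_trunc('day', "time"));
    ANALYZE mytable;  -- also gathers statistics on the indexed expression

With the expression statistics in place, the planner gets a realistic distinct-value estimate for date_trunc('day', "time") and is more likely to pick the HashAggregate.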
[ { "msg_contents": "Hello,\n\nI already posted this question to novice mail list and there is no answer yet. I've decided to post it again here.\n\nBefore posting the question here, I checked the mail list again for the same cases and found the message describing the case I started from: http://www.postgresql.org/message-id/[email protected]\nIt looks like there is no answer for that case too.\n\nI have a performance issue on using UNION ALL clause. Optimizer generates incorrect plan based on strange estimation of returned rows number. It suppose  that plan will be correct if this estimation is done correctly.\nThe following example helps to reproduce the issue:\n\nCREATE TABLE t1 (c1 INTEGER, id INTEGER PRIMARY KEY);\nINSERT INTO t1 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);\nREINDEX TABLE t1;\nANALYZE t1;\n\nCREATE TABLE t2 (c1 INTEGER, id INTEGER PRIMARY KEY);\nINSERT INTO t2 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);\nREINDEX TABLE t2;\nANALYZE t2;\n\nCREATE TABLE links (c1 INTEGER PRIMARY KEY, descr TEXT);\nINSERT INTO links (c1, descr) SELECT b, '2' FROM generate_series(1, 100000) a (b);\nREINDEX TABLE links;\nANALYZE links;\n\nCREATE TEMP TABLE t3 (ref_id INTEGER);\nINSERT INTO t3 (ref_id) VALUES (333333), (666666);\nANALYZE t3;\n\nIf I do the following:\nEXPLAIN ANALYZE SELECT * FROM (SELECT * FROM t1) t INNER JOIN t3 ON t3.ref_id = t.id INNER JOIN links l ON (t.c1 = l.c1);\n\nQUERY PLAN:\nNested Loop  (cost=0.00..18.39 rows=1 width=18) (actual time=0.056..0.056 rows=0 loops=1)\n  ->  Nested Loop  (cost=0.00..17.80 rows=2 width=12) (actual time=0.030..0.047 rows=2 loops=1)\n        ->  Seq Scan on t3  (cost=0.00..1.02 rows=2 width=4) (actual time=0.007..0.008 rows=2 loops=1)\n        ->  Index Scan using t1_pkey on t1  (cost=0.00..8.38 rows=1 width=8) (actual time=0.015..0.016 rows=1 loops=2)\n              Index Cond: (id = t3.ref_id)\n  ->  Index Scan using links_pkey on links l  (cost=0.00..0.28 rows=1 width=6) (actual time=0.004..0.004 rows=0 loops=2)\n        Index Cond: (c1 = t1.c1)\n\"Total runtime: 0.118 ms\"\n\nIt uses correctly index scan on \"links\" table and works normal.\n\nIf I do the following:\nEXPLAIN ANALYZE SELECT * FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) t INNER JOIN t3 ON t3.ref_id = t.id INNER JOIN links l ON (t.c1 = l.c1);\n\nQUERY PLAN:\nHash Join  (cost=2693.00..3127.58 rows=20000 width=18) (actual time=47.158..47.158 rows=0 loops=1)\n  Hash Cond: (t1.c1 = l.c1)\n  ->  Nested Loop  (cost=0.00..34.58 rows=20000 width=12) (actual time=0.049..0.101 rows=4 loops=1)\n        ->  Seq Scan on t3  (cost=0.00..1.02 rows=2 width=4) (actual time=0.010..0.011 rows=2 loops=1)\n        ->  Append  (cost=0.00..16.76 rows=2 width=8) (actual time=0.022..0.038 rows=2 loops=2)\n              ->  Index Scan using t1_pkey on t1  (cost=0.00..8.38 rows=1 width=8) (actual time=0.019..0.022 rows=1 loops=2)\n                    Index Cond: (id = t3.ref_id)\n              ->  Index Scan using t2_pkey on t2  (cost=0.00..8.38 rows=1 width=8) (actual time=0.011..0.012 rows=1 loops=2)\n                    Index Cond: (id = t3.ref_id)\n  ->  Hash  (cost=1443.00..1443.00 rows=100000 width=6) (actual time=46.988..46.988 rows=100000 loops=1)\n        Buckets: 16384  Batches: 1  Memory Usage: 3711kB\n        ->  Seq Scan on links l  (cost=0.00..1443.00 rows=100000 width=6) (actual time=0.015..17.443 rows=100000 loops=1)\n\"Total runtime: 47.246 ms\"\n\nIt uses sequence scan on \"links\" table because of strange estimation on selection with 
using UNION ALL. It is very slow. I have more that million rows in each tables.\n\nStrange estimation is shown bellow:\nEXPLAIN ANALYZE SELECT * FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) t INNER JOIN t3 ON t3.ref_id = t.id;\n\nQUERY PLAN:\nNested Loop  (cost=0.00..34.58 rows=20000 width=12) (actual time=0.049..0.099 rows=4 loops=1)\n  ->  Seq Scan on t3  (cost=0.00..1.02 rows=2 width=4) (actual time=0.009..0.010 rows=2 loops=1)\n  ->  Append  (cost=0.00..16.76 rows=2 width=8) (actual time=0.023..0.038 rows=2 loops=2)\n        ->  Index Scan using t1_pkey on t1  (cost=0.00..8.38 rows=1 width=8) (actual time=0.021..0.022 rows=1 loops=2)\n              Index Cond: (id = t3.ref_id)\n        ->  Index Scan using t2_pkey on t2  (cost=0.00..8.38 rows=1 width=8) (actual time=0.013..0.013 rows=1 loops=2)\n              Index Cond: (id = t3.ref_id)\n\"Total runtime: 0.164 ms\"\n\nThe strange estimation is seen at the first line:\nNested Loop  (cost=0.00..34.58 rows=20000 width=12) (actual time=0.049..0.099 rows=4 loops=1)\nAt the second line we can see that only two rows will be selected from joined table and 4 rows from two tables appended with UNION ALL, how it estimates that it should handle 20000 rows for that query?\nI suppose, that if it estimates 4 rows will be handled then it will use index scan on \"links\" table and will work fast.\n\nWith best regards, Aleksey Kuznetsov.\nHello,I already posted this question to novice mail list and there is no answer yet. I've decided to post it again here.Before posting the question here, I checked the mail list again for the same cases and found the message describing the case I started from: http://www.postgresql.org/message-id/[email protected] looks like there is no answer for that case too.I have a performance issue on using UNION ALL clause. Optimizer generates incorrect plan based on strange estimation of returned rows number. 
It suppose  that plan will be correct if this estimation is done correctly.The following example helps to reproduce the issue:CREATE TABLE t1 (c1 INTEGER, id INTEGER PRIMARY KEY);INSERT INTO t1 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);REINDEX TABLE t1;ANALYZE t1;CREATE TABLE t2 (c1 INTEGER, id INTEGER PRIMARY KEY);INSERT INTO t2 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);REINDEX TABLE t2;ANALYZE t2;CREATE TABLE links (c1 INTEGER PRIMARY KEY, descr TEXT);INSERT INTO links (c1, descr) SELECT b, '2' FROM generate_series(1, 100000) a (b);REINDEX TABLE links;ANALYZE links;CREATE TEMP TABLE t3 (ref_id INTEGER);INSERT INTO t3 (ref_id) VALUES (333333), (666666);ANALYZE t3;If I do the following:EXPLAIN ANALYZE SELECT * FROM (SELECT * FROM t1) t INNER JOIN t3 ON t3.ref_id = t.id INNER JOIN links l ON (t.c1 = l.c1);QUERY PLAN:Nested Loop  (cost=0.00..18.39 rows=1 width=18) (actual time=0.056..0.056 rows=0 loops=1)  ->  Nested Loop  (cost=0.00..17.80 rows=2 width=12) (actual time=0.030..0.047 rows=2 loops=1)        ->  Seq Scan on t3  (cost=0.00..1.02 rows=2 width=4) (actual time=0.007..0.008 rows=2 loops=1)        ->  Index Scan using t1_pkey on t1  (cost=0.00..8.38 rows=1 width=8) (actual time=0.015..0.016 rows=1 loops=2)              Index Cond: (id = t3.ref_id)  ->  Index Scan using links_pkey on links l  (cost=0.00..0.28 rows=1 width=6) (actual time=0.004..0.004 rows=0 loops=2)        Index Cond: (c1 = t1.c1)\"Total runtime: 0.118 ms\"It uses correctly index scan on \"links\" table and works normal.If I do the following:EXPLAIN ANALYZE SELECT * FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) t INNER JOIN t3 ON t3.ref_id = t.id INNER JOIN links l ON (t.c1 = l.c1);QUERY PLAN:Hash Join  (cost=2693.00..3127.58 rows=20000 width=18) (actual time=47.158..47.158 rows=0 loops=1)  Hash Cond: (t1.c1 = l.c1)  ->  Nested Loop  (cost=0.00..34.58 rows=20000 width=12) (actual time=0.049..0.101 rows=4 loops=1)        ->  Seq Scan on t3  (cost=0.00..1.02 rows=2 width=4) (actual time=0.010..0.011 rows=2 loops=1)        ->  Append  (cost=0.00..16.76 rows=2 width=8) (actual time=0.022..0.038 rows=2 loops=2)              ->  Index Scan using t1_pkey on t1  (cost=0.00..8.38 rows=1 width=8) (actual time=0.019..0.022 rows=1 loops=2)                    Index Cond: (id = t3.ref_id)              ->  Index Scan using t2_pkey on t2  (cost=0.00..8.38 rows=1 width=8) (actual time=0.011..0.012 rows=1 loops=2)                    Index Cond: (id = t3.ref_id)  ->  Hash  (cost=1443.00..1443.00 rows=100000 width=6) (actual time=46.988..46.988 rows=100000 loops=1)        Buckets: 16384  Batches: 1  Memory Usage: 3711kB        ->  Seq Scan on links l  (cost=0.00..1443.00 rows=100000 width=6) (actual time=0.015..17.443 rows=100000 loops=1)\"Total runtime: 47.246 ms\"It uses sequence scan on \"links\" table because of strange estimation on selection with using UNION ALL. It is very slow. 
, "msg_date": "Mon, 23 Dec 2013 10:32:46 +0400", "msg_from": "Aleksey Kuznetsov <[email protected]>", "msg_from_op": true, "msg_subject": "Strange number of rows in plan cost" } ]
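One workaround (a sketch, not taken from the thread; it assumes the t1/t2/t3/links setup quoted above) is to push the join into each arm of the UNION ALL, so that the planner estimates each arm separately instead of producing the 20000-row estimate seen above:

EXPLAIN ANALYZE
SELECT t.*, l.*
FROM (
    SELECT t1.c1, t1.id FROM t1 INNER JOIN t3 ON t3.ref_id = t1.id
    UNION ALL
    SELECT t2.c1, t2.id FROM t2 INNER JOIN t3 ON t3.ref_id = t2.id
) t
INNER JOIN links l ON (t.c1 = l.c1);

With only a couple of rows surviving each inner join, the join to "links" should be estimated small enough to use links_pkey rather than a sequential scan plus hash.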
[ { "msg_contents": "Hi,\n We have a requirement to store images/documents with an average size of 1-2MB on PostgreSQL database. We have PostgreSQL 9.2.4 running on Red hat linux 64 bit. We decided to setup a stand alone postgreSQL server without streaming replication to host the images/documents only. We are new to postgreSQL and we heard a lot of conversation about using Bytea vs Large object facility. We would be inserting and retrieving document as whole using java webservices call from hibernate/JPA interface into postgreSQL database. Is there any performance benchmark when using ByteA vs Large object facility? Is there a general guidance to use one of these?\n\nThanks,\nBabu\nHi, We have a requirement to store images/documents with an average size of 1-2MB on PostgreSQL database. We have PostgreSQL 9.2.4 running on Red hat linux 64 bit. We decided to setup a stand alone postgreSQL server without streaming replication to host the images/documents only. We are new to postgreSQL and we heard a lot of conversation about using Bytea vs Large object facility. We would be inserting and retrieving document as whole using java webservices call from hibernate/JPA interface into postgreSQL database. Is there any performance benchmark when using ByteA vs Large object facility? Is there a general guidance to use one of these?Thanks,Babu", "msg_date": "Mon, 23 Dec 2013 12:16:34 -0800 (PST)", "msg_from": "kosalram Babu Chellappa <[email protected]>", "msg_from_op": true, "msg_subject": "Bytea(TOAST) vs large object facility(OID)" }, { "msg_contents": "kosalram Babu Chellappa wrote:\r\n> We have a requirement to store images/documents with an average size of 1-2MB on PostgreSQL database.\r\n> We have PostgreSQL 9.2.4 running on Red hat linux 64 bit. We decided to setup a stand alone postgreSQL\r\n> server without streaming replication to host the images/documents only. We are new to postgreSQL and\r\n> we heard a lot of conversation about using Bytea vs Large object facility. We would be inserting and\r\n> retrieving document as whole using java webservices call from hibernate/JPA interface into postgreSQL\r\n> database. Is there any performance benchmark when using ByteA vs Large object facility? Is there a\r\n> general guidance to use one of these?\r\n\r\nI don't know anything about Hibernate, but since bytea is handled like\r\nany other regular data type, it should not present a problem.\r\nTo handle large objects, you need to use the large object API of\r\nPostgreSQL, which makes large objects different from other data types.\r\n\r\nSecond, large objects are stored in their own table, and the user table\r\nonly stores the object ID. When a row in the user table is deleted, the\r\nlarge object won't go away automatically; you'd have to write a trigger\r\nor something like that.\r\n\r\nThe real advantage of large objects comes when they are big enough that\r\nyou don't want to hold the whole thing in memory, but rather read and write\r\nthem in chunks.\r\n\r\nSince this is not the case in your setup, I think that bytea is better for you.\r\n\r\nGoing back a step, do you really want a database just to hold images and\r\ndocuments? 
That will be slower and more complicated than a simple file service,\r\nwhich would be a better solution for that requirement.\r\n\r\nYours,\r\nLaurenz Albe\r\n", "msg_date": "Tue, 24 Dec 2013 07:55:11 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bytea(TOAST) vs large object facility(OID)" } ]
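A minimal sketch of the bytea approach Laurenz recommends (table and column names are invented for illustration; values beyond the TOAST threshold of roughly 2 kB are compressed and stored out of line automatically):

CREATE TABLE document (
    id       serial PRIMARY KEY,
    name     text NOT NULL,
    content  bytea NOT NULL  -- TOAST handles large values transparently
);
-- from Hibernate/JPA the column maps to a plain byte[] property; no lo_* API calls are needed

If the large object route were chosen instead, the contrib module lo provides a lo_manage() trigger that deletes the referenced large object when its row is deleted, which addresses the cleanup issue mentioned above.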
[ { "msg_contents": "Follow these steps to delete iPhone data\n<http://www.transfer-iphone-recovery.com/delete-data-from-iphone.html> \nconveniently:\nStep 1. Launch the iPhone Data Eraser\n<http://www.transfer-iphone-recovery.com/iphone-data-eraser.html> and\nconnect your iPhone to the computer\n<http://postgresql.1045698.n5.nabble.com/file/n5784590/20131108024053_93083.jpg> \nStep 2. Choose Erase all data on device and click on it \nStep 3. Start to erase all data on your device now\n<http://postgresql.1045698.n5.nabble.com/file/n5784590/20131108024224_73358.jpg> \nStep 4. Click “Done” and the wiping completed\n<http://postgresql.1045698.n5.nabble.com/file/n5784590/20131108024300_41834.jpg> \nBest luck!\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-completely-delete-iPhone-all-data-before-selling-tp5779543p5784590.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 Dec 2013 00:46:27 -0800 (PST)", "msg_from": "shirleymi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to completely delete iPhone all data before selling?" } ]
[ { "msg_contents": "Hello everybody.\n\nRecently I have tried to upgrade our postgres instalation from 9.1 to 9.3,\nbut one query in particular got extremelly slow. The query is:\n\n EXPLAIN ANALYZE SELECT *\n FROM sellable\n JOIN product ON product.sellable_id = sellable.id\n LEFT JOIN storable ON storable.product_id = product.id\n LEFT JOIN sellable_category\n ON sellable_category.id = sellable.category_id\n LEFT JOIN (SELECT storable.id AS storable_id, branch.id AS branch_id,\n SUM(product_stock_item.quantity) AS stock,\n\nSUM((product_stock_item.quantity*product_stock_item.stock_cost)) AS\ntotal_stock_cost\n FROM storable\n CROSS JOIN branch\n LEFT JOIN product_stock_item ON product_stock_item.branch_id\n= branch.id\n AND\nproduct_stock_item.storable_id = storable.id\n GROUP BY storable.id, branch.id) AS \"_stock_summary\"\n ON _stock_summary.storable_id = storable.id\n WHERE\n (_stock_summary.branch_id = '04c3a996-f7c1-11e2-9274-000ae4372716'\nOR _stock_summary.branch_id IS NULL)\n AND stoq_normalize_string(sellable.description) ILIKE\nstoq_normalize_string('%ray%')\n AND stoq_normalize_string(sellable_category.description) ILIKE\nstoq_normalize_string('%receit%')\n\nOn 9.1 it runs in about 500ms, while on a later version, it takes a lot more\nthan 180000ms (thats 0.5 seconds vs 3 minutes).\n\nEven though this might not be the most well writen query, thats quite some\ntime difference.\n\nA few things to notice:\n\n- stoq_normalize_string is a wrapper around unaccent marking it as\n unmutable, so it can be used to create an index\n- The original query has a few more joins but I removed the most I could\n without influencing the results.\n- The query is actually created using python-storm (an orm for python)\n\nUsing git bisect I have found that the problem starts with commit\n5b7b5518d0ea56c422a197875f7efa5deddbb388 (And the times I posted above are\nfrom this commit and its parent).\n\nNow this is as far as I can investigate, since my knowledge of the\npostgresql inners are between null and zero\n\nTrying to find out where the problem is, here are a few thinks that I have\ntried that changed the speed (but does not really fix it for me):\n\n- Replace stoq_normalize_string with unaccent\n- Remove the branch_id IS NULL from the where clause\n- Remove the left join with sellable_category\n\nThere you can download an extract from the database with the needed tables\nto reproduce the problem.\n\nhttp://www.stoq.com.br/~romaia/base.sql.bz2\n\nSo, finally, the question is: Is this a regression or was I just luck in\nthe first place\nthat the query was 'fast enought' and this is a somewhat expected behaviour\nfor this query?\n\n\n-- \nRonaldo Maia\n\nHello everybody.Recently I have tried to upgrade our postgres instalation from 9.1 to 9.3,but one query in particular got extremelly slow. 
, "msg_date": "Thu, 26 Dec 2013 17:49:29 -0200", "msg_from": "Ronaldo Maia <[email protected]>", "msg_from_op": true, "msg_subject": "Possible regression (slow query on 9.2/9.3 when compared to 9.1)" }, { "msg_contents": "Ronaldo Maia <[email protected]> writes:\n> Recently I have tried to upgrade our Postgres installation from 9.1 to 9.3,\n> but one query in particular got extremely slow.\n\nFWIW, this test case doesn't reproduce any problem for me --- I get\nidentical plans and indistinguishable timings (about 450ms on my machine)\nfrom 9.1 and 9.3 branch tips.  This is with all-default settings and\na VACUUM ANALYZE after loading the data.  
I had to guess at the definition\nof stoq_normalize_string(), too, so I used\n\ncreate function stoq_normalize_string(text) returns text language sql\n strict immutable as 'select unaccent($1)';\n\nI speculate that you forgot to analyze the data after loading, or there's\nsome performance-relevant setting that you didn't carry forward from the\n9.1 database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Dec 2013 17:32:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible regression (slow query on 9.2/9.3 when compared to 9.1)" } ]
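To chase Tom's second guess --- a setting that did not survive the upgrade --- one possible check (a sketch, not from the thread) is to list every non-default setting on both clusters and diff the output:

SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;

Anything present on the 9.1 side only (work_mem, effective_cache_size, and the like) is a candidate explanation for the plan change.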
[ { "msg_contents": "It seems postgresql is unable to choose correct index in such cases.\n(my pg version is 9.3.2)\n\nLet's see example:\ncreate table t1 as select a.a, b.b from generate_series(1, 100) a(a),\ngenerate_series(1,500000) b(b);\ncreate index t1_a_idx on t1(a);\ncreate index t1_b_idx on t1(b);\ncreate index t1_a_b_idx on t1(a,b);\ncreate index t1_b_a_idx on t1(b,a);\nalter table t1 alter a set statistics 10000;\nalter table t1 alter b set statistics 10000;\nanalyze t1;\n\ntest=> explain select count(*) from t1 where a in (1, 9, 17, 26, 35, 41,\n50) and b = 333333;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------\n Aggregate (cost=46.62..46.63 rows=1 width=0)\n -> Index Only Scan using t1_a_b_idx on t1 (cost=0.57..46.60 rows=7\nwidth=0)\n Index Cond: ((a = ANY ('{1,9,17,26,35,41,50}'::integer[])) AND (b\n= 333333))\n(3 rows)\n\nRows estimation is exact.\nBut I think using t1_a_b_idx index is not the best choice.\nLet's check:\n# drop pg and disc buffers/caches\nsystemctl stop postgresql.service ; echo 3 >/proc/sys/vm/drop_caches ;\nsystemctl start postgresql.service ; sleep 2\n# warm up pg and check the plan\n{ echo '\\\\timing' && echo \"explain select count(*) from t1 where a in (1,\n9, 17, 26, 35, 41, 50) and b = 333333;\" ; } | psql test\n# do the benchmark\n{ echo '\\\\timing' && echo \"select count(*) from t1 where a in (1, 9, 17,\n26, 35, 41, 50) and b = 333333;\" ; } | psql test\n\nI have 200-210ms timing for the last query and t1_a_b_idx is used always. I\nchecked several times.\n\nOk. Now 'drop index t1_a_b_idx;' and check again.\nPg now uses t1_b_a_idx and I have 90-100ms for that control query. This is\nmuch better.\n\nI took pageinspect contrib module, learnt btree structure and it is clear\nfor me\nwhy t1_b_a_idx is better. The question is: Is postgresql able to see that?\n\nIt seems postgresql is unable to choose correct index in such cases.(my pg version is 9.3.2)Let's see example:create table t1 as select a.a, b.b from generate_series(1, 100) a(a), generate_series(1,500000) b(b);\ncreate index t1_a_idx on t1(a);create index t1_b_idx on t1(b);create index t1_a_b_idx on t1(a,b);create index t1_b_a_idx on t1(b,a);alter table t1 alter a set statistics 10000;\nalter table t1 alter b set statistics 10000;analyze t1;test=> explain select count(*) from t1 where a in (1, 9, 17, 26, 35, 41, 50) and b = 333333;                                      QUERY PLAN                                      \n-------------------------------------------------------------------------------------- Aggregate  (cost=46.62..46.63 rows=1 width=0)   ->  Index Only Scan using t1_a_b_idx on t1  (cost=0.57..46.60 rows=7 width=0)\n         Index Cond: ((a = ANY ('{1,9,17,26,35,41,50}'::integer[])) AND (b = 333333))(3 rows)Rows estimation is exact.But I think using t1_a_b_idx index is not the best choice.\nLet's check:# drop pg and disc buffers/cachessystemctl stop postgresql.service ; echo 3 >/proc/sys/vm/drop_caches ; systemctl start postgresql.service ; sleep 2# warm up pg and check the plan\n{ echo '\\\\timing' && echo \"explain select count(*) from t1 where a in (1, 9, 17, 26, 35, 41, 50) and b = 333333;\" ; } | psql test# do the benchmark{ echo '\\\\timing' && echo \"select count(*) from t1 where a in (1, 9, 17, 26, 35, 41, 50) and b = 333333;\" ; } | psql test\nI have 200-210ms timing for the last query and t1_a_b_idx is used always. I checked several times.Ok. 
", "msg_date": "Fri, 27 Dec 2013 13:35:06 +0700", "msg_from": "Michael Kolomeitsev <[email protected]>", "msg_from_op": true, "msg_subject": "Pg makes nonoptimal choice between two multicolumn indexes with the\n same columns but in different order." }, { "msg_contents": "Michael Kolomeitsev <[email protected]> wrote:\n\n> it is clear to me why t1_b_a_idx is better. The question is: is\n> PostgreSQL able to see that?\n\nFor a number of reasons I never consider a bulk load complete until\nI run VACUUM FREEZE ANALYZE on the table(s) involved.  When I try\nyour test case without that, I get the bad index choice.  When I\nthen run VACUUM FREEZE ANALYZE on the database I get the good index\nchoice.\n\nThere may be some lesser maintenance which sets up visibility\ninformation and provides the planner with enough data to make a\ngood choice; I just noticed that you were not following what I\nconsider to be rote good practice, tried it, and it solved the\nproblem.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sat, 28 Dec 2013 13:03:28 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg makes nonoptimal choice between two multicolumn indexes with\n the same columns but in different order." }, { "msg_contents": "On 29/12/13 10:03, Kevin Grittner wrote:\n> Michael Kolomeitsev <[email protected]> wrote:\n>\n>> it is clear to me why t1_b_a_idx is better. The question is: is\n>> PostgreSQL able to see that?\n> For a number of reasons I never consider a bulk load complete until\n> I run VACUUM FREEZE ANALYZE on the table(s) involved. When I try\n> your test case without that, I get the bad index choice. When I\n> then run VACUUM FREEZE ANALYZE on the database I get the good index\n> choice.\n>\n> There may be some lesser maintenance which sets up visibility\n> information and provides the planner with enough data to make a\n> good choice; I just noticed that you were not following what I\n> consider to be rote good practice, tried it, and it solved the\n> problem.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\nCurious: would it be feasible to do some kind of ANALYZE during a bulk \noperation? Say you could tell the system you expected to change 20% \nof the records in advance: then you could sample some of the changes and \nmodify the statistics with 0.2 times that plus 0.8 of the pre-existing \nstatistics.\n\nBEGIN BULK OPERATION CHANGE 20%\n[... 
several transactions ...]\nEND BULK OPERATION\n\nThe sampling could be done as part of the individual operations or at \nthe end of the bulk operation - whichever is deemed more practicable \n(possibly a bit of both?).\n\n\nCheers,\nGavin\n", "msg_date": "Sun, 29 Dec 2013 10:19:20 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pg makes nonoptimal choice between two multicolumn\n indexes with the same columns but in different order." }, { "msg_contents": "Kevin Grittner <[email protected]> writes:\n> Michael Kolomeitsev <[email protected]> wrote:\n>> it is clear to me why t1_b_a_idx is better. The question is: is\n>> PostgreSQL able to see that?\n\n> For a number of reasons I never consider a bulk load complete until\n> I run VACUUM FREEZE ANALYZE on the table(s) involved. When I try\n> your test case without that, I get the bad index choice. When I\n> then run VACUUM FREEZE ANALYZE on the database I get the good index\n> choice.\n\nI think that's just chance, because AFAICS the cost estimates are exactly\nthe same for both indexes, once you've done the vacuum to make all the\nheap pages all-visible. What's more, I'm not sure that that's wrong,\nbecause according to EXPLAIN (ANALYZE, BUFFERS) the exact same number of\nindex pages are touched for either index. So I think Michael's claim that\nthe one index is better is at best unproven.\n\nregression=# explain (analyze, buffers) select count(*) from t1 where a in (1, 9, 17, 26, 35, 41, 50) and b = 333333;\n                                                         QUERY PLAN                                                         \n---------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=32.12..32.13 rows=1 width=0) (actual time=0.097..0.098 rows=1 loops=1)\n   Buffers: shared hit=30\n   ->  Index Only Scan using t1_b_a_idx on t1  (cost=0.57..32.10 rows=7 width=0) (actual time=0.044..0.085 rows=7 loops=1)\n         Index Cond: ((b = 333333) AND (a = ANY ('{1,9,17,26,35,41,50}'::integer[])))\n         Heap Fetches: 0\n         Buffers: shared hit=30\n Total runtime: 0.174 ms\n(7 rows)\n\nregression=# begin; drop index t1_b_a_idx;\nBEGIN\nDROP INDEX\nregression=# explain (analyze, buffers) select count(*) from t1 where a in (1, 9, 17, 26, 35, 41, 50) and b = 333333;\n                                                         QUERY PLAN                                                         \n---------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=32.12..32.13 rows=1 width=0) (actual time=0.110..0.110 rows=1 loops=1)\n   Buffers: shared hit=30\n   ->  Index Only Scan using t1_a_b_idx on t1  (cost=0.57..32.10 rows=7 width=0) (actual time=0.039..0.101 rows=7 loops=1)\n         Index Cond: ((a = ANY ('{1,9,17,26,35,41,50}'::integer[])) AND (b = 333333))\n         Heap Fetches: 0\n         Buffers: shared hit=30\n Total runtime: 0.199 ms\n(7 rows)\n\nregression=# abort;\nROLLBACK\n\nI grant the theory that the repeated index probes in t1_b_a_idx should be\nmore localized than those in t1_a_b_idx, but PG's optimizer doesn't\nattempt to estimate such effects, and this example isn't doing much to\nconvince me that it'd be worth the trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Dec 2013 16:56:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, 
"msg_subject": "Re: Pg makes nonoptimal choice between two multicolumn indexes with\n the same columns but in different order." }, { "msg_contents": "2013/12/29 Tom Lane <[email protected]>\n\n> I think that's just chance, because AFAICS the cost estimates are exactly\n> the same for both indexes, once you've done the vacuum to make all the\n> heap pages all-visible. What's more, I'm not sure that that's wrong,\n> because according to EXPLAIN (ANALYZE, BUFFERS) the exact same number of\n> index pages are touched for either index. So I think Michael's claim that\n> the one index is better is at best unproven.\n>\n\nLet me prove :)\n\n1. I do benchmarking after dropping Pg and OS disk caches/buffers.\nIn a way I posted in my first message:\nsh# systemctl stop postgresql.service ; echo 3 >/proc/sys/vm/drop_caches ;\nsystemctl start postgresql.service\n\nAnd timing results are quite stable: 200-210ms using t1_a_b_idx and\n90-100ms using t1_b_a_idx.\n\nTrying 'explain (analyze, buffers) ... ' I got this:\n* using t1_a_b_idx:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=46.62..46.63 rows=1 width=0) (actual\ntime=228.853..228.854 rows=1 loops=1)\n Buffers: shared hit=12 read=23\n -> Index Only Scan using t1_a_b_idx on t1 (cost=0.57..46.60 rows=7\nwidth=0) (actual time=52.171..228.816 rows=7 loops=1)\n Index Cond: ((a = ANY ('{1,9,17,26,35,41,50}'::integer[])) AND (b\n= 333333))\n Heap Fetches: 7\n Buffers: shared hit=12 read=23\n Total runtime: 229.012 ms\n\n\n* using t1_b_a_idx:\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=60.12..60.13 rows=1 width=0) (actual\ntime=115.617..115.617 rows=1 loops=1)\n Buffers: shared hit=24 read=11\n -> Index Only Scan using t1_b_a_idx on t1 (cost=0.57..60.10 rows=7\nwidth=0) (actual time=80.460..115.590 rows=7 loops=1)\n Index Cond: ((b = 333333) AND (a = ANY\n('{1,9,17,26,35,41,50}'::integer[])))\n Heap Fetches: 7\n Buffers: shared hit=24 read=11\n Total runtime: 116.480 ms\n\n\nThere is a difference in read operations and moreover in cost estimation.\n23 - 11 = 12 excess read operations. If they are all random reads they may\ntake ~100ms on typical home/workstation SATA hard drive. That's the\ndifference between\ntimings I showed above.\nYes, I understand that Pg doesn't know (while planning the query) how many\npages will be hitted in shared buffers.\nBut I can't get why there is the same buffers count (35) in both plans...\nAnd I can't get why I have different cost estimations...\n\n\nI grant the theory that the repeated index probes in t1_b_a_idx should be\n> more localized than those in t1_a_b_idx, but PG's optimizer doesn't\n>\n\nYes, I see t1_a_b_idx and t1_b_a_idx have 3 levels in btree. For t1_a_b_idx\nPg have to read 1 (header) + 1 (root) + 1 (common level 1 node) + 7 * 2 =\n17 pages in it\nand for t1_b_a_idx 1 + 1 + 3 = 5 pages ('cause all 7 pairs of (a, b) are\nlocated in one btree leaf node). 17 - 5 = 12 - this is the same difference\nas we can see in\n'explain (analyze, buffers)'.\n\n\n\n\n> attempt to estimate such effects, and this example isn't doing much to\n> convince me that it'd be worth the trouble.\n>\n\nIn a real life situation I have two kinds of queries for that table:\n* select ... from t1 where a in (...) and b = ?\n* select ... from t1 where a = ? 
and b in (...)\n\nI select fields from t1 that are not in the indexes, so there is no 'Index\nOnly Scan'; there are more random reads, and the performance impact of\nchoosing t1_a_b_idx in both queries is somewhat smaller.\n\nAnd I got the answer (\"PG's optimizer doesn't attempt to estimate such\neffects\") for my situation.\nThanks a lot.
", "msg_date": "Sun, 29 Dec 2013 07:52:07 +0700", "msg_from": "Michael Kolomeitsev <[email protected]>", "msg_from_op": true, 
[ { "msg_contents": "Is fscnc off/on pertain only to writing WAL buffers to disk?\n\nOr is that also relevant to writing of dirty buffers to disk\n(checkpoint/bg)?\n\nCurious because in the docs fsync on/off is mentioned under WAL\nconfiguration. Further there is a wal_sync_method but not a\n\"checkpoint_sync_method\".\n\nShiv\n\nIs fscnc off/on pertain only to writing WAL buffers to disk?\nOr is that also relevant to writing of dirty buffers to disk (checkpoint/bg)?\nCurious because in the docs fsync on/off is mentioned under WAL configuration.  Further there is a wal_sync_method but not a \"checkpoint_sync_method\".\nShiv", "msg_date": "Fri, 27 Dec 2013 10:55:44 -0500", "msg_from": "GR Vishwanath <[email protected]>", "msg_from_op": true, "msg_subject": "Does fsync on/off for wal AND Checkpoint?" }, { "msg_contents": "On 12/27/2013 04:55 PM, GR Vishwanath wrote:\n> Is fscnc off/on pertain only to writing WAL buffers to disk?\n>\n> Or is that also relevant to writing of dirty buffers to disk\n> (checkpoint/bg)?\n>\n> Curious because in the docs fsync on/off is mentioned under WAL\n> configuration. Further there is a wal_sync_method but not a\n> \"checkpoint_sync_method\".\n\nThe setting is for all uses of fsync within the PostgreSQL server, so if \nyou turn it off PostgreSQL should never issue fsync. The only exceptions \nare some utility tools (eg. pg_basebackup) which do not read the \nconfiguration file.\n\n-- \nAndreas Karlsson\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 27 Dec 2013 17:27:14 +0100", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does fsync on/off for wal AND Checkpoint?" } ]
[ { "msg_contents": "Hello All,\n\nI am using multi tenant system and doing performance testing of multi\ntenant application. In case of single tenant it is working fine but once I\nenable tenants, then some time database servers not responding. Any clue?\n\n-- \n------\nRegards\n@Ankush Upadhyay@\n\nHello All,\n\nI am using multi tenant system and doing performance testing of multi \ntenant application. In case of single tenant it is working fine but once \nI enable tenants, then some time database servers not responding. Any clue? -- ------Regards@Ankush Upadhyay@", "msg_date": "Sat, 28 Dec 2013 10:49:01 +0530", "msg_from": "ankush upadhyay <[email protected]>", "msg_from_op": true, "msg_subject": "Are there some additional postgres tuning to improve performance in\n multi tenant system" }, { "msg_contents": "On 28/12/13 18:19, ankush upadhyay wrote:\n> Hello All,\n>\n> I am using multi tenant system and doing performance testing of multi\n> tenant application. In case of single tenant it is working fine but once\n> I enable tenants, then some time database servers not responding. Any clue?\n>\n\nIt is a bit tricky to tell without any relevant information (e.g schema \ndescription). But a likely culprit would be a missing index on the \nrelevant 'tenant_id' type field in each table that you are using to \ndistinguish the various tenant datasets.\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 28 Dec 2013 22:54:14 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are there some additional postgres tuning to improve\n performance in multi tenant system" }, { "msg_contents": "I am using multiple databases in my app for multi tenacy environment.\n\nIn my app user login with tenant id and connect to respective database.\n\nThe scenario is when 5 users connect to 5 databases and perform some\nactivity in that case site performance going down and some time database\nserver become unresponsive.\n\nIn each database we have same schema with different data regarding to users\nof that tenant.\nAny idea?\n\n\nOn Sat, Dec 28, 2013 at 3:24 PM, Mark Kirkwood <\[email protected]> wrote:\n\n> On 28/12/13 18:19, ankush upadhyay wrote:\n>\n>> Hello All,\n>>\n>> I am using multi tenant system and doing performance testing of multi\n>> tenant application. In case of single tenant it is working fine but once\n>> I enable tenants, then some time database servers not responding. Any\n>> clue?\n>>\n>>\n> It is a bit tricky to tell without any relevant information (e.g schema\n> description). 
But a likely culprit would be a missing index on the relevant\n> 'tenant_id' type field in each table that you are using to distinguish the\n> various tenant datasets.\n>\n> Regards\n>\n> Mark\n>\n\n\n\n-- \n------\nRegards\n@Ankush Upadhyay@\n\nI am using multiple databases in my app for multi tenacy environment.In my app user login with tenant id and connect to respective database.The scenario is when 5 users connect to 5 databases and perform some activity in that case site performance going down and some time database server become unresponsive.\nIn each database we have same schema with different data regarding to users of that tenant.Any idea?On Sat, Dec 28, 2013 at 3:24 PM, Mark Kirkwood <[email protected]> wrote:\nOn 28/12/13 18:19, ankush upadhyay wrote:\n\nHello All,\n\nI am using multi tenant system and doing performance testing of multi\ntenant application. In case of single tenant it is working fine but once\nI enable tenants, then some time database servers not responding. Any clue?\n\n\n\nIt is a bit tricky to tell without any relevant information (e.g schema description). But a likely culprit would be a missing index on the relevant 'tenant_id' type field in each table that you are using to distinguish the various tenant datasets.\n\nRegards\n\nMark\n-- ------Regards@Ankush Upadhyay@", "msg_date": "Sat, 28 Dec 2013 17:10:54 +0530", "msg_from": "ankush upadhyay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Are there some additional postgres tuning to improve\n performance in multi tenant system" }, { "msg_contents": "\nOn 12/28/2013 12:19 AM, ankush upadhyay wrote:\n> Hello All,\n>\n> I am using multi tenant system and doing performance testing of multi \n> tenant application. In case of single tenant it is working fine but \n> once I enable tenants, then some time database servers not responding. \n> Any clue?\n>\n>\n\n\n\nI usually use the term \"multi-tenancy\" to refer to different postgres \ninstances running on the same machine, rather than different databases \nwithin a single instance of postgres. So lease describe your setup in \nmore detail.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 28 Dec 2013 08:20:07 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are there some additional postgres tuning to improve\n performance in multi tenant system" }, { "msg_contents": "On Sat, Dec 28, 2013 at 6:50 PM, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 12/28/2013 12:19 AM, ankush upadhyay wrote:\n>\n>> Hello All,\n>>\n>> I am using multi tenant system and doing performance testing of multi\n>> tenant application. In case of single tenant it is working fine but once I\n>> enable tenants, then some time database servers not responding. Any clue?\n>>\n>>\n>>\n>\n>\n> I usually use the term \"multi-tenancy\" to refer to different postgres\n> instances running on the same machine, rather than different databases\n> within a single instance of postgres. 
So lease describe your setup in more\n> detail.\n>\n> cheers\n>\n> andrew\n>\n\n\nFirst of all Thanks Andrew for let me know email etiquette and extremely\nsorry for confusion.\n\nHere I meant to say that different postgres instances running on the same\nmachine.\n\nActually I have one application machine and one database server machine\nwith multiple postgres instances running on it and accessing by application\nserver.\n\nI hope this time I could explain it in more details.\n\n-- \n------\nRegards\n@Ankush Upadhyay@\n\nOn Sat, Dec 28, 2013 at 6:50 PM, Andrew Dunstan <[email protected]> wrote:\n\nOn 12/28/2013 12:19 AM, ankush upadhyay wrote:\n\nHello All,\n\nI am using multi tenant system and doing performance testing of multi tenant application. In case of single tenant it is working fine but once I enable tenants, then some time database servers not responding. Any clue?\n\n\n\n\n\n\nI usually use the term \"multi-tenancy\" to refer to different postgres instances running on the same machine, rather than different databases within a single instance of postgres. So lease describe your setup in more detail.\n\ncheers\n\nandrew\nFirst of all Thanks Andrew for let me know email etiquette and extremely sorry for confusion.Here I meant to say that  different postgres instances running on the same machine.\nActually I have one application machine and one database server machine with multiple postgres instances running on it and accessing by application server.\nI hope this time I could explain it in more details.-- ------Regards@Ankush Upadhyay@", "msg_date": "Sat, 28 Dec 2013 19:16:32 +0530", "msg_from": "ankush upadhyay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Are there some additional postgres tuning to improve\n performance in multi tenant system" }, { "msg_contents": "\nOn 12/28/2013 08:46 AM, ankush upadhyay wrote:\n> On Sat, Dec 28, 2013 at 6:50 PM, Andrew Dunstan <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n> On 12/28/2013 12:19 AM, ankush upadhyay wrote:\n>\n> Hello All,\n>\n> I am using multi tenant system and doing performance testing\n> of multi tenant application. In case of single tenant it is\n> working fine but once I enable tenants, then some time\n> database servers not responding. Any clue?\n>\n>\n>\n>\n>\n> I usually use the term \"multi-tenancy\" to refer to different\n> postgres instances running on the same machine, rather than\n> different databases within a single instance of postgres. So lease\n> describe your setup in more detail.\n>\n> cheers\n>\n> andrew\n>\n>\n>\n> First of all Thanks Andrew for let me know email etiquette and \n> extremely sorry for confusion.\n>\n> Here I meant to say that different postgres instances running on the \n> same machine.\n>\n> Actually I have one application machine and one database server \n> machine with multiple postgres instances running on it and accessing \n> by application server.\n>\n> I hope this time I could explain it in more details.\n>\n>\n\n\nWhy are you doing that, as opposed to running multiple databases in a \nsingle instance? Running more than a handful of instances in a single \nmachine is almost always a recipe for poor performance. 
The vast \nmajority of users in my experience run a single postgres instance per \nmachine, possibly with a large number of databases.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 28 Dec 2013 08:54:03 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are there some additional postgres tuning to improve\n performance in multi tenant system" } ]
[ { "msg_contents": "When querying a view with a WHERE condition, postgresql normally is able \nto perform an index scan which reduces time for evaluation dramatically.\n\nHowever, if a window function is evaluated in the view, postgresql is \nevaluating the window function before the WHERE condition is applied. \nThis induces a full table scan.\n\nThese are the results of EXPLAIN:\n-- without window function (non-equivalent)\nexplain select * from without_window_function where user_id = 43;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Index Scan using idx_checkin_node_user_id on checkin_node \n(cost=0.43..26.06 rows=2 width=20)\n Index Cond: (user_id = 43)\n Filter: (((id % 1000) + 1) = 1)\n\n-- with window function\n explain select * from last_position where user_id = 43;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Subquery Scan on tmp_last_position (cost=973803.66..1151820.09 rows=2 \nwidth=20)\n Filter: ((tmp_last_position.datepos = 1) AND \n(tmp_last_position.user_id = 43))\n -> WindowAgg (cost=973803.66..1080613.52 rows=4747105 width=32)\n -> Sort (cost=973803.66..985671.42 rows=4747105 width=32)\n Sort Key: checkin_node.user_id, checkin_node.date, \ncheckin_node.id\n -> Seq Scan on checkin_node (cost=0.00..106647.05 \nrows=4747105 width=32)\n\nTo work around this, I avoid using a view for that (equivalent):\nEXPLAIN SELECT user_id, latitude, longitude\n FROM (\n SELECT\n user_id,\n latitude,\n longitude,\n rank() OVER (PARTITION BY user_id ORDER BY date DESC, id \nDESC) AS datepos\n FROM checkin_node\n WHERE user_id = 43\n ) AS tmp_last_position\n WHERE datepos = 1; -- takes 2 ms\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Subquery Scan on tmp_last_position (cost=39.70..52.22 rows=2 width=20)\n Filter: (tmp_last_position.datepos = 1)\n -> WindowAgg (cost=39.70..47.40 rows=385 width=32)\n -> Sort (cost=39.70..40.67 rows=385 width=32)\n Sort Key: checkin_node.date, checkin_node.id\n -> Index Scan using idx_checkin_node_user_id on \ncheckin_node (cost=0.43..23.17 rows=385 width=32)\n Index Cond: (user_id = 43)\n\n\nI would expect postgresql to apply this query plan also for the view \nlast_position. It's 6621ms vs. 
2ms, so the speedup is 3310!\n\nIs it a bug in the optimizer?\n\nHow to reproduce:\n=================\nOS: ubuntu 12.04\nPostgresql v9.3.2\n\nget some sample data:\nwget -qO- \nhttp://snap.stanford.edu/data/loc-brightkite_totalCheckins.txt.gz|gunzip \n-c|dos2unix|awk '{ if (length($0) > 20) print }'>test.csv\n\nexecute psql script:\n\n\n\\timing on\nBEGIN;\nDROP TABLE IF EXISTS checkin_node CASCADE;\nCREATE TABLE checkin_node (\n id SERIAL NOT NULL PRIMARY KEY,\n user_id INTEGER NOT NULL,\n date TIMESTAMP NOT NULL,\n latitude DOUBLE PRECISION NOT NULL,\n longitude DOUBLE PRECISION NOT NULL,\n original_id VARCHAR NOT NULL\n);\n\\COPY checkin_node (user_id, date, latitude, longitude, original_id) \nFROM 'test.csv' WITH DELIMITER E'\\t';\n\nALTER TABLE checkin_node DROP COLUMN original_id;\nCREATE INDEX idx_checkin_node_user_id ON checkin_node(user_id);\nCREATE INDEX idx_checkin_node_date ON checkin_node(date);\n\nCOMMIT;\n\nVACUUM ANALYZE checkin_node;\n\n-- doing window function in a view\n\nDROP VIEW IF EXISTS last_position CASCADE;\nCREATE VIEW last_position (user_id, latitude, longitude) AS (\n SELECT user_id, latitude, longitude\n FROM (\n SELECT\n user_id,\n latitude,\n longitude,\n rank() OVER (PARTITION BY user_id ORDER BY date DESC, id \nDESC) AS datepos\n FROM checkin_node\n ) AS tmp_last_position\n WHERE datepos = 1\n);\n\nselect * from last_position where user_id = 43; -- takes 6621ms\n\n-- similar view but without window function (non-equivalent)\n\nDROP VIEW IF EXISTS without_window_function CASCADE;\nCREATE VIEW without_window_function (user_id, latitude, longitude) AS (\n SELECT user_id, latitude, longitude\n FROM (\n SELECT\n user_id,\n latitude,\n longitude,\n (id % 1000)+1 AS datepos --to not use a constant here\n FROM checkin_node\n ) AS tmp_last_position\n WHERE datepos = 1\n);\nselect * from without_window_function where user_id = 43; -- takes 10ms\n\n\n-- workaround: avoid using views (equivalent)\n\nSELECT user_id, latitude, longitude\nFROM (\n SELECT\n user_id,\n latitude,\n longitude,\n rank() OVER (PARTITION BY user_id ORDER BY date DESC, id DESC) \nAS datepos\n FROM checkin_node\n WHERE user_id = 43\n ) AS tmp_last_position\nWHERE datepos = 1; -- takes 2 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 2 Jan 2014 22:32:01 +0100", "msg_from": "Thomas Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "window function induces full table scan" }, { "msg_contents": "Thomas Mayer <[email protected]> writes:\n> When querying a view with a WHERE condition, postgresql normally is able \n> to perform an index scan which reduces time for evaluation dramatically.\n\n> However, if a window function is evaluated in the view, postgresql is \n> evaluating the window function before the WHERE condition is applied. \n> This induces a full table scan.\n\nYou haven't exactly provided full details, but it looks like you are\nthinking that WHERE clauses applied above a window function should\nbe pushed to below it. 
A moment's thought about the semantics should\nconvince you that such an optimization would be incorrect: the window\nfunction would see fewer input rows than it should, and therefore would\n(in general) return the wrong values for the selected rows.\n\nIt's possible that in the specific case you exhibit here, pushing down\nthe clause wouldn't result in changes in the window function's output for\nthe selected rows, but the optimizer doesn't have enough knowledge about\nwindow functions to determine that.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 02 Jan 2014 16:52:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "On Thu, Jan 2, 2014 at 1:52 PM, Tom Lane <[email protected]> wrote:\n\n> Thomas Mayer <[email protected]> writes:\n> > When querying a view with a WHERE condition, postgresql normally is able\n> > to perform an index scan which reduces time for evaluation dramatically.\n>\n> > However, if a window function is evaluated in the view, postgresql is\n> > evaluating the window function before the WHERE condition is applied.\n> > This induces a full table scan.\n>\n> You haven't exactly provided full details, but it looks like you are\n> thinking that WHERE clauses applied above a window function should\n> be pushed to below it. A moment's thought about the semantics should\n> convince you that such an optimization would be incorrect: the window\n> function would see fewer input rows than it should, and therefore would\n> (in general) return the wrong values for the selected rows.\n>\n> It's possible that in the specific case you exhibit here, pushing down\n> the clause wouldn't result in changes in the window function's output for\n> the selected rows, but the optimizer doesn't have enough knowledge about\n> window functions to determine that.\n>\n\nA restriction in the WHERE clause which corresponds to the PARTITION BY\nshould be pushable, no? I think it doesn't need to understand the internal\nsemantics of the window function itself, just of the PARTITION BY, which\nshould be doable, at least in principle.\n\nCheers,\n\nJeff\n\nOn Thu, Jan 2, 2014 at 1:52 PM, Tom Lane <[email protected]> wrote:\nThomas Mayer <[email protected]> writes:\n> When querying a view with a WHERE condition, postgresql normally is able\n> to perform an index scan which reduces time for evaluation dramatically.\n\n> However, if a window function is evaluated in the view, postgresql is\n> evaluating the window function before the WHERE condition is applied.\n> This induces a full table scan.\n\nYou haven't exactly provided full details, but it looks like you are\nthinking that WHERE clauses applied above a window function should\nbe pushed to below it.  
A moment's thought about the semantics should\nconvince you that such an optimization would be incorrect: the window\nfunction would see fewer input rows than it should, and therefore would\n(in general) return the wrong values for the selected rows.\n\nIt's possible that in the specific case you exhibit here, pushing down\nthe clause wouldn't result in changes in the window function's output for\nthe selected rows, but the optimizer doesn't have enough knowledge about\nwindow functions to determine that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Jan 2014 16:52:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "On Thu, Jan 2, 2014 at 1:52 PM, Tom Lane <[email protected]> wrote:\n\n> Thomas Mayer <[email protected]> writes:\n> > When querying a view with a WHERE condition, postgresql normally is able\n> > to perform an index scan which reduces time for evaluation dramatically.\n>\n> > However, if a window function is evaluated in the view, postgresql is\n> > evaluating the window function before the WHERE condition is applied.\n> > This induces a full table scan.\n>\n> You haven't exactly provided full details, but it looks like you are\n> thinking that WHERE clauses applied above a window function should\n> be pushed to below it. A moment's thought about the semantics should\n> convince you that such an optimization would be incorrect: the window\n> function would see fewer input rows than it should, and therefore would\n> (in general) return the wrong values for the selected rows.\n>\n> It's possible that in the specific case you exhibit here, pushing down\n> the clause wouldn't result in changes in the window function's output for\n> the selected rows, but the optimizer doesn't have enough knowledge about\n> window functions to determine that.\n>\n\nA restriction in the WHERE clause which corresponds to the PARTITION BY\nshould be pushable, no? I think it doesn't need to understand the internal\nsemantics of the window function itself, just of the PARTITION BY, which\nshould be doable, at least in principle.\n\nCheers,\n\nJeff
In any case, it's not a \"bug\" that the\noptimizer doesn't do this currently.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 02 Jan 2014 17:43:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "You understood me correctly, Tom.\n\nAs you mention, the result would be correct in my case:\n- The window function is performing a \"PARTITION BY user_id\".\n- user_id is used for the WHERE condition.\n\nI agree, that in general (PARTITION BY and WHERE don't use the same set \nof attributes), incorrect results could occur when performing the WHERE \ncondition before performing the window function.\n\nHowever, in this special case, PARTITION BY and WHERE use the same set \nof attributes which safely allows some optimization.\n\nIn fact, this is getting complicated when multiple window functions with \ndifferent PARTITION BY's are used in one statement.\n\nI think, performing the WHERE condition before performing the window \nfunction would be safe if the WHERE condition attribute is element of \nthe PARTITION-BY-set of attributes of _every_ window function of the \nstatement.\n\nTo ensure correctness, WHERE condition attributes which are _not_ \nelement of the PARTITION-BY-set of attributes of _every_ window function \nof the statement need to be performed after performing the window function.\n\nSo, the optimizer could check if it's safe or not.\n\nRegards,\nThomas\n\nAm 02.01.2014 22:52, schrieb Tom Lane:\n> Thomas Mayer <[email protected]> writes:\n>> When querying a view with a WHERE condition, postgresql normally is able\n>> to perform an index scan which reduces time for evaluation dramatically.\n>\n>> However, if a window function is evaluated in the view, postgresql is\n>> evaluating the window function before the WHERE condition is applied.\n>> This induces a full table scan.\n>\n> You haven't exactly provided full details, but it looks like you are\n> thinking that WHERE clauses applied above a window function should\n> be pushed to below it. 
A moment's thought about the semantics should\n> convince you that such an optimization would be incorrect: the window\n> function would see fewer input rows than it should, and therefore would\n> (in general) return the wrong values for the selected rows.\n>\n> It's possible that in the specific case you exhibit here, pushing down\n> the clause wouldn't result in changes in the window function's output for\n> the selected rows, but the optimizer doesn't have enough knowledge about\n> window functions to determine that.\n>\n> \t\t\tregards, tom lane\n> .\n>\n\n-- \n======================================\nThomas Mayer\nDurlacher Allee 61\nD-76131 Karlsruhe\nTelefon: +49-721-2081661\nFax: +49-721-72380001\nMobil: +49-174-2152332\nE-Mail: [email protected]\n=======================================\n", "msg_date": "Thu, 2 Jan 2014 23:45:56 +0100", "msg_from": "Thomas Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "\nAm 02.01.2014 23:43, schrieb Tom Lane:\n> Jeff Janes <[email protected]> writes:\n>> On Thu, Jan 2, 2014 at 1:52 PM, Tom Lane <[email protected]> wrote:\n>>> It's possible that in the specific case you exhibit here, pushing down\n>>> the clause wouldn't result in changes in the window function's output for\n>>> the selected rows, but the optimizer doesn't have enough knowledge about\n>>> window functions to determine that.\n>\n>> A restriction in the WHERE clause which corresponds to the PARTITION BY\n>> should be pushable, no? I think it doesn't need to understand the internal\n>> semantics of the window function itself, just of the PARTITION BY, which\n>> should be doable, at least in principle.\n>\n> If the restriction clause must give the same answer for any two rows of\n> the same partition, then yeah, we could in principle push it down without\n> knowing anything about the specific window function. It'd be a less than\n> 
It'd be a less than\n> trivial test to make, I think.\n\nOn reflection, really this concern is isomorphic to whether or not it is\nsafe to push quals down into a SELECT DISTINCT. In principle, we should\nonly do that for quals that cannot distinguish values that are seen as\nequal by the equality operator used for DISTINCT. For instance, the\nfloat8 equality operator treats IEEE minus zero and plus zero as \"equal\",\nbut it's not hard to contrive a WHERE clause that can tell the difference.\nPushing such a clause down into a SELECT DISTINCT can change the results;\nbut we do it anyway, and have done so since the nineties, and I don't\nrecall hearing complaints about this.\n\nIf we wanted to be really principled about it, I think we'd have to\nrestrict pushdown to quals that tested subquery outputs with operators\nthat are members of the relevant equality operator's btree opclass.\nWhich would cause a lot of howls of anguish, while making things better\nfor a set of users that seems to be about empty.\n\nSo maybe it'd be all right to push down quals that only reference subquery\noutputs that are listed in the PARTITION clauses of all window functions\nin the subquery. I think that'd be a reasonably straightforward extension\nof the existing tests in allpaths.c.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 02 Jan 2014 18:55:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "Just to track it down: The limitation can also be reproduced without \nusing views. Using views is just a use case where the suggested \noptimization is actually needed.\n\nPlus, when I remove the condition \"WHERE datepos = 1\", the same \nbehaviour still occurs. Here, I wanted to see if postgresql is \npreferring the condition \"WHERE datepos = 1\" (datepos is the result of \nthe window function) over the condition \"user_id = 43\" for optimization. 
\nBut this is not the case.\n\n-- workaround example: \"WHERE user_id = 43\" condition in subselect\n\nSELECT user_id, latitude, longitude\nFROM (\n SELECT\n user_id,\n latitude,\n longitude,\n rank() OVER (PARTITION BY user_id ORDER BY date DESC, id DESC) \nAS datepos\n FROM checkin_node\n WHERE user_id = 43\n ) AS tmp_last_position\nWHERE datepos = 1; -- takes 2 ms\n\n-- track it down: reproduce limitation without a view:\n\nSELECT user_id, latitude, longitude\nFROM (\n SELECT\n user_id,\n latitude,\n longitude,\n rank() OVER (PARTITION BY user_id ORDER BY date DESC, id DESC)\n AS datepos\n FROM checkin_node\n ) AS tmp_last_position\nWHERE datepos = 1\nAND user_id = 43; -- takes 6621 ms\n\n-- without datepos condition\n\nSELECT user_id, latitude, longitude\nFROM (\n SELECT\n user_id,\n latitude,\n longitude,\n rank() OVER (PARTITION BY user_id ORDER BY date DESC, id DESC)\n AS datepos\n FROM checkin_node\n ) AS tmp_last_position\nWHERE user_id = 43; -- takes 6574 ms\n\nBest regards,\nThomas\n\n\nAm 03.01.2014 00:12, schrieb Thomas Mayer:\n>\n> Am 02.01.2014 23:43, schrieb Tom Lane:\n>> Jeff Janes <[email protected]> writes:\n>>> On Thu, Jan 2, 2014 at 1:52 PM, Tom Lane <[email protected]> wrote:\n>>>> It's possible that in the specific case you exhibit here, pushing down\n>>>> the clause wouldn't result in changes in the window function's output for\n>>>> the selected rows, but the optimizer doesn't have enough knowledge about\n>>>> window functions to determine that.\n>>\n>>> A restriction in the WHERE clause which corresponds to the PARTITION BY\n>>> should be pushable, no? I think it doesn't need to understand the internal\n>>> semantics of the window function itself, just of the PARTITION BY, which\n>>> should be doable, at least in principle.\n>>\n>> If the restriction clause must give the same answer for any two rows of\n>> the same partition, then yeah, we could in principle push it down without\n>> knowing anything about the specific window function. It'd be a less than\n>> trivial test to make, I think. In any case, it's not a \"bug\" that the\n>> optimizer doesn't do this currently.\n>\n> I agree, this is not a \"bug\" in v9.3.2 in terms of correctness.\n>\n> But it's a limitation, because the query plan is by far not optimal. 
You\n> may consider this report as a feature request then.\n>\n> The optimization I suggested is normally performed, when no window\n> function occurs in the statement.\n>\n> It seems like the optimizer is already capable of doing a check if the\n> WHERE can be done first.\n>\n> However, this check seems to be done too conservative: I guess, the\n> check is ignoring the PARTITION-BY-sets of attributes completely.\n>\n>>\n>> \t\t\tregards, tom lane\n>> .\n>>\n>\n> Best regards\n> Thomas\n>\n>\n\n-- \n======================================\nThomas Mayer\nDurlacher Allee 61\nD-76131 Karlsruhe\nTelefon: +49-721-2081661\nFax: +49-721-72380001\nMobil: +49-174-2152332\nE-Mail: [email protected]\n=======================================\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Jan 2014 01:13:00 +0100", "msg_from": "Thomas Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "I have just cloned the postgresql git repository and checked out the\nREL9_3_2 tagged version to have a look at the\nsrc/backend/optimizer/path/allpaths.c file.\n\nAs Tom already mentioned, quals are currently not pushed down when\nsubqueries with window functions occur:\n\nThere is a function subquery_is_pushdown_safe(...) which is asserting\nthat quals are not pushed down when window functions occur:\n\n\"\n * 2. If the subquery contains any window functions, we can't push quals\n * into it, because that could change the results.\n[...]\n/* Check point 2 */\nif (subquery->hasWindowFuncs)\n\treturn false;\n\"\n\nTo implement the optimization, subquery_is_pushdown_safe() needs to\nreturn true if pushing down the quals to a subquery which has window\nfunctions is in fact safe (\"quals that only reference subquery\noutputs that are listed in the PARTITION clauses of all window functions\nin the subquery\").\n\nPlus, there is a function qual_is_pushdown_safe(...) which contains an\nassertion that might possibly become obsolete:\n\n\"\n/*\n * It would be unsafe to push down window function calls, but at least for\n * the moment we could never see any in a qual anyhow.\t(The same applies\n * to aggregates, which we check for in pull_var_clause below.)\n */\nAssert(!contain_window_function(qual));\n\"\n\nTom, do you think that these two changes could be sufficient? Do you\nhave a more general approach in mind?\n\nBest regards\nThomas\n\nAm 03.01.2014 00:55, schrieb Tom Lane:\n> I wrote:\n>> If the restriction clause must give the same answer for any two rows of\n>> the same partition, then yeah, we could in principle push it down without\n>> knowing anything about the specific window function. It'd be a less than\n>> trivial test to make, I think.\n>\n> On reflection, really this concern is isomorphic to whether or not it is\n> safe to push quals down into a SELECT DISTINCT. In principle, we should\n> only do that for quals that cannot distinguish values that are seen as\n> equal by the equality operator used for DISTINCT.
For instance, the\n> float8 equality operator treats IEEE minus zero and plus zero as \"equal\",\n> but it's not hard to contrive a WHERE clause that can tell the difference.\n> Pushing such a clause down into a SELECT DISTINCT can change the results;\n> but we do it anyway, and have done so since the nineties, and I don't\n> recall hearing complaints about this.\n>\n> If we wanted to be really principled about it, I think we'd have to\n> restrict pushdown to quals that tested subquery outputs with operators\n> that are members of the relevant equality operator's btree opclass.\n> Which would cause a lot of howls of anguish, while making things better\n> for a set of users that seems to be about empty.\n>\n> So maybe it'd be all right to push down quals that only reference subquery\n> outputs that are listed in the PARTITION clauses of all window functions\n> in the subquery. I think that'd be a reasonably straightforward extension\n> of the existing tests in allpaths.c.\n>\n> \t\t\tregards, tom lane\n> .\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Jan 2014 05:37:31 +0100", "msg_from": "Thomas Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "Thomas Mayer <[email protected]> writes:\n> To implement the optimization, subquery_is_pushdown_safe() needs to \n> return true if pushing down the quals to a subquery which has window \n> functions is in fact safe (\"quals that only reference subquery\n> outputs that are listed in the PARTITION clauses of all window functions\n> in the subquery\").\n\nI'd just remove that check.\n\n> Plus, there is a function qual_is_pushdown_safe(...) which contains an \n> assertion, which might possibly become obsolete:\n\nNo, that should stay. There are no window functions in the upper query's\nWHERE, there will be none pushed into the lower's WHERE, and that's as it\nmust be.\n\n> Tom, do you think that these two changes could be sufficient?\n\nCertainly not. What you'd need to do is include the\nis-it-listed-in-all-PARTITION-clauses consideration in the code that marks\n\"unsafe\" subquery output columns. And update all the relevant comments.\nAnd maybe add a couple of regression test cases.\n\nOffhand I think the details of testing whether a given output column\nappears in a given partition clause are identical to testing whether\nit appears in the distinctClause. So you'd just be mechanizing running\nthrough the windowClause list to verify whether this holds for all\nthe WINDOW clauses.\n\nNote that if you just look at the windowClause list, then you might\nbe filtering by named window definitions that appeared in the WINDOW\nclause but were never actually referenced by any window function.\nI don't have a problem with blowing off the optimization in such cases.\nI don't think it's appropriate to expend the cycles that would be needed\nto discover whether they're all referenced at this point. 
(If anyone ever\ncomplains, it'd be much cheaper to modify the parser to get rid of\nunreferenced window definitions.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 03 Jan 2014 09:54:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "\n\nAm 03.01.2014 15:54, schrieb Tom Lane:\n> Thomas Mayer <[email protected]> writes:\n>> To implement the optimization, subquery_is_pushdown_safe() needs to\n>> return true if pushing down the quals to a subquery which has window\n>> functions is in fact safe (\"quals that only reference subquery\n>> outputs that are listed in the PARTITION clauses of all window functions\n>> in the subquery\").\n>\n> I'd just remove that check.\n>\n>> Plus, there is a function qual_is_pushdown_safe(...) which contains an\n>> assertion, which might possibly become obsolete:\n>\n> No, that should stay. There are no window functions in the upper query's\n> WHERE, there will be none pushed into the lower's WHERE, and that's as it\n> must be.\n>\n>> Tom, do you think that these two changes could be sufficient?\n>\n> Certainly not. What you'd need to do is include the\n> is-it-listed-in-all-PARTITION-clauses consideration in the code that marks\n> \"unsafe\" subquery output columns.\n\nClear. That's what I intended to write ;)\n\nFor better performance, we could first check subquery->hasWindowFuncs \nand only if this evaluates to true, check if the attribute is marked as \na \"unsafe subquery output column\". If it's unsafe \nsubquery_is_pushdown_safe() needs to return false.\n\nI was first thinking to do the decision safe/unsafe in the \nsubquery_is_pushdown_safe() function or in a new function that is called \nby subquery_is_pushdown_safe().\n\n... \"mark\" ...: Do I understand you correctly, that you prefer doing the \ndecision elsewhere and store the result (safe/unsafe) boolean value \nbesides to the subquery output fields? For the push-down, a subquery \noutput field must be available anyways.\n\nMore in general, one could possibly \"mark\" safe subquery output fields \nfor all the tests in subquery_is_pushdown_safe(). So this function would \nnot necessarily have to be in allpaths.c at all. But that kind of \nrefactoring would be a different issue which also could be implemented \nseparately.\n\n> And update all the relevant comments.\n> And maybe add a couple of regression test cases.\n>\n\nIndeed, that would be nice ;) Straightforward, there should be\n- A test case with one window function (in the subquery) and one \ncondition, which is safe to be pushed down. Then, assert that \nsubquery_is_pushdown_safe() returns true\n- A test case with one window function and one condition, which is \nunsafe to be pushed down. Then, assert that subquery_is_pushdown_safe() \nreturns false\n- A test case with multiple window functions and multiple conditions, \nall safe to be pushed down. Then, assert that \nsubquery_is_pushdown_safe() returns true\n- A test case with multiple window functions and multiple conditions, \nall except one safe to be pushed down. 
Then, assert that\nsubquery_is_pushdown_safe() returns true for the safe ones and false for\nthe unsafe ones.\n- A test case that ensures that, after the change, the right query plan\nis chosen (integration test).\n- Execute some example queries (integration test).\n\nWhat do you think about it? What else needs to be tested?\n\n> Offhand I think the details of testing whether a given output column\n> appears in a given partition clause are identical to testing whether\n> it appears in the distinctClause. So you'd just be mechanizing running\n> through the windowClause list to verify whether this holds for all\n> the WINDOW clauses.\n\nWhen a field is an element of all PARTITION BY clauses of all window\nfunctions, it does not mean that this field is distinct. Think of a\nquery like\n\nSELECT * FROM (\n SELECT\n b,\n rank() OVER (PARTITION BY b ORDER BY c DESC) AS rankc\n FROM tbl\n) AS tmp\nWHERE tmp.b=1\n\nIn that case, the field b is not distinct (as there is no GROUP BY b).\nAnyway, tmp.b=1 would be safe to be pushed down.\n\nDo you mean that a GROUP BY b is done implicitly (and internally) at a\ndeeper level just for the window function and is _therefore_ distinct at\nthat point?\n\nDoes the safe/unsafe marking survive recursion (over subquery\nlevels) then?\n\n>\n> Note that if you just look at the windowClause list, then you might\n> be filtering by named window definitions that appeared in the WINDOW\n> clause but were never actually referenced by any window function.\n> I don't have a problem with blowing off the optimization in such cases.\n> I don't think it's appropriate to expend the cycles that would be needed\n> to discover whether they're all referenced at this point. (If anyone ever\n> complains, it'd be much cheaper to modify the parser to get rid of\n> unreferenced window definitions.)\n>\n> \t\t\tregards, tom lane\n> .\n>\n\nBest regards\nThomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Jan 2014 18:44:50 +0100", "msg_from": "Thomas Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "Thomas Mayer <[email protected]> writes:\n> ... \"mark\" ...: Do I understand you correctly, that you prefer doing the\n> decision elsewhere and store the result (safe/unsafe) boolean value\n> besides to the subquery output fields? For the push-down, a subquery\n> output field must be available anyways.\n\nSee check_output_columns(). The infrastructure for deciding whether\na potentially-pushable qual refers to any unsafe subquery outputs already\nexists; we just need to extend it to consider outputs unsafe if they\ndon't appear in all PARTITION BY lists.\n\n>> Offhand I think the details of testing whether a given output column\n>> appears in a given partition clause are identical to testing whether\n>> it appears in the distinctClause. So you'd just be mechanizing running\n>> through the windowClause list to verify whether this holds for all\n>> the WINDOW clauses.\n\n> When a field is an element of all PARTITION BY clauses of all window\n> functions, it does not mean that this field is distinct.\n\nNo, I didn't say that.
What I meant was that (a) the is_pushdown_safe\nlogic can treat non-partitioning subquery outputs much like non-DISTINCT\noutputs, and (b) the parsetree representation of PARTITION BY is enough\nlike DISTINCT ON that the same kind of test (viz, a targetIsInSortList\ncall) will serve.\n\nI think you need to read the code around subquery_is_pushdown_safe and\nqual_is_pushdown_safe some more.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 03 Jan 2014 13:04:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: window function induces full table scan" }, { "msg_contents": "Am 03.01.2014 19:04, schrieb Tom Lane:\n\n> I think you need to read the code around subquery_is_pushdown_safe and\n> qual_is_pushdown_safe some more.\n>\n> \t\t\tregards, tom lane\n> .\n>\n\nIn general, I'd need to go through the pg source code, which will take\nsome time. For instance, I wanted to see what the requirements are for\na patch to be accepted.\n\nCurrently, I can't provide you with a patch anyway (lack of time and\nknowledge). However, I can possibly give it a try in a few weeks.\n\nIf someone is working on that in the meantime, please let me know.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Jan 2014 19:25:24 +0100", "msg_from": "Thomas Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: window function induces full table scan" } ]
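A minimal illustration of the distinction discussed in this thread, reusing the checkin_node table from the examples above (a sketch only; the data and column types are assumptions, not output from the thread):

-- Safe to push down: user_id appears in the (only) PARTITION BY, so the
-- qual keeps or discards whole partitions and rank() sees the same rows.
SELECT user_id, latitude, longitude
FROM (
    SELECT user_id, latitude, longitude,
           rank() OVER (PARTITION BY user_id ORDER BY date DESC, id DESC) AS datepos
    FROM checkin_node
) AS tmp
WHERE user_id = 43;

-- Not safe to push down: "id > 100" could remove rows from inside a
-- partition, so the surviving rows would (in general) get different ranks.
SELECT user_id, latitude, longitude
FROM (
    SELECT user_id, latitude, longitude, id,
           rank() OVER (PARTITION BY user_id ORDER BY date DESC, id DESC) AS datepos
    FROM checkin_node
) AS tmp
WHERE id > 100;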
[ { "msg_contents": "The problem was described here earlier but there is no answers\nunfortunately:\nhttp://www.postgresql.org/message-id/[email protected]\nIt looks like defect.\n\nCREATE TABLE t1 (c1 INTEGER, id INTEGER PRIMARY KEY);\nINSERT INTO t1 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);\nREINDEX TABLE t1;\nANALYZE t1;\n\nCREATE TABLE t2 (c1 INTEGER, id INTEGER PRIMARY KEY);\nINSERT INTO t2 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);\nREINDEX TABLE t2;\nANALYZE t2;\n\nCREATE TEMP TABLE t3 (ref_id INTEGER);\nINSERT INTO t3 (ref_id) VALUES (333333), (666666);\nANALYZE t3;\n\ntest=> EXPLAIN (ANALYZE) SELECT * FROM (SELECT * FROM t1 UNION ALL SELECT *\nFROM t2) t INNER JOIN t3 ON t3.ref_id = t.id;\n QUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------\n--------------------\n Nested Loop (cost=0.42..34.84 rows=20000 width=12) (actual\ntime=0.046..0.104 rows=4 loops=1)\n -> Seq Scan on t3 (cost=0.00..1.02 rows=2 width=4) (actual\ntime=0.008..0.009 rows=2 loops=1)\n -> Append (cost=0.42..16.89 rows=2 width=8) (actual time=0.023..0.042\nrows=2 loops=2)\n -> Index Scan using t1_pkey on t1 (cost=0.42..8.45 rows=1\nwidth=8) (actual time=0.020..0.022 rows=1 loops=2)\n Index Cond: (id = t3.ref_id)\n -> Index Scan using t2_pkey on t2 (cost=0.42..8.45 rows=1\nwidth=8) (actual time=0.015..0.016 rows=1 loops=2)\n Index Cond: (id = t3.ref_id)\n Total runtime: 0.184 ms\n(8 rows)\n\nThis plan is perfect. But the rows estimation is not: 20000 vs 4.\nAs I can see Pg is able to do correct rows estimation: inner append: rows =\n2, outer seq scan: rows = 2. And nested loop has to know that one is able\nto produce 2 * 2 = 4 rows max.\nMoreover the cost estimation is _correct_! It is corresponded to 'rows=4'.\n\nWhy it is important to make correct row estimation? 'Cause it does matter\nin more complex query.\nLet's join another big table in that query:\n\nCREATE TABLE links (c1 INTEGER PRIMARY KEY, descr TEXT);\nINSERT INTO links (c1, descr) SELECT b, '2' FROM generate_series(1, 100000)\na (b);\nREINDEX TABLE links;\nANALYZE links;\n\ntest=> EXPLAIN (ANALYZE) SELECT * FROM (SELECT * FROM t1 UNION ALL SELECT *\nFROM t2) t INNER JOIN t3 ON t3.ref_id = t.id INNER JOIN links l ON (t.c1 =\nl.c1);\n QUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------\n--------------------------\n Hash Join (cost=2693.43..3127.84 rows=20000 width=18) (actual\ntime=33.619..33.619 rows=0 loops=1)\n Hash Cond: (t1.c1 = l.c1)\n -> Nested Loop (cost=0.42..34.84 rows=20000 width=12) (actual\ntime=0.038..0.078 rows=4 loops=1)\n -> Seq Scan on t3 (cost=0.00..1.02 rows=2 width=4) (actual\ntime=0.006..0.007 rows=2 loops=1)\n -> Append (cost=0.42..16.89 rows=2 width=8) (actual\ntime=0.017..0.029 rows=2 loops=2)\n -> Index Scan using t1_pkey on t1 (cost=0.42..8.45 rows=1\nwidth=8) (actual time=0.015..0.017 rows=1 loops=2)\n Index Cond: (id = t3.ref_id)\n -> Index Scan using t2_pkey on t2 (cost=0.42..8.45 rows=1\nwidth=8) (actual time=0.009..0.009 rows=1 loops=2)\n Index Cond: (id = t3.ref_id)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=6) (actual\ntime=33.479..33.479 rows=100000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 3711kB\n -> Seq Scan on links l (cost=0.00..1443.00 rows=100000 width=6)\n(actual time=0.017..14.853 rows=100000 loops=1)\n Total runtime: 33.716 ms\n(13 rows)\n\nPlanner thinks there'll be 20000 rows when join is performed between \"t\"\nand \"t3\". 
And that's why it makes a decision to use hash join with \"links\"\ntable.\nLet's prove it:\n\nCREATE OR REPLACE FUNCTION public.f1()\n RETURNS SETOF integer\n LANGUAGE plpgsql\n ROWS 20000\nAS $function$\nBEGIN\nRETURN QUERY EXECUTE 'SELECT t.c1 FROM (SELECT * FROM t1 UNION ALL SELECT *\nFROM t2) t INNER JOIN t3 ON t3.ref_id = t.id';\nEND;\n$function$\n\ntest=> explain select * from f1() t(c1) INNER JOIN links l ON (t.c1 = l.c1);\n QUERY PLAN\n---------------------------------------------------------------------------\n Hash Join (cost=2693.25..3293.25 rows=20000 width=10)\n Hash Cond: (t.c1 = l.c1)\n -> Function Scan on f1 t (cost=0.25..200.25 rows=20000 width=4)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=6)\n -> Seq Scan on links l (cost=0.00..1443.00 rows=100000 width=6)\n(5 rows)\n\nThe same \"defect\" plan.\n\ntest=> ALTER FUNCTION f1() ROWS 4;\nALTER FUNCTION\ntest=> explain select * from f1() t(c1) INNER JOIN links l ON (t.c1 = l.c1);\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n Nested Loop (cost=0.54..33.58 rows=4 width=10)\n -> Function Scan on f1 t (cost=0.25..0.29 rows=4 width=4)\n -> Index Scan using links_pkey on links l (cost=0.29..8.31 rows=1\nwidth=6)\n Index Cond: (c1 = t.c1)\n(4 rows)\n\nThe correct/perfect plan.\n\nIn real life I have bigger \"links\" table and wrong plan slows execution\nsignificantly.\nI found several workarounds. And it is not a problem anymore for me.\n\nI just want to report this \"strange thing\".\n\nI tried to look into source code, found some interesting places there but I\nthink it is useless: Pg developers know the situation much better than me.\n\nThe problem was described here earlier but there is no answers unfortunately: http://www.postgresql.org/message-id/[email protected]\nIt looks like defect.CREATE TABLE t1 (c1 INTEGER, id INTEGER PRIMARY KEY);INSERT INTO t1 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);REINDEX TABLE t1;\nANALYZE t1;CREATE TABLE t2 (c1 INTEGER, id INTEGER PRIMARY KEY);INSERT INTO t2 (c1, id) SELECT b, b FROM generate_series(1, 1000000) a (b);REINDEX TABLE t2;ANALYZE t2;\nCREATE TEMP TABLE t3 (ref_id INTEGER);INSERT INTO t3 (ref_id) VALUES (333333), (666666);ANALYZE t3;test=> EXPLAIN (ANALYZE) SELECT * FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) t INNER JOIN t3 ON t3.ref_id = t.id;\n                                                       QUERY PLAN                                                       ----------------------------------------------------------------------------------------------------\n-------------------- Nested Loop  (cost=0.42..34.84 rows=20000 width=12) (actual time=0.046..0.104 rows=4 loops=1)   ->  Seq Scan on t3  (cost=0.00..1.02 rows=2 width=4) (actual time=0.008..0.009 rows=2 loops=1)\n   ->  Append  (cost=0.42..16.89 rows=2 width=8) (actual time=0.023..0.042 rows=2 loops=2)         ->  Index Scan using t1_pkey on t1  (cost=0.42..8.45 rows=1 width=8) (actual time=0.020..0.022 rows=1 loops=2)\n               Index Cond: (id = t3.ref_id)         ->  Index Scan using t2_pkey on t2  (cost=0.42..8.45 rows=1 width=8) (actual time=0.015..0.016 rows=1 loops=2)               Index Cond: (id = t3.ref_id)\n Total runtime: 0.184 ms(8 rows)This plan is perfect. But the rows estimation is not: 20000 vs 4.As I can see Pg is able to do correct rows estimation: inner append: rows = 2, outer seq scan: rows = 2. And nested loop has to know that one is able to produce 2 * 2 = 4 rows max.\nMoreover the cost estimation is _correct_! 
It is corresponded to 'rows=4'.Why it is important to make correct row estimation? 'Cause it does matter in more complex query.\nLet's join another big table in that query:CREATE TABLE links (c1 INTEGER PRIMARY KEY, descr TEXT);INSERT INTO links (c1, descr) SELECT b, '2' FROM generate_series(1, 100000) a (b);\nREINDEX TABLE links;ANALYZE links;test=> EXPLAIN (ANALYZE) SELECT * FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) t INNER JOIN t3 ON t3.ref_id = t.id INNER JOIN links l ON (t.c1 = l.c1);\n                                                          QUERY PLAN                                                          ----------------------------------------------------------------------------------------------------\n-------------------------- Hash Join  (cost=2693.43..3127.84 rows=20000 width=18) (actual time=33.619..33.619 rows=0 loops=1)   Hash Cond: (t1.c1 = l.c1)   ->  Nested Loop  (cost=0.42..34.84 rows=20000 width=12) (actual time=0.038..0.078 rows=4 loops=1)\n         ->  Seq Scan on t3  (cost=0.00..1.02 rows=2 width=4) (actual time=0.006..0.007 rows=2 loops=1)         ->  Append  (cost=0.42..16.89 rows=2 width=8) (actual time=0.017..0.029 rows=2 loops=2)\n               ->  Index Scan using t1_pkey on t1  (cost=0.42..8.45 rows=1 width=8) (actual time=0.015..0.017 rows=1 loops=2)                     Index Cond: (id = t3.ref_id)               ->  Index Scan using t2_pkey on t2  (cost=0.42..8.45 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=2)\n                     Index Cond: (id = t3.ref_id)   ->  Hash  (cost=1443.00..1443.00 rows=100000 width=6) (actual time=33.479..33.479 rows=100000 loops=1)         Buckets: 16384  Batches: 1  Memory Usage: 3711kB\n         ->  Seq Scan on links l  (cost=0.00..1443.00 rows=100000 width=6) (actual time=0.017..14.853 rows=100000 loops=1) Total runtime: 33.716 ms(13 rows)Planner thinks there'll be 20000 rows when join is performed between \"t\" and \"t3\". And that's why it makes a decision to use hash join with \"links\" table.\nLet's prove it:CREATE OR REPLACE FUNCTION public.f1() RETURNS SETOF integer LANGUAGE plpgsql ROWS 20000AS $function$BEGIN\nRETURN QUERY EXECUTE 'SELECT t.c1 FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) t INNER JOIN t3 ON t3.ref_id = t.id';END;$function$\ntest=> explain select * from f1() t(c1) INNER JOIN links l ON (t.c1 = l.c1);                                QUERY PLAN                                 ---------------------------------------------------------------------------\n Hash Join  (cost=2693.25..3293.25 rows=20000 width=10)   Hash Cond: (t.c1 = l.c1)   ->  Function Scan on f1 t  (cost=0.25..200.25 rows=20000 width=4)   ->  Hash  (cost=1443.00..1443.00 rows=100000 width=6)\n         ->  Seq Scan on links l  (cost=0.00..1443.00 rows=100000 width=6)(5 rows)The same \"defect\" plan.test=> ALTER FUNCTION f1() ROWS 4;\nALTER FUNCTIONtest=> explain select * from f1() t(c1) INNER JOIN links l ON (t.c1 = l.c1);                                   QUERY PLAN                                   --------------------------------------------------------------------------------\n Nested Loop  (cost=0.54..33.58 rows=4 width=10)   ->  Function Scan on f1 t  (cost=0.25..0.29 rows=4 width=4)   ->  Index Scan using links_pkey on links l  (cost=0.29..8.31 rows=1 width=6)\n         Index Cond: (c1 = t.c1)(4 rows)The correct/perfect plan.In real life I have bigger \"links\" table and wrong plan slows execution significantly.\nI found several workarounds. 
And it is not a problem anymore for me.I just want to report this \"strange thing\".I tried to look into source code, found some interesting places there but I think it is useless: Pg developers know the situation much better than me.", "msg_date": "Mon, 6 Jan 2014 12:50:47 +0700", "msg_from": "Michael Kolomeitsev <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong rows count estimation in query with simple UNION ALL leads to\n drammatically slow plan" } ]
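One workaround along these lines is to distribute the join over the UNION ALL branches, so each arm is planned and estimated on its own (a sketch; the thread does not say which workarounds were actually used):

SELECT t.c1, t.id
FROM (
    SELECT t1.c1, t1.id FROM t1 INNER JOIN t3 ON t3.ref_id = t1.id
    UNION ALL
    SELECT t2.c1, t2.id FROM t2 INNER JOIN t3 ON t3.ref_id = t2.id
) t
INNER JOIN links l ON (t.c1 = l.c1);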
[ { "msg_contents": "Hi,\n\nI have a query that has each field used in conditions + sort indexed, but\nit scans through all data.\n\nThe query in question looks like:\n\nhttp://pastie.org/8618562\n\nI have each of those condition fields indexed:\n\nNewsArticle.groupId\nNewsArticle.sharedToCommunityIds\nNewsArticle.sourceFilterIds\nCommunityGroupLink.communityId\nCommunityGroupLink.groupId\nSourceFilter.groupId\nSourceFilter.communityId\n\nThis is the data output for explain http://d.pr/i/VGT3\n\nAnd in visual http://d.pr/i/mqiN\n\nLine 7 says rows=99173 which makes it real slow (it can take up to a minute\nto run).\n\nDo you have any ideas? All of them are appreciated!\n\nCheers,\n\n--\nYours sincerely,\nKai Sellgren\n\nHi,\nI have a query that has each field used in conditions + sort indexed, but it scans through all data.\nThe query in question looks like:\nhttp://pastie.org/8618562\nI have each of those condition fields indexed:\nNewsArticle.groupIdNewsArticle.sharedToCommunityIds\nNewsArticle.sourceFilterIds\nCommunityGroupLink.communityIdCommunityGroupLink.groupId\nSourceFilter.groupIdSourceFilter.communityId\nThis is the data output for explain http://d.pr/i/VGT3\nAnd in visual http://d.pr/i/mqiN\nLine 7 says rows=99173 which makes it real slow (it can take up to a minute to run).\nDo you have any ideas? All of them are appreciated!\nCheers,\n--Yours sincerely,\n\nKai Sellgren", "msg_date": "Thu, 9 Jan 2014 23:36:53 +0200", "msg_from": "Kai Sellgren <[email protected]>", "msg_from_op": true, "msg_subject": "Issue with query scanning through all data even with indexes" }, { "msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Kai Sellgren\r\nSent: Thursday, January 09, 2014 4:37 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Issue with query scanning through all data even with indexes\r\n\r\nHi,\r\n\r\nI have a query that has each field used in conditions + sort indexed, but it scans through all data.\r\n\r\nThe query in question looks like:\r\n\r\nhttp://pastie.org/8618562\r\n\r\nI have each of those condition fields indexed:\r\n\r\nNewsArticle.groupId\r\nNewsArticle.sharedToCommunityIds\r\nNewsArticle.sourceFilterIds\r\nCommunityGroupLink.communityId\r\nCommunityGroupLink.groupId\r\nSourceFilter.groupId\r\nSourceFilter.communityId\r\n\r\nThis is the data output for explain http://d.pr/i/VGT3\r\n\r\nAnd in visual http://d.pr/i/mqiN\r\n\r\nLine 7 says rows=99173 which makes it real slow (it can take up to a minute to run).\r\n\r\nDo you have any ideas? 
All of them are appreciated!\r\n\r\nCheers,\r\n\r\n--\r\nYours sincerely,\r\nKai Sellgren\r\n\r\n\r\nCould you try to move WHERE clause conditions into JOIN conditions, something like this:\r\n\r\nSELECT \"NewsArticle\".\"id\"\r\nFROM \"NewsArticle\"\r\nLEFT JOIN \"CommunityGroupLink\" ON \"CommunityGroupLink\".\"communityId\" = 1538 AND (\"CommunityGroupLink\".\"groupId\" = \"NewsArticle\".\"groupId\")\r\n AND((1538 = ANY (\"NewsArticle\".\"sharedToCommunityIds\") OR (\"CommunityGroupLink\".\"id\" IS NOT NULL)))\r\nLEFT JOIN \"SourceFilter\" ON \"SourceFilter\".\"communityId\" = 1538 AND \"SourceFilter\".\"groupId\" = \"NewsArticle\".\"groupId\"\r\n AND((\"SourceFilter\".\"id\" IS NULL OR \"SourceFilter\".\"id\" = ANY(\"NewsArticle\".\"sourceFilterIds\")));\r\n\r\n\r\nNot sure what you do with \"LIMIT 35\" - it's not shown in \"explain\" plan.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jan 2014 15:41:49 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with query scanning through all data even with\n indexes" } ]
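A further option that may help here (an assumption, not something tested in the thread): predicates of the form 1538 = ANY(array_column) cannot use a btree index on the array column at all. If "sharedToCommunityIds" and "sourceFilterIds" are plain integer arrays, a GIN index plus the containment operator is indexable:

CREATE INDEX newsarticle_shared_communities_gin
    ON "NewsArticle" USING gin ("sharedToCommunityIds");

-- rewrite:  1538 = ANY ("NewsArticle"."sharedToCommunityIds")
-- as:       "NewsArticle"."sharedToCommunityIds" @> ARRAY[1538]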
[ { "msg_contents": "Can any one teel me the query to get value for *Number of cached blocks read,\nNumber of cached index blocks read, Number of cached sequence blocks read*\nin Postgresql? I just find all other queries, except this. so if you know\nkindly help me.\n\nThanks in advance.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/PostgreSQL-query-for-cache-performance-counters-tp5786245.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 10 Jan 2014 01:46:50 -0800 (PST)", "msg_from": "ambilalmca <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL query for cache performance counters?" } ]
[ { "msg_contents": "Hi,\n\nI'm experiecing slow count performance:\n\nSELECT COUNT(*) AS \"count\"\nFROM \"NewsArticle\"\n\nTakes 210 ms. I've run analyze and vacuum. I'm on 9.3. Here're the stats\nhttp://d.pr/i/6YoB\n\nI don't understand why is it that slow. It returns 1 integer, and counts\nwithout filters.\n\nThis performs quickly:\n\nSELECT reltuples AS count\nFROM pg_class\nWHERE relname = 'NewsArticle';\n\nBut I'd like to add conditions so I don't like the last method.\n\n\n--\nYours sincerely,\nKai Sellgren\n\nHi,\nI'm experiecing slow count performance:\nSELECT COUNT(*) AS \"count\"\n\nFROM \"NewsArticle\"\n\nTakes 210 ms. I've run analyze and vacuum. I'm on 9.3. Here're the stats http://d.pr/i/6YoB\nI don't understand why is it that slow. It returns 1 integer, and counts without filters.\nThis performs quickly:\nSELECT reltuples AS countFROM pg_class\nWHERE relname = 'NewsArticle';\nBut I'd like to add conditions so I don't like the last method.\n--Yours sincerely,\n\nKai Sellgren", "msg_date": "Mon, 13 Jan 2014 23:57:20 +0200", "msg_from": "Kai Sellgren <[email protected]>", "msg_from_op": true, "msg_subject": "Slow counting on v9.3" }, { "msg_contents": "Hi Kai,\n\nYou are right, postgresql Count() function is slow, because; It's\nphysically count the rows one by one.\n\nOther database systems using indexes for counting, but postgresql walk\nthrough all rows in multiple transactions with different row states for\ncalculating the real row count. This is about architecture of postgresql.\n\nIf you use WHERE condition on indexed column in your query, this will be\nmuch faster.\n\nSource: http://wiki.postgresql.org/wiki/Slow_Counting<http://wiki.postgresql.org/wiki/Slow_Counting>\n\n\n\n\n\n\nOn Mon, Jan 13, 2014 at 11:57 PM, Kai Sellgren <[email protected]>wrote:\n\n> Hi,\n>\n> I'm experiecing slow count performance:\n>\n> SELECT COUNT(*) AS \"count\"\n> FROM \"NewsArticle\"\n>\n> Takes 210 ms. I've run analyze and vacuum. I'm on 9.3. Here're the stats\n> http://d.pr/i/6YoB\n>\n> I don't understand why is it that slow. It returns 1 integer, and counts\n> without filters.\n>\n> This performs quickly:\n>\n> SELECT reltuples AS count\n> FROM pg_class\n> WHERE relname = 'NewsArticle';\n>\n> But I'd like to add conditions so I don't like the last method.\n>\n>\n> --\n> Yours sincerely,\n> Kai Sellgren\n>\n\nHi Kai, You are right, postgresql Count() function is slow, because; It's physically count the rows one by one. Other database systems using indexes for counting, but postgresql walk through all rows in multiple transactions with different row states for calculating the real row count. This is about architecture of postgresql. \nIf you use WHERE condition on indexed column in your query, this will be much faster. Source: http://wiki.postgresql.org/wiki/Slow_Counting\nOn Mon, Jan 13, 2014 at 11:57 PM, Kai Sellgren <[email protected]> wrote:\nHi,\n\nI'm experiecing slow count performance:\nSELECT COUNT(*) AS \"count\"\n\n\n\nFROM \"NewsArticle\"\n\n\n\nTakes 210 ms. I've run analyze and vacuum. I'm on 9.3. Here're the stats http://d.pr/i/6YoB\nI don't understand why is it that slow. 
It returns 1 integer, and counts without filters.\nThis performs quickly:\nSELECT reltuples AS countFROM pg_class\nWHERE relname = 'NewsArticle';\nBut I'd like to add conditions so I don't like the last method.\n--Yours sincerely,\n\n\n\nKai Sellgren", "msg_date": "Thu, 16 Jan 2014 10:37:52 +0200", "msg_from": "=?UTF-8?B?TWVobWV0IMOHYWtvxJ9sdQ==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow counting on v9.3" }, { "msg_contents": "Kai Sellgren <kaisellgren 'at' gmail.com> writes:\n\n> Hi,\n>\n> I'm experiecing slow count performance:\n>\n> SELECT COUNT(*) AS \"count\"\n> FROM \"NewsArticle\"\n>\n> Takes 210 ms. I've run analyze and vacuum. I'm on 9.3. Here're the stats http:/\n> /d.pr/i/6YoB\n>\n> I don't understand why is it that slow. It returns 1 integer, and counts\n> without filters.\n\nYou might actually have a lot more dead tuples than reported in\nstatistic. Last vacuum is old according to your screenshot. Try\n\"VACUUM FULL ANALYZE\" on your table, then try again counting.\n\n\n> This performs quickly:\n>\n> SELECT reltuples AS count\n> FROM pg_class\n> WHERE relname = 'NewsArticle';\n\nThis is not the same. This one uses precomputed statistics, and\ndoesn't scan the actual table data.\n\n\n> But I'd like to add conditions so I don't like the last method.\n>\n>\n> --\n> Yours sincerely,\n> Kai Sellgren\n\n-- \nGuillaume Cottenceau\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jan 2014 09:42:57 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow counting on v9.3" } ]
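When an exact figure is not required, the planner's row estimate can be reused even with arbitrary conditions. This sketch is adapted from the count-estimate recipe on the PostgreSQL wiki (results are approximate, and the filtered column in the usage example is only an assumption):

CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS integer AS $$
DECLARE
    rec  record;
    rows integer;
BEGIN
    -- Parse the estimated row count out of the top line of the plan.
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$$ LANGUAGE plpgsql;

SELECT count_estimate('SELECT 1 FROM "NewsArticle" WHERE "groupId" = 1');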
[ { "msg_contents": "We have a 9.1.11 backend (Ubuntu 12.04 x86_64, m1.medium EC2 instance)\nwhich seems to be stuck at COMMIT for 2 days now:\n\nmydb=# SELECT procpid, waiting, current_query,\nCURRENT_TIMESTAMP - query_start AS query_elapsed,\nCURRENT_TIMESTAMP - xact_start AS xact_elapsed\nFROM pg_stat_activity WHERE procpid != pg_backend_pid() AND\ncurrent_query != '<IDLE>';\n-[ RECORD 1 ]-+-----------------------\nprocpid | 6061\nwaiting | f\ncurrent_query | COMMIT;\nquery_elapsed | 2 days 08:59:17.619142\nxact_elapsed | 3 days 15:48:10.739912\n\n\nThe transaction behind that COMMIT has been the only thing running on\nthis Postgres instance for the past 3 days or so, since Postgres was\nstarted on that machine. I spun the EC2 instance for this database up\nsolely to test a database subsetting process, which is what the\ntransaction was doing before it got stuck at COMMIT -- using a bunch\nof DELETEs and ALTER TABLE ... DROP|ADD CONSTRAINTs to delete 90% or\nso of our data in order to be able to pg_dump a slimmed-down\ndevelopment copy.\n\nThe EC2 instances we use have separate EBS-backed volumes for the\nPostgreSQL data and WAL directories. The backend in question seems to\nbe stuck reading a ton of data from the data partition: the monitoring\nfor those EBS volumes shows those volumes have been hammered reading a\nconstant aggregate 90MB/sec since that COMMIT started. The write\nbandwidth to the postgresql-data partition has been almost nil since\nthe COMMIT, and there has been no read/write activity on the WAL\nvolumes.\n\nHere, we can see that backend has managed to read 22 TB despite the\nfact that the entire database is only 228 GB on disk.\n\n$ sudo cat /proc/6061/io\nrchar: 24505414843923\nwchar: 23516159014\nsyscr: 2991395854\nsyscw: 2874613\nread_bytes: 24791719338496\nwrite_bytes: 22417580032\ncancelled_write_bytes: 221208576\n\n$ df -h /dev/md0 /dev/md1\nFilesystem Size Used Avail Use% Mounted on\n/dev/md0 480G 228G 253G 48% /mnt/ebs/postgresql-data\n/dev/md1 32G 20G 13G 61% /mnt/ebs/postgresql-wal\n\nRunning an strace on the backend shows a whole ton of read() calls and\nthe occasional lseek(). I grabbed a backtrace of the backend with gdb,\nattached.\n\nAttached also are the non-default pg_settings for this instance.\nYou'll notice that fsync, full_page_writes, and autovacuum are all\noff: this is intentional, since this instance is transient and has\nnothing important on it. There are no interesting errors in the\nPostgres log files since it was spun up.\n\nAny ideas on how to further diagnose or avoid this problem?\n\nJosh\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 14 Jan 2014 11:51:42 -0500", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": true, "msg_subject": "COMMIT stuck for days after bulk delete" }, { "msg_contents": "Josh Kupershmidt <[email protected]> writes:\n> We have a 9.1.11 backend (Ubuntu 12.04 x86_64, m1.medium EC2 instance)\n> which seems to be stuck at COMMIT for 2 days now:\n> ...\n> The transaction behind that COMMIT has been the only thing running on\n> this Postgres instance for the past 3 days or so, since Postgres was\n> started on that machine. I spun the EC2 instance for this database up\n> solely to test a database subsetting process, which is what the\n> transaction was doing before it got stuck at COMMIT -- using a bunch\n> of DELETEs and ALTER TABLE ... 
DROP|ADD CONSTRAINTs to delete 90% or\n> so of our data in order to be able to pg_dump a slimmed-down\n> development copy.\n\nA plausible guess is that the backend is running around trying to verify\nthat some deferred foreign key constraints still hold. But without\nknowing what your schema is, that's only a guess.\n\nIf that is it, a likely solution is to drop *all* the FK constraints\nbefore doing the bulk delete, then (in a new transaction, probably)\nrecreate the ones you still want.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 14 Jan 2014 12:36:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COMMIT stuck for days after bulk delete" }, { "msg_contents": "On Tue, Jan 14, 2014 at 12:36 PM, Tom Lane <[email protected]> wrote:\n> Josh Kupershmidt <[email protected]> writes:\n>> We have a 9.1.11 backend (Ubuntu 12.04 x86_64, m1.medium EC2 instance)\n>> which seems to be stuck at COMMIT for 2 days now:\n>> ...\n>> The transaction behind that COMMIT has been the only thing running on\n>> this Postgres instance for the past 3 days or so, since Postgres was\n>> started on that machine. I spun the EC2 instance for this database up\n>> solely to test a database subsetting process, which is what the\n>> transaction was doing before it got stuck at COMMIT -- using a bunch\n>> of DELETEs and ALTER TABLE ... DROP|ADD CONSTRAINTs to delete 90% or\n>> so of our data in order to be able to pg_dump a slimmed-down\n>> development copy.\n>\n> A plausible guess is that the backend is running around trying to verify\n> that some deferred foreign key constraints still hold. But without\n> knowing what your schema is, that's only a guess.\n\nYeah, that's a good guess. A bunch of the FK constraints I am dropping\nand re-adding are marked DEFERRABLE INITIALLY DEFERRED; there are 167\ncounted by:\n\nSELECT COUNT(*)\n FROM pg_catalog.pg_constraint c\n WHERE contype = 'f' AND condeferrable AND condeferred AND\n connamespace =\n (SELECT oid FROM pg_catalog.pg_namespace WHERE nspname = 'public') ;\n\n> If that is it, a likely solution is to drop *all* the FK constraints\n> before doing the bulk delete, then (in a new transaction, probably)\n> recreate the ones you still want.\n\nWill try that, thanks for the suggestion.\n\nJosh\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 14 Jan 2014 13:24:55 -0500", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COMMIT stuck for days after bulk delete" } ]
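A sketch of the drop-everything-first approach (the generated statements still have to be saved and executed separately; pg_get_constraintdef() preserves the DEFERRABLE options for recreating the constraints afterwards):

-- Save the current FK definitions:
SELECT conrelid::regclass AS table_name, conname,
       pg_get_constraintdef(oid) AS condef
FROM pg_catalog.pg_constraint
WHERE contype = 'f'
  AND connamespace =
      (SELECT oid FROM pg_catalog.pg_namespace WHERE nspname = 'public');

-- Generate the DROPs to run before the bulk delete:
SELECT format('ALTER TABLE %s DROP CONSTRAINT %I;',
              conrelid::regclass, conname)
FROM pg_catalog.pg_constraint
WHERE contype = 'f'
  AND connamespace =
      (SELECT oid FROM pg_catalog.pg_namespace WHERE nspname = 'public');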
[ { "msg_contents": "For Postgresql:\n\n> select version();\n version\n\n-------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.2 on amd64-portbld-freebsd9.2, compiled by cc (GCC) 4.2.1\n20070831 patched [FreeBSD], 64-bit\n\nFor table:\n\n> \\d core.cookies2tags\n Tabela \"core.cookies2tags\"\n Kolumna | Typ | Modyfikatory\n\n-----------------------+-----------------------------+--------------------------\n co2ta_co_id | integer | niepusty\n co2ta_cl_id | integer | niepusty\n co2ta_ta_id | integer | niepusty\n co2ta_ta_ukey_id | text |\n co2ta_ta_ukey_hash | character(40) |\n co2ta_fpr_id | integer |\n co2ta_date_first | timestamp without time zone | niepusty domyślnie\nnow()\n co2ta_date_last | timestamp without time zone | niepusty domyślnie\nnow()\n co2ta_count_all | integer | niepusty domyślnie 1\n co2ta_count_1 | integer | niepusty domyślnie 1\n co2ta_date_1 | date | niepusty\n co2ta_datelist_date | date |\n co2ta_datelist_counts | integer[] |\n co2ta_ta_params | hstore |\n co2ta_fca_id | integer |\n co2ta_mco_id | integer |\nIndeksy:\n \"cookies2tags_ukey1\" UNIQUE, btree (co2ta_co_id, co2ta_cl_id,\nco2ta_ta_id, co2ta_ta_ukey_hash) WHERE co2ta_ta_ukey_hash IS NOT NULL\n \"cookies2tags_ukey2\" UNIQUE, btree (co2ta_co_id, co2ta_cl_id,\nco2ta_ta_id) WHERE co2ta_ta_ukey_hash IS NULL\n \"cookies2tags_co_id_key\" btree (co2ta_co_id)\n \"cookies2tags_co_id_key2\" btree (co2ta_co_id, co2ta_cl_id)\n \"cookies2tags_key1\" btree (co2ta_cl_id, co2ta_ta_id, co2ta_ta_ukey_hash)\n \"cookies2tags_key2\" btree (co2ta_cl_id, co2ta_ta_ukey_hash) WHERE\nco2ta_fpr_id IS NULL AND (co2ta_ta_id = ANY (ARRAY[1, 2, 3, 4]))\n \"cookies2tags_key3\" btree (co2ta_cl_id, co2ta_ta_id, co2ta_date_1)\n \"cookies2tags_key4\" btree (co2ta_mco_id)\n \"idx_co_id_date_last\" btree (co2ta_co_id, co2ta_date_last)\n\nTable is rather big (about 150M rows).\n\nFor this query:\n\nWITH s AS (\n SELECT\n co2ta_co_id AS co_id,\n co2ta_ta_id AS ta_id,\n MIN(co2ta_date_last) AS co2ta_date_last_min,\n MAX(co2ta_date_last) AS co2ta_date_last_max,\n COUNT(DISTINCT(co2ta_ta_ukey_hash)) AS co2ta_ta_ukey_count,\n 1\n FROM\n core.cookies2tags co2ta\n WHERE\n co2ta.co2ta_co_id =\nANY('{\"1\",\"123567429\",\"123872617\",\"123929118\",\"123930244\",\"123935996\",\"123937156\",\"123944495\",\"123944999\",\"123945469\"}'::int[])\nAND\n co2ta.co2ta_cl_id = 97 AND\n co2ta.co2ta_ta_id = ANY('{\"142\"}'::int[])\n GROUP BY\n ta_id,\n co_id\n)\nSELECT\n *\nFROM\n s\nUNION ALL\nSELECT\n s.co_id,\n NULL,\n MIN(s.co2ta_date_last_min),\n MAX(s.co2ta_date_last_min),\n NULL,\n 1\nFROM\n s\nGROUP BY\n s.co_id\n\ni get following plan:\n\n\nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=49.38..49.44 rows=2 width=36) (actual time=39.009..39.034\nrows=16 loops=1)\n CTE s\n -> GroupAggregate (cost=49.35..49.38 rows=1 width=57) (actual\ntime=39.006..39.016 rows=8 loops=1)\n -> Sort (cost=49.35..49.35 rows=1 width=57) (actual\ntime=38.993..38.993 rows=8 loops=1)\n Sort Key: co2ta.co2ta_ta_id, co2ta.co2ta_co_id\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using cookies2tags_key3 on cookies2tags\nco2ta (cost=0.57..49.34 rows=1 width=57) (actual time=38.339..38.982\nrows=8 loops=1)\n Index Cond: ((co2ta_cl_id = 97) AND (co2ta_ta_id =\nANY ('{142}'::integer[])))\n Filter: (co2ta_co_id = 
ANY\n('{1,123567429,123872617,123929118,123930244,123935996,123937156,123944495,123944999,123945469}'::integer[]))\n Rows Removed by Filter: 32120\n -> CTE Scan on s (cost=0.00..0.02 rows=1 width=36) (actual\ntime=39.008..39.021 rows=8 loops=1)\n -> HashAggregate (cost=0.03..0.04 rows=1 width=12) (actual\ntime=0.009..0.010 rows=8 loops=1)\n -> CTE Scan on s s_1 (cost=0.00..0.02 rows=1 width=12) (actual\ntime=0.000..0.001 rows=8 loops=1)\n Total runtime: 39.079 ms\n\nBut if i remove one of co2ta_co_id in query (eq. \"1\") i get:\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=45.28..45.35 rows=2 width=36) (actual time=0.233..0.255\nrows=16 loops=1)\n CTE s\n -> GroupAggregate (cost=45.25..45.28 rows=1 width=57) (actual\ntime=0.230..0.241 rows=8 loops=1)\n -> Sort (cost=45.25..45.26 rows=1 width=57) (actual\ntime=0.224..0.225 rows=8 loops=1)\n Sort Key: co2ta.co2ta_ta_id, co2ta.co2ta_co_id\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using cookies2tags_co_id_key2 on\ncookies2tags co2ta (cost=0.58..45.24 rows=1 width=57) (actual\ntime=0.031..0.215 rows=8 loops=1)\n Index Cond: ((co2ta_co_id = ANY\n('{123567429,123872617,123929118,123930244,123935996,123937156,123944495,123944999,123945469}'::integer[]))\nAND (co2ta_cl_id = 97))\n Filter: (co2ta_ta_id = ANY ('{142}'::integer[]))\n Rows Removed by Filter: 187\n -> CTE Scan on s (cost=0.00..0.02 rows=1 width=36) (actual\ntime=0.232..0.244 rows=8 loops=1)\n -> HashAggregate (cost=0.03..0.04 rows=1 width=12) (actual\ntime=0.007..0.009 rows=8 loops=1)\n -> CTE Scan on s s_1 (cost=0.00..0.02 rows=1 width=12) (actual\ntime=0.001..0.001 rows=8 loops=1)\n Total runtime: 0.321 ms\n\nThis plan is much faster. 
I notice that if I put more co2ta_co_id values in\nthe query than some threshold, PostgreSQL creates a suboptimal plan.\n\nI wonder what I should tune to get PostgreSQL to use the other index for queries\nwith more co2ta_co_id values?\nCurrently, as a hotfix, I split the input values and execute more queries with fewer\nco2ta_co_id values each.\n\n-- \nPiotr Gasidło\n
                                                       QUERY PLAN                                                                           \n---------------------------------------------------------------------------------------------------------------------------------------------------------------- Append  (cost=49.38..49.44 rows=2 width=36) (actual time=39.009..39.034 rows=16 loops=1)\n   CTE s     ->  GroupAggregate  (cost=49.35..49.38 rows=1 width=57) (actual time=39.006..39.016 rows=8 loops=1)           ->  Sort  (cost=49.35..49.35 rows=1 width=57) (actual time=38.993..38.993 rows=8 loops=1)\n                 Sort Key: co2ta.co2ta_ta_id, co2ta.co2ta_co_id                 Sort Method: quicksort  Memory: 25kB                 ->  Index Scan using cookies2tags_key3 on cookies2tags co2ta  (cost=0.57..49.34 rows=1 width=57) (actual time=38.339..38.982 rows=8 loops=1)\n                       Index Cond: ((co2ta_cl_id = 97) AND (co2ta_ta_id = ANY ('{142}'::integer[])))                       Filter: (co2ta_co_id = ANY ('{1,123567429,123872617,123929118,123930244,123935996,123937156,123944495,123944999,123945469}'::integer[]))\n                       Rows Removed by Filter: 32120   ->  CTE Scan on s  (cost=0.00..0.02 rows=1 width=36) (actual time=39.008..39.021 rows=8 loops=1)   ->  HashAggregate  (cost=0.03..0.04 rows=1 width=12) (actual time=0.009..0.010 rows=8 loops=1)\n         ->  CTE Scan on s s_1  (cost=0.00..0.02 rows=1 width=12) (actual time=0.000..0.001 rows=8 loops=1) Total runtime: 39.079 msBut if i remove one of co2ta_co_id in query (eq. \"1\") i get:\n                                                                                        QUERY PLAN                                                                                         \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Append  (cost=45.28..45.35 rows=2 width=36) (actual time=0.233..0.255 rows=16 loops=1)\n   CTE s     ->  GroupAggregate  (cost=45.25..45.28 rows=1 width=57) (actual time=0.230..0.241 rows=8 loops=1)           ->  Sort  (cost=45.25..45.26 rows=1 width=57) (actual time=0.224..0.225 rows=8 loops=1)\n                 Sort Key: co2ta.co2ta_ta_id, co2ta.co2ta_co_id                 Sort Method: quicksort  Memory: 25kB                 ->  Index Scan using cookies2tags_co_id_key2 on cookies2tags co2ta  (cost=0.58..45.24 rows=1 width=57) (actual time=0.031..0.215 rows=8 loops=1)\n                       Index Cond: ((co2ta_co_id = ANY ('{123567429,123872617,123929118,123930244,123935996,123937156,123944495,123944999,123945469}'::integer[])) AND (co2ta_cl_id = 97))                       Filter: (co2ta_ta_id = ANY ('{142}'::integer[]))\n                       Rows Removed by Filter: 187   ->  CTE Scan on s  (cost=0.00..0.02 rows=1 width=36) (actual time=0.232..0.244 rows=8 loops=1)   ->  HashAggregate  (cost=0.03..0.04 rows=1 width=12) (actual time=0.007..0.009 rows=8 loops=1)\n         ->  CTE Scan on s s_1  (cost=0.00..0.02 rows=1 width=12) (actual time=0.001..0.001 rows=8 loops=1) Total runtime: 0.321 msThis plan is much faster. 
", "msg_date": "Fri, 17 Jan 2014 23:57:54 +0100", "msg_from": "=?UTF-8?Q?Piotr_Gasid=C5=82o?= <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong index selection" }, { "msg_contents": "=?UTF-8?Q?Piotr_Gasid=C5=82o?= <[email protected]> writes:
> [ planner prefers this: ]

> -> Index Scan using cookies2tags_key3 on cookies2tags
> co2ta (cost=0.57..49.34 rows=1 width=57) (actual time=38.339..38.982
> rows=8 loops=1)
> Index Cond: ((co2ta_cl_id = 97) AND (co2ta_ta_id =
> ANY ('{142}'::integer[])))
> Filter: (co2ta_co_id = ANY
> ('{1,123567429,123872617,123929118,123930244,123935996,123937156,123944495,123944999,123945469}'::integer[]))
> Rows Removed by Filter: 32120

> [ over this: ]

> -> Index Scan using cookies2tags_co_id_key2 on
> cookies2tags co2ta (cost=0.58..45.24 rows=1 width=57) (actual
> time=0.031..0.215 rows=8 loops=1)
> Index Cond: ((co2ta_co_id = ANY
> ('{123567429,123872617,123929118,123930244,123935996,123937156,123944495,123944999,123945469}'::integer[]))
> AND (co2ta_cl_id = 97))
> Filter: (co2ta_ta_id = ANY ('{142}'::integer[]))
> Rows Removed by Filter: 187

Well, as you can see the planner thinks these are going to cost about the
same, but actually the first one fetches a lot of rows that end up getting
rejected by the filter condition. That means the co2ta_ta_id condition
is somewhat redundant given the other two, much more so than the
co2ta_co_id condition is given the other two. If the individual
conditions are estimated about right (have you checked?), then that means
that this is an artifact of cross-column correlation statistics, which
unfortunately Postgres doesn't know anything about. Is there any way of
normalizing the data to reduce the cross-column correlations?

My other advice would be to simplify and reduce the set of indexes ---
IMO someone's gone way overboard with index creation here. It's unlikely
that those indexes are all pulling their weight for their maintenance
costs, and you can reduce problems with choosing the \"wrong\" index if
that index simply isn't there.

On the other hand, if this is a near-read-only table such that having lots
of indexes is basically free, you could fix the problem by creating an
index on all three columns, which should dominate both of these choices.

			regards, tom lane


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 17 Jan 2014 19:33:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong index selection" } ]
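A minimal sketch of Tom's last suggestion (the index name below is invented; the columns are the three filter columns from the thread): a single btree covering all three conditions, so the planner no longer has to choose between two partially matching indexes.

  -- Hypothetical name; CONCURRENTLY avoids blocking writes during the build.
  CREATE INDEX CONCURRENTLY cookies2tags_cl_ta_co_key
      ON core.cookies2tags (co2ta_cl_id, co2ta_ta_id, co2ta_co_id);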
[ { "msg_contents": "Hello.

A description of what you are trying to achieve and what results you expect:
There are two PG servers, one physical and one virtual.

Physical server hardware:
1 Xeon(R) CPU E31235 @ 3.20GHz
8GB RAM
sw RAID 2x250GB WesternDigital SATA.
iperf test between PC and physical server showed 891 Mbit/sec (on average)

Virtual server:
2 sockets x 2 cores vCPU
RAM 8GB
iSCSI 1GBit/s volume for DB over dedicated VLAN, iperf test showed 977 Mbit/sec
iperf test between PC and virtual server showed 892 Mbit/sec

I run the same query with EXPLAIN ANALYZE via psql on my PC with \"\timing on\" and I get similar server runtimes for both servers but different psql times.
When I run the same query on the servers' own command line I get similar results (server runtime and psql timing) on both the physical and the virtual server (see table below).

Output:
~~~~~~
EXPLAIN ANALYZE SELECT field1, field2
FROM table1 WHERE field2 = 89170844;
                                            QUERY PLAN
---------------------------------------------------------------------------------------------------
 Index Scan using \"PK_table1\" on \"table1\" (cost=0.42..8.44 rows=1 width=42) (actual rows=1 loops=1)
   Index Cond: (\"field2\" = 89170844)
 Total runtime: 0.054 ms
(3 rows)

Time: 1.211 ms

                           | Physical  | Virtual
--------------------------------------------------
from PC \"Total runtime\"    | 0.05x ms  | 0.05x ms
--------------------------------------------------
from PC timing             | 0.7 ms    | 1.211 ms  <-- strange
--------------------------------------------------
from server \"Total runtime\"| 0.05x ms  | 0.05x ms
--------------------------------------------------
from server timing         | 0.55 ms   | 0.6 ms

PostgreSQL version number you are running:
Physical - postgresql91.x86_64 (9.1.11-1PGDG.rhel6) installed via yum from yum.postgresql.org
PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit

Virtual - postgresql93.x86_64 (9.3.2-1PGDG.rhel6) installed via yum from yum.postgresql.org
PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit

Changes made to the settings in the postgresql.conf file:

Physical server:
 name                         | current_setting | source
 application_name             | psql | client
 archive_command              | test ! -f /mnt/storage/archivedir/%f.gz && gzip -c %p >/mnt/storage/archivedir/%f.gz | configuration file
 archive_mode                 | on | configuration file
 autovacuum                   | on | configuration file
 checkpoint_completion_target | 0.9 | configuration file
 checkpoint_segments          | 16 | configuration file
 checkpoint_timeout           | 15min | configuration file
 client_encoding              | UTF8 | client
 constraint_exclusion         | on | configuration file
 DateStyle                    | ISO, DMY | configuration file
 default_statistics_target    | 50 | configuration file
 default_text_search_config   | pg_catalog.russian | configuration file
 effective_cache_size         | 704MB | configuration file
 lc_messages                  | en_US.UTF-8 | configuration file
 lc_monetary                  | ru_RU.UTF-8 | configuration file
 lc_numeric                   | ru_RU.UTF-8 | configuration file
 lc_time                      | ru_RU.UTF-8 | configuration file
 listen_addresses             | * | configuration file
 log_autovacuum_min_duration  | 500ms | configuration file
 log_checkpoints              | on | configuration file
 log_connections              | off | configuration file
 log_destination              | syslog | configuration file
 log_directory                | pg_log | configuration file
 log_error_verbosity          | verbose | configuration file
 log_filename                 | postgresql-%Y-%m-%d_%H%M%S.log | configuration file
 log_line_prefix              | %m db=%d u=%u host=%h | configuration file
 log_min_duration_statement   | 100ms | configuration file
 log_min_error_statement      | info | configuration file
 log_min_messages             | info | configuration file
 log_rotation_age             | 1d | configuration file
 log_rotation_size            | 0 | configuration file
 log_temp_files               | 0 | configuration file
 log_timezone                 | W-SU | environment variable
 log_truncate_on_rotation     | on | configuration file
 logging_collector            | on | configuration file
 maintenance_work_mem         | 60MB | configuration file
 max_connections              | 120 | configuration file
 max_stack_depth              | 2MB | environment variable
 port                         | 5432 | command line
 shared_buffers               | 240MB | configuration file
 syslog_facility              | local0 | configuration file
 syslog_ident                 | postgres | configuration file
 TimeZone                     | W-SU | environment variable
 wal_buffers                  | 8MB | configuration file
 wal_level                    | archive | configuration file
 work_mem                     | 16MB | configuration file

Virtual:
 name                         | current_setting | source
 application_name             | psql | client
 archive_command              | test ! -f /mnt/storage/archivedir/%f.gz && gzip -c %p >/mnt/storage/archivedir/%f.gz | configuration file
 archive_mode                 | on | configuration file
 autovacuum                   | on | configuration file
 checkpoint_completion_target | 0.9 | configuration file
 checkpoint_segments          | 16 | configuration file
 checkpoint_timeout           | 15min | configuration file
 client_encoding              | UTF8 | client
 constraint_exclusion         | on | configuration file
 DateStyle                    | ISO, DMY | configuration file
 default_statistics_target    | 50 | configuration file
 default_text_search_config   | pg_catalog.russian | configuration file
 effective_cache_size         | 6000MB | configuration file
 lc_messages                  | en_US.UTF-8 | configuration file
 lc_monetary                  | ru_RU.UTF-8 | configuration file
 lc_numeric                   | ru_RU.UTF-8 | configuration file
 lc_time                      | ru_RU.UTF-8 | configuration file
 listen_addresses             | * | configuration file
 log_autovacuum_min_duration  | 500ms | configuration file
 log_checkpoints              | on | configuration file
 log_connections              | off | configuration file
 log_destination              | syslog | configuration file
 log_error_verbosity          | verbose | configuration file
 log_line_prefix              | %m db=%d u=%u host=%h | configuration file
 log_min_duration_statement   | 500ms | configuration file
 log_min_error_statement      | info | configuration file
 log_min_messages             | info | configuration file
 log_rotation_age             | 1d | configuration file
 log_rotation_size            | 0 | configuration file
 log_temp_files               | 0 | configuration file
 log_truncate_on_rotation     | on | configuration file
 logging_collector            | on | configuration file
 maintenance_work_mem         | 240MB | configuration file
 max_connections              | 120 | configuration file
 max_stack_depth              | 2MB | environment variable
 port                         | 5432 | command line
 shared_buffers               | 1GB | configuration file
 syslog_facility              | local0 | configuration file
 syslog_ident                 | postgres | configuration file
 wal_buffers                  | 8MB | configuration file
 wal_level                    | archive | configuration file
 work_mem                     | 120MB | configuration file

Operating system and version:
Physical - Scientific Linux release 6.2 (Carbon).
uname -a:
Linux pg.arc.world 2.6.32-279.5.1.el6.x86_64 #1 SMP Tue Aug 14 16:11:42 CDT 2012 x86_64 x86_64 x86_64 GNU/Linux

Virtual - Scientific Linux release 6.4 (Carbon).
uname -a:
Linux vm-pg.arc.world 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 20:37:17 CST 2013 x86_64 x86_64 x86_64 GNU/Linux

What program you're using to connect to PostgreSQL:
psql 9.3.2 on PC
psql 9.1.11 on Physical server
psql 9.3.2 on Virtual server
No connection pool, load balancer or application server.

Is there anything relevant or unusual in the PostgreSQL server logs?:
No

Thank you in advance,
Vladimir Scherbo
", "msg_date": "Mon, 20 Jan 2014 17:55:50 +0400", "msg_from": "ARCEnergo <[email protected]>", "msg_from_op": true, "msg_subject": "Time of query result delivery" }, { "msg_contents": "On 01/20/2014 05:55 AM, ARCEnergo wrote:
> Time: 1.211 ms
>                            | Physical  | Virtual
> --------------------------------------------------
> from PC \"Total runtime\"    | 0.05x ms  | 0.05x ms
> --------------------------------------------------
> from PC timing             | 0.7 ms    | 1.211 ms  <-- strange
> --------------------------------------------------
> from server \"Total runtime\"| 0.05x ms  | 0.05x ms
> --------------------------------------------------
> from server timing         | 0.55 ms   | 0.6 ms

I'm really not clear on what you're trying to measure here.
If you're doing \"time\" from your PC, then network transmission time completely
dominates your response time ... and that can be affected by all kinds of
random variables.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 21 Jan 2014 10:55:49 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time of query result delivery" }, { "msg_contents": "On 01/22/2014 12:19 AM, ARCEnergo wrote:
> Hello, Josh.
> 
> Thank you for your response.
> I measured those times many, many times,
> and I always got a larger response time from the VM server.
> And the server times (EXPLAIN ANALYZE) were always very similar.
> And several tests with iperf always showed similar results for both the physical and
> VM servers.
> So, we have practically the same server time and the same network speed,
> but different results for timing in psql.
> I would like to know what the reason is.

The difference would most likely be something in your VM environment.
Beyond that, I can't speculate.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Wed, 22 Jan 2014 12:01:39 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time of query result delivery" } ]
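For readers reproducing this comparison, the two numbers really do measure different things. A minimal sketch using the placeholder names from the original post:

  -- Server-side execution time only ("Total runtime"); planning and the
  -- network round trip are not included:
  EXPLAIN ANALYZE SELECT field1, field2 FROM table1 WHERE field2 = 89170844;

  -- Client-observed time: parse + plan + execute + network transfer, which
  -- is why \timing from a remote PC is larger and more variable:
  \timing on
  SELECT field1, field2 FROM table1 WHERE field2 = 89170844;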
[ { "msg_contents": "Hi,
 
We have a PostgreSQL DB, version 8.4, on a Suse Linux system.
Every night a script runs with several updates and inserts. The daytime query time
increases over approximately 3 weeks from a few minutes to about an hour.
After export, drop and re-import of the DB, the query time is back to a few
minutes.
 
We have tested vacuum full, vacuum analyze and reindex and get no
improvement.
 
Does anyone have an idea why the queries are getting slower and slower?
 
Thank you so much for your help!
 
 
The DB configuration:
 
Virtual server, 7GB RAM, DB size = 16GB
 
shared_buffers = 1024MB
temp_buffers = 32MB
work_mem = 8MB
checkpoint_segments = 20
effective_cache_size = 512MB
max_locks_per_transaction = 256
", "msg_date": "Tue, 21 Jan 2014 07:26:52 +0100", "msg_from": "\"Katharina Koobs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Increasing query time after updates" }, { "msg_contents": "On 01/21/2014 08:26 AM, Katharina Koobs wrote:
> Hi,
>
> We have a PostgreSQL DB, version 8.4, on a Suse Linux system.
> Every night a script runs with several updates and inserts. The daytime query time
> increases over approximately 3 weeks from a few minutes to about an hour.

Does it get gradually slower every day, or suddenly jump from a few 
minutes to one hour after three weeks? The former would suggest some 
kind of bloating or fragmentation, while the latter would suggest a 
change in a query plan (possibly still caused by bloating).

Does the database size change over time?

> After export, drop and re-import of the DB, the query time is back to a few
> minutes.
>
> We have tested vacuum full, vacuum analyze and reindex and get no
> improvement.
>
> Does anyone have an idea why the queries are getting slower and slower?

One theory is that the tables are initially more or less ordered by one 
column, but get gradually shuffled by the updates. Exporting and 
importing would load the data back in order. However, a blow to that 
theory is that a pg_dump + reload will load the tuples in roughly the 
same physical order, but perhaps you used something else for the 
export+import.

You could try running CLUSTER on any large tables. Since version 9.0, 
VACUUM FULL does more or less the same as CLUSTER, ie. rewrites the 
whole table, but in 8.4 it's different.

> Thank you so much for your help!
>
>
> The DB configuration:
>
> Virtual server, 7GB RAM, DB size = 16GB
>
> shared_buffers = 1024MB
> temp_buffers = 32MB
> work_mem = 8MB
> checkpoint_segments = 20
> effective_cache_size = 512MB
> max_locks_per_transaction = 256

With 7GB of RAM, you might want to raise effective_cache_size to 
something like 4GB. 
It doesn't allocate anything, but tells PostgreSQL 
how much memory it can expect the operating system to use as buffer 
cache, which can influence query plans. I doubt it makes any difference 
for the problem you're seeing, but just as general advice..

8.4 is quite old by now, and will no longer be supported by the 
community after July 2014. You'll have to upgrade pretty soon anyway, so 
you might as well upgrade now and see if it helps.

- Heikki


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 21 Jan 2014 10:06:47 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing query time after updates" }, { "msg_contents": "Dear Heikki,
thank you for your valuable feedback. Regarding your questions: It 
gets gradually slower every day. The database size is increasing only 
slightly over time.

I will try your hint regarding CLUSTERING. The difference in effect of 
VACUUM FULL in version 9.0 sounds very interesting. I will discuss the 
update to version 9.0 with my colleague.

Any further idea or feedback is much appreciated.

Thank you so much & kind regards,
Katharina


-----Original Message-----
From: Heikki Linnakangas [mailto:[email protected]] 
Sent: Tuesday, 21 January 2014 09:07
To: Katharina Koobs
Cc: [email protected]; 'Sebastian Vogt'
Subject: Re: [PERFORM] Increasing query time after updates

On 01/21/2014 08:26 AM, Katharina Koobs wrote:
> Hi,
>
> We have a PostgreSQL DB, version 8.4, on a Suse Linux system.
> Every night a script runs with several updates and inserts. The daytime query time
> increases over approximately 3 weeks from a few minutes to about an hour.

Does it get gradually slower every day, or suddenly jump from a few 
minutes to one hour after three weeks? The former would suggest some 
kind of bloating or fragmentation, while the latter would suggest a 
change in a query plan (possibly still caused by bloating).

Does the database size change over time?

> After export, drop and re-import of the DB, the query time is back to a few
> minutes.
>
> We have tested vacuum full, vacuum analyze and reindex and get no
> improvement.
>
> Does anyone have an idea why the queries are getting slower and slower?

One theory is that the tables are initially more or less ordered by one 
column, but get gradually shuffled by the updates. Exporting and 
importing would load the data back in order. However, a blow to that 
theory is that a pg_dump + reload will load the tuples in roughly the 
same physical order, but perhaps you used something else for the 
export+import.

You could try running CLUSTER on any large tables. Since version 9.0, 
VACUUM FULL does more or less the same as CLUSTER, ie. rewrites the 
whole table, but in 8.4 it's different.

> Thank you so much for your help!
>
>
> The DB configuration:
>
> Virtual server, 7GB RAM, DB size = 16GB
>
> shared_buffers = 1024MB
> temp_buffers = 32MB
> work_mem = 8MB
> checkpoint_segments = 20
> effective_cache_size = 512MB
> max_locks_per_transaction = 256

With 7GB of RAM, you might want to raise effective_cache_size to 
something like 4GB. It doesn't allocate anything, but tells PostgreSQL 
how much memory it can expect the operating system to use as buffer 
cache, which can influence query plans. 
I doubt it makes any difference 
for the problem you're seeing, but just as general advice..

8.4 is quite old by now, and will no longer be supported by the 
community after July 2014. You'll have to upgrade pretty soon anyway, so 
you might as well upgrade now and see if it helps.

- Heikki



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 21 Jan 2014 09:37:56 +0100", "msg_from": "\"Katharina Koobs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing query time after updates" }, { "msg_contents": "On 21/01/14 21:37, Katharina Koobs wrote:
> Dear Heikki,
> thank you for your valuable feedback. Regarding your questions: It
> gets gradually slower every day. The database size is increasing only
> slightly over time.
>
> I will try your hint regarding CLUSTERING. The difference in effect of
> VACUUM FULL in version 9.0 sounds very interesting. I will discuss the
> update to version 9.0 with my colleague.
>
> Any further idea or feedback is much appreciated.
>
>

Index bloat could be a factor too - performing a regular REINDEX on the 
relevant tables could be worth a try.

Regards

Mark



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 21 Jan 2014 21:45:43 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing query time after updates" }, { "msg_contents": "Hi,

On 21 January 2014, 7:26, Katharina Koobs wrote:
> Hi,
>
> We have a PostgreSQL DB, version 8.4, on a Suse Linux system.
> Every night a script runs with several updates and inserts. The daytime query time
> increases over approximately 3 weeks from a few minutes to about an hour.
> After export, drop and re-import of the DB, the query time is back to a few
> minutes.
>
> We have tested vacuum full, vacuum analyze and reindex and get no
> improvement.
>
> Does anyone have an idea why the queries are getting slower and slower?

The table/index bloat would be my first bet, but that should be fixed (or
at least improved) by the vacuum commands you've tested.

Sadly, the amount of info you provided is insufficient to determine the
cause - the best thing you can give us are explain plans of the query, one
when it's fast, one when it's slow.

If it's longer than a few lines, please post it to explain.depesz.com and
not here (the clients will reformat it, making it unreadable).

> Thank you so much for your help!
>
> The DB configuration:
>
> Virtual server, 7GB RAM, DB size = 16GB
>
> shared_buffers = 1024MB
> temp_buffers = 32MB
> work_mem = 8MB
> checkpoint_segments = 20
> effective_cache_size = 512MB

Any reason not to use a higher value for effective_cache_size? You have 7GB
of RAM, 1GB of that is for shared buffers, so I'd say ~4GB would be a good
value here. 
Unlikely to be the cause of the issues you're seeing, though.

Tomas



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Tue, 21 Jan 2014 11:57:51 +0100", "msg_from": "\"Tomas Vondra\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing query time after updates" }, { "msg_contents": "On 21/01/14 21:45, Mark Kirkwood wrote:
> On 21/01/14 21:37, Katharina Koobs wrote:
>> Dear Heikki,
>> thank you for your valuable feedback. Regarding your questions: It
>> gets gradually slower every day. The database size is increasing only
>> slightly over time.
>>
>> I will try your hint regarding CLUSTERING. The difference in effect of
>> VACUUM FULL in version 9.0 sounds very interesting. I will discuss the
>> update to version 9.0 with my colleague.
>>
>> Any further idea or feedback is much appreciated.
>>
>>
>
> Index bloat could be a factor too - performing a regular REINDEX on the
> relevant tables could be worth a try.
>

Sorry - I missed that you had tried reindex already.

regards

Mark



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Wed, 22 Jan 2014 07:53:20 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing query time after updates" } ]
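A sketch of Heikki's CLUSTER suggestion, with hypothetical object names; note that CLUSTER takes an exclusive lock on the table while it rewrites it:

  -- Rewrite the table in index order, removing dead-tuple and ordering bloat
  -- (CLUSTER table USING index is valid syntax on 8.4):
  CLUSTER nightly_updated_table USING nightly_updated_table_pkey;
  -- CLUSTER does not update planner statistics, so refresh them afterwards:
  ANALYZE nightly_updated_table;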
[ { "msg_contents": "I have a question about replacing NULL columns with tables. Say I have a
table called Contacts that represents the Contacts (say from Facebook) of
my Users. Right now Contacts has several columns, and in addition to these
it has an Owner (the user that owns this contact) and a User. The User is
NULL if there is no user in my system that matches the contact. The Owner
can never be NULL.

If I want to get all the contacts for a user and know which are and which
are not themselves users, I would do an OUTER join with the Users table on
user_id (which may or may not be null). Or I may not want the user itself
and not need to perform the OUTER join at all; I just want to see if they
are NULL or a valid ID from the Users table.

So, if I endeavor to get rid of this kind of usage of NULL, I could create
another table called ContactUsers with two fields (ContactID, UserID).

I'd go from this:

 Contacts (ContactID, OwnerID, UserID, ...)

To this:

 Contacts(ContactID, OwnerID, ... )
 ContactUsers( ContactID, UserID)

So my questions are these:

 I assume my new queries of all contacts with or without correlated
users would now change so that they all need a User column that has a SELECT query on
ContactUsers for that column.

How would this perform relative to the approach that uses a field and NULL?

 I *think* updates will actually be faster because instead of updating I
will simply insert into ContactUsers and not have to look up the Contact row
and update its UserID field. But I'm concerned about queries.

 I'm also concerned about when a User changes their details. Now I will
have to DELETE from the ContactUsers table then INSERT if there are
different correlations. Before, I would just update the contact rows with
the modified user to NULL and re-correlate.

Should I be concerned about the performance of this approach? In addition
to getting rid of NULLs, an approach like this would help me reduce the
number of columns in some tables. But that SELECT for each column thing has
me concerned since the queries are very fast now using a NULL field.


Thanks!
", "msg_date": "Thu, 23 Jan 2014 08:34:33 -0800", "msg_from": "Robert DiFalco <[email protected]>", "msg_from_op": true, "msg_subject": "Removing nulls with 6NF" } ]
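To make the comparison concrete, here is roughly what the "is this contact also a user?" query looks like in each design (lowercased, hypothetical table and column names; an EXISTS probe avoids running a subselect per output column):

  -- Nullable-column design:
  SELECT c.contact_id, (c.user_id IS NOT NULL) AS is_user
  FROM contacts c
  WHERE c.owner_id = 42;

  -- Separate correlation-table design:
  SELECT c.contact_id,
         EXISTS (SELECT 1
                 FROM contact_users cu
                 WHERE cu.contact_id = c.contact_id) AS is_user
  FROM contacts c
  WHERE c.owner_id = 42;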
[ { "msg_contents": "We have 64GB of Memory on RHEL 6.4

shared_buffers = 8GB
work_mem = 64MB
maintenance_work_mem = 1GB
effective_cache_size = 48GB

I found this list of recommended parameters for memory management in PostgreSQL.

About shared_buffers:
- Below 2GB, set to 20% of total system memory.
- Below 32GB, set to 25% of total system memory.
- Above 32GB, set to 8GB.

About work_mem: this parameter can cause a huge speed-up if set properly; however, it can use that amount of memory per planning node. Here are some recommendations to set it up:
- Start low: 32-64MB.
- Look for ‘temporary file’ lines in logs.
- Set to 2-3x the largest temp file.

About maintenance_work_mem, some recommendations were:
- 10% of system memory, up to 1GB.
- Maybe even higher if you are having VACUUM problems.

About effective_cache_size, guidelines suggested:
- Set to the amount of file system cache available.
- If you don’t know, set it to 50% of total system memory.

We have real-time 24/7 data ingest processes running on our 9.3.2 database, 7TB in size.

Do these settings look correct for 9.3.2?

thanks
", "msg_date": "Fri, 24 Jan 2014 09:23:32 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 9.3.2 Performance issues" }, { "msg_contents": "\"[email protected]\" <[email protected]> wrote:

> We have 64GB of Memory on RHEL 6.4
>
> shared_buffers = 8GB
> work_mem = 64MB
> maintenance_work_mem = 1GB
> effective_cache_size = 48GB

> Do these settings look correct for 9.3.2?

Maybe.

What is your max_connections setting?

I find that a good place to start with work_mem is to ignore the
factors you quoted, and to set it somewhere near 25% of machine RAM
divided by max_connections.  It might be possible to go up from
there, but monitor closely for peaks of activity which cause enough
memory allocation to flush the OS cache and cause high disk read
rates, killing performance until the cache repopulates.  That can
take a while since the high disk read rates slow queries, causing
more of them to compete, leading to higher total work_mem
allocations, and thus preventing recovery from the performance
degradation.  In other words, setting this too high leads to
unstable performance.  It looks better than a lower setting until
too many users hit Enter at about the same time, causing
performance to collapse for a while.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 24 Jan 2014 12:43:46 -0800 (PST)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.3.2 Performance issues" } ]
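Kevin's starting point, worked through for this machine under an assumed connection limit (the max_connections value below is illustrative, not from the thread):

  -- 25% of RAM divided by max_connections:
  --   0.25 * 64GB / 200 connections = ~80MB per sort/hash allocation
  SHOW max_connections;
  SET work_mem = '80MB';  -- try per-session before changing postgresql.conf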
[ { "msg_contents": "hi, all,

I have a requirement to move a table to different storage. I will 
create the tablespace and use ALTER TABLE ... SET TABLESPACE to achieve it. 
The concern is that the table is pretty big, and during the SET TABLESPACE 
the application could insert/update/delete records. If there is a lock 
during the operation, it will impact DB availability.

I looked at the pg_repack usage, and in release 1.2
(http://reorg.github.io/pg_repack/) there is a -s tablespace option that claims to
be an online version of ALTER TABLE ... SET TABLESPACE.

Is this the functionality that solves the ALTER TABLE ... SET TABLESPACE lock issue?

For anyone using pg_repack, how well does it perform?

Thank you.

best,
Ying", "msg_date": "Fri, 24 Jan 2014 12:48:25 -0800 (PST)", "msg_from": "Ying He <[email protected]>", "msg_from_op": true, "msg_subject": "pg_repack solves alter table set tablespace lock" }, { "msg_contents": "On Fri, Jan 24, 2014 at 3:48 PM, Ying He <[email protected]> wrote:

> I looked at the pg_repack usage, and in release 1.2
> (http://reorg.github.io/pg_repack/) there is a -s tablespace option that claims to
> be an online version of ALTER TABLE ... SET TABLESPACE.
>
> Is this the functionality that solves the ALTER TABLE ... SET TABLESPACE lock issue?
>

Cross-posting to multiple lists in quick succession is generally considered
rude; I see you have posted to the reorg-general list already, which is the
right forum for questions about pg_repack. (And yes, that \"-s\" flag sounds
like what you are after.)

 Josh", "msg_date": "Fri, 24 Jan 2014 16:42:42 -0500", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_repack solves alter table set tablespace lock" }, { "msg_contents": "Thank you Josh. Won't double post again. Just thought the reorg mailing list is quite inactive.

best,
Ying

On Friday, January 24, 2014 4:43 PM, Josh Kupershmidt <[email protected]> wrote:

On Fri, Jan 24, 2014 at 3:48 PM, Ying He <[email protected]> wrote:

> I looked at the pg_repack usage, and in release 1.2
> (http://reorg.github.io/pg_repack/) there is a -s tablespace option that claims to
> be an online version of ALTER TABLE ... SET TABLESPACE.
>
> Is this the functionality that solves the ALTER TABLE ... SET TABLESPACE lock issue?

Cross-posting to multiple lists in quick succession is generally considered
rude; I see you have posted to the reorg-general list already, which is the
right forum for questions about pg_repack. (And yes, that \"-s\" flag sounds
like what you are after.)

 Josh
", "msg_date": "Mon, 27 Jan 2014 06:15:06 -0800 (PST)", "msg_from": "Ying He <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_repack solves alter table set tablespace lock" }, { "msg_contents": "Ying He escribió:
> Thank you Josh. Won't double post again. Just thought the reorg mailing list is quite inactive.

Well, that tells you something about its maintenance state and what sort
of help you can expect if you find yourself in trouble with it.

-- 
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Mon, 27 Jan 2014 11:24:54 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_repack solves alter table set tablespace lock" } ]
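For contrast, a sketch of the two approaches discussed; object and database names are hypothetical, and only the -s flag is confirmed by the thread (the -t flag and positional database name follow ordinary pg_repack usage):

  -- Plain SQL: holds an ACCESS EXCLUSIVE lock while the table is copied:
  ALTER TABLE big_table SET TABLESPACE fast_storage;

  -- pg_repack 1.2, from the shell: an online rewrite into the new tablespace:
  pg_repack -t big_table -s fast_storage mydb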
[ { "msg_contents": "Hi All,

I have configured the below parameters for a 32 GB server. Is this correct?

shared_buffers = 6GB
work_mem = 24MB
maintenance_work_mem = 250MB
effective_cache_size = 16GB
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
wal_buffers = 8MB
checkpoint_segments = 32
checkpoint_completion_target = 0.9


-- 
--Regards
RAMAKRISHNAN KANDASAMY", "msg_date": "Sat, 25 Jan 2014 12:02:59 +0530", "msg_from": "RAMAKRISHNAN KANDASAMY <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 9.3.2 Performance tuning for 32 GB server" }, { "msg_contents": "On Sat, Jan 25, 2014 at 12:02:59PM +0530, RAMAKRISHNAN KANDASAMY wrote:
> Hi All,
>
> I have configured the below parameters for a 32 GB server. Is this correct?
>
> shared_buffers = 6GB

going over 2GB probably doesn't help

> work_mem = 24MB
> maintenance_work_mem = 250MB

work_mem depends a lot on your queries and the number of clients, but
with 32GB RAM setting a default work_mem of 128MB would probably not
hurt. Your maintenance_work_mem is too low, raise it to 2GB. 

> effective_cache_size = 16GB

if it's a dedicated server you can raise it to 24GB

> shared_preload_libraries = 'pg_stat_statements'
> pg_stat_statements.max = 10000
> pg_stat_statements.track = all
> wal_buffers = 8MB
> checkpoint_segments = 32

depends on your load, 10's reasonable for light loads. 50 or 100 isn't
uncommon for heavier ones. Keep in mind that every increase
of 30 will cost you 1 gigabyte of disk space in pg_xlog and an extra
~2-5 minutes (depends on your i/o) of recovery time after a crash.

> checkpoint_completion_target = 0.9
>
>

It's considered a bad habit to change the cost settings, but I often
raise the default cpu_tuple_cost to 0.08 (instead of 0.01) too.

> --
> --Regards
> RAMAKRISHNAN KANDASAMY

-- 
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 31 Jan 2014 14:55:04 +0100", "msg_from": "Julien Cigar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.3.2 Performance tuning for 32 GB server" }, { "msg_contents": "On Fri, Jan 31, 2014 at 8:55 AM, Julien Cigar <[email protected]> wrote:

> On Sat, Jan 25, 2014 at 12:02:59PM +0530, RAMAKRISHNAN KANDASAMY wrote:
> > Hi All,
> >
> > I have configured the below parameters for a 32 GB server. Is this
> > correct?
> >
> > shared_buffers = 6GB
>
> going over 2GB probably doesn't help
>

That is true on a 32 bit system. 
On a 64 bit system with 32GB of RAM, there
is a lot of value to be potentially gained by having shared buffers
significantly higher than 2GB.


>
> It's considered a bad habit to change the cost settings, but I often
> raise the default cpu_tuple_cost to 0.08 (instead of 0.01) too.
>
> > --
> > --Regards
> > RAMAKRISHNAN KANDASAMY
>
> --
> No trees were killed in the creation of this message.
> However, many electrons were terribly inconvenienced.
>
>
> --
> Sent via pgsql-performance mailing list ([email protected])
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>



-- 
Thomas John
", "msg_date": "Sun, 2 Feb 2014 18:52:00 -0500", "msg_from": "Tom Kincaid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.3.2 Performance tuning for 32 GB server" } ]
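Since the original configuration preloads pg_stat_statements, a generic reminder (not from the thread): the extension still has to be created in each database before its view can be queried. Column names below are as of 9.x:

  CREATE EXTENSION pg_stat_statements;

  -- Top statements by cumulative execution time:
  SELECT query, calls, total_time, rows
  FROM pg_stat_statements
  ORDER BY total_time DESC
  LIMIT 10;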
[ { "msg_contents": "Hi,\n\nI have two plans of a query.\nnestloop plan is much faster, but planner chose the slower one, hashjoin.\n\n http://explain.depesz.com/s/Aqs\n http://explain.depesz.com/s/97C\n\nit seems that rows=39698995 are quite overestimated.\n\n-> Nested Loop (cost=0.000..5403.600 rows=39698995 width=45) (actual\ntime=0.392..14.817 rows=943 loops=1)\n -> Nested Loop (cost=0.000..17.600 rows=1 width=8) (actual\ntime=0.241..0.246 rows=1 loops=1)\n -> Index Scan using seven on hotel three (cost=0.000..6.880 rows=1\nwidth=6) (actual time=0.113..0.115 rows=1 loops=1)\n Index Cond: (two = 31750::numeric)\n -> Index Scan using echo on oscar_foxtrot november\n(cost=0.000..10.710 rows=1 width=14) (actual time=0.117..0.118 rows=1\nloops=1)\n Index Cond: (charlie = three.golf)\n -> Index Scan using zulu on oscar_foxtrot juliet (cost=0.000..3849.200\nrows=153679 width=45) (actual time=0.147..14.241 rows=943 loops=1)\n Index Cond: ((uniform_yankee = november.uniform_yankee) AND\n(uniform_victor = november.uniform_victor))\n\npg_stats is like this;\n> select attname, null_frac, n_distinct, most_common_vals,\nmost_common_freqs from pg_stats where tablename like 'oscar_foxtrot%' and\n(attname = 'uniform_yankee' or attname = 'uniform_victor')\n\"uniform_yankee\";0;12;\"{83886082,83886085}\";\"{0.9742,0.02}\"\n\"uniform_victor\";0;23;\"{1342767106,1342308357}\";\"{0.973467,0.02}\"\n\nI assumed that nestloop rows would be more or less inner_path_rows *\nouter_path_rows with good pg_stats, and good plan could come based on it.\n\nthe plan above is not that case. Suspcious of 40 million rows and small\nnumber of values(actually two values) making up 98% of distribution.\nso.. I looked up some code and found that rows=153679 is rows of\nparameterized base rel estimated by eqsel(), and row=39698995 is rows of\nparameterized join rel by eqjoinsel().\nI think wrong plan above comes from the fact that the two estimation cannot\nbe close in general, great difference in my case.\n\nwhere am i wrong and right?\nIs there recommended approach, related issue, commit and so on i can follow\n?\n\nthanks\n\n\n[[nstallation Info]]\nPostgreSQL-9.2.5 (via postgresql yum repository)\nOS: Centos 6.3 (custom linux-3.10.12 kernel)\npostgresql.conf:\n effective_cache_size = 10000MB\n shared_buffers = 1000MB\n work_mem = 100MB\n maintenance_work_mem = 100MB\nHW: CPU 4-core Xeon x 2 sockets, RAM 256GB\n\n--\nRegards,\nJang.\n\n a sound mind in a sound body\n\nHi,I have two plans of a query.nestloop plan is much faster, but planner chose the slower one, hashjoin.    
", "msg_date": "Mon, 27 Jan 2014 14:58:41 +0900", "msg_from": "Bongseo Jang <[email protected]>", "msg_from_op": true, "msg_subject": "self join,\n parameterized base/join rel path row estimation and generally..." } ]
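A practical way to reproduce both plans Jang links to, without touching postgresql.conf, is to pin the join method for one transaction and compare the EXPLAIN output. A sketch; the SELECT placeholder stands in for the anonymized self-join from the post:

    BEGIN;
    SET LOCAL enable_hashjoin = off;          -- steer the planner toward the nested loop
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;    -- the original query goes here
    ROLLBACK;

SET LOCAL confines the change to the transaction, so nothing leaks into the session or other clients, and the side-by-side EXPLAIN ANALYZE output makes it easy to confirm that the rows=39698995 join estimate, rather than the cost model, is what pushes the planner to the hash join.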
[ { "msg_contents": "Hello,\n\nI have Postrgres 9.3 running on a Linux machine with 32GB RAM. I have a\nfairly large database (some tables with approx. 1 mil. records) and I\nhave the following query:\n\n SELECT * FROM (\n SELECT DISTINCT c.ext_content_id AS type_1_id,\n \"substring\"(c.ext_content_id::text, 1, 13) AS type_1_album_id,\n cm1.value AS type_1_artist,\n cm2.value AS type_1_title,\n cm4.value AS type_1_duration,\n pm1.value AS type_1_icpn,\n cm3.value AS type_1_isrc,\n c.provider AS type_1_provider,\n to_number(cm5.value::text, '999999'::text) AS type_2_set_number,\n to_number(cm6.value::text, '999999'::text) AS type_2_track_number,\n cm7.value AS type_6_availability_ppd,\n cm12.value AS type_6_availability_sub,\n cm9.value AS type_1_language,\n cm11.value AS type_1_label_reporting_id,\n cm13.value AS type_1_parent_isrc\n FROM content c\n LEFT JOIN content_metadata cm1 ON c.content_id = cm1.content_id AND\n cm1.name::text = 'track_artist'::text\n LEFT JOIN content_metadata cm2 ON c.content_id = cm2.content_id AND\n cm2.name::text = 'track_title'::text\n LEFT JOIN content_metadata cm3 ON c.content_id = cm3.content_id AND\n cm3.name::text = 'track_isrc'::text\n LEFT JOIN content_metadata cm4 ON c.content_id = cm4.content_id AND\n cm4.name::text = 'track_duration'::text\n LEFT JOIN content_metadata cm5 ON c.content_id = cm5.content_id AND\n cm5.name::text = 'set_number'::text\n LEFT JOIN content_metadata cm6 ON c.content_id = cm6.content_id AND\n cm6.name::text = 'track_number'::text\n LEFT JOIN content_metadata cm7 ON c.content_id = cm7.content_id AND\n cm7.name::text = 'unlimited'::text\n LEFT JOIN content_metadata cm9 ON c.content_id = cm9.content_id AND\n cm9.name::text = 'language'::text\n LEFT JOIN content_metadata cm10 ON c.content_id = cm10.content_id\n AND cm10.name::text = 'import_date'::text\n LEFT JOIN content_metadata cm11 ON c.content_id = cm11.content_id\n AND cm11.name::text = 'label_reporting_id'::text\n LEFT JOIN content_metadata cm12 ON c.content_id = cm12.content_id\n AND cm12.name::text = 'subscription'::text\n LEFT JOIN content_metadata cm13 ON c.content_id = cm13.content_id\n AND cm13.name::text = 'parent_isrc'::text,\n product p\n LEFT JOIN product_metadata pm4 ON p.product_id = pm4.product_id AND\n pm4.name::text = 'product_title'::text\n LEFT JOIN product_metadata pm1 ON p.product_id = pm1.product_id AND\n pm1.name::text = 'upc'::text\n WHERE p.ext_product_id::text = substr(c.ext_content_id::text, 1, 13)\n ) view\n WHERE type_1_id='1-111-1027897-01-001';\n\nBelow are the definitions of the tables involved.\n\nContent:\n\n Table \"public.content\"\n Column | Type | Modifiers\n -----------------+-----------------------------+-----------\n content_id | bigint | not null\n status | character varying(3) | not null\n display_name | character varying(1024) | not null\n ext_content_id | character varying(64) | not null\n provider | character varying(128) | not null\n last_updated_by | character varying(30) | not null\n last_updated_on | timestamp without time zone | not null\n created_by | character varying(30) | not null\n created_on | timestamp without time zone | not null\n Indexes:\n \"content_pkey\" PRIMARY KEY, btree (content_id)\n \"ak_key_2_content\" UNIQUE, btree (ext_content_id, provider)\n \"index_content_01\" UNIQUE, btree (ext_content_id)\n Foreign-key constraints:\n \"fk_content_01\" FOREIGN KEY (provider) REFERENCES\n provider(ext_provider_id)\n Referenced by:\n TABLE \"content_metadata\" CONSTRAINT \"fk_content_metadata_01\"\n FOREIGN KEY 
(content_id) REFERENCES content(content_id)\n TABLE \"packaged\" CONSTRAINT \"fk_packaged_reference_content\"\n FOREIGN KEY (content_id) REFERENCES content(content_id)\n TABLE \"product_content\" CONSTRAINT \"fk_product_content_01\"\n FOREIGN KEY (content_id) REFERENCES content(content_id)\n Triggers:\n td_content BEFORE DELETE ON content FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_td_content()\n ti_content BEFORE INSERT ON content FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_ti_content()\n tu_content BEFORE UPDATE ON content FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_tu_content()\n tu_content_tree BEFORE UPDATE ON content FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_tu_content_tree()\n\nProduct:\n\n Table \"public.product\"\n Column | Type | Modifiers\n -----------------+-----------------------------+-----------\n product_id | bigint | not null\n status | character varying(3) | not null\n display_name | character varying(1024) | not null\n ext_product_id | character varying(64) | not null\n last_updated_by | character varying(30) | not null\n last_updated_on | timestamp without time zone | not null\n created_by | character varying(30) | not null\n created_on | timestamp without time zone | not null\n Indexes:\n \"product_pkey\" PRIMARY KEY, btree (product_id)\n \"ak_key_2_product\" UNIQUE, btree (ext_product_id)\n Referenced by:\n TABLE \"contract_product\" CONSTRAINT \"fk_contract_product_02\"\n FOREIGN KEY (product_id) REFERENCES product(product_id)\n TABLE \"offer_product\" CONSTRAINT \"fk_offer_product_01\" FOREIGN\n KEY (product_id) REFERENCES product(product_id)\n TABLE \"product_metadata\" CONSTRAINT\n \"fk_product__reference_product\" FOREIGN KEY (product_id)\n REFERENCES product(product_id)\n TABLE \"product_content\" CONSTRAINT \"fk_product_content_02\"\n FOREIGN KEY (product_id) REFERENCES product(product_id)\n Triggers:\n td_product BEFORE DELETE ON product FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_td_product()\n ti_product BEFORE INSERT ON product FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_ti_product()\n tu_product BEFORE UPDATE ON product FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_tu_product()\n tu_product_tree BEFORE UPDATE ON product FOR EACH ROW EXECUTE\n PROCEDURE trigger_fct_tu_product_tree()\n\nProduct_metadata:\n\n Table \"public.product_metadata\"\n Column | Type | Modifiers\n -----------------+-----------------------------+-----------\n product_id | bigint | not null\n name | character varying(64) | not null\n distributor_id | bigint |\n value | character varying(4000) |\n created_on | timestamp without time zone | not null\n created_by | character varying(30) | not null\n last_updated_on | timestamp without time zone | not null\n last_updated_by | character varying(30) | not null\n Indexes:\n \"idx_product_metadata_03\" btree (name, value)\n \"index_product_metadata_02\" btree (product_id, name)\n \"index_product_metadata_cid\" btree (product_id)\n Foreign-key constraints:\n \"fk_product__reference_product\" FOREIGN KEY (product_id)\n REFERENCES product(product_id)\n \"fk_product_metadata_02\" FOREIGN KEY (distributor_id) REFERENCES\n operator(operator_id)\n Triggers:\n td_product_metadata BEFORE DELETE ON product_metadata FOR EACH\n ROW EXECUTE PROCEDURE trigger_fct_td_product_metadata()\n ti_product_metadata BEFORE INSERT ON product_metadata FOR EACH\n ROW EXECUTE PROCEDURE trigger_fct_ti_product_metadata()\n tu_product_metadata BEFORE UPDATE ON product_metadata FOR EACH\n ROW EXECUTE PROCEDURE trigger_fct_tu_product_metadata()\n\nContent_metadata:\n\n 
Table \"public.content_metadata\"\n Column | Type | Modifiers\n -----------------+-----------------------------+-----------\n content_id | bigint | not null\n name | character varying(64) | not null\n distributor_id | bigint |\n value | character varying(4000) |\n last_updated_by | character varying(30) | not null\n last_updated_on | timestamp without time zone | not null\n created_by | character varying(30) | not null\n created_on | timestamp without time zone | not null\n Indexes:\n \"idx_content_metadata_03\" btree (name, value)\n \"idx_content_metadata_04\" btree (content_id, name, value)\n \"index_content_metadata_02\" btree (content_id, name)\n \"index_content_metadata_cid\" btree (content_id)\n Foreign-key constraints:\n \"fk_content_metadata_01\" FOREIGN KEY (content_id) REFERENCES\n content(content_id)\n \"fk_content_metadata_02\" FOREIGN KEY (distributor_id) REFERENCES\n operator(operator_id)\n Triggers:\n td_content_metadata BEFORE DELETE ON content_metadata FOR EACH\n ROW EXECUTE PROCEDURE trigger_fct_td_content_metadata()\n ti_content_metadata BEFORE INSERT ON content_metadata FOR EACH\n ROW EXECUTE PROCEDURE trigger_fct_ti_content_metadata()\n tu_content_metadata BEFORE UPDATE ON content_metadata FOR EACH\n ROW EXECUTE PROCEDURE trigger_fct_tu_content_metadata()\n\nThe query as it is takes approx. 35 seconds, which is very bad. If I\ntake out the line:\n\n LEFT JOIN product_metadata pm4 ON p.product_id = pm4.product_id AND\n pm4.name::text = 'product_title'::text\n\nthen the time is under 1 second. \n\nHere you can see the plan for the query (as it is here, i.e. when it\ntakes a lot of time): http://explain.depesz.com/s/K9s\n\nAs far as I can see, the wrong index is used. In the lines\n\n \"-> Bitmap Heap Scan on product_metadata pm4 \n (cost=6014.11..257694.54 rows=579474 width=8) (actual\n time=282.364..13005.344 rows=557834 loops=1)\"\n \"Recheck Cond: ((name)::text = 'product_title'::text)\"\n \"Buffers: shared read=175851\"\n \"-> Bitmap Index Scan on idx_product_metadata_03 \n (cost=0.00..5869.24 rows=579474 width=0) (actual\n time=222.724..222.724 rows=557834 loops=1)\"\n \"Index Cond: ((name)::text = 'product_title'::text)\"\n \"Buffers: shared read=3953\"\n\nit can be seen that it uses idx_product_metadata_03 which is on (name,\nvalue). Shouldn't it use index_product_metadata_02 which is on\n(product_id, name)? \n\nOr is there another reason why this query is so slow? \n\nI have changed some of the default settings of Postgres as follows:\n\nrandom_page_cost = 1.4\ncpu_index_tuple_cost = 0.00001\neffective_cache_size = 256MB\nwork_mem = 300MB\n\nIf you need any other information, let me know.\n\nThanks in advance!\n\nWith regards,\nStelian Iancu\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 27 Jan 2014 06:44:20 -0800", "msg_from": "Stelian Iancu <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query (wrong index used maybe)" }, { "msg_contents": "Stelian Iancu <[email protected]> writes:\n> I have Postrgres 9.3 running on a Linux machine with 32GB RAM. I have a\n> fairly large database (some tables with approx. 1 mil. records) and I\n> have the following query:\n> [ 13-way join joined to a 3-way join ]\n\nThink you'll need to raise join_collapse_limit and from_collapse_limit\nto get the best plan here. 
The planning time might hurt, though.\n\nTBH that schema looks designed for inefficiency; you'd be better off\nrethinking the design rather than hoping the planner is smart enough\nto save you from it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 27 Jan 2014 10:06:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query (wrong index used maybe)" }, { "msg_contents": "Hello Stelian, \n\nHave you tried to use the func_table module? I think it will help you to eliminate all the joins.\n\nRegards\n\n\n\nOn Monday, January 27, 2014 5:54 PM, Stelian Iancu <[email protected]> wrote:\n\n[...]\n", "msg_date": "Mon, 27 Jan 2014 09:20:57 -0800 (PST)", "msg_from": "salah jubeh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query (wrong index used maybe)" }, { "msg_contents": "On Mon, Jan 27, 2014, at 9:20, salah jubeh wrote:\n> Hello Stelian, \n> \n\nHello,\n\n> Have you tried to use the func_table module? I think it will help you to eliminate all the joins.\n\nNo, I haven't. I can have a look later, thanks.\n\n> \n> Regards\n> \n \n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 27 Jan 2014 09:33:27 -0800", "msg_from": "Stelian Iancu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query (wrong index used maybe)" }, { "msg_contents": "On Mon, Jan 27, 2014, at 7:06, Tom Lane wrote:\n> Stelian Iancu <[email protected]> writes:\n> > I have Postgres 9.3 running on a Linux machine with 32GB RAM. I have a\n> > fairly large database (some tables with approx. 1 mil. records) and I\n> > have the following query:\n> > [ 13-way join joined to a 3-way join ]\n> \n> Think you'll need to raise join_collapse_limit and from_collapse_limit\n> to get the best plan here. The planning time might hurt, though.\n> \n\nI did raise both to 40 and it works flawlessly (for now). I got the\nresponse time to less than a second. However I don't know what the\nimplications are for the future.\n\n> TBH that schema looks designed for inefficiency; you'd be better off\n> rethinking the design rather than hoping the planner is smart enough\n> to save you from it.\n> \n\nHeh, I wish it was this easy. This whole thing is part of us moving away\nfrom Oracle to Postgres. We already have this huge DB with this schema\nin Oracle (which was successfully imported into Postgres, minus these\nperformance issues we're seeing now) and I don't know how feasible it is\nto even start thinking about a redesign. \n\nBut I appreciate your input regarding this. Maybe one of these days I\nwill have success in convincing my boss to even start taking a look at\nthe design of the DB (you know the saying \"it works, don't fix it\").\n\n> regards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 27 Jan 2014 09:37:58 -0800", "msg_from": "Stelian Iancu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query (wrong index used maybe)" }, { "msg_contents": "My developers have had the same issue.\n\nPostgres 9.2.3 on Linux 5.6.\n\nThe query planner estimates (for 27 table join SQL) that using the nestloop\nis faster, when in fact it is not. A hashjoin returns results faster. We've\nset enable_nestloop = false and have gotten good results. The problem is,\nnestloop would be faster for other types of queries.
Maybe ones with fewer\njoins.\n\nRecently we made a change that forced our multi-join queries to slow down.\nWe now build temp views for each user session. To speed these queries up, we\nupped geqo_effort to 10. This has also given us good results; but again, we\ndon't know if there will be another impact down the road.\n\nSame issue here with redesign. There is some simple denormalization we could\ndo that would minimize our joins. Instead of link tables, we would utilize\nhstore, json or array column types.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-wrong-index-used-maybe-tp5788979p5789045.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 27 Jan 2014 11:10:49 -0800 (PST)", "msg_from": "bobJobS <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query (wrong index used maybe)" }, { "msg_contents": "On 28/01/14 08:10, bobJobS wrote:\n> My developers have had the same issue.\n>\n> Postgres 9.2.3 on Linux 5.6.\n>\nThe latest Linux kernel is 3.13 (https://www.kernel.org), so I assume \n5.6 is a distribution version.\n\nSo which distribution of Linux are you using?\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 28 Jan 2014 08:43:07 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query (wrong index used maybe)" }, { "msg_contents": "\n\nOn Mon, Jan 27, 2014, at 11:43, Gavin Flower wrote:\n> On 28/01/14 08:10, bobJobS wrote:\n> > My developers have had the same issue.\n> >\n> > Postgres 9.2.3 on Linux 5.6.\n> >\n> The latest Linux kernel is 3.13 (https://www.kernel.org), so I assume \n> 5.6 is a distribution version.\n> \n> So which distribution of Linux are you using?\n> \n> \n\nI cannot reply for Bob, but we're on Debian 7. \n\n> Cheers,\n> Gavin\n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 28 Jan 2014 00:29:51 -0800", "msg_from": "Stelian Iancu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query (wrong index used maybe)" }, { "msg_contents": "RHEL 5.10 kernel 2.6.18\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-wrong-index-used-maybe-tp5788979p5789206.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 28 Jan 2014 05:39:56 -0800 (PST)", "msg_from": "bobJobS <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query (wrong index used maybe)" } ]
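For reference, the fix Stelian confirms above does not have to be global. Raising the collapse limits can be scoped to one session, or persistently to the role that runs the wide EAV joins, which confines the extra planning time Tom warns about to the queries that need the exhaustive join search. A sketch; the role name here is illustrative:

    -- per session, before running the wide join:
    SET join_collapse_limit = 40;
    SET from_collapse_limit = 40;

    -- or persistently, for one application role only:
    ALTER ROLE reporting_app SET join_collapse_limit = 40;
    ALTER ROLE reporting_app SET from_collapse_limit = 40;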
[ { "msg_contents": "Have a problem where a stored procedure is taking a week to run. The\nstored procedure should take less than a second to run. In researching a\nselect hanging problem, three things are suggested; an autovacuum problem,\na resource is locked, or there is something wrong with the stored procedure.\n\n· Autovacuum is running. A 'ps -elf | grep postgres' shows:\n\n00:00:43 postgres: logger process\n\n00:5:50 postgres: writer process\n\n00:3:04 postgres: wal writer process\n\n00:00:48 postgres: autovacuum launcher process\n\n00:00:50 postgres: stats collector process\n\n00:01:28 postgres: operstions OPDB [local] idle\n\n154.11.29 postgres: operstions OPDB [local] select\n\n· The select is from running a select of a stored procedure from a\n'c' program using the PqsendQuery function.\n\n· Postgres.conf has both autovacuum and track_counts set to 'on'. All\nother autovacuum values are left as delivered (commented out).\n\n· A 'select * from pg_stats_activity;' shows no query is blocked.\n\n· We have recently changed from using Oracle 10g (running on Red\nHat AS 4.5) to PostgreSQL 9.1.2 (running on CentOS 6.3). The only\ndifferences between the two versions are:\n\no Syntax changes between Oracle and Postgres.\n\no In Oracle a commit was executed after each 'chuck' of work was done. A\ncommit is no longer used in Postgres because the Postgres documentation\nindicates that a commit has no affect until the end of the transaction\n(i.e., the end of the stored procedure).\n\no The same stored procedure is running just fine on all of our test\nsystems and at one of our two customer sites. All of the systems are\nconfigured the same (same operating system and software).\n\nLastly, in the directories used to store the tables and indexes, there are\n918896 files in the tables directory and 921291 files in the indexes\ndirectory. All of the file names are just numbers (no extensions). About\n60 files are added to each directory every second. On our test systems and\nat our other customer site, there are only about 50 files in each directory.\nWhy are there so many files?\nThank you everyone for your time.\nPeter Blair\n\nHave a problem where a stored procedure is taking a week to run.  The stored procedure should take less than a second to run.   In researching a select hanging problem, three things are suggested; an autovacuum problem, a resource is locked, or there is something wrong with the stored procedure.\n·         Autovacuum is running.  A ‘ps –elf | grep postgres’ shows:\n00:00:43 postgres: logger process\n00:5:50 postgres: writer process\n00:3:04 postgres: wal writer process\n00:00:48 postgres: autovacuum launcher process\n00:00:50 postgres: stats collector process\n00:01:28 postgres: operstions OPDB [local] idle\n154.11.29 postgres: operstions OPDB [local] select\n·         The select is from running a select of a stored procedure from a ‘c’ program using the PqsendQuery function.\n·         Postgres.conf has both autovacuum and track_counts set to ‘on’.  All other autovacuum values are left as delivered (commented out).\n·         A ‘select * from pg_stats_activity;’ shows no query is blocked.\n·         We have recently changed from using Oracle 10g (running on Red Hat AS 4.5) to PostgreSQL 9.1.2 (running on CentOS 6.3).  The only differences between the two versions are:\no   Syntax changes between Oracle and Postgres.\no   In Oracle a commit was executed after each ‘chuck’ of work was done.  
A commit is no longer used in Postgres because the Postgres documentation indicates that a commit has no affect until the end of the transaction (i.e., the end of the stored procedure).\no   The same stored procedure is running just fine on all of our test systems and at one of our two customer sites.  All of the systems are configured the same (same operating system and software).\nLastly, in the directories used to store the tables and indexes, there are 918896 files in the tables directory and 921291 files in the indexes directory.  All of the file names are just numbers (no extensions).  About 60 files are added to each directory every second.  On our test systems and at our other customer site, there are only about 50 files in each directory.\nWhy are there so many files?\nThank you everyone for your time.\nPeter Blair", "msg_date": "Mon, 27 Jan 2014 17:06:57 -0500", "msg_from": "Peter Blair <[email protected]>", "msg_from_op": true, "msg_subject": "Select hangs and there are lots of files in table and index\n directories." }, { "msg_contents": "Peter Blair <[email protected]> writes:\n> Have a problem where a stored procedure is taking a week to run. The\n> stored procedure should take less than a second to run.\n\nIs that \"it's known to terminate if you give it a week\", or \"we've let\nit run for a week and it shows no sign of ever terminating\"?\n\n> In researching a\n> select hanging problem, three things are suggested; an autovacuum problem,\n> a resource is locked, or there is something wrong with the stored procedure.\n\nI'd bet on the last, given that you're apparently working with an immature\nport from Oracle. The error recovery semantics, in particular, are enough\ndifferent in PL/SQL and PL/pgSQL that it's not too hard to credit having\naccidentally written an infinite loop via careless translation.\n\n> Lastly, in the directories used to store the tables and indexes, there are\n> 918896 files in the tables directory and 921291 files in the indexes\n> directory. All of the file names are just numbers (no extensions). About\n> 60 files are added to each directory every second. On our test systems and\n> at our other customer site, there are only about 50 files in each directory.\n> Why are there so many files?\n\nIf the filenames are just numbers, then they must be actual tables or\nindexes, not temp files. (You could cross-check that theory by noting\nwhether the system catalogs, such as pg_class, are bloating at a\nproportional rate.) I'm guessing that there's some loop in your procedure\nthat's creating new temp tables, or maybe even non-temp tables. You would\nnot be able to see them via \"select * from pg_class\" in another session\nbecause they're not committed yet, but they'd be taking up filesystem\nentries. The loop might or might not be dropping the tables again; IIRC\nthe filesystem entries wouldn't get cleaned up till end of transaction\neven if the tables are nominally dropped.\n\nNot much to go on, but I'd look for a loop that includes a CREATE TABLE\nand a BEGIN ... 
EXCEPT block, and take a close look at the conditions\nunder which the EXCEPT allows the loop to continue.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 27 Jan 2014 19:43:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select hangs and there are lots of files in table and index\n directories." }, { "msg_contents": "Tom,\n\nYou are correct. The was an infinate loop created because of the\ndifferences in the date math between Oracle and Postgres.\n\nThank again for your help.\nOn Mon, Jan 27, 2014 at 7:43 PM, Tom Lane <[email protected]> wrote:\n\n> Peter Blair <[email protected]> writes:\n> > Have a problem where a stored procedure is taking a week to run. The\n> > stored procedure should take less than a second to run.\n>\n> Is that \"it's known to terminate if you give it a week\", or \"we've let\n> it run for a week and it shows no sign of ever terminating\"?\n>\n> > In researching a\n> > select hanging problem, three things are suggested; an autovacuum\n> problem,\n> > a resource is locked, or there is something wrong with the stored\n> procedure.\n>\n> I'd bet on the last, given that you're apparently working with an immature\n> port from Oracle. The error recovery semantics, in particular, are enough\n> different in PL/SQL and PL/pgSQL that it's not too hard to credit having\n> accidentally written an infinite loop via careless translation.\n>\n> > Lastly, in the directories used to store the tables and indexes, there\n> are\n> > 918896 files in the tables directory and 921291 files in the indexes\n> > directory. All of the file names are just numbers (no extensions).\n> About\n> > 60 files are added to each directory every second. On our test systems\n> and\n> > at our other customer site, there are only about 50 files in each\n> directory.\n> > Why are there so many files?\n>\n> If the filenames are just numbers, then they must be actual tables or\n> indexes, not temp files. (You could cross-check that theory by noting\n> whether the system catalogs, such as pg_class, are bloating at a\n> proportional rate.) I'm guessing that there's some loop in your procedure\n> that's creating new temp tables, or maybe even non-temp tables. You would\n> not be able to see them via \"select * from pg_class\" in another session\n> because they're not committed yet, but they'd be taking up filesystem\n> entries. The loop might or might not be dropping the tables again; IIRC\n> the filesystem entries wouldn't get cleaned up till end of transaction\n> even if the tables are nominally dropped.\n>\n> Not much to go on, but I'd look for a loop that includes a CREATE TABLE\n> and a BEGIN ... EXCEPT block, and take a close look at the conditions\n> under which the EXCEPT allows the loop to continue.\n>\n> regards, tom lane\n>\n\nTom,\n \nYou are correct.  The was an infinate loop created because of the differences in the date math between Oracle and Postgres.\n \nThank again for your help.\nOn Mon, Jan 27, 2014 at 7:43 PM, Tom Lane <[email protected]> wrote:\n\nPeter Blair <[email protected]> writes:> Have a problem where a stored procedure is taking a week to run.  
The> stored procedure should take less than a second to run.\nIs that \"it's known to terminate if you give it a week\", or \"we've letit run for a week and it shows no sign of ever terminating\"?\n> In researching a> select hanging problem, three things are suggested; an autovacuum problem,> a resource is locked, or there is something wrong with the stored procedure.\nI'd bet on the last, given that you're apparently working with an immatureport from Oracle.  The error recovery semantics, in particular, are enoughdifferent in PL/SQL and PL/pgSQL that it's not too hard to credit having\naccidentally written an infinite loop via careless translation.\n> Lastly, in the directories used to store the tables and indexes, there are> 918896 files in the tables directory and 921291 files in the indexes> directory.  All of the file names are just numbers (no extensions).  About\n> 60 files are added to each directory every second.  On our test systems and> at our other customer site, there are only about 50 files in each directory.> Why are there so many files?If the filenames are just numbers, then they must be actual tables or\nindexes, not temp files.  (You could cross-check that theory by notingwhether the system catalogs, such as pg_class, are bloating at aproportional rate.)  I'm guessing that there's some loop in your procedure\nthat's creating new temp tables, or maybe even non-temp tables.  You wouldnot be able to see them via \"select * from pg_class\" in another sessionbecause they're not committed yet, but they'd be taking up filesystem\nentries.  The loop might or might not be dropping the tables again; IIRCthe filesystem entries wouldn't get cleaned up till end of transactioneven if the tables are nominally dropped.Not much to go on, but I'd look for a loop that includes a CREATE TABLE\nand a BEGIN ... EXCEPT block, and take a close look at the conditionsunder which the EXCEPT allows the loop to continue.                        regards, tom lane", "msg_date": "Tue, 28 Jan 2014 09:47:48 -0500", "msg_from": "Peter Blair <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select hangs and there are lots of files in table and\n index directories." }, { "msg_contents": "All,\n\nOne other problem with this case, those 900K worth of files in each of the\ntable and index directories (1.8M total files) are still hanging around. I\nhave:\n* fixed the and reloaded the stored procedure\n* restarted the database\n* ran the stored procedure\n* there are only 378 rows in the pg_class table\n\nHow do I get rid of those other files?\n\nJust a guess, but do I shutdown the database, and delete any file not\nlisted in pg_class? I do not see anything in the PostgreSQL documentation\nabout this.\n\nThank again.\n\nOn Tue, Jan 28, 2014 at 9:47 AM, Peter Blair <[email protected]> wrote:\n\n> Tom,\n>\n> You are correct. The was an infinate loop created because of the\n> differences in the date math between Oracle and Postgres.\n>\n> Thank again for your help.\n> On Mon, Jan 27, 2014 at 7:43 PM, Tom Lane <[email protected]> wrote:\n>\n>> Peter Blair <[email protected]> writes:\n>> > Have a problem where a stored procedure is taking a week to run. 
The\n>> > stored procedure should take less than a second to run.\n>>\n>> Is that \"it's known to terminate if you give it a week\", or \"we've let\n>> it run for a week and it shows no sign of ever terminating\"?\n>>\n>> > In researching a\n>> > select hanging problem, three things are suggested; an autovacuum\n>> problem,\n>> > a resource is locked, or there is something wrong with the stored\n>> procedure.\n>>\n>> I'd bet on the last, given that you're apparently working with an immature\n>> port from Oracle. The error recovery semantics, in particular, are enough\n>> different in PL/SQL and PL/pgSQL that it's not too hard to credit having\n>> accidentally written an infinite loop via careless translation.\n>>\n>> > Lastly, in the directories used to store the tables and indexes, there\n>> are\n>> > 918896 files in the tables directory and 921291 files in the indexes\n>> > directory. All of the file names are just numbers (no extensions).\n>> About\n>> > 60 files are added to each directory every second. On our test systems\n>> and\n>> > at our other customer site, there are only about 50 files in each\n>> directory.\n>> > Why are there so many files?\n>>\n>> If the filenames are just numbers, then they must be actual tables or\n>> indexes, not temp files. (You could cross-check that theory by noting\n>> whether the system catalogs, such as pg_class, are bloating at a\n>> proportional rate.) I'm guessing that there's some loop in your procedure\n>> that's creating new temp tables, or maybe even non-temp tables. You would\n>> not be able to see them via \"select * from pg_class\" in another session\n>> because they're not committed yet, but they'd be taking up filesystem\n>> entries. The loop might or might not be dropping the tables again; IIRC\n>> the filesystem entries wouldn't get cleaned up till end of transaction\n>> even if the tables are nominally dropped.\n>>\n>> Not much to go on, but I'd look for a loop that includes a CREATE TABLE\n>> and a BEGIN ... EXCEPT block, and take a close look at the conditions\n>> under which the EXCEPT allows the loop to continue.\n>>\n>> regards, tom lane\n>>\n>\n>
", "msg_date": "Wed, 29 Jan 2014 14:12:33 -0500", "msg_from": "Peter Blair <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select hangs and there are lots of files in table and\n index directories." }, { "msg_contents": "Peter Blair <[email protected]> writes:\n> One other problem with this case, those 900K worth of files in each of the\n> table and index directories (1.8M total files) are still hanging around.\n\nHm ... if left to its own devices, I think the session that created them\nshould have deleted them, assuming you did a normal query cancel on it.\nMaybe you did kill -9?\n\n> Just a guess, but do I shut down the database, and delete any file not\n> listed in pg_class?\n\nFor starters, try just stopping and starting the database; I think there\nmight be logic to remove orphaned files during postmaster startup.\n\nIf that doesn't work, you can get rid of any numeric-named files that\nmatch no value in pg_class.relfilenode of their database.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jan 2014 17:03:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select hangs and there are lots of files in table and index\n directories." } ]
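A sketch of the relfilenode comparison Tom describes (an editor's illustration, not part of the thread): run this in each database, then compare the output against the numeric file names under base/<database_oid>, ignoring _fsm/_vm and .N segment suffixes, and only after taking a filesystem-level backup.

-- every relfilenode the current database knows about;
-- 0 denotes a mapped catalog with no ordinary numeric file name, so skip those
SELECT relfilenode FROM pg_class WHERE relfilenode <> 0 ORDER BY relfilenode;

-- the base/<oid> directory holding this database's files
SELECT oid FROM pg_database WHERE datname = current_database();

Any plain numeric file in that directory that matches no relfilenode in the list is a candidate orphan.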
[ { "msg_contents": "Hi,\n\nWe upgraded our PG version from 9.1.3 to 9.2.6. After that, noticed a huge\njump in the memory consumed by PG backend process during a delete query on\none of our DB tables. The heap memory as reported in /proc/PID/smaps\nincreased from 25MB to 600 MB. There are quite a few triggers setup on this\ntable and I determined that the jump happens when one particular trigger\nfunction (SQL function) is exceuted. The queries executed in this function\n(around 3000 line SQL function) are quite complex, quite a few unions,\njoins on tables containing around 100K records. The function is created\nusing the syntax:\n\nCREATE OR REPLACE FUNCTION FUNC1 (p_old TABLETYPE, p_new TABLETYPE, p_id\ncharacter(50), p_event_type text)\n RETURNS integer AS\n$BODY$\n\n......\n......\n......\n\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n\n\nThe memory gets released once the backend process exits but in our\napplication we have multiple threads opening connections to PG and\nexecuting these delete queries. Connection pooling is being used and this\nmemory is not getting released when the connection is idle; so at one point\nthe machine goes OOM.\n\nFor the upgrade, we have migrated the data used pg_dump and pg_restore.\n\nSystem Configuration:\nCentOS release 5.6 (Final)\nMem: 9 GB\nCPU : 4 core - Intel(R) Xeon(R) CPU E5620 @ 2.40GHz\n\nPostgresQL configuration:\nmax_connections = 250\nshared_buffers = 1536MB\nwork_mem = 12MB\nmaintenance_work_mem = 384MB\neffective_cache_size = 3072MB\n\nI have used psql to test the delete queries and used /proc/PID/smaps to\ncheck the memory usage of the launched backend.\n\nThe heap in 'smaps' is shown as below:\n04e3c000-29a70000 rw-p 04e3c000 00:00 0\n[heap]\nSize: 602320 kB\nRss: 596596 kB\nShared_Clean: 0 kB\nShared_Dirty: 60 kB\nPrivate_Clean: 0 kB\nPrivate_Dirty: 596536 kB\nSwap: 0 kB\nPss: 596544 kB\n\n\nWe are using the RHEL-5 64-bit PotgresQL RPMs present on the PG website:\npostgresql92-9.2.6-1PGDG.rhel5.x86_64.rpm\npostgresql92-contrib-9.2.6-1PGDG.rhel5.x86_64.rpm\npostgresql92-libs-9.2.6-1PGDG.rhel5.x86_64.rpm\npostgresql92-server-9.2.6-1PGDG.rhel5.x86_64.rpm\n\nI have now compiled the PG source code and installed a version with debug\nsymbols on the same machine. But I do not know how to determine what is\ntaking up so much memory. Is there any data, that I can collect by\nconnecting gdb, which will help ?\n\nI tried getting the memory stats (enabled SHOW_MEMORY_STATS) and captured\nthe attached data but I do not know how to interpret this data and also not\nsure whether this statistics captures the heap memory used by the SQL\nfunction.\n\nQueries I have are:\n\n1. Has there been any change in the way memory is allocated/released when\nSQL functions are triggered in a backend?\n\n2. How can I determine what is taking up so much memory; basically how do I\nproceed further on this one?\n\n3. I guess it is some data which is cached when the SQL function runs the\nfirst time in the backend because if I delete another row of the same table\nin the same PSQL session the memory does not jump again by that amount. 
Is\nthere a way to indicate that such caching should not be done?\n\nThanks,\nDatta.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 27 Jan 2014 14:27:55 -0800", "msg_from": "Dattaram Porob <[email protected]>", "msg_from_op": true, "msg_subject": "Increased memory utilization by pgsql backend after upgrade from\n 9.1.3 to 9.2.6" },
{ "msg_contents": "Hi,\n\nI managed to get a valgrind (--tool=massif) dump and ms_print output of the\nmemory utilization by the backend process. Attached is the ms_print output.\nIt reports a peak heap utilization of 616 MB.\n\nLooks like this memory is being used to cache the query plan. Any ideas\nwhy it is occupying such a huge heap in 9.2.6 as compared to 9.1.3? I know\nthat the same SQL function occupies around 25MB of heap in 9.1.3.\n\nAny thoughts/comments?\n\nThanks,\nDattaram.\n\n\nOn Mon, Jan 27, 2014 at 2:27 PM, Dattaram Porob <[email protected]> wrote:\n\n> Hi,\n>\n> We upgraded our PG version from 9.1.3 to 9.2.6. After that, we noticed a huge\n> jump in the memory consumed by the PG backend process during a delete query on\n> one of our DB tables. The heap memory as reported in /proc/PID/smaps\n> increased from 25MB to 600 MB. There are quite a few triggers set up on this\n> table and I determined that the jump happens when one particular trigger\n> function (SQL function) is executed. The queries executed in this function\n> (around 3000 line SQL function) are quite complex, quite a few unions,\n> joins on tables containing around 100K records. The function is created\n> using the syntax:\n>\n> CREATE OR REPLACE FUNCTION FUNC1 (p_old TABLETYPE, p_new TABLETYPE, p_id\n> character(50), p_event_type text)\n> RETURNS integer AS\n> $BODY$\n>\n> ......\n> ......\n> ......\n>\n> $BODY$\n> LANGUAGE 'plpgsql' VOLATILE\n> COST 100;\n>\n>\n> The memory gets released once the backend process exits but in our\n> application we have multiple threads opening connections to PG and\n> executing these delete queries.
Connection pooling is being used and this\n> memory is not getting released when the connection is idle; so at one point\n> the machine goes OOM.\n>\n> For the upgrade, we have migrated the data using pg_dump and pg_restore.\n>\n> System Configuration:\n> CentOS release 5.6 (Final)\n> Mem: 9 GB\n> CPU : 4 core - Intel(R) Xeon(R) CPU E5620 @ 2.40GHz\n>\n> PostgreSQL configuration:\n> max_connections = 250\n> shared_buffers = 1536MB\n> work_mem = 12MB\n> maintenance_work_mem = 384MB\n> effective_cache_size = 3072MB\n>\n> I have used psql to test the delete queries and used /proc/PID/smaps to\n> check the memory usage of the launched backend.\n>\n> The heap in 'smaps' is shown as below:\n> 04e3c000-29a70000 rw-p 04e3c000 00:00 0\n> [heap]\n> Size: 602320 kB\n> Rss: 596596 kB\n> Shared_Clean: 0 kB\n> Shared_Dirty: 60 kB\n> Private_Clean: 0 kB\n> Private_Dirty: 596536 kB\n> Swap: 0 kB\n> Pss: 596544 kB\n>\n>\n> We are using the RHEL-5 64-bit PostgreSQL RPMs present on the PG website:\n> postgresql92-9.2.6-1PGDG.rhel5.x86_64.rpm\n> postgresql92-contrib-9.2.6-1PGDG.rhel5.x86_64.rpm\n> postgresql92-libs-9.2.6-1PGDG.rhel5.x86_64.rpm\n> postgresql92-server-9.2.6-1PGDG.rhel5.x86_64.rpm\n>\n> I have now compiled the PG source code and installed a version with debug\n> symbols on the same machine. But I do not know how to determine what is\n> taking up so much memory. Is there any data that I can collect by\n> connecting gdb, which will help?\n>\n> I tried getting the memory stats (enabled SHOW_MEMORY_STATS) and captured\n> the attached data but I do not know how to interpret this data and am also not\n> sure whether these statistics capture the heap memory used by the SQL\n> function.\n>\n> Queries I have are:\n>\n> 1. Has there been any change in the way memory is allocated/released when\n> SQL functions are triggered in a backend?\n>\n> 2. How can I determine what is taking up so much memory; basically how do\n> I proceed further on this one?\n>\n> 3. I guess it is some data which is cached when the SQL function runs the\n> first time in the backend because if I delete another row of the same table\n> in the same psql session the memory does not jump again by that amount. Is\n> there a way to indicate that such caching should not be done?\n>\n> Thanks,\n> Datta.\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 30 Jan 2014 09:43:37 -0800", "msg_from": "Dattaram Porob <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increased memory utilization by pgsql backend after upgrade from\n 9.1.3 to 9.2.6" }, { "msg_contents": "On Thu, Jan 30, 2014 at 2:43 PM, Dattaram Porob\n<[email protected]> wrote:\n>\n> Looks like this memory is being used to cache the query plan. Any ideas\n> why it is occupying such a huge heap in 9.2.6 as compared to 9.1.3? I know\n> that the same SQL function occupies around 25MB of heap in 9.1.3.\n>\n> Any thoughts/comments?\n\nI believe (not sure) that you can release that memory by issuing a\nDISCARD ALL when returning a connection to the pool.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Feb 2014 23:32:46 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increased memory utilization by pgsql backend after\n upgrade from 9.1.3 to 9.2.6" } ]
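A sketch of Claudio's suggestion (an editor's illustration, not from the thread; where it runs depends on the pooler, which usually exposes a reset query or connection check-in hook):

-- issue on a connection as it is returned to the pool; it drops
-- prepared statements, temporary tables, session GUCs, and similar
-- session state, which is what lets a backend give memory back between uses
DISCARD ALL;

Whether this also releases the plan cache built up by a large PL/pgSQL function is exactly the open question in this thread, hence Claudio's "not sure".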
[ { "msg_contents": "I have a performance problem using a dimensional model where the date is\nspecified in a DATE dimension, specifically when using 'WHERE DATE >= 'Some\nDate'\n\nThis query runs very fast when using an equality expression, eg. 'WHERE\nDATE = '2014-01-01\", and I'm wondering if there is a way to make it run\nfast when using the greater than expression.\n\nThe dimension table is about 5k rows, and the Fact table is ~60M.\n\nThanks in advance for any advice.\n\nJT.\n\n\n\nThe query :\n\nselect sid, count(*) from fact fact_data fact left outer join dim_date dim\non dim.date_id = fact.date_id where dim.date >= '2014-1-25' group by sid\norder by count desc limit 10;\n\nFACT Table Definition:\n\n Table \"public.fact_data\"\n Column | Type | Modifiers\n---------------+-----------------------------+-----------\n date_id | integer |\n date | timestamp without time zone |\n agent_id | integer |\n instance_id | integer |\n sid | integer |\n Indexes:\n \"fact_agent_id\" btree (agent_id)\n \"fact_date_id\" btree (date_id) CLUSTER\n \"fact_alarms_sid\" btree (sid)\n\n\n Table \"public.dim_date\"\n Column | Type | Modifiers\n\n--------------------+---------+------------------------------------------------------------\n date_id | integer | not null default\nnextval('dim_date_date_id_seq'::regclass)\n date | date |\n year | integer |\n month | integer |\n month_name | text |\n day | integer |\n day_of_year | integer |\n weekday_name | text |\n calendar_week | integer |\n quarter | text |\n year_quarter | text |\n year_month | text |\n year_calendar_week | text |\n weekend | text |\n week_start_date | date |\n week_end_date | date |\n month_start_date | date |\n month_end_date | date |\nIndexes:\n \"dim_date_date\" btree (date)\n \"dim_date_date_id\" btree (date_id)\n\nEXPLAIN Output:\n\nexplain (analyze, buffers) select dim.date_id, fact.sid, count(1) from\nfact_data fact left outer join dim_date dim on dim.date_id = fact.date_id\nwhere dim.date_id >= 5139 group by 1,2 order by 3 desc limit 10;\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9772000.55..9772000.58 rows=10 width=8) (actual\ntime=91064.421..91064.440 rows=10 loops=1)\n Buffers: shared hit=4042 read=1542501\n -> Sort (cost=9772000.55..9787454.06 rows=6181404 width=8) (actual\ntime=91064.408..91064.414 rows=10 loops=1)\n Sort Key: (count(1))\n Sort Method: top-N heapsort Memory: 25kB\n Buffers: shared hit=4042 read=1542501\n -> GroupAggregate (cost=9150031.23..9638422.63 rows=6181404\nwidth=8) (actual time=90892.625..91063.905 rows=617 loops=1)\n Buffers: shared hit=4042 read=1542501\n -> Sort (cost=9150031.23..9256675.57 rows=42657736\nwidth=8) (actual time=90877.129..90964.995 rows=124965 loops=1)\n Sort Key: dim.date_id, fact.sid\n Sort Method: quicksort Memory: 8930kB\n Buffers: shared hit=4042 read=1542501\n -> Hash Join (cost=682.34..3160739.50 rows=42657736\nwidth=8) (actual time=45087.394..90761.624 rows=124965 loops=1)\n Hash Cond: (fact.date_id = dim.date_id)\n Buffers: shared hit=4042 read=1542501\n -> Seq Scan on fact_data fact\n (cost=0.00..2139866.40 rows=59361340 width=8) (actual\ntime=0.090..47001.500 rows=59360952 loops=1)\n Buffers: shared hit=3752 read=1542501\n -> Hash (cost=518.29..518.29 rows=13124\nwidth=4) (actual time=21.083..21.083 rows=13125 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage:\n462kB\n Buffers: shared hit=290\n -> Seq Scan on dim_date 
dim\n (cost=0.00..518.29 rows=13124 width=4) (actual time=0.494..10.918\nrows=13125 loops=1)\n Filter: (date_id >= 5139)\n Rows Removed by Filter: 5138\n Buffers: shared hit=290\n Total runtime: 91064.496 ms\n(25 rows)
", "msg_date": "Tue, 28 Jan 2014 15:01:32 -0700", "msg_from": "Jim Treinen <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query on join with Date >=" } ]
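No reply to this one appears in the archive, but since the join forces a full scan of the ~60M-row fact table, one common rewrite is worth sketching (an editor's illustration using the names from the question; it assumes date_id ascends with date, as is usual for a date dimension):

SELECT sid, count(*) AS cnt
FROM fact_data
WHERE date_id >= (SELECT min(date_id) FROM dim_date WHERE date >= '2014-01-25')
GROUP BY sid
ORDER BY cnt DESC
LIMIT 10;

Folding the cutoff into a single range condition on fact_data.date_id lets the clustered fact_date_id index be range-scanned instead of hash-joining every fact row.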
[ { "msg_contents": "Hello everyone,\n\nI've a query that runs on a table with a matching index to its WHERE and\nORDER clause. However the planner never uses that index. Is there any reason\nwhy it doesn't?\n\nHere's the table:\n\ndb=> \\d social_feed_feed_items;\n Table\n\"public.social_feed_feed_items\"\n Column | Type | \nModifiers\n-------------------+-----------------------------+---------------------------------------------------------------------\n id | integer | not null default\nnextval('social_feed_feed_items_id_seq'::regclass)\n social_feed_id | integer |\n social_message_id | integer |\n posted_at | timestamp without time zone |\nIndexes:\n \"social_message_feed_feed_items_pkey\" PRIMARY KEY, btree (id)\n \"index_social_feed_feed_items_on_social_feed_id\" btree (social_feed_id)\n \"index_social_feed_feed_items_on_social_feed_id_and_posted_at\" btree\n(social_feed_id, posted_at DESC NULLS LAST)\n \"index_social_feed_feed_items_on_social_message_id\" btree\n(social_message_id)\n \"social_feed_item_feed_message_index\" btree (social_feed_id,\nsocial_message_id)\n\nHere's the query:\n\ndb=> EXPLAIN ANALYSE SELECT social_feed_feed_items.social_message_id FROM\nsocial_feed_feed_items WHERE social_feed_feed_items.social_feed_id = 480\nORDER BY posted_at DESC NULLS LAST LIMIT 1200;\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=126.83..127.43 rows=1200 width=12) (actual time=10.321..13.694\nrows=1200 loops=1)\n -> Sort (cost=126.83..129.08 rows=4498 width=12) (actual\ntime=10.318..11.485 rows=1200 loops=1)\n Sort Key: posted_at\n Sort Method: top-N heapsort Memory: 153kB\n -> Index Scan using index_social_feed_feed_items_on_social_feed_id\non social_feed_feed_items (cost=0.09..76.33 rows=4498 width=12) (actual\ntime=0.037..5.317 rows=4249 loops=1)\n Index Cond: (social_feed_id = 480)\n Total runtime: 14.913 ms\n(7 rows)\n\nI was hoping that they planner would use\nindex_social_feed_feed_items_on_social_feed_id_and_posted_at, but it never\ndoes. If I manually remove the index that it currently uses then magic\nhappens:\n\ndb=> DROP INDEX index_social_feed_feed_items_on_social_feed_id;\nDROP INDEX\ndb=> EXPLAIN ANALYSE SELECT social_feed_feed_items.social_message_id FROM\nsocial_feed_feed_items WHERE social_feed_feed_items.social_feed_id = 480\nORDER BY posted_at DESC NULLS LAST LIMIT 1200;\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.09..998.63 rows=1200 width=12) (actual time=0.027..3.792\nrows=1200 loops=1)\n -> Index Scan using\nindex_social_feed_feed_items_on_social_feed_id_and_posted_at on\nsocial_feed_feed_items (cost=0.09..3742.95 rows=4498 width=12) (actual\ntime=0.023..1.536 rows=1200 loops=1)\n Index Cond: (social_feed_id = 480)\n Total runtime: 4.966 ms\n(4 rows)\n\nSo my question is, without dropping\nindex_social_feed_feed_items_on_social_feed_id since it's needed by other\nqueries, how do I make the planner use\nindex_social_feed_feed_items_on_social_feed_id_and_posted_at for a much\nfaster performance? 
Why did the planner consider only the matching WHERE\nclause, rather than the WHERE plus ORDER BY, when choosing its plan?\n\ndb=> show SERVER_VERSION;\n server_version\n----------------\n 9.3.2\n(1 row)\n\nThank you very much for your response(s).\n\nRegards,\nKen\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/WHERE-with-ORDER-not-using-the-best-index-tp5789581.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jan 2014 13:19:17 -0800 (PST)", "msg_from": "jugnooken <[email protected]>", "msg_from_op": true, "msg_subject": "WHERE with ORDER not using the best index" }, { "msg_contents": "jugnooken <[email protected]> writes:\n> Here's the query:\n\n> db=> EXPLAIN ANALYSE SELECT social_feed_feed_items.social_message_id FROM\n> social_feed_feed_items WHERE social_feed_feed_items.social_feed_id = 480\n> ORDER BY posted_at DESC NULLS LAST LIMIT 1200;\n \n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=126.83..127.43 rows=1200 width=12) (actual time=10.321..13.694\n> rows=1200 loops=1)\n> -> Sort (cost=126.83..129.08 rows=4498 width=12) (actual\n> time=10.318..11.485 rows=1200 loops=1)\n> Sort Key: posted_at\n> Sort Method: top-N heapsort Memory: 153kB\n> -> Index Scan using index_social_feed_feed_items_on_social_feed_id\n> on social_feed_feed_items (cost=0.09..76.33 rows=4498 width=12) (actual\n> time=0.037..5.317 rows=4249 loops=1)\n> Index Cond: (social_feed_id = 480)\n> Total runtime: 14.913 ms\n> (7 rows)\n\n> I was hoping that the planner would use\n> index_social_feed_feed_items_on_social_feed_id_and_posted_at, but it never\n> does. If I manually remove the index that it currently uses then magic\n> happens:\n\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.09..998.63 rows=1200 width=12) (actual time=0.027..3.792\n> rows=1200 loops=1)\n> -> Index Scan using\n> index_social_feed_feed_items_on_social_feed_id_and_posted_at on\n> social_feed_feed_items (cost=0.09..3742.95 rows=4498 width=12) (actual\n> time=0.023..1.536 rows=1200 loops=1)\n> Index Cond: (social_feed_id = 480)\n> Total runtime: 4.966 ms\n> (4 rows)\n\nWell, it likes the first plan because it's estimating that one as cheaper\n;-). The question is why the indexscan cost is estimated so remarkably\nhigh for the second index --- nearly two orders of magnitude more to\nretrieve the same number of index entries. The most obvious explanation\nis that that index is horribly bloated for some reason. Have you checked\nthe physical index sizes?
If the second index is many times bigger,\nREINDEX ought to help, though it's unclear whether the bloat will recur.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jan 2014 18:15:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE with ORDER not using the best index" }, { "msg_contents": "Thank you so much for the prompt reply, Tom. The index is actually fairly new\n- but to be safe I issued REINDEX TABLE so that they are all clean. Here are\nthe sizes of each index right after REINDEX.\n\ndb=> select\npg_size_pretty(pg_relation_size('index_social_feed_feed_items_on_social_feed_id_and_posted_at'));\n pg_size_pretty\n----------------\n 149 MB\n(1 row)\n\ndb=> select\npg_size_pretty(pg_relation_size('index_social_feed_feed_items_on_social_feed_id'));\n pg_size_pretty\n----------------\n 106 MB\n(1 row)\n\nUnfortunately, pg still thinks using\nindex_social_feed_feed_items_on_social_feed_id is faster although they are\nabout the same size :(. Any idea?\n\nRegards,\nKen\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/WHERE-with-ORDER-not-using-the-best-index-tp5789581p5789624.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jan 2014 17:47:19 -0800 (PST)", "msg_from": "jugnooken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WHERE with ORDER not using the best index" }, { "msg_contents": "jugnooken <[email protected]> writes:\n> Unfortunately, pg still thinks using\n> index_social_feed_feed_items_on_social_feed_id is faster although they are\n> about the same size :(. Any idea?\n\nOn further reflection, the cost estimate that is weird for this number of\nrows is not the large one for your preferred index, but the small estimate\nfor the one the planner likes. My guess is that that must be happening\nbecause the latter index is nearly perfectly correlated with the table's\nphysical order, whereas yours is more or less random relative to table\norder.\n\nThe fact that the former index is actually faster in use means that in\nyour environment, random access into the table is pretty cheap, which\nmeans you should consider decreasing random_page_cost. But first it'd\nbe a good idea to confirm that your test case is actually representative\nof production behavior --- it's very easy to get fooled by all-in-cache\nmeasurements, which are not reliable guides unless your database does in\nfact fit in RAM.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 30 Jan 2014 11:00:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE with ORDER not using the best index" } ]
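A sketch of how Tom's random_page_cost suggestion can be tried before being made permanent (an editor's illustration; the value 1.5 is an arbitrary starting point, not a recommendation from the thread):

SET random_page_cost = 1.5;  -- affects this session only
EXPLAIN ANALYSE SELECT social_feed_feed_items.social_message_id
FROM social_feed_feed_items
WHERE social_feed_feed_items.social_feed_id = 480
ORDER BY posted_at DESC NULLS LAST LIMIT 1200;

If the composite index now wins and the timing holds up on a cold cache, the setting can be persisted in postgresql.conf or with ALTER DATABASE ... SET random_page_cost.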
[ { "msg_contents": "Hi!\n\nI have a table called 'feed'. It's a big table accessed by many types of\nqueries, so I have quite a lot of indices on it.\n\nThose that are relevant looks like this:\n\n\"feed_user_id_active_id_added_idx\" btree (user_id, active_id, added)\n\"feed_user_id_added_idx\" btree (user_id, added DESC)\n\"feed_user_id_added_idx2\" btree (user_id, added DESC) WHERE active_id =\nuser_id AND type = 1\n\nlast one is very small and tailored for the specific query.\n\"added\" field is timestamp, everything else is integers.\n\nThat specific query looks like this:\n\nSELECT * FROM feed WHERE user_id = ? AND type = 1 AND active_id = user_id\nORDER BY added DESC LIMIT 31;\n\nBut it doesn't use the last index. EXPLAIN shows this:\n\n Limit (cost=0.00..463.18 rows=31 width=50)\n -> Index Scan Backward using feed_user_id_active_id_added_idx on\nuser_feed (cost=0.00..851.66 rows=57 width=50)\n Index Cond: ((user_id = 7) AND (active_id = 7))\n Filter: (type = 1)\n\nSo as we can see optimiser changes \"active_id = user_id\" to \"active_id =\n<whatever value user_id takes>\". And it brokes my nice fast partial index :(\nCan I do something here so optimiser would use the feed_user_id_added_idx2\nindex? It's around ten times smaller than the 'generic'\nfeed_user_id_active_id_added_idx index.\n\nI have PostgreSQL 9.2.6 on Debian.\n\nBest regards,\nDmitriy Shalashov\n\nHi!I have a table called 'feed'. It's a big table accessed by many types of queries, so I have quite a lot of indices on it.Those that are relevant looks like this:\n\"feed_user_id_active_id_added_idx\" btree (user_id, active_id, added)\"feed_user_id_added_idx\" btree (user_id, added DESC)\"feed_user_id_added_idx2\" btree (user_id, added DESC) WHERE active_id = user_id AND type = 1\nlast one is very small and tailored for the specific query.\"added\" field is timestamp, everything else is integers.That specific query looks like this:SELECT * FROM feed WHERE user_id = ? AND type = 1 AND active_id = user_id ORDER BY added DESC LIMIT 31;\nBut it doesn't use the last index. EXPLAIN shows this: Limit  (cost=0.00..463.18 rows=31 width=50)   ->  Index Scan Backward using feed_user_id_active_id_added_idx on user_feed  (cost=0.00..851.66 rows=57 width=50)\n\n         Index Cond: ((user_id = 7) AND (active_id = 7))         Filter: (type = 1)So as we can see optimiser changes \"active_id = user_id\" to \"active_id = <whatever value user_id takes>\". And it brokes my nice fast partial index :(\nCan I do something here so optimiser would use the feed_user_id_added_idx2 index? It's around ten times smaller than the 'generic' feed_user_id_active_id_added_idx index.I have PostgreSQL 9.2.6 on Debian.\nBest regards,Dmitriy Shalashov", "msg_date": "Thu, 30 Jan 2014 03:38:01 +0400", "msg_from": "=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?= <[email protected]>", "msg_from_op": true, "msg_subject": "trick the query optimiser to skip some optimisations" }, { "msg_contents": "On Wed, Jan 29, 2014 at 3:38 PM, Дмитрий Шалашов <[email protected]> wrote:\n\n\n> \"feed_user_id_added_idx2\" btree (user_id, added DESC) WHERE active_id =\n> user_id AND type = 1\n>\n\n ...\n\n\n> SELECT * FROM feed WHERE user_id = ? AND type = 1 AND active_id = user_id\n> ORDER BY added DESC LIMIT 31;\n>\n> But it doesn't use the last index. 
EXPLAIN shows this:\n>\n> Limit (cost=0.00..463.18 rows=31 width=50)\n> -> Index Scan Backward using feed_user_id_active_id_added_idx on\n> user_feed (cost=0.00..851.66 rows=57 width=50)\n> Index Cond: ((user_id = 7) AND (active_id = 7))\n> Filter: (type = 1)\n>\n> So as we can see, the optimiser changes \"active_id = user_id\" to \"active_id =\n> <whatever value user_id takes>\". And it breaks my nice fast partial index :(\n> Can I do something here so the optimiser would use the feed_user_id_added_idx2\n> index? It's around ten times smaller than the 'generic'\n> feed_user_id_active_id_added_idx index.\n>\n\nHow about \"where user_id+0=?\"\n\nCheers,\n\nJeff\n", "msg_date": "Wed, 29 Jan 2014 15:50:00 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: trick the query optimiser to skip some optimisations" }, { "msg_contents": "Thanks for the tip!\n\nWell, index is now used but...\n\n
Limit (cost=264291.67..264291.75 rows=31 width=50)\n -> Sort (cost=264291.67..264292.80 rows=453 width=50)\n Sort Key: added\n -> Bitmap Heap Scan on feed (cost=1850.99..264278.18 rows=453\nwidth=50)\n Recheck Cond: ((active_id = user_id) AND (type = 1))\n Filter: ((user_id + 0) = 7)\n -> Bitmap Index Scan on feed_user_id_added_idx2\n(cost=0.00..1850.88 rows=90631 width=0)\n\n\nBest regards,\nDmitriy Shalashov\n\n\n2014-01-30 Jeff Janes <[email protected]>\n\n> On Wed, Jan 29, 2014 at 3:38 PM, Дмитрий Шалашов <[email protected]> wrote:\n>\n>\n>> \"feed_user_id_added_idx2\" btree (user_id, added DESC) WHERE active_id =\n>> user_id AND type = 1\n>>\n>\n> ...\n>\n>\n>> SELECT * FROM feed WHERE user_id = ? AND type = 1 AND active_id = user_id\n>> ORDER BY added DESC LIMIT 31;\n>>\n>> But it doesn't use the last index. EXPLAIN shows this:\n>>\n>> Limit (cost=0.00..463.18 rows=31 width=50)\n>> -> Index Scan Backward using feed_user_id_active_id_added_idx on\n>> user_feed (cost=0.00..851.66 rows=57 width=50)\n>> Index Cond: ((user_id = 7) AND (active_id = 7))\n>> Filter: (type = 1)\n>>\n>> So as we can see, the optimiser changes \"active_id = user_id\" to \"active_id =\n>> <whatever value user_id takes>\". And it breaks my nice fast partial index :(\n>> Can I do something here so the optimiser would use the\n>> feed_user_id_added_idx2 index? It's around ten times smaller than the\n>> 'generic' feed_user_id_active_id_added_idx index.\n>>\n>\n> How about \"where user_id+0=?\"\n>\n> Cheers,\n>\n> Jeff\n>
It's around ten times smaller than the 'generic'\n> feed_user_id_active_id_added_idx index.\n>\n> I have PostgreSQL 9.2.6 on Debian.\n\nCould you please show EXPLAIN ANALYZE for both cases, the current one\nand with feed_user_id_active_id_added_idx dropped?\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 30 Jan 2014 12:36:26 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: trick the query optimiser to skip some optimisations" }, { "msg_contents": "On Wed, Jan 29, 2014 at 4:17 PM, Дмитрий Шалашов <[email protected]> wrote:\n\n> Thanks for the tip!\n>\n> Well, index is now used but...\n>\n> Limit (cost=264291.67..264291.75 rows=31 width=50)\n> -> Sort (cost=264291.67..264292.80 rows=453 width=50)\n> Sort Key: added\n> -> Bitmap Heap Scan on feed (cost=1850.99..264278.18 rows=453\n> width=50)\n> Recheck Cond: ((active_id = user_id) AND (type = 1))\n> Filter: ((user_id + 0) = 7)\n> -> Bitmap Index Scan on feed_user_id_added_idx2\n> (cost=0.00..1850.88 rows=90631 width=0)\n>\n\nAh, of course. It prevents the optimization you want, as well as the one\nyou don't want.\n\nThis is getting very ugly, but maybe change the index to match the\ndegenerate query:\n\n\"feed_user_id_added_idx3\" btree ((user_id + 0), added DESC) WHERE active_id\n= user_id AND type = 1\n\nLong term I would probably look into refactoring the table so that\n\"active_id = user_id\" is not a magical condition, like it seems to be for\nyou currently. Maybe introduce a boolean column.\n\nCheers,\n\nJeff\n\nOn Wed, Jan 29, 2014 at 4:17 PM, Дмитрий Шалашов <[email protected]> wrote:\nThanks for the tip!Well, index is now used but...\n Limit  (cost=264291.67..264291.75 rows=31 width=50)   ->  Sort  (cost=264291.67..264292.80 rows=453 width=50)         Sort Key: added\n\n         ->  Bitmap Heap Scan on feed  (cost=1850.99..264278.18 rows=453 width=50)               Recheck Cond: ((active_id = user_id) AND (type = 1))               Filter: ((user_id + 0) = 7)               ->  Bitmap Index Scan on feed_user_id_added_idx2  (cost=0.00..1850.88 rows=90631 width=0)\nAh, of course.  It prevents the optimization you want, as well as the one you don't want.This is getting very ugly, but maybe change the index to match the degenerate query:\n\"feed_user_id_added_idx3\" btree ((user_id + 0), added DESC) WHERE active_id = user_id AND type = 1Long term I would probably look into refactoring the table so that \"active_id = user_id\" is not a magical condition, like it seems to be for you currently.  Maybe introduce a boolean column.\nCheers,Jeff", "msg_date": "Thu, 30 Jan 2014 14:03:32 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: trick the query optimiser to skip some optimisations" } ]
[ { "msg_contents": "Hi,\n\n \n\nI am not sure if it is bug or not but I found some strange behaviour. Maybe\nit is the same as described on\nhttp://www.postgresql.org/message-id/[email protected] ?). If\nyes - I'm sorry for the trouble, but I think that my example is more\nobvious.\n\n \n\nTested on PostgreSQL 9.2.4 and 9.2.6.\n\n \n\nConsole 1:\n\nBEGIN;\n\nDECLARE a CURSOR FOR SELECT * FROM tab;\n\n--- Keep cursor open for disallow full vacuum of tab\n\n \n\nConsole 2:\n\nSELECT count(*) FROM tab; \n\n---- Result: 3588;\n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 3588\n\nUPDATE tab SET id=id;\n\nUPDATE tab SET id=id;\n\nUPDATE tab SET id=id;\n\nVACUUM ANALYZE tab;\n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 3588\n\n \n\nNow wait few seconds :)\n\n \n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 12560\n\n \n\nVACUUM ANALYZE tab;\n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 3588\n\n \n\nThere is 3588 live records and 12560 live+dead records in table.\n\nThat is strange for me. VACUUM updates pg_class.reltuples differently (only\nlive roiws count) than autovacuum (live and dead rows). Why?\n\n \n\nAlso in planning:\n\n \n\nexplain SELECT id FROM tab;\n\n QUERY PLAN\n\n----------------------------------------------------------------------\n\nSeq Scan on tab (cost=0.00..1074.60 rows=12560 width=4)\n\n \n\nEstimation is done with the use of current pg_class.reltuples value. This\nvalue includes dead rows count after autovacuum so estimation is bad,\nespecially in more complex planner tree, for example:\n\n \n\nExplain SELECT a.id FROM tab AS a JOIN tab AS b USING (id);\n\n \n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n-----------------------------\n\nNested Loop (cost=0.00..6410.70 rows=12560 width=4)\n\n -> Seq Scan on tab a (cost=0.00..1074.60 rows=12560 width=8)\n\n -> Index Only Scan using tab_pkey on tab b (cost=0.00..0.41 rows=1\nwidth=4)\n\n Index Cond: (id = a.id)\n\n \n\nPostgreSQL estimates 12560 records in query result. This is wrong estimation\nif dead tuples are removed during seq scan or index scan (I suppose that it\nis).\n\n \n\nI don't think that AUTOVACUUM and VACUUM ANALYZE should behave differently\n:(\n\n \n\n--------------------------------------------------------------------------\n\nArtur Zajac\n\n \n\n\nHi, I am not sure if it is bug or not but I found some strange behaviour. Maybe it is the same as described on http://www.postgresql.org/message-id/[email protected] ?). If yes – I’m sorry for the trouble, but I think that my example is more obvious. Tested on PostgreSQL 9.2.4 and 9.2.6. Console 1:BEGIN;DECLARE a CURSOR FOR SELECT * FROM tab;--- Keep cursor open for disallow full vacuum of tab Console 2:SELECT count(*) FROM tab;     ---- Result: 3588;select reltuples from pg_class where relname='table’;  --- Result: 3588UPDATE tab SET id=id;UPDATE tab SET id=id;UPDATE tab SET id=id;VACUUM ANALYZE tab;select reltuples from pg_class where relname='table’; --- Result: 3588 Now wait few seconds J select reltuples from pg_class where relname='table’; --- Result: 12560 VACUUM ANALYZE tab;select reltuples from pg_class where relname='table’; --- Result: 3588 There is 3588 live records and 12560 live+dead records in table.That is strange for me. VACUUM updates pg_class.reltuples differently (only live roiws count) than autovacuum (live and dead rows). Why? 
Also in planning: explain SELECT id FROM tab;                              QUERY PLAN---------------------------------------------------------------------- Seq Scan on tab  (cost=0.00..1074.60 rows=12560 width=4) Estimation is done with the use of current pg_class.reltuples value.  This value includes dead rows count after autovacuum so estimation is bad, especially in more complex planner tree, for example: Explain SELECT a.id FROM tab AS a JOIN tab AS b USING (id);                                                QUERY PLAN--------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..6410.70 rows=12560 width=4)   ->  Seq Scan on tab a  (cost=0.00..1074.60 rows=12560 width=8)   ->  Index Only Scan using tab_pkey on tab b  (cost=0.00..0.41 rows=1 width=4)         Index Cond: (id = a.id) PostgreSQL estimates 12560 records in query result. This is wrong estimation if dead tuples are removed during seq scan or index scan (I suppose that it is). I don’t think that AUTOVACUUM and VACUUM ANALYZE should behave differently L --------------------------------------------------------------------------Artur Zajac", "msg_date": "Mon, 3 Feb 2014 19:57:45 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c_CFI?= <[email protected]>", "msg_from_op": true, "msg_subject": "Planner estimates and VACUUM/autovacuum" } ]
[ { "msg_contents": "Hi,\n\n \n\nI am not sure if it is bug or not but I found some strange behaviour. Maybe\nit is the same as described on\nhttp://www.postgresql.org/message-id/[email protected] ?). If\nyes - I'm sorry for the trouble, but I think that my example is more\nobvious.\n\n \n\nTested on PostgreSQL 9.2.4 and 9.2.6.\n\n \n\nConsole 1:\n\nBEGIN;\n\nDECLARE a CURSOR FOR SELECT * FROM tab;\n\n--- Keep cursor open for disallow full vacuum of tab\n\n \n\nConsole 2:\n\nSELECT count(*) FROM tab; \n\n---- Result: 3588;\n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 3588\n\nUPDATE tab SET id=id;\n\nUPDATE tab SET id=id;\n\nUPDATE tab SET id=id;\n\nVACUUM ANALYZE tab;\n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 3588\n\n \n\nNow wait few seconds :)\n\n \n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 12560\n\n \n\nVACUUM ANALYZE tab;\n\nselect reltuples from pg_class where relname='table'; \n\n--- Result: 3588\n\n \n\nThere is 3588 live records and 12560 live+dead records in table.\n\nThat is strange for me. VACUUM updates pg_class.reltuples differently (only\nlive roiws count) than autovacuum (live and dead rows). Why?\n\n \n\nAlso in planning:\n\n \n\nexplain SELECT id FROM tab;\n\n QUERY PLAN\n\n----------------------------------------------------------------------\n\nSeq Scan on tab (cost=0.00..1074.60 rows=12560 width=4)\n\n \n\nEstimation is done with the use of current pg_class.reltuples value. This\nvalue includes dead rows count after autovacuum so estimation is bad,\nespecially in more complex planner tree, for example:\n\n \n\nExplain SELECT a.id FROM tab AS a JOIN tab AS b USING (id);\n\n \n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n-----------------------------\n\nNested Loop (cost=0.00..6410.70 rows=12560 width=4)\n\n -> Seq Scan on tab a (cost=0.00..1074.60 rows=12560 width=8)\n\n -> Index Only Scan using tab_pkey on tab b (cost=0.00..0.41 rows=1\nwidth=4)\n\n Index Cond: (id = a.id)\n\n \n\nPostgreSQL estimates 12560 records in query result. This is wrong estimation\nif dead tuples are removed during seq scan or index scan (I suppose that it\nis).\n\n \n\nI don't think that AUTOVACUUM and VACUUM ANALYZE should behave differently\n:(\n\n \n\n--------------------------------------------------------------------------\n\nArtur Zajac\n\n \n\n \n\n\nHi, I am not sure if it is bug or not but I found some strange behaviour. Maybe it is the same as described on http://www.postgresql.org/message-id/[email protected] ?). If yes – I’m sorry for the trouble, but I think that my example is more obvious. Tested on PostgreSQL 9.2.4 and 9.2.6. Console 1:BEGIN;DECLARE a CURSOR FOR SELECT * FROM tab;--- Keep cursor open for disallow full vacuum of tab Console 2:SELECT count(*) FROM tab;     ---- Result: 3588;select reltuples from pg_class where relname='table’;  --- Result: 3588UPDATE tab SET id=id;UPDATE tab SET id=id;UPDATE tab SET id=id;VACUUM ANALYZE tab;select reltuples from pg_class where relname='table’; --- Result: 3588 Now wait few seconds J select reltuples from pg_class where relname='table’; --- Result: 12560 VACUUM ANALYZE tab;select reltuples from pg_class where relname='table’; --- Result: 3588 There is 3588 live records and 12560 live+dead records in table.That is strange for me. VACUUM updates pg_class.reltuples differently (only live roiws count) than autovacuum (live and dead rows). Why? 
Also in planning: explain SELECT id FROM tab;                              QUERY PLAN----------------------------------------------------------------------Seq Scan on tab  (cost=0.00..1074.60 rows=12560 width=4) Estimation is done with the use of current pg_class.reltuples value.  This value includes dead rows count after autovacuum so estimation is bad, especially in more complex planner tree, for example: Explain SELECT a.id FROM tab AS a JOIN tab AS b USING (id);                                                QUERY PLAN---------------------------------------------------------------------------------------------------------Nested Loop  (cost=0.00..6410.70 rows=12560 width=4)   ->  Seq Scan on tab a  (cost=0.00..1074.60 rows=12560 width=8)   ->  Index Only Scan using tab_pkey on tab b  (cost=0.00..0.41 rows=1 width=4)         Index Cond: (id = a.id) PostgreSQL estimates 12560 records in query result. This is wrong estimation if dead tuples are removed during seq scan or index scan (I suppose that it is). I don’t think that AUTOVACUUM and VACUUM ANALYZE should behave differently L --------------------------------------------------------------------------Artur Zajac", "msg_date": "Mon, 3 Feb 2014 20:29:34 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>", "msg_from_op": true, "msg_subject": "Planner estimates and VACUUM/autovacuum" }, { "msg_contents": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]> writes:\n> That is strange for me. VACUUM updates pg_class.reltuples differently (only\n> live roiws count) than autovacuum (live and dead rows). Why?\n\nI don't have time to poke into this in detail right now, but I think\nthat more likely the issue is that ANALYZE might have a different rule\nthan VACUUM. autovacuum will fire those operations independently, which\ndoesn't match what you're doing by hand.\n\nWhich is not to say that we shouldn't think about fixing that, but that\nit's important to understand the problem clearly first.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 03 Feb 2014 16:03:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner estimates and VACUUM/autovacuum" } ]
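Tom's point — that autovacuum fires VACUUM and ANALYZE as separate operations, possibly with different counting rules — can be tested by hand. A hedged sketch, reusing the tab table from the report (the open cursor is what keeps the dead rows around; which value each SELECT returns is the thing under test, not a promised result):

-- Session 1: an open cursor pins a snapshot, so vacuum cannot remove
-- the dead row versions created below.
BEGIN;
DECLARE a CURSOR FOR SELECT * FROM tab;

-- Session 2: run the two operations separately, the way autovacuum
-- fires them, instead of a combined VACUUM ANALYZE.
UPDATE tab SET id = id;                               -- create dead rows

ANALYZE tab;                                          -- sampling-based path
SELECT reltuples FROM pg_class WHERE relname = 'tab';

VACUUM tab;                                           -- full-scan path
SELECT reltuples FROM pg_class WHERE relname = 'tab';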
[ { "msg_contents": "We have been running into a (live lock?) issue on our production Postgres\ninstance causing queries referencing a particular table to become extremely\nslow and our application to lock up.\n\nThis tends to occur on a particular table that gets a lot of queries\nagainst it after a large number of deletes. When this happens, the\nfollowing symptoms occur when queries referencing that table are run (even\nit we stop the deleting):\n\nSELECT * FROM table_name LIMIT 10; -- takes ~45 seconds to complete\nEXPLAIN SELECT * FROM table_name LIMIT 10; -- takes ~45 seconds to\ncomplete the explain query, the query plan looks reasonable\nEXPLAIN SELECT * FROM table_name LIMIT 10; -- takes ~45 seconds to\ncomplete the explain analyze query, query plan looks reasonable, timing\nstats says query took sub millisecond time to complete\n\nSELECT * FROM another_table LIMIT 10; -- takes sub millisecond time\nEXPLAIN * FROM another_table LIMIT 10; -- takes sub millisecond time, query\nplan looks reasonable\n\nThis behavior only stops and the queries go back to taking sub millisecond\ntime if we take the application issuing the SELECTs offline and wait for\nthe active queries to finish (or terminate them).\n\nThere is not a particularly large load on the database machine at the time,\nneither are there a particularly large number of wal logs being written\n(although there is a burst of wal log writes immediately after the queue is\ncleared).\n\ntable_name stats:\n~ 400,000,000 rows\nWe are deleting 10,000,000s of rows in 100,000 row increments over a few\ndays time prior/during this slowdown.\nSimultaneously a web app is querying this table continuously.\n\ntable_name has 4 btree indexes on it (one of which is set to CLUSTER) and\none foreign key constraint.\n\nThe obvious workaround is to not delete so much data on the table on our\nproduction database, but we would like to figure out why Postgres is live\nlocking this table. 
Do you have any ideas why this is happening and how to\nprevent it while still doing mass deletes on the table?\n\n-------------------------------------------------------------------------\n\nSystem information:\n\nPostgres Version - PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled\nby gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\nOS - Ubuntu 12.04 LTS\n\nAutovacuum is on.\n\n--------------------------------------------------------------------------\n\nSELECT name, current_setting(name), source\n  FROM pg_settings\n  WHERE source NOT IN ('default');\n             name             |  current_setting   |        source\n------------------------------+--------------------+----------------------\n application_name             | psql               | client\n archive_command              | /bin/true          | configuration file\n archive_mode                 | on                 | configuration file\n bytea_output                 | escape             | configuration file\n checkpoint_completion_target | 0.9                | configuration file\n checkpoint_segments          | 24                 | configuration file\n client_encoding              | UTF8               | session\n DateStyle                    | ISO, MDY           | configuration file\n default_text_search_config   | pg_catalog.english | configuration file\n effective_cache_size         | 54GB               | configuration file\n effective_io_concurrency     | 2                  | configuration file\n listen_addresses             | *                  | configuration file\n log_checkpoints              | on                 | configuration file\n log_connections              | on                 | configuration file\n log_disconnections           | on                 | configuration file\n log_hostname                 | on                 | configuration file\n log_line_prefix              | %t                 | configuration file\n logging_collector            | on                 | configuration file\n maintenance_work_mem         | 256MB              | configuration file\n max_connections              | 600                | configuration file\n max_stack_depth              | 2MB                | environment variable\n max_wal_senders              | 3                  | configuration file\n random_page_cost             | 1.75               | configuration file\n server_encoding              | UTF8               | override\n shared_buffers               | 12GB               | configuration file\n synchronous_commit           | off                | configuration file\n tcp_keepalives_idle          | 180                | configuration file\n track_activity_query_size    | 8192               | configuration file\n transaction_deferrable       | off                | override\n transaction_isolation        | read committed     | override\n transaction_read_only        | off                | override\n vacuum_freeze_min_age        | 20000000           | configuration file\n vacuum_freeze_table_age      | 800000000          | configuration file\n wal_buffers                  | 16MB               | override\n wal_keep_segments            | 16384              | configuration file\n wal_level                    | hot_standby        | configuration file\n wal_writer_delay             | 330ms              | configuration file\n work_mem                     | 512MB              | configuration file\n\n\n-- \nThank You,\nPweaver ([email protected])", "msg_date": "Mon, 3 Feb 2014 16:35:43 -0500", "msg_from": "\"Pweaver (Paul Weaver)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Query Plan Live Lock" }, { "msg_contents": "On Mon, Feb 3, 2014 at 1:35 PM, Pweaver (Paul Weaver)\n<[email protected]> wrote:\n> We have been running into a (live lock?) issue on our production Postgres\n> instance causing queries referencing a particular table to become extremely\n> slow and our application to lock up.\n\nLivelock? Really? That would imply that the query would never finish.\nA livelock is morally equivalent to an undetected deadlock.\n\n> This tends to occur on a particular table that gets a lot of queries against\n> it after a large number of deletes. 
When this happens, the following\n> symptoms occur when queries referencing that table are run (even\n> if we stop the deleting):\n>\n> SELECT * FROM table_name LIMIT 10;  -- takes ~45 seconds to complete\n> EXPLAIN SELECT * FROM table_name LIMIT 10;  -- takes ~45 seconds to complete\n> the explain query, the query plan looks reasonable\n> EXPLAIN ANALYZE SELECT * FROM table_name LIMIT 10;  -- takes ~45 seconds to\n> complete the explain analyze query, query plan looks reasonable, timing\n> stats says query took sub millisecond time to complete\n\nWhy should explain analyze say that? You'd need to catch the problem\nas it is run.\n\n> SELECT * FROM another_table LIMIT 10; -- takes sub millisecond time\n> EXPLAIN SELECT * FROM another_table LIMIT 10; -- takes sub millisecond time,\n> query plan looks reasonable\n>\n> This behavior only stops and the queries go back to taking sub millisecond\n> time if we take the application issuing the SELECTs offline and wait for the\n> active queries to finish (or terminate them).\n>\n> There is not a particularly large load on the database machine at the time,\n> neither are there a particularly large number of wal logs being written\n> (although there is a burst of wal log writes immediately after the queue is\n> cleared).\n\nAre you aware of hint bits?\n\nhttps://wiki.postgresql.org/wiki/Hint_Bits\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Feb 2014 18:03:43 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Query Plan Live Lock" }, { "msg_contents": "On Monday, February 3, 2014, Pweaver (Paul Weaver) <[email protected]>\nwrote:\n\n> We have been running into a (live lock?) issue on our production Postgres\n> instance causing queries referencing a particular table to become extremely\n> slow and our application to lock up.\n>\n> This tends to occur on a particular table that gets a lot of queries\n> against it after a large number of deletes. When this happens, the\n> following symptoms occur when queries referencing that table are run (even\n> if we stop the deleting):\n>\n\nWhat do you mean by \"stop the deleting\"? Are you pausing the delete but\nwithout either committing or rolling back the transaction, but just holding\nit open? Are you stopping it cleanly, between transactions?\n\nAlso, how many queries are happening concurrently? Perhaps you need a\nconnection pooler.\n\nIs the CPU time user time or system time? What kernel version do you have?\n\n\n> SELECT * FROM table_name LIMIT 10; -- takes ~45 seconds to complete\n> EXPLAIN SELECT * FROM table_name LIMIT 10; -- takes ~45 seconds to\n> complete the explain query, the query plan looks reasonable\n>\n\nThis sounds like the problem we heard quite a bit about recently, where\nprocesses spend a lot of time fighting over the proclock while they try to\ncheck the commit status of open transactions. But I don't see how\ndeletes could trigger that behavior. If the delete has not committed, the\ntuples are still visible and the LIMIT 10 is quickly satisfied. If the\ndelete has committed, the tuples quickly get hinted, and so the next query\nalong should be faster.\n\nI also don't see why the explain would be slow. 
A similar problem was\ntracked down to digging through in-doubt tuples while trying to use an\nindex to find the true min or max during estimating the cost of a merge\njoin. But I don't think a simple table query should lead to that, unless\ntable_name is a view. And I don't see how deletes, rather than uncommitted\ninserts, could trigger it either.\n\n\n max_connections | 600 |\n> configuration file\n>\n\nThat is quite extreme. If a temporary load spike (like from the deletes\nand the hinting needed after them) slows down the select queries and you\nstart more and more of them, soon you could tip the system over into kernel\nscheduler insanity with high system time. Once in this mode, it will stay\nthere until the incoming stream of queries stops and the existing ones\nclear out. But, if that is what is occurring, I don't know why queries on\nother tables would still be fast.\n\nCheers,\n\nJeff\n", "msg_date": "Wed, 5 Feb 2014 06:52:20 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres Query Plan Live Lock" }, { "msg_contents": "On Mon, Feb 3, 2014 at 1:35 PM, Pweaver (Paul Weaver)\n<[email protected]> wrote:\n\n>\n> table_name stats:\n> ~ 400,000,000 rows\n> We are deleting 10,000,000s of rows in 100,000 row increments over a few\n> days' time prior to/during this slowdown.\n>\n\nIf you issue \"VACUUM\" or \"VACUUM ANALYZE\" after each DELETE, do the SELECTs\nbecome more responsive?", "msg_date": "Wed, 5 Feb 2014 08:39:45 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Query Plan Live Lock" }, { "msg_contents": "On Tue, Feb 4, 2014 at 9:03 PM, Peter Geoghegan <[email protected]\n> wrote:\n\n> On Mon, Feb 3, 2014 at 1:35 PM, Pweaver (Paul Weaver)\n> <[email protected]> wrote:\n> > We have been running into a (live lock?) issue on our production Postgres\n> > instance causing queries referencing a particular table to become\n> extremely\n> > slow and our application to lock up.\n>\n> Livelock? Really? That would imply that the query would never finish.\n> A livelock is morally equivalent to an undetected deadlock.\n>\nLivelock is a bad term.\n\n\n> > This tends to occur on a particular table that gets a lot of queries\n> against\n> > it after a large number of deletes. When this happens, the following\n> > symptoms occur when queries referencing that table are run (even if we\n> stop\n> > the deleting):\n> >\n> > SELECT * FROM table_name LIMIT 10;  -- takes ~45 seconds to complete\n> > EXPLAIN SELECT * FROM table_name LIMIT 10;  -- takes ~45 seconds to\n> complete\n> > the explain query, the query plan looks reasonable\n> > EXPLAIN ANALYZE SELECT * FROM table_name LIMIT 10;  -- takes ~45 seconds\n> to complete\n> > the explain analyze query, query plan looks reasonable, timing stats says\n> > query took sub millisecond time to complete\n>\n> Why should explain analyze say that? 
You'd need to catch the problem\n> as it is run.\n>\n> > SELECT * FROM another_table LIMIT 10; -- takes sub millisecond time\n> > EXPLAIN SELECT * FROM another_table LIMIT 10; -- takes sub millisecond time,\n> query\n> > plan looks reasonable\n> >\n> > This behavior only stops and the queries go back to taking sub\n> millisecond\n> > time if we take the application issuing the SELECTs offline and wait for\n> the\n> > active queries to finish (or terminate them).\n> >\n> > There is not a particularly large load on the database machine at the\n> time,\n> > neither are there a particularly large number of wal logs being written\n> > (although there is a burst of wal log writes immediately after the queue\n> is\n> > cleared).\n>\n> Are you aware of hint bits?\n>\n> https://wiki.postgresql.org/wiki/Hint_Bits\n\nNo, but why would this cause the EXPLAIN queries to be slow?\n\n>\n>\n> --\n> Regards,\n> Peter Geoghegan\n>\n\n\n\n-- \nThank You,\nPweaver ([email protected])", "msg_date": "Wed, 5 Feb 2014 14:36:47 -0500", "msg_from": "\"Pweaver (Paul Weaver)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Query Plan Live Lock" }, { "msg_contents": "On Wed, Feb 5, 2014 at 9:52 AM, Jeff Janes <[email protected]> wrote:\n\n> On Monday, February 3, 2014, Pweaver (Paul Weaver) <[email protected]>\n> wrote:\n>\n>> We have been running into a (live lock?) issue on our production Postgres\n>> instance causing queries referencing a particular table to become extremely\n>> slow and our application to lock up.\n>>\n>> This tends to occur on a particular table that gets a lot of queries\n>> against it after a large number of deletes. When this happens, the\n>> following symptoms occur when queries referencing that table are run (even\n>> if we stop the deleting):\n>>\n>\n> What do you mean by \"stop the deleting\"? Are you pausing the delete but\n> without either committing or rolling back the transaction, but just holding\n> it open? Are you stopping it cleanly, between transactions?\n>\n\nWe are repeatedly running delete commands in their own transactions. We\nstop issuing new deletes and let them finish cleanly.\n\n>\n> Also, how many queries are happening concurrently? Perhaps you need a\n> connection pooler.\n>\nUsually between 1 and 20. When it gets locked up, closer to 100-200.\nWe should add a connection pooler. Would the number of active queries on\nthe table be causing the issue?\n\n>\n> Is the CPU time user time or system time? What kernel version do you have?\n>\nReal time - 3.2.0-26\n\n>\n>\n>> SELECT * FROM table_name LIMIT 10; -- takes ~45 seconds to complete\n>> EXPLAIN SELECT * FROM table_name LIMIT 10; -- takes ~45 seconds to\n>> complete the explain query, the query plan looks reasonable\n>>\n>\n> This sounds like the problem we heard quite a bit about recently, where\n> processes spend a lot of time fighting over the proclock while they try to\n> check the commit status of open transactions. But I don't see how\n> deletes could trigger that behavior. If the delete has not committed, the\n> tuples are still visible and the LIMIT 10 is quickly satisfied. If the\n> delete has committed, the tuples quickly get hinted, and so the next query\n> along should be faster.\n>\n> I also don't see why the explain would be slow. A similar problem was\n> tracked down to digging through in-doubt tuples while trying to use an\n> index to find the true min or max during estimating the cost of a merge\n> join. But I don't think a simple table query should lead to that, unless\n> table_name is a view. And I don't see how deletes, rather than uncommitted\n> inserts, could trigger it either.\n>\n>\n> max_connections | 600 |\n>> configuration file\n>>\n>\n> That is quite extreme. If a temporary load spike (like from the deletes\n> and the hinting needed after them) slows down the select queries and you\n> start more and more of them, soon you could tip the system over into kernel\n> scheduler insanity with high system time. Once in this mode, it will stay\n> there until the incoming stream of queries stops and the existing ones\n> clear out. But, if that is what is occurring, I don't know why queries on\n> other tables would still be fast.\n>\nWe probably want a connection pooler anyways, but in this particular case,\nthe load average is fairly low on the machine running Postgres.\n\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\n\n-- \nThank You,\nPweaver ([email protected])", "msg_date": "Wed, 5 Feb 2014 14:47:53 -0500", "msg_from": "\"Pweaver (Paul Weaver)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Query Plan Live Lock" }, { "msg_contents": "On Wed, Feb 5, 2014 at 4:47 PM, Pweaver (Paul Weaver)\n<[email protected]> wrote:\n>> That is quite extreme. If a temporary load spike (like from the deletes\n>> and the hinting needed after them) slows down the select queries and you\n>> start more and more of them, soon you could tip the system over into kernel\n>> scheduler insanity with high system time. Once in this mode, it will stay\n>> there until the incoming stream of queries stops and the existing ones clear\n>> out. 
But, if that is what is occurring, I don't know why queries on other\n>> tables would still be fast.\n>\n> We probably want a connection pooler anyways, but in this particular case,\n> the load average is fairly low on the machine running Postrgres.\n\n\nIndeed, if lack of connection pooling was the cause, I'd expect a huge\nload average (around 100).\n\nCan you post the output of \"vmstat 6 10\" and \"iostat -x -m -d 6 10\"\nwhile the server is overloaded? (try to run them at the same time so\nresults can be correlated).\n\nAlso, some details on the hardware wouldn't hurt, like amount of RAM,\nnumber of processors, kind of processor, whether it's a virtual\nmachine or a bare metal one, number of disks and disk configuration,\netc...\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Feb 2014 00:42:49 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Query Plan Live Lock" }, { "msg_contents": "On Wed, Feb 5, 2014 at 11:47 AM, Pweaver (Paul Weaver)\n<[email protected]>wrote:\n\n>\n> On Wed, Feb 5, 2014 at 9:52 AM, Jeff Janes <[email protected]> wrote:\n>\n>> On Monday, February 3, 2014, Pweaver (Paul Weaver) <[email protected]>\n>> wrote:\n>>\n>>> We have been running into a (live lock?) issue on our production\n>>> Postgres instance causing queries referencing a particular table to become\n>>> extremely slow and our application to lock up.\n>>>\n>>> This tends to occur on a particular table that gets a lot of queries\n>>> against it after a large number of deletes. When this happens, the\n>>> following symptoms occur when queries referencing that table are run (even\n>>> it we stop the deleting):\n>>>\n>>\n>> What do you mean by \"stop the deleting\"? Are you pausing the delete but\n>> without either committing or rolling back the transaction, but just holding\n>> it open? Are you stopping it cleanly, between transactions?\n>>\n>\n> We are repeatedly running delete commands in their own transactions. We\n> stop issuing new deletes and let them finish cleanly.\n>\n>>\n>> Also, how many queries are happening concurrently? Perhaps you need a\n>> connection pooler.\n>>\n> Usually between 1 and 20. When it gets locked up closer to 100-200.\n> We should add a connection pooler. Would the number of active queries on\n> the table be causing the issue?\n>\n\n100 to 200 active connections cannot be helpful. That number should not be\n*inherently* harmful, but certainly can be very harmful in conjunction with\nsomething else. One thing it could be harmful in conjunction with would be\ncontention on the PROCLOCK spinlock, but if you don't have open\ntransactions that have touched a lot of tuples (which it sounds like you do\nnot) then that probably isn't the case. Another thing could be kernel\nscheduler problems. I think some of the early 3-series kernels had some\nproblems with the scheduler under many concurrently active processes, which\nlead to high % system CPU time. There are also problems with NUMA, and\nwith transparent huge pages, from around the same kernel versions.\n\n\n\n>\n>> Is the CPU time user time or system time? 
What kernel version do you\n>> have?\n>>\n> Real time - 3.2.0-26\n>\n\nI meant using \"top\" or \"sar\" during a lock up, is the CPU time being spent\nin %user, or in %system?\n\nUnfortunately I don't know exactly when in the 3-series kernels the\nproblems showed up, or were fixed.\n\nIn any case, lowering the max_connections will probably prevent you from\naccidentally poking the beast, even if we can't figure out exactly what\nkind of beast it is.\n\n\n>\n>>\n>> max_connections | 600\n>>> | configuration file\n>>>\n>>\n>> That is quite extreme. If a temporary load spike (like from the deletes\n>> and the hinting needed after them) slows down the select queries and you\n>> start more and more of them, soon you could tip the system over into kernel\n>> scheduler insanity with high system time. Once in this mode, it will stay\n>> there until the incoming stream of queries stops and the existing ones\n>> clear out. But, if that is what is occurring, I don't know why queries on\n>> other tables would still be fast.\n>>\n> We probably want a connection pooler anyways, but in this particular case,\n> the load average is fairly low on the machine running Postrgres.\n>\n\nIs the load average low even during the problem event?\n\nCheers,\n\nJeff\n\nOn Wed, Feb 5, 2014 at 11:47 AM, Pweaver (Paul Weaver) <[email protected]> wrote:\nOn Wed, Feb 5, 2014 at 9:52 AM, Jeff Janes <[email protected]> wrote:\nOn Monday, February 3, 2014, Pweaver (Paul Weaver) <[email protected]> wrote:\n\nWe have been running into a (live lock?) issue on our production Postgres instance causing queries referencing a particular table to become extremely slow and our application to lock up.\n\nThis tends to occur on a particular table that gets a lot of queries against it after a large number of deletes. When this happens, the following symptoms occur when queries referencing that table are run (even it we stop the deleting):\nWhat do you mean by \"stop the deleting\"?  Are you pausing the delete but without either committing or rolling back the transaction, but just holding it open?  Are you stopping it cleanly, between transactions?\nWe are repeatedly running delete commands in their own transactions. We stop issuing new deletes and let them finish cleanly. \n\n\nAlso, how many queries are happening concurrently?  Perhaps you need a connection pooler.Usually between 1 and 20. When it gets locked up closer to 100-200.We should add a connection pooler. Would the number of active queries on the table be causing the issue?\n100 to 200 active connections cannot be helpful.  That number should not be *inherently* harmful, but certainly can be very harmful in conjunction with something else.  One thing it could be harmful in conjunction with would be contention on the PROCLOCK spinlock, but if you don't have open transactions that have touched a lot of tuples (which it sounds like you do not) then that probably isn't the case. Another thing could be kernel scheduler problems.  I think some of the early 3-series kernels had some problems with the scheduler under many concurrently active processes, which lead to high % system CPU time.  There are also problems with NUMA, and with transparent huge pages, from around the same kernel versions.\n \nIs the CPU time user time or system time?  
What kernel version do you have?\nReal time - 3.2.0-26I meant using \"top\" or \"sar\" during a lock up, is the CPU time being spent in %user, or in %system?\nUnfortunately I don't know exactly when in the 3-series kernels the problems showed up, or were fixed. In any case, lowering the max_connections will probably prevent you from accidentally poking the beast, even if we can't figure out exactly what kind of beast it is.\n \n\n max_connections              | 600                                      | configuration file\nThat is quite extreme.  If a temporary load spike (like from the deletes and the hinting needed after them) slows down the select queries and you start more and more of them, soon you could tip the system over into kernel scheduler insanity with high system time.  Once in this mode, it will stay there until the incoming stream of queries stops and the existing ones clear out.  But, if that is what is occurring, I don't know why queries on other tables would still be fast.\nWe probably want a connection pooler anyways, but in this particular case, the load average is fairly low on the machine running Postrgres.\nIs the load average low even during the problem event?  Cheers,Jeff", "msg_date": "Tue, 11 Feb 2014 15:50:38 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Query Plan Live Lock" } ]
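Combining bricklen's suggestion with the batch sizes from the thread, one pruning iteration might look like the sketch below. The names are assumptions for illustration: pk_col stands in for the table's primary key and logged_at for whatever predicate selects prunable rows; only the 100,000-row batch size comes from the report.

-- Delete one bounded batch rather than an unbounded range.
DELETE FROM table_name
WHERE pk_col IN (
    SELECT pk_col
      FROM table_name
     WHERE logged_at < now() - interval '90 days'  -- placeholder predicate
     ORDER BY pk_col
     LIMIT 100000
);

-- Let vacuum mark the freed space reusable and set hint bits before
-- the next batch, so concurrent SELECTs don't pay that cost themselves.
VACUUM ANALYZE table_name;

-- Repeat from the DELETE until no rows qualify.

Pausing between iterations, ideally behind a pooler holding far fewer than 600 connections, keeps the load spike Jeff describes from compounding.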
[ { "msg_contents": "Hi,\n \nWe have a PostgreSQL DB, version 9.3 on a Suse Linux system.\nWe ran the update from postgresql 8.4 to 9.3.\nAfter importing the database the query time of one sql query is about 30\nsec.\nAfter ANALYZE the DB the query time of this sql query is about 45 minutes.\nWe can see that after analyzing the indexes will no longer be used.\n \nHas anyone an idea why ANALYZE cause this problem?\n \nThanks a lot for your help!\n \nKatharina\n \n \n\nHi, We have a PostgreSQL DB, version 9.3 on a Suse Linux system.We ran the update from postgresql 8.4 to 9.3.After importing the database the query time of one sql query is about 30 sec.After ANALYZE the DB the query time of this sql query is about 45 minutes.We can see that after analyzing the indexes will no longer be used. Has anyone an idea why ANALYZE cause this problem? Thanks a lot for your help! Katharina", "msg_date": "Wed, 5 Feb 2014 12:50:52 +0100", "msg_from": "\"Katharina Koobs\" <[email protected]>", "msg_from_op": true, "msg_subject": "increasing query time after analyze" }, { "msg_contents": "Hello\n\n\n2014-02-05 Katharina Koobs <[email protected]>:\n\n> Hi,\n>\n>\n>\n> We have a PostgreSQL DB, version 9.3 on a Suse Linux system.\n>\n> We ran the update from postgresql 8.4 to 9.3.\n>\n> After importing the database the query time of one sql query is about 30\n> sec.\n>\n> After ANALYZE the DB the query time of this sql query is about 45 minutes.\n>\n> We can see that after analyzing the indexes will no longer be used.\n>\n>\n>\n> Has anyone an idea why ANALYZE cause this problem?\n>\n\nyes, it is possible - sometimes due more reasons (some strange dataset or\ncorrelation between columns) a statistics estimations are totally out. And\nbad musty statistics can produces better estimations than fresh statistics\n\nplease send a \"EXPLAIN ANALYZE\" output for fast and slow queries.\n\nRegards\n\nPavel Stehule\n\n\n>\n>\n> Thanks a lot for your help!\n>\n>\n>\n> Katharina\n>\n>\n>\n>\n>\n\nHello2014-02-05 Katharina Koobs <[email protected]>:\nHi,\n We have a PostgreSQL DB, version 9.3 on a Suse Linux system.\nWe ran the update from postgresql 8.4 to 9.3.After importing the database the query time of one sql query is about 30 sec.\nAfter ANALYZE the DB the query time of this sql query is about 45 minutes.\nWe can see that after analyzing the indexes will no longer be used. \nHas anyone an idea why ANALYZE cause this problem?yes, it is possible - sometimes due more reasons (some strange dataset or correlation between columns) a statistics estimations are totally out. 
And bad musty statistics can produces better estimations than fresh statistics \nplease send a \"EXPLAIN ANALYZE\" output for fast and slow queries.RegardsPavel Stehule \n \nThanks a lot for your help!\n Katharina", "msg_date": "Wed, 5 Feb 2014 13:15:17 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing query time after analyze" }, { "msg_contents": "On Wed, Feb 5, 2014 at 8:50 AM, Katharina Koobs\n<[email protected]> wrote:\n> Hi,\n>\n>\n>\n> We have a PostgreSQL DB, version 9.3 on a Suse Linux system.\n>\n> We ran the update from postgresql 8.4 to 9.3.\n>\n> After importing the database the query time of one sql query is about 30\n> sec.\n>\n> After ANALYZE the DB the query time of this sql query is about 45 minutes.\n>\n> We can see that after analyzing the indexes will no longer be used.\n>\n>\n>\n> Has anyone an idea why ANALYZE cause this problem?\n\n\nIf you just restored a dump, you should do a VACUUM ANALYZE.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Feb 2014 13:28:39 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing query time after analyze" } ]
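For the post-restore case Claudio points at, a hedged checklist in SQL (some_table, some_column, the target value of 1000, and the final query are all placeholders for illustration, not names from the report):

-- After restoring a dump, both visibility hint bits and planner
-- statistics are cold; this rebuilds both in one pass.
VACUUM (ANALYZE, VERBOSE);

-- If one column is still being misestimated, enlarge its sample and
-- re-analyze just that table.
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 1000;
ANALYZE some_table;

-- Then capture the plan of the regressing query for comparison, as
-- Pavel asks for above.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM some_table WHERE some_column = 42;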
[ { "msg_contents": "Hello list.\n\nI know all the theory about vacuuming. I've got log tables that get\nperiodically pruned. The pruning is... quirky, though. It's not so\nmuch deleting data, as summarizing many (thousands) of rows into one\nsingle row. For that, a combination of deletes and updates are used.\n\nIn essence, the tables are write-only except for the summarization\nstep for old data.\n\nMany tables are becoming increasingly bloated, which is somewhat\nexpected due to this usage pattern: I had expected table size to be\nabout constant, holding recent data plus archived old data (which is\nreally small compared to full recent logs), with some constant-sized\nbloat due to daily summarization updates/deletes.\n\nWhat I'm seeing, though, is not that, but bloat proportional to table\nsize (always stuck at about 65% bloat). What's weird, is that vacuum\nfull does the trick of reducing table size and bloat back to 0%. I\nhaven't had time yet to verify whether it goes back to 65% after\nvacuum full (that will take time, maybe a month).\n\nQuestion is... why isn't all that free space being used? The table\ngrows in size even though there's plenty (65%) of free space.\n\nI've got autovacuum severely crippled and that could be a reason, but\nI do perform regular vacuum runs weekly that always run to completion.\nI also do routine reindexing to stop index bloat on its tracks, yet\nfreshly-reindexed indexes get considerably reduced in size with vacuum\nfull.\n\nIs there some case in which regular vacuum would fail to reclaim space\nbut vacuum full would not?\n\nI'm running postgresql 9.2.5. If this was 8.x, I'd suspect of the free\nspace map, but AFAIK there's no such limit in 9.2. Relevant\nnon-default settings are:\n\n\"archive_mode\";\"on\"\n\"autovacuum_analyze_scale_factor\";\"0.05\"\n\"autovacuum_analyze_threshold\";\"500\"\n\"autovacuum_max_workers\";\"1\"\n\"autovacuum_naptime\";\"900\"\n\"autovacuum_vacuum_cost_delay\";\"80\"\n\"autovacuum_vacuum_cost_limit\";\"100\"\n\"autovacuum_vacuum_scale_factor\";\"0.1\"\n\"autovacuum_vacuum_threshold\";\"500\"\n\"bgwriter_lru_maxpages\";\"200\"\n\"checkpoint_completion_target\";\"0.9\"\n\"checkpoint_segments\";\"64\"\n\"checkpoint_timeout\";\"1800\"\n\"maintenance_work_mem\";\"1048576\"\n\"vacuum_cost_delay\";\"20\"\n\"wal_level\";\"archive\"\n\n\n[0] http://labs.omniti.com/labs/pgtreats/browser/trunk/tools/pg_bloat_report.pl\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 7 Feb 2014 16:47:52 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On 7.2.2014 19:47, Claudio Freire wrote:\n>\n> Question is... why isn't all that free space being used? The table\n> grows in size even though there's plenty (65%) of free space.\n> \n> I've got autovacuum severely crippled and that could be a reason, but\n> I do perform regular vacuum runs weekly that always run to completion.\n> I also do routine reindexing to stop index bloat on its tracks, yet\n> freshly-reindexed indexes get considerably reduced in size with vacuum\n> full.\n\nAre you logging autovacuum actions? I.e. what is\n\n log_autovacuum_min_duration\n\nset to? 
It it's set to -1 you won't get any messages because of\nconflicting locks or stuff like that, which might be the culprit here.\n\nAlso, when you're running the weekly VACUUM, do VACUUM (VERBOSE) and\npost it here. That might at least help us eliminate some of the usual\nsuspects.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 09 Feb 2014 16:50:04 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Sun, Feb 9, 2014 at 12:50 PM, Tomas Vondra <[email protected]> wrote:\n> On 7.2.2014 19:47, Claudio Freire wrote:\n>>\n>> Question is... why isn't all that free space being used? The table\n>> grows in size even though there's plenty (65%) of free space.\n>>\n>> I've got autovacuum severely crippled and that could be a reason, but\n>> I do perform regular vacuum runs weekly that always run to completion.\n>> I also do routine reindexing to stop index bloat on its tracks, yet\n>> freshly-reindexed indexes get considerably reduced in size with vacuum\n>> full.\n>\n> Are you logging autovacuum actions? I.e. what is\n>\n> log_autovacuum_min_duration\n>\n> set to? It it's set to -1 you won't get any messages because of\n> conflicting locks or stuff like that, which might be the culprit here.\n\nIt was set to -1. I set it to 5000 and I'll be keeping an eye on the logs.\n\n> Also, when you're running the weekly VACUUM, do VACUUM (VERBOSE) and\n> post it here. That might at least help us eliminate some of the usual\n> suspects.\n\nI'm using a cron job for this, I'll see about dumping the results to a\nlog file and post when it's done.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 9 Feb 2014 17:13:53 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n>>> I also do routine reindexing to stop index bloat on its tracks, yet\n>>> freshly-reindexed indexes get considerably reduced in size with vacuum\n>>> full.\n\nAFAIK there's no reason for vacuum full to produce a different result\nfrom reindex. Did you mean to say that the indexes get smaller than\nwhat they had been after some normal operation? If so it's worth noting\nthis comment from the btree index building code (nbtsort.c):\n\n * It is not wise to pack the pages entirely full, since then *any*\n * insertion would cause a split (and not only of the leaf page; the need\n * for a split would cascade right up the tree). The steady-state load\n * factor for btrees is usually estimated at 70%. We choose to pack leaf\n * pages to the user-controllable fill factor (default 90%) while upper pages\n * are always packed to 70%. This gives us reasonable density (there aren't\n * many upper pages if the keys are reasonable-size) without risking a lot of\n * cascading splits during early insertions.\n\nAs the comment notes, the initial state of a freshly-built index is packed\nmore densely than what you can expect after a lot of insertions/updates\nhave occurred. 
That's not a bug, it's just a fact of life.\n\nAlso, there are certain usage patterns that can result in btree indexes\nhaving densities much lower than the conventional-wisdom 70%. The main\none I've seen in practice is \"decimation\", where you delete say 99 out\nof every 100 entries in index order. This leaves just a few live entries\nin each leaf page --- but our btree code doesn't reclaim an index page\nfor recycling until it's totally empty. So you can end up with a very\nlow load factor after doing something like that, and a reindex is the\nonly good way to fix it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 09 Feb 2014 14:40:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Sun, Feb 9, 2014 at 4:40 PM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>>>> I also do routine reindexing to stop index bloat on its tracks, yet\n>>>> freshly-reindexed indexes get considerably reduced in size with vacuum\n>>>> full.\n>\n> AFAIK there's no reason for vacuum full to produce a different result\n> from reindex. Did you mean to say that the indexes get smaller than\n> what they had been after some normal operation? If so it's worth noting\n> this comment from the btree index building code (nbtsort.c):\n\nSmaller than after reindex. It was a surprise to me too.\n\n> Also, there are certain usage patterns that can result in btree indexes\n> having densities much lower than the conventional-wisdom 70%. The main\n> one I've seen in practice is \"decimation\", where you delete say 99 out\n> of every 100 entries in index order. This leaves just a few live entries\n> in each leaf page --- but our btree code doesn't reclaim an index page\n> for recycling until it's totally empty. So you can end up with a very\n> low load factor after doing something like that, and a reindex is the\n> only good way to fix it.\n\nThat's exactly the kind of pattern the \"archival\" step results in,\nthat's why I do routine reindexing.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 9 Feb 2014 17:48:18 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <[email protected]> wrote:\n> What I'm seeing, though, is not that, but bloat proportional to table\n> size (always stuck at about 65% bloat). What's weird, is that vacuum\n> full does the trick of reducing table size and bloat back to 0%. 
I\n> haven't had time yet to verify whether it goes back to 65% after\n> vacuum full (that will take time, maybe a month).\n\nTry pgcompact, it was designed particularly for cases like yours\nhttps://github.com/grayhemp/pgtoolkit.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 9 Feb 2014 14:32:18 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev <[email protected]> wrote:\n> On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <[email protected]> wrote:\n>> What I'm seeing, though, is not that, but bloat proportional to table\n>> size (always stuck at about 65% bloat). What's weird, is that vacuum\n>> full does the trick of reducing table size and bloat back to 0%. I\n>> haven't had time yet to verify whether it goes back to 65% after\n>> vacuum full (that will take time, maybe a month).\n>\n> Try pgcompact, it was designed particularly for cases like yours\n> https://github.com/grayhemp/pgtoolkit.\n\nIt's a pity that that requires several sequential scans of the tables.\nFor my case, that's probably as intrusive as the exclusive locks.\n\nI noticed I didn't mention, but the tables involved are around 20-50GB in size.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 9 Feb 2014 20:58:57 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Sun, Feb 9, 2014 at 2:58 PM, Claudio Freire <[email protected]> wrote:\n> On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev <[email protected]> wrote:\n>> Try pgcompact, it was designed particularly for cases like yours\n>> https://github.com/grayhemp/pgtoolkit.\n>\n> It's a pity that that requires several sequential scans of the tables.\n> For my case, that's probably as intrusive as the exclusive locks.\n\nProbably you should run it with --no-pgstattuple if you are talking\nabout these seq scans. If your tables are not TOASTed then the\napproximation method of gathering statistics would work pretty well\nfor you.\n\n> I noticed I didn't mention, but the tables involved are around 20-50GB in size.\n\nIt is not the thing I would worry about. I regularly use it with even\nbigger tables.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 9 Feb 2014 15:29:41 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <[email protected]>wrote:\n\n> Hello list.\n>\n> I know all the theory about vacuuming. 
I've got log tables that get\n> periodically pruned. The pruning is... quirky, though. It's not so\n> much deleting data, as summarizing many (thousands) of rows into one\n> single row. For that, a combination of deletes and updates are used.\n>\n> In essence, the tables are write-only except for the summarization\n> step for old data.\n>\n> Many tables are becoming increasingly bloated, which is somewhat\n> expected due to this usage pattern: I had expected table size to be\n> about constant, holding recent data plus archived old data (which is\n> really small compared to full recent logs), with some constant-sized\n> bloat due to daily summarization updates/deletes.\n>\n> What I'm seeing, though, is not that, but bloat proportional to table\n> size (always stuck at about 65% bloat). What's weird, is that vacuum\n> full does the trick of reducing table size and bloat back to 0%. I\n> haven't had time yet to verify whether it goes back to 65% after\n> vacuum full (that will take time, maybe a month).\n>\n> Question is... why isn't all that free space being used? The table\n> grows in size even though there's plenty (65%) of free space.\n>\n\nWhat does this look like with the pg_bloat_report.pl you linked to?\n\nDoes pg_freespace agree that that space is reusable?\n\nSELECT avail,count(*) FROM pg_freespace('pgbench_accounts') group by avail;\n\nCheers,\n\nJeff", "msg_date": "Tue, 11 Feb 2014 15:29:45 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Tue, Feb 11, 2014 at 8:29 PM, Jeff Janes <[email protected]> wrote:\n> On Fri, Feb 7, 2014 at 10:47 AM, Claudio Freire <[email protected]>\n> wrote:\n>>\n>> Hello list.\n>>\n>> I know all the theory about vacuuming. I've got log tables that get\n>> periodically pruned. The pruning is... quirky, though. It's not so\n>> much deleting data, as summarizing many (thousands) of rows into one\n>> single row. 
For that, a combination of deletes and updates are used.\n>>\n>> In essence, the tables are write-only except for the summarization\n>> step for old data.\n>>\n>> Many tables are becoming increasingly bloated, which is somewhat\n>> expected due to this usage pattern: I had expected table size to be\n>> about constant, holding recent data plus archived old data (which is\n>> really small compared to full recent logs), with some constant-sized\n>> bloat due to daily summarization updates/deletes.\n>>\n>> What I'm seeing, though, is not that, but bloat proportional to table\n>> size (always stuck at about 65% bloat). What's weird, is that vacuum\n>> full does the trick of reducing table size and bloat back to 0%. I\n>> haven't had time yet to verify whether it goes back to 65% after\n>> vacuum full (that will take time, maybe a month).\n>>\n>> Question is... why isn't all that free space being used? The table\n>> grows in size even though there's plenty (65%) of free space.\n>\n>\n> What does this look like with the pg_bloat_report.pl you linked to?\n>\n> Does pg_freespace agree that that space is reusable?\n>\n> SELECT avail,count(*) FROM pg_freespace('pgbench_accounts') group by avail;\n\nI don't have pg_freespacemap installed on that server.\n\nI guess I'll take the next opportunity to install it.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Feb 2014 21:36:45 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" }, { "msg_contents": "On Sun, Feb 9, 2014 at 12:50 PM, Tomas Vondra <[email protected]> wrote:\n> Also, when you're running the weekly VACUUM, do VACUUM (VERBOSE) and\n> post it here. That might at least help us eliminate some of the usual\n> suspects.\n\nI have the full dump if relevant. The relevant extract for:\n\nReport:\n 1. public.full_bid_logs\n2551219 of 4417505 pages wasted (57.8%), 19 GB of 34 GB.\n 2. public.non_bid_logs\n814076 of 1441728 pages wasted (56.5%), 6360 MB of 11 GB.\n 3. public.click_logs\n631630 of 2252999 pages wasted (28.0%), 4935 MB of 17 GB.\n 4. 
public.full_impression_logs\n282298 of 762238 pages wasted (37.0%), 2205 MB of 5955 MB.\n\nMost of which appear on the log regarding cancelled autovacuums (on\nWednesday, which suggests it's because of the manual vacuum itself\nsince they run on Wednesdays):\n\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG: sending cancel\nto blocking autovacuum PID 30000\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR: canceling\nautovacuum task\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT: automatic\nvacuum of table \"mat.public.aggregated_tracks_hourly_full\"\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG: sending cancel\nto blocking autovacuum PID 30000\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR: canceling\nautovacuum task\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT: automatic\nvacuum of table \"mat.public.bid_logs\"\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG: sending cancel\nto blocking autovacuum PID 30000\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR: canceling\nautovacuum task\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT: automatic\nvacuum of table \"mat.public.full_impression_logs\"\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:LOG: sending cancel\nto blocking autovacuum PID 30000\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:ERROR: canceling\nautovacuum task\n/media/data/pgsql9/data/pg_log/postgresql-Wed.log:CONTEXT: automatic\nvacuum of table \"mat.public.non_bid_logs\"\n\nNote: the bid_logs one up there was the one I vacuumed full, so that's\nwhy it's not showing on pg bloat report.\n\n(manual) Vacuum verbose follows:\n\n...\nINFO: vacuuming \"public.full_bid_logs\"\nINFO: scanned index \"full_bid_logs_pkey\" to remove 11416990 row versions\nDETAIL: CPU 16.49s/133.42u sec elapsed 2660.01 sec.\nINFO: \"full_bid_logs\": removed 11416990 row versions in 171209 pages\nDETAIL: CPU 5.28s/5.18u sec elapsed 610.86 sec.\nINFO: index \"full_bid_logs_pkey\" now contains 267583488 row versions\nin 2207958 pages\nDETAIL: 10206852 index row versions were removed.\n3852 index pages have been deleted, 3010 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.04 sec.\nINFO: \"full_bid_logs\": found 4625843 removable, 29252597 nonremovable\nrow versions in 402582 out of 4417505 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 2408853 unused item pointers.\n0 pages are entirely empty.\nCPU 30.10s/145.42u sec elapsed 4317.03 sec.\nINFO: analyzing \"public.full_bid_logs\"\nINFO: \"full_bid_logs\": scanned 30000 of 4417505 pages, containing\n1802104 live rows and 0 dead rows; 30000 rows in sample, 211394515\nestimated total rows\n...\nINFO: vacuuming \"public.non_bid_logs\"\nINFO: scanned index \"non_bid_logs_pkey\" to remove 10137502 row versions\nDETAIL: CPU 1.55s/23.23u sec elapsed 325.36 sec.\nINFO: scanned index \"non_bid_logs_created_idx\" to remove 10137502 row versions\nDETAIL: CPU 1.60s/23.40u sec elapsed 332.35 sec.\nINFO: \"non_bid_logs\": removed 10137502 row versions in 116848 pages\nDETAIL: CPU 3.60s/1.36u sec elapsed 403.18 sec.\nINFO: index \"non_bid_logs_pkey\" now contains 41372799 row versions in\n176455 pages\nDETAIL: 10106519 index row versions were removed.\n29944 index pages have been deleted, 11887 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"non_bid_logs_created_idx\" now contains 41384156 row\nversions in 184580 pages\nDETAIL: 10137502 index row versions were removed.\n45754 index pages have been deleted, 18310 are 
currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"non_bid_logs\": found 1378107 removable, 4283511 nonremovable\nrow versions in 163702 out of 1451616 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 1345761 unused item pointers.\n0 pages are entirely empty.\nCPU 9.28s/49.42u sec elapsed 1378.98 sec.\nINFO: \"non_bid_logs\": suspending truncate due to conflicting lock request\nINFO: \"non_bid_logs\": truncated 1451616 to 1441728 pages\nDETAIL: CPU 0.18s/0.20u sec elapsed 6.98 sec.\nINFO: \"non_bid_logs\": stopping truncate due to conflicting lock request\nINFO: analyzing \"public.non_bid_logs\"\nINFO: \"non_bid_logs\": scanned 30000 of 1441728 pages, containing\n874070 live rows and 0 dead rows; 30000 rows in sample, 47720462\nestimated total rows\n...\nINFO: vacuuming \"public.click_logs\"\nINFO: scanned index \"click_logs_pkey\" to remove 9594881 row versions\nDETAIL: CPU 7.34s/67.36u sec elapsed 1194.09 sec.\nINFO: \"click_logs\": removed 9594881 row versions in 135032 pages\nDETAIL: CPU 4.36s/2.06u sec elapsed 485.08 sec.\nINFO: index \"click_logs_pkey\" now contains 155456958 row versions in\n968791 pages\nDETAIL: 9579492 index row versions were removed.\n2283 index pages have been deleted, 1008 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.63 sec.\nINFO: \"click_logs\": found 5502098 removable, 24781508 nonremovable\nrow versions in 398918 out of 2252999 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 1447127 unused item pointers.\n0 pages are entirely empty.\nCPU 20.54s/74.54u sec elapsed 2796.24 sec.\nINFO: \"click_logs\": stopping truncate due to conflicting lock request\nINFO: vacuuming \"pg_toast.pg_toast_16563\"\nINFO: index \"pg_toast_16563_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_16563\": found 0 removable, 0 nonremovable row\nversions in 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.click_logs\"\nINFO: \"click_logs\": scanned 30000 of 2252999 pages, containing\n2057564 live rows and 0 dead rows; 30000 rows in sample, 113618340\nestimated total rows\n...\nINFO: vacuuming \"public.full_impression_logs\"\nINFO: scanned index \"full_impression_logs_pkey\" to remove 4982049 row versions\nDETAIL: CPU 4.50s/27.49u sec elapsed 658.99 sec.\nINFO: \"full_impression_logs\": removed 4982049 row versions in 45983 pages\nDETAIL: CPU 1.41s/1.00u sec elapsed 168.53 sec.\nINFO: index \"full_impression_logs_pkey\" now contains 61634251 row\nversions in 531050 pages\nDETAIL: 759542 index row versions were removed.\n29 index pages have been deleted, 29 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: \"full_impression_logs\": found 0 removable, 758433 nonremovable\nrow versions in 48383 out of 763267 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 20689 unused item pointers.\n0 pages are entirely empty.\nCPU 6.40s/28.80u sec elapsed 879.32 sec.\nINFO: \"full_impression_logs\": truncated 763267 to 762238 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.05 sec.\nINFO: analyzing \"public.full_impression_logs\"\nINFO: \"full_impression_logs\": scanned 30000 of 762238 pages,\ncontaining 2415147 live rows and 0 dead rows; 30000 rows in sample,\n41836806 estimated total 
rows\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Feb 2014 18:43:43 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bloated tables and why is vacuum full the only option" } ]
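Tom Lane's decimation scenario above is easy to reproduce. Below is a minimal sketch (the demo table name is made up for the demonstration; exact sizes will vary with version and fillfactor) that deletes 99 out of every 100 rows in index order and shows that only a reindex gives the space back:

    CREATE TABLE decimation_demo (id int PRIMARY KEY);
    INSERT INTO decimation_demo SELECT g FROM generate_series(1, 1000000) g;
    SELECT pg_size_pretty(pg_relation_size('decimation_demo_pkey'));  -- freshly built, densely packed
    DELETE FROM decimation_demo WHERE id % 100 <> 0;  -- keep 1 row in 100, in index order
    VACUUM decimation_demo;  -- leaf pages are now mostly empty, but few are empty enough to recycle
    SELECT pg_size_pretty(pg_relation_size('decimation_demo_pkey'));  -- roughly unchanged
    REINDEX INDEX decimation_demo_pkey;
    SELECT pg_size_pretty(pg_relation_size('decimation_demo_pkey'));  -- packed again

The same pg_relation_size checks, combined with the pg_freespace query Jeff suggested, are a cheap way to tell table-level bloat (reusable, given a working FSM) from index-level bloat (reclaimable only by reindexing).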
[ { "msg_contents": "We're using PostgreSQL to host our analytics (OLAP) database and trying to\ntune our configuration for better performance.\n\nIs there any existing PG performance benchmarking tool set tailored for\nOLAP purposes? Right now we just work through the documentation.\n\nI think pgtune is optimized more for OLTP application. Is there something\nsimilar to pgtune/pgbench for OLAP?\n\nThanks,\nHuy", "msg_date": "Sat, 8 Feb 2014 11:36:06 +0800", "msg_from": "Huy Nguyen <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Benchmarking for data-warehousing instance?" }, { "msg_contents": "On Fri, Feb 7, 2014 at 7:36 PM, Huy Nguyen <[email protected]> wrote:\n> I think pgtune is optimized more for OLTP application. Is there something\n> similar to pgtune/pgbench for OLAP?\n\nIIRC pgtune can be told to give out an OLAP-optimized postgresql.conf.\nMaybe that's only recent versions?\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 7 Feb 2014 20:12:49 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Benchmarking for data-warehousing instance?" }, { "msg_contents": "Do you happen to have a link to it? Though I think different machine specs\nshould yield different optimal postgresql.conf.\n\nI'm looking for a hand-crafted set of data + queries tailored for OLAP so\nthat I can try to manually tweak one config at a time and run against the\nbenchmark.\n\nI might consider creating one if no one has done it before.\n\n\nOn Sat, Feb 8, 2014 at 12:12 PM, Peter Geoghegan <\[email protected]> wrote:\n\n> On Fri, Feb 7, 2014 at 7:36 PM, Huy Nguyen <[email protected]> wrote:\n> > I think pgtune is optimized more for OLTP application. Is there something\n> > similar to pgtune/pgbench for OLAP?\n>\n> IIRC pgtune can be told to give out an OLAP-optimized postgresql.conf.\n> Maybe that's only recent versions?\n>\n> --\n> Regards,\n> Peter Geoghegan\n>", "msg_date": "Sat, 8 Feb 2014 14:41:22 +0800", "msg_from": "Huy Nguyen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Benchmarking for data-warehousing instance?" }, { "msg_contents": "I used PostgreSQL 8.4 for Pentaho OLAP (mondrian) for a while. It works\nthe way I want, but what I've done is a combination of application-level,\ndatabase-level and hardware-level changes.\n\nOur application is a CRM. We did an ETL to a new PostgreSQL server and do\nOLAP there. The application and the OLAP DBMS have totally different\nconfigurations, with hardware optimization and postgresql.conf tuning on\nthe OLAP side. With that we can run the OLAP queries we need.\n\nI think you have to know your database's behavior and which OLAP queries\nyou need to run. Then you will know how to configure postgresql.conf and\nwhat hardware you need.\n\nA benchmark tool is only a test. You also have to know all about pg_log and\nperformance monitoring.\n\n\nWattana\n\n\n2014-02-08 13:41 GMT+07:00 Huy Nguyen <[email protected]>:\n\n> Do you happen to have a link to it? Though I think different machine specs\n> should yield different optimal postgresql.conf.\n>\n> I'm looking for a hand-crafted set of data + queries tailored for OLAP so\n> that I can try to manually tweak one config at a time and run against the\n> benchmark.\n>\n> I might consider creating one if no one has done it before.\n>\n>\n> On Sat, Feb 8, 2014 at 12:12 PM, Peter Geoghegan <\n> [email protected]> wrote:\n>\n>> On Fri, Feb 7, 2014 at 7:36 PM, Huy Nguyen <[email protected]> wrote:\n>> > I think pgtune is optimized more for OLTP application. Is there\n>> something\n>> > similar to pgtune/pgbench for OLAP?\n>>\n>> IIRC pgtune can be told to give out an OLAP-optimized postgresql.conf.\n>> Maybe that's only recent versions?\n>>\n>> --\n>> Regards,\n>> Peter Geoghegan\n>>\n>\n>\n\n\n-- \nLife has no boundaries...", "msg_date": "Sat, 8 Feb 2014 15:24:59 +0700", "msg_from": "Wattana Hinchaisri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Benchmarking for data-warehousing instance?" }, { "msg_contents": "Hi,\n\nOn 8.2.2014 07:41, Huy Nguyen wrote:\n> Do you happen to have a link to it? Though I think different machine\n> specs should yield different optimal postgresql.conf.\n\nAn optimal configuration is not just about machine specs, it's about the\nworkload and application configuration too. So there's no benchmark that\nwould give you the best config for your application.\n\n> I'm looking for a hand-crafted set of data + queries tailored for OLAP\n> so that I can try to manually tweak one config at a time and run against\n> the benchmark.\n\nI think using pgtune is the best starting point you can get, and you may\ntweak it based on your actual workload. If you can prepare a sample of\nthe workload (i.e. a representative amount of data) and run a set of\nactual queries (generated by the application), that'd be an excellent\nsituation.\n\n> I might consider creating one if no one has done it before.\n\nSo how exactly is that going to work? There's a benchmark for this,\ncalled TPC-H [1], but again - this is just a model of how a DWH/DSS\napplication may look.\n\nI spent a lot of time working with it a while ago (see [2]), and IMHO\nthe values recommended by pgtune are quite fine.\n\n[1] http://www.tpc.org/tpch/default.asp\n[2] http://www.fuzzy.cz/en/articles/dss-tpc-h-benchmark-with-postgresql/\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 09 Feb 2014 16:42:37 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Benchmarking for data-warehousing instance?" } ]
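Short of a full TPC-H run, a common stopgap for DSS-style measurements is to drive pgbench with a custom script containing one of your own reporting queries, so that each postgresql.conf change can be compared on the same workload. A minimal sketch (the file name, table and columns are hypothetical stand-ins for your own schema):

    -- olap_probe.sql: one representative reporting query
    SELECT date_trunc('month', created) AS month,
           count(*)    AS row_count,
           sum(amount) AS total
    FROM   fact_table
    GROUP  BY 1
    ORDER  BY 1;

    -- run it for five minutes with four concurrent clients, e.g.:
    -- pgbench -n -f olap_probe.sql -c 4 -T 300 yourdb

Changing one setting at a time between runs, as suggested above, keeps the comparison honest.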
[ { "msg_contents": "\nHello,\n\nWhile analyzing performance, we encountered the following phenomenon,\n\n SELECT sum(pow(.5*generate_series,.5))\n FROM generate_series(1,1000000);\n\nis much much (a hundred times) slower than\n\n SELECT sum(pow(random()*generate_series,.5))\n FROM generate_series(1,1000000);\n\nand asymptotic difference is even more astounding.\nThis seems counter-intuitive, considering the cost of\nan additional random() call instead of a constant factor.\nWhat are the reasons for this strange performance boost?\n\nThanks, M. Putz\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 10 Feb 2014 20:52:51 +0100 (CET)", "msg_from": "M Putz <[email protected]>", "msg_from_op": true, "msg_subject": "Strange performance boost with random()" }, { "msg_contents": "On 02/10/2014 09:52 PM, M Putz wrote:\n>\n> Hello,\n>\n> While analyzing performance, we encountered the following phenomenon,\n>\n> SELECT sum(pow(.5*generate_series,.5))\n> FROM generate_series(1,1000000);\n>\n> is much much (a hundred times) slower than\n>\n> SELECT sum(pow(random()*generate_series,.5))\n> FROM generate_series(1,1000000);\n>\n> and asymptotic difference is even more astounding.\n> This seems counter-intuitive, considering the cost of\n> an additional random() call instead of a constant factor.\n> What are the reasons for this strange performance boost?\n\nDifferent data type. The first uses numeric, which is pretty slow for \ndoing calculations. random() returns a double, which makes the pow and \nsum to also use double, which is a lot faster.\n\nTo see the effect, try these variants:\n\nSELECT sum(pow(.5::float8 * generate_series,.5))\nFROM generate_series(1,1000000);\n\nSELECT sum(pow(random()::numeric * generate_series,.5))\nFROM generate_series(1,1000000);\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 10 Feb 2014 22:03:36 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance boost with random()" }, { "msg_contents": "On Mon, Feb 10, 2014 at 3:03 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 02/10/2014 09:52 PM, M Putz wrote:\n>\n>>\n>> Hello,\n>>\n>> While analyzing performance, we encountered the following phenomenon,\n>>\n>> SELECT sum(pow(.5*generate_series,.5))\n>> FROM generate_series(1,1000000);\n>>\n>> is much much (a hundred times) slower than\n>>\n>> SELECT sum(pow(random()*generate_series,.5))\n>> FROM generate_series(1,1000000);\n>>\n>> and asymptotic difference is even more astounding.\n>> This seems counter-intuitive, considering the cost of\n>> an additional random() call instead of a constant factor.\n>> What are the reasons for this strange performance boost?\n>>\n>\n> Different data type. The first uses numeric, which is pretty slow for\n> doing calculations. 
random() returns a double, which makes the pow and sum\nto also use double, which is a lot faster.\n\nTo see the effect, try these variants:\n\nSELECT sum(pow(.5::float8 * generate_series,.5))\nFROM generate_series(1,1000000);\n\nSELECT sum(pow(random()::numeric * generate_series,.5))\nFROM generate_series(1,1000000);\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 10 Feb 2014 22:03:36 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance boost with random()" }, { "msg_contents": "On Mon, Feb 10, 2014 at 3:03 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 02/10/2014 09:52 PM, M Putz wrote:\n>\n>>\n>> Hello,\n>>\n>> While analyzing performance, we encountered the following phenomenon,\n>>\n>>     SELECT sum(pow(.5*generate_series,.5))\n>>     FROM generate_series(1,1000000);\n>>\n>> is much much (a hundred times) slower than\n>>\n>>     SELECT sum(pow(random()*generate_series,.5))\n>>     FROM generate_series(1,1000000);\n>>\n>> and asymptotic difference is even more astounding.\n>> This seems counter-intuitive, considering the cost of\n>> an additional random() call instead of a constant factor.\n>> What are the reasons for this strange performance boost?\n>>\n>\n> Different data type. The first uses numeric, which is pretty slow for\n> doing calculations. random() returns a double, which makes the pow and sum\n> to also use double, which is a lot faster.\n>\n> To see the effect, try these variants:\n>\n> SELECT sum(pow(.5::float8 * generate_series,.5))\n> FROM generate_series(1,1000000);\n>\n> SELECT sum(pow(random()::numeric * generate_series,.5))\n> FROM generate_series(1,1000000);\n>\n> - Heikki\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThat's interesting. Does PostgreSQL always use the NUMERIC data type for\nconstants in the absence of a cast?\n\nSébastien", "msg_date": "Tue, 11 Feb 2014 10:21:30 -0500", "msg_from": "=?UTF-8?Q?S=C3=A9bastien_Lorion?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance boost with random()" }, { "msg_contents": "=?UTF-8?Q?S=C3=A9bastien_Lorion?= <[email protected]> writes:\n> That's interesting. Does PostgreSQL always use the NUMERIC data type for\n> constants in the absence of a cast?\n\nIf they're not integers, yes, that's the initial assumption.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Feb 2014 10:37:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance boost with random()" }, { "msg_contents": "Dear All\n\nThis is probably not the best list to post this question on:\n\nI use cascading deletes but would like to first inform the user what she\nis about to do.\nSomething like : explain delete from PANEL where panel_id=21;\n-- you are about to delete 32144 records in tables abc aaa wewew\n\nThis is clearly something that can be programmed, but as all the information\nis available in the database schema, there should be a generalized\nprocedure available.\n\nHas anyone heard of a solution to this problem?\n\ngreetings\n\nEildert\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Feb 2014 21:54:04 +0100", "msg_from": "Eildert Groeneveld <[email protected]>", "msg_from_op": false, "msg_subject": "list number of entries to be delete in cascading deletes" }, { "msg_contents": "On Tue, Feb 11, 2014 at 5:54 PM, Eildert Groeneveld\n<[email protected]> wrote:\n> Dear All\n>\n> This is probably not the best list to post this question on:\n>\n> I use cascading deletes but would like to first inform the user what she\n> is about to do.\n> Something like : explain delete from PANEL where panel_id=21;\n> -- you are about to delete 32144 records in tables abc aaa wewew\n>\n> This is clearly something that can be programmed, but as all the information\n> is available in the database schema, there should be a generalized\n> procedure available.\n\nGranted, this is somewhat ugly, but you could issue the delete, note\ndown the rowcount, and rollback.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Feb 2014 18:58:03 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: list number of entries to be delete in cascading deletes" }, { "msg_contents": "On Di, 2014-02-11 at 18:58 -0200, Claudio Freire wrote:\n> On Tue, Feb 11, 2014 at 5:54 PM, Eildert Groeneveld\n> <[email protected]> wrote:\n> > Dear All\n> >\n> > This is probably not the best list to post this question on:\n> >\n> > I use cascading deletes but would like to first inform the user what she\n> > is about to do.\n> > Something like : explain delete from PANEL where panel_id=21;\n> > -- you are about to delete 32144 records in tables abc aaa wewew\n> >\n> > This is clearly something that can be programmed, but as all the information\n> > is available in the database schema, there should be a generalized\n> > procedure available.\n> \n> Granted, this is somewhat ugly, but you could issue the delete, note\n> down the rowcount, and rollback.\nThanks Claudio, that's an option and a fallback if we do not come up with\na better version. I am sure that there is something around.\n\n\n> \n> \n\n-- \nEildert Groeneveld\n===================================================\nInstitute of Farm Animal Genetics (FLI)\nMariensee 31535 Neustadt Germany\nTel : (+49)(0)5034 871155 Fax : (+49)(0)5034 871143\ne-mail: [email protected] \nweb: http://vce.tzv.fal.de\n==================================================\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Feb 2014 11:42:55 +0100", "msg_from": "Eildert Groeneveld <[email protected]>", "msg_from_op": false, "msg_subject": "Re: list number of entries to be delete in cascading\n deletes" } ]
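A minimal sketch of Claudio's delete-and-rollback approach, using the table names from the example above (abc and aaa are assumed to reference PANEL with ON DELETE CASCADE; the panel_id foreign key columns are hypothetical):

    BEGIN;
    -- count the child rows before the cascade removes them:
    SELECT (SELECT count(*) FROM abc WHERE panel_id = 21) AS abc_rows,
           (SELECT count(*) FROM aaa WHERE panel_id = 21) AS aaa_rows;
    DELETE FROM panel WHERE panel_id = 21;  -- psql reports DELETE <n> for panel itself
    ROLLBACK;                               -- nothing is actually removed

A generalized procedure would presumably generate those per-table counts automatically by walking pg_constraint for foreign keys with confdeltype = 'c' that reference the target table.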
[ { "msg_contents": "Hi,\n \nWe still have problems with our query time.\nAfter restoring the database the query time is about one minute.\nAfter an analyze the query time is about 70 minutes.\nWe were able to identify one table which causes the problem.\nAfter analyzing this table the query time increases.\nWe have made an explain plan before and after analyzing the table cifx.\n \nThe plans can be found at:\n \nAfter analyze:\nexplain.depesz.com/s/HuZ\n \nBefore analyze:\nexplain.depesz.com/s/XWF\n \nThanks a lot for your help!", "msg_date": "Wed, 12 Feb 2014 09:58:49 +0100", "msg_from": "\"Katharina Koobs\" <[email protected]>", "msg_from_op": true, "msg_subject": "increasing query time after analyze" }, { "msg_contents": "2014-02-12 9:58 GMT+01:00 Katharina Koobs <[email protected]>:\n\n> explain.depesz.com/s/HuZ\n\n\nThe fast query is fast due to intensive use of hash joins.\n\nBut you can see\n\nHash Left Join (cost=9343.05..41162.99 rows=6889 width=1350) (actual\ntime=211.767..23519.296 rows=639137 loops=1)\n\nthat the row estimate is off. It is strange that the estimate gets even\nworse after ANALYZE:\n\nNested Loop Left Join (cost=1122.98..28246.98 rows=1 width=1335) (actual\ntime=33.222..14631581.741 rows=639137 loops=1)\n\nSo it looks like something in the data is strange - probably a dependency\nbetween columns: sos_stg_aggr.stichtag = sos_stichtag.tid,\nsos_stg_aggr.stuart = cifx.apnr\n\n Hash Join (cost=1121.04..24144.02 rows=57 width=339) (actual\ntime=2.407..11157.151 rows=639221 loops=1)\n\n - Hash Cond: (sos_stg_aggr.stuart = cifx.apnr)\n\n\nA nested-loop-based plan is extremely sensitive to this wrong estimate.\n\nYou can try:\n\n* penalize nested loops - set enable_nestloop to off; -- for this query only\n* divide this query into more queries - store the intermediate result in a\ntemporary table and analyze it (fixing the wrong estimate)\n* maybe you can increase work_mem\n\nRegards\n\nPavel Stehule", "msg_date": "Wed, 12 Feb 2014 10:20:44 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing query time after analyze" } ]
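A minimal sketch of Pavel's first two suggestions (the report query itself is omitted; the table and column names are taken from the plans above):

    BEGIN;
    SET LOCAL enable_nestloop = off;  -- penalize nested loops for this query only
    -- ... run the slow report query here ...
    COMMIT;

    -- or materialize the badly estimated join first, so the rest of the
    -- query is planned against real statistics instead of a rows=1 guess:
    CREATE TEMP TABLE stg_tmp AS
    SELECT a.* FROM sos_stg_aggr a JOIN cifx c ON a.stuart = c.apnr;
    ANALYZE stg_tmp;
    -- ... then join stg_tmp to the remaining tables in a second query ...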
[ { "msg_contents": "Hi all.\n\nToday I have started getting errors like below in logs (seems that I have not changed anything for last week). When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout). \n\n2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n\nI have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.\n\nI am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. Host is running with the following CPU (32 cores) and memory:\n\nroot@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\nmodel name\t: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\nroot@rpopdb01e ~ # free -m\n total used free shared buffers cached\nMem: 129028 123558 5469 0 135 119504\n-/+ buffers/cache: 3918 125110\nSwap: 16378 0 16378\nroot@rpopdb01e ~ #\n\nPGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. Right now PGDATA takes only 95G.\nThe settings changed in postgresql.conf are here [5].\n\nWhen it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert.\n\n (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n\nI have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). Some more info about one of such tables is here [7].\n\nI have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:\n\nrpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;\n count\n--------\n 115953\n(1 row)\n\nrpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;\n count\n-------\n 0\n(1 row)\n\nrpopdb_p0=# \\dS+ rpop.rpop_uidl\n Table \"rpop.rpop_uidl\"\n Column | Type | Modifiers | Storage | Stats target | Description\n--------+------------------------+-----------+----------+--------------+-------------\n popid | bigint | not null | plain | |\n uidl | character varying(200) | not null | extended | |\nIndexes:\n \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)\nHas OIDs: no\n\nrpopdb_p0=#\n\n\nMy questions are:\n1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?\n2. How much space do we extend to the relation when we get exclusive lock on it?\n3. Why extended page is not visible for other backends?\n4. 
Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.\n5. (and the main question) what can I do to get rid of such situations? It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.\n\n[1] http://www.postgresql.org/message-id/[email protected]\n[2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks\n[3] http://www.postgresql.org/message-id/[email protected]\n[4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com\n[5] http://pastebin.com/raw.php?i=Bd40Vn6h\n[6] http://wiki.postgresql.org/wiki/Lock_dependency_information\n[7] http://pastebin.com/raw.php?i=eGrtG524\n\n--\nVladimir", "msg_date": "Wed, 12 Feb 2014 20:59:20 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with ExclusiveLock on inserts" }, { "msg_contents": "Hi Vladimir,\n\nJust in case: how is your ext4 mount? \n\nBest regards, \nIlya\n\n> On Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:\n> \n> Hi all.\n> \n> Today I have started getting errors like below in logs (seems that I have not changed anything for last week). When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout). \n> \n> 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n> \n> I have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.\n> \n> I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. 
Host is running with the following CPU (32 cores) and memory:\n> \n> root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\n> model name\t: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n> root@rpopdb01e ~ # free -m\n> total used free shared buffers cached\n> Mem: 129028 123558 5469 0 135 119504\n> -/+ buffers/cache: 3918 125110\n> Swap: 16378 0 16378\n> root@rpopdb01e ~ #\n> \n> PGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. Right now PGDATA takes only 95G.\n> The settings changed in postgresql.conf are here [5].\n> \n> When it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert.\n> \n> (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n> (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n> (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n> \n> I have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). Some more info about one of such tables is here [7].\n> \n> I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:\n> \n> rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;\n> count\n> --------\n> 115953\n> (1 row)\n> \n> rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;\n> count\n> -------\n> 0\n> (1 row)\n> \n> rpopdb_p0=# \\dS+ rpop.rpop_uidl\n> Table \"rpop.rpop_uidl\"\n> Column | Type | Modifiers | Storage | Stats target | Description\n> --------+------------------------+-----------+----------+--------------+-------------\n> popid | bigint | not null | plain | |\n> uidl | character varying(200) | not null | extended | |\n> Indexes:\n> \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)\n> Has OIDs: no\n> \n> rpopdb_p0=#\n> \n> \n> My questions are:\n> 1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?\n> 2. How much space do we extend to the relation when we get exclusive lock on it?\n> 3. Why extended page is not visible for other backends?\n> 4. Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.\n> 5. (and the main question) what can I do to get rid of such situations? 
It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.\n> \n> [1] http://www.postgresql.org/message-id/[email protected]\n> [2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks\n> [3] http://www.postgresql.org/message-id/[email protected]\n> [4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com\n> [5] http://pastebin.com/raw.php?i=Bd40Vn6h\n> [6] http://wiki.postgresql.org/wiki/Lock_dependency_information\n> [7] http://pastebin.com/raw.php?i=eGrtG524\n> \n> --\n> Vladimir\n> \n> \n> \n> ", "msg_date": "Wed, 12 Feb 2014 18:30:58 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "root@rpopdb01e ~ # fgrep data /etc/fstab\nUUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1\nroot@rpopdb01e ~ #\n\nAccording to iostat the disks are not the bottleneck.\n\n12.02.2014, в 21:30, Ilya Kosmodemiansky <[email protected]> написал(а):\n\n> Hi Vladimir,\n> \n> Just in case: how is your ext4 mount? \n> \n> Best regards, \n> Ilya\n> \n> On Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:\n> \n>> Hi all.\n>> \n>> Today I have started getting errors like below in logs (seems that I have not changed anything for last week). When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout). 
\n>> \n>> 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n>> \n>> I have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.\n>> \n>> I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. Host is running with the following CPU (32 cores) and memory:\n>> \n>> root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\n>> model name\t: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n>> root@rpopdb01e ~ # free -m\n>> total used free shared buffers cached\n>> Mem: 129028 123558 5469 0 135 119504\n>> -/+ buffers/cache: 3918 125110\n>> Swap: 16378 0 16378\n>> root@rpopdb01e ~ #\n>> \n>> PGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. Right now PGDATA takes only 95G.\n>> The settings changed in postgresql.conf are here [5].\n>> \n>> When it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert.\n>> \n>> (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n>> (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n>> (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n>> \n>> I have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). Some more info about one of such tables is here [7].\n>> \n>> I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:\n>> \n>> rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;\n>> count\n>> --------\n>> 115953\n>> (1 row)\n>> \n>> rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;\n>> count\n>> -------\n>> 0\n>> (1 row)\n>> \n>> rpopdb_p0=# \\dS+ rpop.rpop_uidl\n>> Table \"rpop.rpop_uidl\"\n>> Column | Type | Modifiers | Storage | Stats target | Description\n>> --------+------------------------+-----------+----------+--------------+-------------\n>> popid | bigint | not null | plain | |\n>> uidl | character varying(200) | not null | extended | |\n>> Indexes:\n>> \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)\n>> Has OIDs: no\n>> \n>> rpopdb_p0=#\n>> \n>> \n>> My questions are:\n>> 1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?\n>> 2. How much space do we extend to the relation when we get exclusive lock on it?\n>> 3. Why extended page is not visible for other backends?\n>> 4. Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. 
Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.\n>> 5. (and the main question) what can I do to get rid of such situations? It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.\n>> \n>> [1] http://www.postgresql.org/message-id/[email protected]\n>> [2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks\n>> [3] http://www.postgresql.org/message-id/[email protected]\n>> [4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com\n>> [5] http://pastebin.com/raw.php?i=Bd40Vn6h\n>> [6] http://wiki.postgresql.org/wiki/Lock_dependency_information\n>> [7 http://pastebin.com/raw.php?i=eGrtG524]\n>> \n>> --\n>> Vladimir\n>> \n>> \n>> \n>> \n\n\n--\nVladimir\n\n\n\n\n\nroot@rpopdb01e ~ # fgrep data /etc/fstabUUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1root@rpopdb01e ~ #According to iostat the disks are not the bottleneck.12.02.2014, в 21:30, Ilya Kosmodemiansky <[email protected]> написал(а):Hi Vladimir,Just in case: how is your ext4 mount? Best regards, IlyaOn Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:Hi all.Today I have started getting errors like below in logs (seems that I have not changed anything for last week). When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout). 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"I have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. Host is running with the following CPU (32 cores) and memory:root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfomodel name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHzroot@rpopdb01e ~ # free -m             total       used       free     shared    buffers     cachedMem:        129028     123558       5469          0        135     119504-/+ buffers/cache:       3918     125110Swap:        16378          0      16378root@rpopdb01e ~ #PGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. Right now PGDATA takes only 95G.The settings changed in postgresql.conf are here [5].When it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert. (extend,26647,26825,,,,,,,) |        5459 | ExclusiveLock |     1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053 (extend,26647,26828,,,,,,,) |        5567 | ExclusiveLock |     1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490 (extend,24584,25626,,,,,,,) |        5611 | ExclusiveLock |     1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963I have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. 
I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). Some more info about one of such tables is here [7].I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0; count-------- 115953(1 row)rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0; count-------     0(1 row)rpopdb_p0=# \\dS+ rpop.rpop_uidl                               Table \"rpop.rpop_uidl\" Column |          Type          | Modifiers | Storage  | Stats target | Description--------+------------------------+-----------+----------+--------------+------------- popid  | bigint                 | not null  | plain    |              | uidl   | character varying(200) | not null  | extended |              |Indexes:    \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)Has OIDs: norpopdb_p0=#My questions are:1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?2. How much space do we extend to the relation when we get exclusive lock on it?3. Why extended page is not visible for other backends?4. Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.5. (and the main question) what can I do to get rid of such situations? It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.[1] http://www.postgresql.org/message-id/[email protected][2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks[3] http://www.postgresql.org/message-id/[email protected][4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com[5] http://pastebin.com/raw.php?i=Bd40Vn6h[6] http://wiki.postgresql.org/wiki/Lock_dependency_information[7 http://pastebin.com/raw.php?i=eGrtG524]\n--Vladimir\n\n\n--Vladimir", "msg_date": "Wed, 12 Feb 2014 21:43:33 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "My question was actually about barrier option, by default it is enabled on RHEL6/ext4 and could cause serious bottleneck on io before disks are actually involved. What says mount without arguments? \n\n> On Feb 12, 2014, at 18:43, Бородин Владимир <[email protected]> wrote:\n> \n> root@rpopdb01e ~ # fgrep data /etc/fstab\n> UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1\n> root@rpopdb01e ~ #\n> \n> According to iostat the disks are not the bottleneck.\n> \n>> 12.02.2014, в 21:30, Ilya Kosmodemiansky <[email protected]> написал(а):\n>> \n>> Hi Vladimir,\n>> \n>> Just in case: how is your ext4 mount? 
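The "ExclusiveLock on extension" waits themselves can be watched while the problem is happening. A sketch in the spirit of the lock-dependency queries cited as [6] (not taken from the thread; it assumes only the stock 9.3 catalogs pg_locks and pg_stat_activity):

SELECT l.pid,
       l.relation::regclass AS relation,
       l.granted,
       a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'extend'      -- waits on relation extension, as in the log lines above
ORDER BY l.granted DESC;

For each relation the single row with granted = true is the backend currently extending it; every granted = false row is a backend queued behind that extension lock.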
{ "msg_contents": "My question was actually about the barrier option: by default it is enabled on RHEL6/ext4 and can cause a serious I/O bottleneck before the disks are actually involved. What does mount say without arguments?\n\n> On Feb 12, 2014, at 18:43, Бородин Владимир <[email protected]> wrote:\n> \n> root@rpopdb01e ~ # fgrep data /etc/fstab\n> UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1\n> root@rpopdb01e ~ #\n> \n> According to iostat the disks are not the bottleneck.", "msg_date": "Wed, 12 Feb 2014 18:56:52 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "Oh, I haven't thought about barriers, sorry. Although I use soft raid without batteries, I have turned barriers off on one cluster shard to try.\n\nroot@rpopdb01e ~ # mount | fgrep data\n/dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime)\nroot@rpopdb01e ~ # mount -o remount,nobarrier /dev/md2\nroot@rpopdb01e ~ # mount | fgrep data\n/dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime,nobarrier)\nroot@rpopdb01e ~ #\n\nOn 12.02.2014, at 21:56, Ilya Kosmodemiansky <[email protected]> wrote:\n\n> My question was actually about the barrier option: by default it is enabled on RHEL6/ext4 and can cause a serious I/O bottleneck before the disks are actually involved. What does mount say without arguments?\n\n--\nMay the force be with you...\nhttp://simply.name", "msg_date": "Wed, 12 Feb 2014 23:20:02 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with ExclusiveLock on inserts" },
{ "msg_contents": "Another thing which is arguable: the concurrency degree. How many of your max_connections = 4000 are actually running? 4000 definitely looks like overkill, and those connections can be a serious source of contention, especially when you have barriers enabled and software RAID.\n\nPlus, for 32GB of shared buffers with synchronous_commit = on, especially under heavy workload, one should definitely have a BBU, otherwise performance will be poor.\n\nOn Wed, Feb 12, 2014 at 8:20 PM, Бородин Владимир <[email protected]> wrote:\n> Oh, I haven't thought about barriers, sorry. Although I use soft raid without batteries, I have turned barriers off on one cluster shard to try.", "msg_date": "Wed, 12 Feb 2014 20:37:11 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" },
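A quick way to get the numbers Ilya is asking about (a sketch, not from the thread; pg_stat_activity.state exists since 9.2):

SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;   -- the 'active' rows are the backends actually running queries

Comparing the 'active' count against the CPU core count (32 here) shows how oversubscribed the box is during a lock pileup.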
{ "msg_contents": "Yes, this is legacy, I will fix it. We had lots of inactive connections, but right now we use pgbouncer for this. When the workload is normal we have around 80-120 backends, and fewer than 10 of them are in active state. When the locking problem happens we get lots of sessions (sometimes more than 1000 of them in active state). According to vmstat the number of context switches is not that big (less than 20k), so I don't think it is the main reason. Yes, it can aggravate the problem, but IMHO not create it.\n\nI don't understand the correlation between shared buffers size and synchronous_commit. Could you please explain your statement?\n\nOn 12.02.2014, at 23:37, Ilya Kosmodemiansky <[email protected]> wrote:\n\n> Another thing which is arguable: the concurrency degree. How many of your max_connections = 4000 are actually running? 4000 definitely looks like overkill, and those connections can be a serious source of contention.\n> \n> Plus, for 32GB of shared buffers with synchronous_commit = on, especially under heavy workload, one should definitely have a BBU, otherwise performance will be poor.\n\n--\nVladimir", "msg_date": "Wed, 12 Feb 2014 23:57:49 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "On Wed, Feb 12, 2014 at 8:57 PM, Бородин Владимир <[email protected]> wrote:\n> When the locking problem happens we get lots of sessions (sometimes more than 1000 of them in active state).\n\nI'm afraid that is the problem: more than 1000 backends, most of them simply waiting.\n\n> I don't understand the correlation between shared buffers size and synchronous_commit. Could you please explain your statement?\n\nYou need to fsync your huge shared buffers every time the database performs a checkpoint. By default that usually happens too often, because checkpoint_timeout is 5min by default. Without a BBU, on software RAID, that leads to an I/O spike, and your commits end up waiting for WAL.", "msg_date": "Wed, 12 Feb 2014 21:14:48 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" },
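The settings and counters behind this explanation can be inspected directly (a sketch, not from the thread; all names are the stock 9.3 ones, including checkpoint_segments, which was later replaced by max_wal_size):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'checkpoint_segments', 'checkpoint_timeout',
               'checkpoint_completion_target', 'synchronous_commit');

SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint, buffers_backend
FROM pg_stat_bgwriter;   -- a high checkpoints_req share means checkpoints are forced by WAL volume, not by the timeout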
Although I use soft raid without batteries I have turned barriers off on one cluster shard to try.\n>>\n>> root@rpopdb01e ~ # mount | fgrep data\n>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime)\n>> root@rpopdb01e ~ # mount -o remount,nobarrier /dev/md2\n>> root@rpopdb01e ~ # mount | fgrep data\n>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime,nobarrier)\n>> root@rpopdb01e ~ #\n>>\n>> 12.02.2014, в 21:56, Ilya Kosmodemiansky <[email protected]> написал(а):\n>>\n>> My question was actually about barrier option, by default it is enabled on RHEL6/ext4 and could cause serious bottleneck on io before disks are actually involved. What says mount without arguments?\n>>\n>> On Feb 12, 2014, at 18:43, Бородин Владимир <[email protected]> wrote:\n>>\n>> root@rpopdb01e ~ # fgrep data /etc/fstab\n>> UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1\n>> root@rpopdb01e ~ #\n>>\n>> According to iostat the disks are not the bottleneck.\n>>\n>> 12.02.2014, в 21:30, Ilya Kosmodemiansky <[email protected]> написал(а):\n>>\n>> Hi Vladimir,\n>>\n>> Just in case: how is your ext4 mount?\n>>\n>> Best regards,\n>> Ilya\n>>\n>> On Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:\n>>\n>> Hi all.\n>>\n>> Today I have started getting errors like below in logs (seems that I have not changed anything for last week). When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout).\n>>\n>> 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n>>\n>> I have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.\n>>\n>> I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. Host is running with the following CPU (32 cores) and memory:\n>>\n>> root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\n>> model name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n>> root@rpopdb01e ~ # free -m\n>> total used free shared buffers cached\n>> Mem: 129028 123558 5469 0 135 119504\n>> -/+ buffers/cache: 3918 125110\n>> Swap: 16378 0 16378\n>> root@rpopdb01e ~ #\n>>\n>> PGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. Right now PGDATA takes only 95G.\n>> The settings changed in postgresql.conf are here [5].\n>>\n>> When it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert.\n>>\n>> (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n>> (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n>> (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n>>\n>> I have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). 
Some more info about one of such tables is here [7].
>> 
>> I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:
>> 
>> rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;
>> count
>> --------
>> 115953
>> (1 row)
>> 
>> rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;
>> count
>> -------
>> 0
>> (1 row)
>> 
>> rpopdb_p0=# \\dS+ rpop.rpop_uidl
>> Table \"rpop.rpop_uidl\"
>> Column | Type | Modifiers | Storage | Stats target | Description
>> --------+------------------------+-----------+----------+--------------+-------------
>> popid | bigint | not null | plain | |
>> uidl | character varying(200) | not null | extended | |
>> Indexes:
>> \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)
>> Has OIDs: no
>> 
>> rpopdb_p0=#
>> 
>> 
>> My questions are:
>> 1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?
>> 2. How much space do we extend to the relation when we get exclusive lock on it?
>> 3. Why extended page is not visible for other backends?
>> 4. Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.
>> 5. (and the main question) what can I do to get rid of such situations? It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.
>> 
>> [1] http://www.postgresql.org/message-id/[email protected]
>> [2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks
>> [3] http://www.postgresql.org/message-id/[email protected]
>> [4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com
>> [5] http://pastebin.com/raw.php?i=Bd40Vn6h
>> [6] http://wiki.postgresql.org/wiki/Lock_dependency_information
>> [7] http://pastebin.com/raw.php?i=eGrtG524
>> 
>> --
>> Vladimir
>> 
>> --
>> Да пребудет с вами сила...
>> http://simply.name
> 
> --
> Vladimir


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Wed, 12 Feb 2014 21:14:48 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "I have limited max connections to 1000, reduced shared buffers to 8G and restarted postgres.

The logs say that checkpoint finishes in 2.5 minutes (as expected due to the default checkpoint_completion_target = 0.5) with no IO spikes, so I don't want to increase checkpoint_timeout or checkpoint_segments.

I have also noticed that these big tables stopped being vacuumed automatically a couple of weeks ago. It could be the reason for the problem, so I will now try to tune autovacuum parameters to turn it back on. But yesterday I ran \"vacuum analyze\" for all relations manually and that did not help.
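
For the record, this is roughly how I am checking whether autovacuum ever gets to these tables (just a sketch against the standard pg_stat_user_tables view; the limit is arbitrary):

rpopdb_p0=# select relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
rpopdb_p0-# from pg_stat_user_tables order by n_dead_tup desc limit 10;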

13.02.2014, в 0:14, Ilya Kosmodemiansky <[email protected]> написал(а):

> On Wed, Feb 12, 2014 at 8:57 PM, Бородин Владимир <[email protected]> wrote:
>> 
>> Yes, this is legacy, I will fix it. We had lots of inactive connections but right now we use pgbouncer for this. When the workload is normal we have some kind of 80-120 backends. Less than 10 of them are in active state. Having problem with locks we get lots of sessions (sometimes more than 1000 of them are in active state). According to vmstat the number of context switches is not so big (less than 20k), so I don't think it is the main reason. Yes, it can aggravate the problem, but imho not create it.
> 
> 
> I'am afraid that is the problem. More than 1000 backends, most of them
> are simply waiting.
> 
>> 
>> 
>> I don't understand the correlation of shared buffers size and synchronous_commit. Could you please explain your statement?
> 
> 
> You need to fsync your huge shared buffers any time your database
> performs checkpoint. By default it usually happens too often because
> checkpoint_timeout is 5min by default. Without bbu, on software raid
> that leads to io spike and you commit waits for wal.
> 
> 
>> 
>> 12.02.2014, в 23:37, Ilya Kosmodemiansky <[email protected]> написал(а):
>> 
>> another thing which is arguable - concurrency degree. How many of your max_connections = 4000 are actually running? 4000 definitely looks like an overkill and they could be a serious source of concurrency, especially then you have had barrier enabled and software raid.
>> 
>> Plus for 32Gb of shared buffers with synchronous_commit = on especially on heavy workload one should definitely have bbu, otherwise performance will be poor.
>> 
>> 
>> On Wed, Feb 12, 2014 at 8:20 PM, Бородин Владимир <[email protected]> wrote:
>>> 
>>> Oh, I haven't thought about barriers, sorry. Although I use soft raid without batteries I have turned barriers off on one cluster shard to try.
>>> 
>>> root@rpopdb01e ~ # mount | fgrep data
>>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime)
>>> root@rpopdb01e ~ # mount -o remount,nobarrier /dev/md2
>>> root@rpopdb01e ~ # mount | fgrep data
>>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime,nobarrier)
>>> root@rpopdb01e ~ #
>>> 
>>> 12.02.2014, в 21:56, Ilya Kosmodemiansky <[email protected]> написал(а):
>>> 
>>> My question was actually about barrier option, by default it is enabled on RHEL6/ext4 and could cause serious bottleneck on io before disks are actually involved. What says mount without arguments?
>>> 
>>> On Feb 12, 2014, at 18:43, Бородин Владимир <[email protected]> wrote:
>>> 
>>> root@rpopdb01e ~ # fgrep data /etc/fstab
>>> UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1
>>> root@rpopdb01e ~ #
>>> 
>>> According to iostat the disks are not the bottleneck.
>>> 
>>> 12.02.2014, в 21:30, Ilya Kosmodemiansky <[email protected]> написал(а):
>>> 
>>> Hi Vladimir,
>>> 
>>> Just in case: how is your ext4 mount?
>>> 
>>> Best regards,
>>> Ilya
>>> 
>>> On Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:
>>> 
>>> Hi all.
>>> 
>>> Today I have started getting errors like below in logs (seems that I have not changed anything for last week).
When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout).\n>>> \n>>> 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n>>> \n>>> I have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.\n>>> \n>>> I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. Host is running with the following CPU (32 cores) and memory:\n>>> \n>>> root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\n>>> model name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n>>> root@rpopdb01e ~ # free -m\n>>> total used free shared buffers cached\n>>> Mem: 129028 123558 5469 0 135 119504\n>>> -/+ buffers/cache: 3918 125110\n>>> Swap: 16378 0 16378\n>>> root@rpopdb01e ~ #\n>>> \n>>> PGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. Right now PGDATA takes only 95G.\n>>> The settings changed in postgresql.conf are here [5].\n>>> \n>>> When it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert.\n>>> \n>>> (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n>>> (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n>>> (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n>>> \n>>> I have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). Some more info about one of such tables is here [7].\n>>> \n>>> I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:\n>>> \n>>> rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;\n>>> count\n>>> --------\n>>> 115953\n>>> (1 row)\n>>> \n>>> rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;\n>>> count\n>>> -------\n>>> 0\n>>> (1 row)\n>>> \n>>> rpopdb_p0=# \\dS+ rpop.rpop_uidl\n>>> Table \"rpop.rpop_uidl\"\n>>> Column | Type | Modifiers | Storage | Stats target | Description\n>>> --------+------------------------+-----------+----------+--------------+-------------\n>>> popid | bigint | not null | plain | |\n>>> uidl | character varying(200) | not null | extended | |\n>>> Indexes:\n>>> \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)\n>>> Has OIDs: no\n>>> \n>>> rpopdb_p0=#\n>>> \n>>> \n>>> My questions are:\n>>> 1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?\n>>> 2. How much space do we extend to the relation when we get exclusive lock on it?\n>>> 3. 
Why extended page is not visible for other backends?
>>> 4. Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.
>>> 5. (and the main question) what can I do to get rid of such situations? It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.
>>> 
>>> [1] http://www.postgresql.org/message-id/[email protected]
>>> [2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks
>>> [3] http://www.postgresql.org/message-id/[email protected]
>>> [4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com
>>> [5] http://pastebin.com/raw.php?i=Bd40Vn6h
>>> [6] http://wiki.postgresql.org/wiki/Lock_dependency_information
>>> [7] http://pastebin.com/raw.php?i=eGrtG524
>>> 
>>> --
>>> Vladimir
>>> 
>>> --
>>> Да пребудет с вами сила...
>>> http://simply.name
>> 
>> --
>> Vladimir

--
Vladimir
", "msg_date": "Thu, 13 Feb 2014 12:35:36 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "2014-02-12 18:59, Бородин Владимир <[email protected]>:
> I have read several topics [1, 2, 3, 4] with similar problems but haven't
> found a good solution. Below is some more diagnostics.

I reported the second one. The diagnostics were very similar to yours.
I think a lot of people have experienced this problem with big servers. It was
only because of large shared_buffers. The problem disappeared after
we reduced it to 2 GB.


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Thu, 13 Feb 2014 11:13:23 +0200", "msg_from": "Emre Hasegeli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "Cool. How much ram do you have on your server and what is the size of the database?

My setting of 32 GB was due to the recommendation of 25% from here [1]. Right now I am going to investigate the content of the buffer cache with pg_buffercache and maybe the OS page cache with the recently released pg_stat_kcache to determine the optimal size for my load profile.

[1] http://www.postgresql.org/docs/current/static/runtime-config-resource.html
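
Concretely, I plan to start with something like the standard buffer-inspection query (a sketch; it assumes the pg_buffercache extension is installed in this database):

rpopdb_p0=# select c.relname, count(*) as buffers
rpopdb_p0-# from pg_buffercache b
rpopdb_p0-# join pg_class c on b.relfilenode = pg_relation_filenode(c.oid)
rpopdb_p0-# where b.reldatabase in (0, (select oid from pg_database where datname = current_database()))
rpopdb_p0-# group by c.relname order by buffers desc limit 10;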

13.02.2014, в 13:13, Emre Hasegeli <[email protected]> написал(а):

> 2014-02-12 18:59, Бородин Владимир <[email protected]>:
>> I have read several topics [1, 2, 3, 4] with similar problems but haven't
>> find a good solution. Below is some more diagnostics.
> 
> I reported the second one. The diagnostics was very similar to yours.
> I think a lot people experienced this problem with big servers. It was
> only because of large shared_buffers. The problem disappeared after
> we reduced it to 2 GB.


--
Да пребудет с вами сила...
http://simply.name
", "msg_date": "Thu, 13 Feb 2014 13:23:06 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "Vladimir,

And, any effect on your problem?

On Thu, Feb 13, 2014 at 9:35 AM, Бородин Владимир <[email protected]> wrote:
> I have limited max connections to 1000, reduced shared buffers to 8G and restarted postgres.

1000 is still too much in most cases. With pgbouncer in transaction
pooling mode, normally pool size 8-32, max_connections = 100 (default
value) and client_connections 500-1500 look more reasonable.


> I have also noticed that these big tables stopped being vacuumed automatically a couple of weeks ago. It could be the reason for the problem, so I will now try to tune autovacuum parameters to turn it back on.
But yesterday I ran \"vacuum analyze\" for all relations manually but that did not help.\n\nHow do your autovacuum parameters look like now?\n\n> 13.02.2014, в 0:14, Ilya Kosmodemiansky <[email protected]> написал(а):\n>\n> On Wed, Feb 12, 2014 at 8:57 PM, Бородин Владимир <[email protected]> wrote:\n>\n>\n> Yes, this is legacy, I will fix it. We had lots of inactive connections but right now we use pgbouncer for this. When the workload is normal we have some kind of 80-120 backends. Less than 10 of them are in active state. Having problem with locks we get lots of sessions (sometimes more than 1000 of them are in active state). According to vmstat the number of context switches is not so big (less than 20k), so I don't think it is the main reason. Yes, it can aggravate the problem, but imho not create it.\n>\n>\n>\n> I'am afraid that is the problem. More than 1000 backends, most of them\n> are simply waiting.\n>\n>\n>\n> I don't understand the correlation of shared buffers size and synchronous_commit. Could you please explain your statement?\n>\n>\n>\n> You need to fsync your huge shared buffers any time your database\n> performs checkpoint. By default it usually happens too often because\n> checkpoint_timeout is 5min by default. Without bbu, on software raid\n> that leads to io spike and you commit waits for wal.\n>\n>\n>\n> 12.02.2014, в 23:37, Ilya Kosmodemiansky <[email protected]> написал(а):\n>\n> another thing which is arguable - concurrency degree. How many of your max_connections = 4000 are actually running? 4000 definitely looks like an overkill and they could be a serious source of concurrency, especially then you have had barrier enabled and software raid.\n>\n> Plus for 32Gb of shared buffers with synchronous_commit = on especially on heavy workload one should definitely have bbu, otherwise performance will be poor.\n>\n>\n> On Wed, Feb 12, 2014 at 8:20 PM, Бородин Владимир <[email protected]> wrote:\n>\n>\n> Oh, I haven't thought about barriers, sorry. Although I use soft raid without batteries I have turned barriers off on one cluster shard to try.\n>\n> root@rpopdb01e ~ # mount | fgrep data\n> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime)\n> root@rpopdb01e ~ # mount -o remount,nobarrier /dev/md2\n> root@rpopdb01e ~ # mount | fgrep data\n> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime,nobarrier)\n> root@rpopdb01e ~ #\n>\n> 12.02.2014, в 21:56, Ilya Kosmodemiansky <[email protected]> написал(а):\n>\n> My question was actually about barrier option, by default it is enabled on RHEL6/ext4 and could cause serious bottleneck on io before disks are actually involved. What says mount without arguments?\n>\n> On Feb 12, 2014, at 18:43, Бородин Владимир <[email protected]> wrote:\n>\n> root@rpopdb01e ~ # fgrep data /etc/fstab\n> UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1\n> root@rpopdb01e ~ #\n>\n> According to iostat the disks are not the bottleneck.\n>\n> 12.02.2014, в 21:30, Ilya Kosmodemiansky <[email protected]> написал(а):\n>\n> Hi Vladimir,\n>\n> Just in case: how is your ext4 mount?\n>\n> Best regards,\n> Ilya\n>\n> On Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:\n>\n> Hi all.\n>\n> Today I have started getting errors like below in logs (seems that I have not changed anything for last week). 
When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout).\n>\n> 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n>\n> I have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.\n>\n> I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. Host is running with the following CPU (32 cores) and memory:\n>\n> root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\n> model name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n> root@rpopdb01e ~ # free -m\n> total used free shared buffers cached\n> Mem: 129028 123558 5469 0 135 119504\n> -/+ buffers/cache: 3918 125110\n> Swap: 16378 0 16378\n> root@rpopdb01e ~ #\n>\n> PGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. Right now PGDATA takes only 95G.\n> The settings changed in postgresql.conf are here [5].\n>\n> When it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert.\n>\n> (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n> (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n> (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n>\n> I have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). Some more info about one of such tables is here [7].\n>\n> I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:\n>\n> rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;\n> count\n> --------\n> 115953\n> (1 row)\n>\n> rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;\n> count\n> -------\n> 0\n> (1 row)\n>\n> rpopdb_p0=# \\dS+ rpop.rpop_uidl\n> Table \"rpop.rpop_uidl\"\n> Column | Type | Modifiers | Storage | Stats target | Description\n> --------+------------------------+-----------+----------+--------------+-------------\n> popid | bigint | not null | plain | |\n> uidl | character varying(200) | not null | extended | |\n> Indexes:\n> \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)\n> Has OIDs: no\n>\n> rpopdb_p0=#\n>\n>\n> My questions are:\n> 1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?\n> 2. How much space do we extend to the relation when we get exclusive lock on it?\n> 3. Why extended page is not visible for other backends?\n> 4. 
Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.
> 5. (and the main question) what can I do to get rid of such situations? It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.
>
> [1] http://www.postgresql.org/message-id/[email protected]
> [2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks
> [3] http://www.postgresql.org/message-id/[email protected]
> [4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com
> [5] http://pastebin.com/raw.php?i=Bd40Vn6h
> [6] http://wiki.postgresql.org/wiki/Lock_dependency_information
> [7] http://pastebin.com/raw.php?i=eGrtG524
>
> --
> Vladimir


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Thu, 13 Feb 2014 10:29:11 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "On Thu, Feb 13, 2014 at 11:23 AM, Бородин Владимир <[email protected]> wrote:
> Cool. How much ram do you have on your server and what is the size of the
> database?

It had 200 GiB of memory for a 100 GB database at that time. We had migrated
the database from MySQL, which was the reason for the excess of resources. I do
not maintain it anymore, and I moved the database to a much smaller server
before I left.

> My setting of 32 GB was due to the recommendation of 25% from here [1].

Maybe we should add a note to that page.

> Right now I am going to investigate the content of the buffer cache with
> pg_buffercache and maybe the OS page cache with the recently released pg_stat_kcache
> to determine the optimal size for my load profile.
>
> [1]
> http://www.postgresql.org/docs/current/static/runtime-config-resource.html


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Thu, 13 Feb 2014 12:00:16 +0200", "msg_from": "Emre Hasegeli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "
13.02.2014, в 13:29, Ilya Kosmodemiansky <[email protected]> написал(а):

> Vladimir,
> 
> And, any effect on your problem?

It worked without problems for longer than the previous configuration did, but the problem repeated again several minutes ago :(

> 
> On Thu, Feb 13, 2014 at 9:35 AM, Бородин Владимир <[email protected]> wrote:
>> I have limited max connections to 1000, reduced shared buffers to 8G and restarted postgres.
> 
> 1000 is still too much in most cases. With pgbouncer in transaction
> pooling mode, normally pool size 8-32, max_connections = 100 (default
> value) and client_connections 500-1500 look more reasonable.

Clients for this db are plproxy hosts.
As far as I know plproxy can work only with statement pooling.

> 
> 
>> I have also noticed that these big tables stopped being vacuumed automatically a couple of weeks ago. It could be the reason for the problem, so I will now try to tune autovacuum parameters to turn it back on. But yesterday I ran \"vacuum analyze\" for all relations manually and that did not help.
> 
> What do your autovacuum parameters look like now?

They were all default except for vacuum_defer_cleanup_age = 100000. I have increased autovacuum_max_workers = 20 because I have 10 databases with about 10 tables each. That did not make it better (I haven't seen more than two autovacuum workers simultaneously). Then I tried to set vacuum_cost_limit = 1000. It is still not vacuuming the big tables. Right now the parameters look like this:

root@rpopdb01e ~ # fgrep vacuum /var/lib/pgsql/9.3/data/conf.d/postgresql.conf 
#vacuum_cost_delay = 0                  # 0-100 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
vacuum_cost_limit = 1000                # 1-10000 credits
vacuum_defer_cleanup_age = 100000       # number of xacts by which cleanup is delayed
autovacuum = on                         # Enable autovacuum subprocess?  'on'
log_autovacuum_min_duration = 0         # -1 disables, 0 logs all actions and
autovacuum_max_workers = 20             # max number of autovacuum subprocesses
#autovacuum_naptime = 1min              # time between autovacuum runs
#autovacuum_vacuum_threshold = 50       # min number of row updates before
                                        # vacuum
#autovacuum_analyze_threshold = 50      # min number of row updates before
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
#autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
                                        # autovacuum, in milliseconds;
                                        # -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for
                                        # autovacuum, -1 means use
                                        # vacuum_cost_limit
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
root@rpopdb01e ~ #

> 
>> 13.02.2014, в 0:14, Ilya Kosmodemiansky <[email protected]> написал(а):
>> 
>> On Wed, Feb 12, 2014 at 8:57 PM, Бородин Владимир <[email protected]> wrote:
>> 
>> 
>> Yes, this is legacy, I will fix it. We had lots of inactive connections but right now we use pgbouncer for this. When the workload is normal we have some kind of 80-120 backends. Less than 10 of them are in active state. Having problem with locks we get lots of sessions (sometimes more than 1000 of them are in active state). According to vmstat the number of context switches is not so big (less than 20k), so I don't think it is the main reason. Yes, it can aggravate the problem, but imho not create it.
>> 
>> 
>> 
>> I'am afraid that is the problem. More than 1000 backends, most of them
>> are simply waiting.
>> 
>> 
>> 
>> I don't understand the correlation of shared buffers size and synchronous_commit. Could you please explain your statement?
>> 
>> 
>> 
>> You need to fsync your huge shared buffers any time your database
>> performs checkpoint. By default it usually happens too often because
>> checkpoint_timeout is 5min by default.
Without bbu, on software raid\n>> that leads to io spike and you commit waits for wal.\n>> \n>> \n>> \n>> 12.02.2014, в 23:37, Ilya Kosmodemiansky <[email protected]> написал(а):\n>> \n>> another thing which is arguable - concurrency degree. How many of your max_connections = 4000 are actually running? 4000 definitely looks like an overkill and they could be a serious source of concurrency, especially then you have had barrier enabled and software raid.\n>> \n>> Plus for 32Gb of shared buffers with synchronous_commit = on especially on heavy workload one should definitely have bbu, otherwise performance will be poor.\n>> \n>> \n>> On Wed, Feb 12, 2014 at 8:20 PM, Бородин Владимир <[email protected]> wrote:\n>> \n>> \n>> Oh, I haven't thought about barriers, sorry. Although I use soft raid without batteries I have turned barriers off on one cluster shard to try.\n>> \n>> root@rpopdb01e ~ # mount | fgrep data\n>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime)\n>> root@rpopdb01e ~ # mount -o remount,nobarrier /dev/md2\n>> root@rpopdb01e ~ # mount | fgrep data\n>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime,nobarrier)\n>> root@rpopdb01e ~ #\n>> \n>> 12.02.2014, в 21:56, Ilya Kosmodemiansky <[email protected]> написал(а):\n>> \n>> My question was actually about barrier option, by default it is enabled on RHEL6/ext4 and could cause serious bottleneck on io before disks are actually involved. What says mount without arguments?\n>> \n>> On Feb 12, 2014, at 18:43, Бородин Владимир <[email protected]> wrote:\n>> \n>> root@rpopdb01e ~ # fgrep data /etc/fstab\n>> UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1\n>> root@rpopdb01e ~ #\n>> \n>> According to iostat the disks are not the bottleneck.\n>> \n>> 12.02.2014, в 21:30, Ilya Kosmodemiansky <[email protected]> написал(а):\n>> \n>> Hi Vladimir,\n>> \n>> Just in case: how is your ext4 mount?\n>> \n>> Best regards,\n>> Ilya\n>> \n>> On Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:\n>> \n>> Hi all.\n>> \n>> Today I have started getting errors like below in logs (seems that I have not changed anything for last week). When it happens the db gets lots of connections in state active, eats 100% cpu and clients get errors (due to timeout).\n>> \n>> 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n>> \n>> I have read several topics [1, 2, 3, 4] with similar problems but haven't find a good solution. Below is some more diagnostics.\n>> \n>> I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. Host is running with the following CPU (32 cores) and memory:\n>> \n>> root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\n>> model name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n>> root@rpopdb01e ~ # free -m\n>> total used free shared buffers cached\n>> Mem: 129028 123558 5469 0 135 119504\n>> -/+ buffers/cache: 3918 125110\n>> Swap: 16378 0 16378\n>> root@rpopdb01e ~ #\n>> \n>> PGDATA lives on RAID6 array of 8 ssd-disks with ext4, iostat and atop say the disks are really free. 
Right now PGDATA takes only 95G.\n>> The settings changed in postgresql.conf are here [5].\n>> \n>> When it happens the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, but they do a simple insert.\n>> \n>> (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n>> (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n>> (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n>> \n>> I have several databases running on one host with one postmaster process and ExclusiveLock is being waited by many oids. I suppose the only common thing for all of them is that they are bigger than others and they almost do not get updates and deletes (only inserts and reads). Some more info about one of such tables is here [7].\n>> \n>> I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read comments :) I have also examined FSM for this tables and their indexes and found that for most of them there are free pages but there are, for example, such cases:\n>> \n>> rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;\n>> count\n>> --------\n>> 115953\n>> (1 row)\n>> \n>> rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;\n>> count\n>> -------\n>> 0\n>> (1 row)\n>> \n>> rpopdb_p0=# \\dS+ rpop.rpop_uidl\n>> Table \"rpop.rpop_uidl\"\n>> Column | Type | Modifiers | Storage | Stats target | Description\n>> --------+------------------------+-----------+----------+--------------+-------------\n>> popid | bigint | not null | plain | |\n>> uidl | character varying(200) | not null | extended | |\n>> Indexes:\n>> \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)\n>> Has OIDs: no\n>> \n>> rpopdb_p0=#\n>> \n>> \n>> My questions are:\n>> 1. Do we consume 100% cpu (in system) trying to get page from FSM? Or does it happen during exclusive lock acquiring? How can I dig it?\n>> 2. How much space do we extend to the relation when we get exclusive lock on it?\n>> 3. Why extended page is not visible for other backends?\n>> 4. Is there any possibility of situation where backend A got exclusive lock on some relation to extend it. Then OS CPU scheduler made a context switch to backend B while backend B is waiting for exclusive lock on the same relation. And so on for many backends.\n>> 5. (and the main question) what can I do to get rid of such situations? 
It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appropriate.
>> 
>> [1] http://www.postgresql.org/message-id/[email protected]
>> [2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks
>> [3] http://www.postgresql.org/message-id/[email protected]
>> [4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com
>> [5] http://pastebin.com/raw.php?i=Bd40Vn6h
>> [6] http://wiki.postgresql.org/wiki/Lock_dependency_information
>> [7] http://pastebin.com/raw.php?i=eGrtG524
>> 
>> --
>> Vladimir
>> 
>> --
>> Да пребудет с вами сила...
>> http://simply.name

--
Да пребудет с вами сила...
http://simply.name
", "msg_date": "Thu, 13 Feb 2014 14:26:49 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "Vladimir,

pgbouncer works with pl/proxy in transaction pooling mode. The widespread
phrase that statement pooling is for plproxy does not mean any limitations for
transaction pooling mode, unless you have autocommit on the client.
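
For example, something like this in pgbouncer.ini (an illustrative sketch; the database name is from your setup, the numbers are not tuned for your workload):

[databases]
rpopdb_p0 = host=localhost port=5432 dbname=rpopdb_p0

[pgbouncer]
pool_mode = transaction
max_client_conn = 1500
default_pool_size = 16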
Anyway, try to reduce the number of connections.

Try setting your autovacuum a bit more aggressively:

 autovacuum_analyze_scale_factor = 0.05  # or something like that
 autovacuum_analyze_threshold = 5
 autovacuum_freeze_max_age = 200000000
 autovacuum_max_workers = 20             # that is fine for slow disks
 autovacuum_naptime = 1
 autovacuum_vacuum_cost_delay = 5        # or at least 10
 autovacuum_vacuum_cost_limit = -1
 autovacuum_vacuum_scale_factor = 0.01   # keep this one really aggressive; otherwise you simply postpone huge vacuums and the related disk I/O, and smaller portions are better
 autovacuum_vacuum_threshold = 20

You will probably also need some ionice for the autovacuum workers.
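On top of the global settings above, per-table storage parameters may be worth trying for the big insert-only tables themselves. A minimal sketch, reusing a table name mentioned in this thread; the values are illustrative only and not tested on this workload:

    -- Hypothetical per-table autovacuum overrides for a large,
    -- insert-only table (table name taken from the thread above):
    ALTER TABLE rpop.rpop_uidl SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold = 20,
        autovacuum_analyze_scale_factor = 0.05
    );

    -- Check whether autovacuum has actually visited the table since:
    SELECT schemaname, relname, last_autovacuum, last_autoanalyze, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'rpop_uidl';

Note, though, that autovacuum's VACUUM trigger is based on dead tuples, which an insert-only table barely produces, so on 9.3 such tables may still only be vacuumed by the anti-wraparound logic; the ANALYZE trigger, by contrast, does count inserts.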
On Thu, Feb 13, 2014 at 11:26 AM, Бородин Владимир <[email protected]> wrote:
> It worked without problems longer than the previous configuration, but it happened again several minutes ago :(
>
> Clients for this db are plproxy hosts. As far as I know, plproxy can work only with statement pooling.
>
> They were all default except for vacuum_defer_cleanup_age = 100000. I have increased autovacuum_max_workers = 20 because I have 10 databases with about 10 tables each. That did not make it better (I haven't seen more than two autovacuum workers simultaneously). Then I tried to set vacuum_cost_limit = 1000. It is still not vacuuming the big tables. Right now the parameters look like this:
>
> root@rpopdb01e ~ # fgrep vacuum /var/lib/pgsql/9.3/data/conf.d/postgresql.conf
> #vacuum_cost_delay = 0                  # 0-100 milliseconds
> #vacuum_cost_page_hit = 1               # 0-10000 credits
> #vacuum_cost_page_miss = 10             # 0-10000 credits
> #vacuum_cost_page_dirty = 20            # 0-10000 credits
> vacuum_cost_limit = 1000                # 1-10000 credits
> vacuum_defer_cleanup_age = 100000       # number of xacts by which cleanup is delayed
> autovacuum = on                         # Enable autovacuum subprocess?  'on'
> log_autovacuum_min_duration = 0         # -1 disables, 0 logs all actions
> autovacuum_max_workers = 20             # max number of autovacuum subprocesses
> #autovacuum_naptime = 1min              # time between autovacuum runs
> #autovacuum_vacuum_threshold = 50       # min number of row updates before vacuum
> #autovacuum_analyze_threshold = 50      # min number of row updates before analyze
> #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
> #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
> #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
> #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for autovacuum; -1 means use vacuum_cost_delay
> #autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for autovacuum; -1 means use vacuum_cost_limit
> #vacuum_freeze_min_age = 50000000
> #vacuum_freeze_table_age = 150000000
> root@rpopdb01e ~ #
>
> [...]

-- 
Ilya Kosmodemiansky

Database consultant,
PostgreSQL-Consulting.com

tel. +14084142500
cell. +4915144336040
[email protected]
", "msg_date": "Thu, 13 Feb 2014 13:13:34 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "Vladimir,

pgbouncer works with pl/proxy in transaction pooling mode. The widespread claim that statement pooling is required for plproxy does not mean any limitation on transaction pooling mode, as long as you do not have autocommit enabled on the client.
Anyway, try to reduce the number of connections.

Try setting your autovacuum a bit more aggressively:

 autovacuum_analyze_scale_factor = 0.05  # or something like that
 autovacuum_analyze_threshold = 5
 autovacuum_freeze_max_age = 200000000
 autovacuum_max_workers = 20             # that is fine for slow disks
 autovacuum_naptime = 1
 autovacuum_vacuum_cost_delay = 5        # or at least 10
 autovacuum_vacuum_cost_limit = -1
 autovacuum_vacuum_scale_factor = 0.01   # keep this one really aggressive; otherwise you simply postpone huge vacuums and the related disk I/O, and smaller portions are better
 autovacuum_vacuum_threshold = 20

You will probably also need some ionice for the autovacuum workers.
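A minimal sketch of what that ionice step could look like; this assumes Linux with the CFQ I/O scheduler (the one where ionice classes take effect) and is not part of the original advice:

    # Put the currently running autovacuum workers into the idle I/O class.
    # Worker PIDs change as workers start and exit, so this would need to
    # be re-run periodically, e.g. from cron.
    for pid in $(pgrep -f 'autovacuum worker'); do
        ionice -c 3 -p "$pid"
    done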
On Thu, Feb 13, 2014 at 1:13 PM, Ilya Kosmodemiansky <[email protected]> wrote:
> [...]

-- 
Ilya Kosmodemiansky

Database consultant,
PostgreSQL-Consulting.com

tel. +14084142500
cell. +4915144336040
[email protected]
", "msg_date": "Thu, 13 Feb 2014 13:20:30 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with ExclusiveLock on inserts" }, { "msg_contents": "With these settings, autovacuuming of all tables became more frequent (as expected), but not of the big tables with lots of inserts (and with absolutely no updates). I have set up a cron script that runs "vacuum analyze" on all databases every hour, and it seems that, for now, I don't have any performance problems.

Ilya and Emre, thank you for the help.
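For reference, a minimal sketch of what such a cron entry could look like. The actual script was not posted; the vacuumdb path below matches the PGDG 9.3 RPM layout on RHEL and is an assumption:

    # /etc/cron.d entry (hypothetical): hourly vacuum analyze of all
    # databases, run as the postgres user; vacuumdb ships with PostgreSQL.
    0 * * * * postgres /usr/pgsql-9.3/bin/vacuumdb --all --analyze --quiet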
A wide spread phrase that statement mode is for plproxy does not mean any limitations for transaction pooling mode until you have atocommit on client. Anyway, try to reduce connections.\n> \n> try to set your autovacuum a bit more aggressive:\n> \n> \n> autovacuum_analyze_scale_\n> factor=0.05 #or like that\n> autovacuum_analyze_threshold=5 \n> autovacuum_freeze_max_age=200000000\n> autovacuum_max_workers=20 # that is fine for slow disks\n> autovacuum_naptime=1 \n> autovacuum_vacuum_cost_delay=5 # or at least 10\n> autovacuum_vacuum_cost_limit =-1 \n> autovacuum_vacuum_scale_factor=0.01 # this setting is to be really aggressive, otherwise you simply postpone huge vacuums and related disk io, smaller portions are better\n> autovacuum_vacuum_threshold=20 \n> \n> probably you will also need some ionice for autovacuum workers\n> \n> \n> On Thu, Feb 13, 2014 at 1:13 PM, Ilya Kosmodemiansky <[email protected]> wrote:\n> Vladimir,\n> \n> pgbouncer works with pl/proxy in transaction pooling mode. A wide spread phrase that statement mode is for plproxy does not mean any limitations for transaction pooling mode until you have atocommit on client. Anyway, try to reduce connections.\n> \n> try to set your autovacuum a bit more aggressive:\n> \n> \n> autovacuum_analyze_scale_factor=0.05 #or like that\n> autovacuum_analyze_threshold=5 \n> autovacuum_freeze_max_age=200000000\n> autovacuum_max_workers=20 # that is fine for slow disks\n> autovacuum_naptime=1 \n> autovacuum_vacuum_cost_delay=5 # or at least 10\n> autovacuum_vacuum_cost_limit =-1 \n> autovacuum_vacuum_scale_factor=0.01 # this setting is to be really aggressive, otherwise you simply postpone huge vacuums and related disk io, smaller portions are better\n> autovacuum_vacuum_threshold=20 \n> \n> probably you will also need some ionice for autovacuum workers\n> \n> \n> \n> On Thu, Feb 13, 2014 at 11:26 AM, Бородин Владимир <[email protected]> wrote:\n> \n> 13.02.2014, в 13:29, Ilya Kosmodemiansky <[email protected]> написал(а):\n> \n>> Vladimir,\n>> \n>> And, any effect on your problem?\n> \n> It worked without problems longer than previous configuration but repeated again several minutes ago :(\n> \n>> \n>> On Thu, Feb 13, 2014 at 9:35 AM, Бородин Владимир <[email protected]> wrote:\n>>> I have limited max connections to 1000, reduced shared buffers to 8G and restarted postgres.\n>> \n>> 1000 is still to much in most cases. With pgbouncer in transaction\n>> pooling mode normaly pool size 8-32, max_connections = 100 (default\n>> value) and client_connections 500-1500 looks more reasonable.\n> \n> Clients for this db are plproxy hosts. As far as I know plproxy can work only with statement pooling.\n> \n>> \n>> \n>>> I have also noticed that this big tables stopped vacuuming automatically a couple of weeks ago. It could be the reason of the problem, I will now try to tune autovacuum parameters to turn it back. But yesterday I ran \"vacuum analyze\" for all relations manually but that did not help.\n>> \n>> How do your autovacuum parameters look like now?\n> \n> They were all default except for vacuum_defer_cleanup_age = 100000. I have increased autovacuum_max_workers = 20 because I have 10 databases with about 10 tables each. That did not make better (I haven't seen more than two auto vacuum workers simultaneously). Then I have tried to set vacuum_cost_limit = 1000. Still not vacuuming big tables. 
Right now the parameters look like this:\n> \n> root@rpopdb01e ~ # fgrep vacuum /var/lib/pgsql/9.3/data/conf.d/postgresql.conf \n> #vacuum_cost_delay = 0 # 0-100 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> vacuum_cost_limit = 1000 # 1-10000 credits\n> vacuum_defer_cleanup_age = 100000 # number of xacts by which cleanup is delayed\n> autovacuum = on # Enable autovacuum subprocess? 'on'\n> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n> autovacuum_max_workers = 20 # max number of autovacuum subprocesses\n> #autovacuum_naptime = 1min # time between autovacuum runs\n> #autovacuum_vacuum_threshold = 50 # min number of row updates before\n> # vacuum\n> #autovacuum_analyze_threshold = 50 # min number of row updates before\n> #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum\n> #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze\n> #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum\n> #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for\n> # autovacuum, in milliseconds;\n> # -1 means use vacuum_cost_delay\n> #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n> # autovacuum, -1 means use\n> # vacuum_cost_limit\n> #vacuum_freeze_min_age = 50000000\n> #vacuum_freeze_table_age = 150000000\n> root@rpopdb01e ~ #\n> \n>> \n>>> 13.02.2014, в 0:14, Ilya Kosmodemiansky <[email protected]> написал(а):\n>>> \n>>> On Wed, Feb 12, 2014 at 8:57 PM, Бородин Владимир <[email protected]> wrote:\n>>> \n>>> \n>>> Yes, this is legacy, I will fix it. We had lots of inactive connections but right now we use pgbouncer for this. When the workload is normal we have some kind of 80-120 backends. Less than 10 of them are in active state. Having problem with locks we get lots of sessions (sometimes more than 1000 of them are in active state). According to vmstat the number of context switches is not so big (less than 20k), so I don't think it is the main reason. Yes, it can aggravate the problem, but imho not create it.\n>>> \n>>> \n>>> \n>>> I'am afraid that is the problem. More than 1000 backends, most of them\n>>> are simply waiting.\n>>> \n>>> \n>>> \n>>> I don't understand the correlation of shared buffers size and synchronous_commit. Could you please explain your statement?\n>>> \n>>> \n>>> \n>>> You need to fsync your huge shared buffers any time your database\n>>> performs checkpoint. By default it usually happens too often because\n>>> checkpoint_timeout is 5min by default. Without bbu, on software raid\n>>> that leads to io spike and you commit waits for wal.\n>>> \n>>> \n>>> \n>>> 12.02.2014, в 23:37, Ilya Kosmodemiansky <[email protected]> написал(а):\n>>> \n>>> another thing which is arguable - concurrency degree. How many of your max_connections = 4000 are actually running? 4000 definitely looks like an overkill and they could be a serious source of concurrency, especially then you have had barrier enabled and software raid.\n>>> \n>>> Plus for 32Gb of shared buffers with synchronous_commit = on especially on heavy workload one should definitely have bbu, otherwise performance will be poor.\n>>> \n>>> \n>>> On Wed, Feb 12, 2014 at 8:20 PM, Бородин Владимир <[email protected]> wrote:\n>>> \n>>> \n>>> Oh, I haven't thought about barriers, sorry. 
Although I use soft raid without batteries, I have turned barriers off on one cluster shard to try it out.\n>>> \n>>> root@rpopdb01e ~ # mount | fgrep data\n>>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime)\n>>> root@rpopdb01e ~ # mount -o remount,nobarrier /dev/md2\n>>> root@rpopdb01e ~ # mount | fgrep data\n>>> /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime,nobarrier)\n>>> root@rpopdb01e ~ #\n>>> \n>>> 12.02.2014, at 21:56, Ilya Kosmodemiansky <[email protected]> wrote:\n>>> \n>>> My question was actually about the barrier option; by default it is enabled on RHEL6/ext4 and can cause a serious bottleneck on io before the disks are actually involved. What does mount say without arguments?\n>>> \n>>> On Feb 12, 2014, at 18:43, Бородин Владимир <[email protected]> wrote:\n>>> \n>>> root@rpopdb01e ~ # fgrep data /etc/fstab\n>>> UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1\n>>> root@rpopdb01e ~ #\n>>> \n>>> According to iostat the disks are not the bottleneck.\n>>> \n>>> 12.02.2014, at 21:30, Ilya Kosmodemiansky <[email protected]> wrote:\n>>> \n>>> Hi Vladimir,\n>>> \n>>> Just in case: how is your ext4 mounted?\n>>> \n>>> Best regards,\n>>> Ilya\n>>> \n>>> On Feb 12, 2014, at 17:59, Бородин Владимир <[email protected]> wrote:\n>>> \n>>> Hi all.\n>>> \n>>> Today I started getting errors like the one below in the logs (it seems that I have not changed anything for the last week). When it happens, the db gets lots of connections in active state, eats 100% cpu and clients get errors (due to timeout).\n>>> \n>>> 2014-02-12 15:44:24.562 MSK,\"rpop\",\"rpopdb_p6\",30061,\"localhost:58350\",52fb5e53.756d,1,\"SELECT waiting\",2014-02-12 15:43:15 MSK,143/264877,1002850566,LOG,00000,\"process 30061 still waiting for ExclusiveLock on extension of relation 26118 of database 24590 after 1000.082 ms\",,,,,\"SQL statement \"\"insert into rpop.rpop_imap_uidls (folder_id, uidl) values (i_folder_id, i_uidl)\"\"\n>>> \n>>> I have read several topics [1, 2, 3, 4] with similar problems but haven't found a good solution. Below is some more diagnostics.\n>>> \n>>> I am running PostgreSQL 9.3.2 installed from RPM packages on RHEL 6.4. The host is running with the following CPU (32 cores) and memory:\n>>> \n>>> root@rpopdb01e ~ # fgrep -m1 'model name' /proc/cpuinfo\n>>> model name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n>>> root@rpopdb01e ~ # free -m\n>>> total used free shared buffers cached\n>>> Mem: 129028 123558 5469 0 135 119504\n>>> -/+ buffers/cache: 3918 125110\n>>> Swap: 16378 0 16378\n>>> root@rpopdb01e ~ #\n>>> \n>>> PGDATA lives on a RAID6 array of 8 ssd-disks with ext4; iostat and atop say the disks are mostly idle. Right now PGDATA takes only 95G.\n>>> The settings changed in postgresql.conf are here [5].\n>>> \n>>> When it happens, the last query from here [6] shows that almost all queries are waiting for ExclusiveLock, although they do a simple insert.\n>>> \n>>> (extend,26647,26825,,,,,,,) | 5459 | ExclusiveLock | 1 | (extend,26647,26825,,,,,,,) | 8053 | ExclusiveLock | 5459,8053\n>>> (extend,26647,26828,,,,,,,) | 5567 | ExclusiveLock | 1 | (extend,26647,26828,,,,,,,) | 5490 | ExclusiveLock | 5567,5490\n>>> (extend,24584,25626,,,,,,,) | 5611 | ExclusiveLock | 1 | (extend,24584,25626,,,,,,,) | 3963 | ExclusiveLock | 5611,3963\n>>> \n>>> I have several databases running on one host with one postmaster process, and the ExclusiveLock is being waited on for many oids. 
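\n>>> A live view of these waits can be had with something like this (a rough sketch over pg_locks; 'extend' is the locktype used for relation extension locks):\n>>> \n>>> SELECT locktype, database, relation, pid, mode, granted\n>>> FROM pg_locks\n>>> WHERE locktype = 'extend'\n>>> ORDER BY granted, pid;\n>>> 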
I suppose the only common thing for all of them is that they are bigger than the others and they get almost no updates and deletes (only inserts and reads). Some more info about one such table is here [7].\n>>> \n>>> I have tried to look at the source code (src/backend/access/heap/hio.c) to understand when the exclusive lock can be taken, but I could only read the comments :) I have also examined the FSM for these tables and their indexes and found that for most of them there are free pages, but there are, for example, such cases:\n>>> \n>>> rpopdb_p0=# select count(*) from pg_freespace('rpop.rpop_uidl') where avail != 0;\n>>> count\n>>> --------\n>>> 115953\n>>> (1 row)\n>>> \n>>> rpopdb_p0=# select count(*) from pg_freespace('rpop.pk_rpop_uidl') where avail != 0;\n>>> count\n>>> -------\n>>> 0\n>>> (1 row)\n>>> \n>>> rpopdb_p0=# \\dS+ rpop.rpop_uidl\n>>> Table \"rpop.rpop_uidl\"\n>>> Column | Type | Modifiers | Storage | Stats target | Description\n>>> --------+------------------------+-----------+----------+--------------+-------------\n>>> popid | bigint | not null | plain | |\n>>> uidl | character varying(200) | not null | extended | |\n>>> Indexes:\n>>> \"pk_rpop_uidl\" PRIMARY KEY, btree (popid, uidl)\n>>> Has OIDs: no\n>>> \n>>> rpopdb_p0=#\n>>> \n>>> \n>>> My questions are:\n>>> 1. Do we consume 100% cpu (in system) trying to get a page from the FSM? Or does it happen while acquiring the exclusive lock? How can I dig into it?\n>>> 2. By how much do we extend the relation once we get the exclusive lock on it?\n>>> 3. Why is the extended page not visible to other backends?\n>>> 4. Is the following situation possible: backend A gets an exclusive lock on some relation to extend it, then the OS CPU scheduler makes a context switch to backend B while backend B is waiting for the exclusive lock on the same relation, and so on for many backends?\n>>> 5. (and the main question) What can I do to get rid of such situations? It is a production cluster and I do not have any ideas what to do with this situation :( Any help would be really appreciated.\n>>> \n>>> [1] http://www.postgresql.org/message-id/[email protected]\n>>> [2] http://pgsql.performance.narkive.com/IrkPbl3f/postgresql-9-2-3-performance-problem-caused-exclusive-locks\n>>> [3] http://www.postgresql.org/message-id/[email protected]\n>>> [4] http://www.postgresql.org/message-id/CAL_0b1sypYeOyNkYNV95nNV2d+4jXTug3HkKF6FahfW7Gvgb_Q@mail.gmail.com\n>>> [5] http://pastebin.com/raw.php?i=Bd40Vn6h\n>>> [6] http://wiki.postgresql.org/wiki/Lock_dependency_information\n>>> [7] http://pastebin.com/raw.php?i=eGrtG524\n>>> \n>>> --\n>>> Vladimir\n>>> \n>>> \n>>> --\n>>> Vladimir\n>>> \n>>> \n>>> --\n>>> May the force be with you...\n>>> http://simply.name\n>>> \n>>> \n>>> --\n>>> Vladimir\n>>> \n>>> \n>>> --\n>>> Vladimir\n>>> \n> \n> \n> --\n> May the force be with you...\n> http://simply.name\n> \n> \n> -- \n> Ilya Kosmodemiansky\n> \n> Database consultant,\n> PostgreSQL-Consulting.com\n> \n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n> \n> \n> \n> -- \n> Ilya Kosmodemiansky\n> \n> Database consultant,\n> PostgreSQL-Consulting.com\n> \n> tel. +14084142500\n> cell. 
+4915144336040\n> [email protected]\n\n\n--\nMay the force be with you...\nhttp://simply.name", "msg_date": "Wed, 26 Feb 2014 10:32:32 +0400", "msg_from": "=?koi8-r?B?4s/Sz8TJziD3zMHEyc3J0g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with ExclusiveLock on inserts" } ]
[ { "msg_contents": "I'm working with slon and the index portion for at least 3 of my tables\ntake hours to complete and thus with this instance of slony being a wide\narea replica, sessions time out and slon fails to complete.\n\nSo I'm looking at dumping the schema without index information, install\nthat on the slon slave and replicate that way, once replication of the data\nis done, I can run commands to create the indexes.\n\nBut I'm not 100% if there are tools or private scripts written to pull\nindexes from a schema only dump and then allow for an easy recreation of\nthe indexes at the end of the slon replication process (once all sets are\nreplicated)?\n\nThanks\nTory\n\nI'm working with slon and the index portion for at least 3 of my tables take hours to complete and thus with this instance of slony being a wide area replica, sessions time out and slon fails to complete. \nSo I'm looking at dumping the schema without index information, install that on the slon slave and replicate that way, once replication of the data is done, I can run commands to create the indexes.\nBut I'm not 100% if there are tools or private scripts written to pull indexes from a schema only dump and then allow for an easy recreation of the indexes at the end of the slon replication process (once all sets are replicated)?\nThanksTory", "msg_date": "Fri, 14 Feb 2014 13:06:02 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Can one Dump schema without index/constraints?" }, { "msg_contents": "On Sat, Feb 15, 2014 at 6:06 AM, Tory M Blue <[email protected]> wrote:\n>\n>\n> I'm working with slon and the index portion for at least 3 of my tables take\n> hours to complete and thus with this instance of slony being a wide area\n> replica, sessions time out and slon fails to complete.\n>\n> So I'm looking at dumping the schema without index information, install that\n> on the slon slave and replicate that way, once replication of the data is\n> done, I can run commands to create the indexes.\n>\n> But I'm not 100% if there are tools or private scripts written to pull\n> indexes from a schema only dump and then allow for an easy recreation of the\n> indexes at the end of the slon replication process (once all sets are\n> replicated)?\n\"pg_dump --section\" can be used for that. pre-data includes table\ndefinitions and everything other than post-data dumps. post-data has\ncontraint, trigger, index and rules.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 15 Feb 2014 10:55:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can one Dump schema without index/constraints?" 
}, { "msg_contents": "On Fri, Feb 14, 2014 at 5:55 PM, Michael Paquier\n<[email protected]>wrote:\n\n> On Sat, Feb 15, 2014 at 6:06 AM, Tory M Blue <[email protected]> wrote:\n> >\n> >\n> > I'm working with slon and the index portion for at least 3 of my tables\n> take\n> > hours to complete and thus with this instance of slony being a wide area\n> > replica, sessions time out and slon fails to complete.\n> >\n> > So I'm looking at dumping the schema without index information, install\n> that\n> > on the slon slave and replicate that way, once replication of the data is\n> > done, I can run commands to create the indexes.\n> >\n> > But I'm not 100% if there are tools or private scripts written to pull\n> > indexes from a schema only dump and then allow for an easy recreation of\n> the\n> > indexes at the end of the slon replication process (once all sets are\n> > replicated)?\n> \"pg_dump --section\" can be used for that. pre-data includes table\n> definitions and everything other than post-data dumps. post-data has\n> contraint, trigger, index and rules.\n> --\n> Michael\n>\n\nThanks Michael, that looks like the ticket!! Will give this a whirl\n\nThanks again\nTory\n\nOn Fri, Feb 14, 2014 at 5:55 PM, Michael Paquier <[email protected]> wrote:\nOn Sat, Feb 15, 2014 at 6:06 AM, Tory M Blue <[email protected]> wrote:\n>\n>\n> I'm working with slon and the index portion for at least 3 of my tables take\n> hours to complete and thus with this instance of slony being a wide area\n> replica, sessions time out and slon fails to complete.\n>\n> So I'm looking at dumping the schema without index information, install that\n> on the slon slave and replicate that way, once replication of the data is\n> done, I can run commands to create the indexes.\n>\n> But I'm not 100% if there are tools or private scripts written to pull\n> indexes from a schema only dump and then allow for an easy recreation of the\n> indexes at the end of the slon replication process (once all sets are\n> replicated)?\n\"pg_dump --section\" can be used for that. pre-data includes table\ndefinitions and everything other than post-data dumps. post-data has\ncontraint, trigger, index and rules.\n--\nMichael\nThanks Michael, that looks like the ticket!! Will give this a whirlThanks againTory", "msg_date": "Fri, 14 Feb 2014 21:40:27 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can one Dump schema without index/constraints?" } ]
[ { "msg_contents": "Hi,\n\nI’m kind of a noob when it comes to setting up RAID controllers and tweaking them so I need some advice here.\n\nI’m just about to setup my newly rented DELL R720 12. gen server. It’s running a single Intel Xeon E5-2620 v.2 processor and 64GB ECC ram. I have installed 8 300GB SSDs in it. It has an PERC H710 raid controller (Based on the LSI SAS 2208 dual core ROC). \n\nNow my database should be optimized for writing. UPDATEs are by far my biggest bottleneck.\n\nFirstly: Should I just put all 8 drives in one single RAID 10 array, or would it be better to have the 6 of them in one RAID 10 array, and then the remaining two in a separate RAID 1 array e.g. for having WAL log dir on it’s own drives?\n\nSecondly: Now what settings should I pay attention to when setting this up, if I wan’t it to have optimal write performance (cache behavior, write back etc.)?\n\nTHANKS!\n\nBTW i attached a screenshot of some of the settings I can alter:", "msg_date": "Sun, 16 Feb 2014 18:49:21 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "On Sun, Feb 16, 2014 at 11:49 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I'm kind of a noob when it comes to setting up RAID controllers and tweaking them so I need some advice here.\n>\n> I'm just about to setup my newly rented DELL R720 12. gen server. It's running a single Intel Xeon E5-2620 v.2 processor and 64GB ECC ram. I have installed 8 300GB SSDs in it. It has an PERC H710 raid controller (Based on the LSI SAS 2208 dual core ROC).\n>\n> Now my database should be optimized for writing. UPDATEs are by far my biggest bottleneck.\n>\n> Firstly: Should I just put all 8 drives in one single RAID 10 array, or would it be better to have the 6 of them in one RAID 10 array, and then the remaining two in a separate RAID 1 array e.g. for having WAL log dir on it's own drives?\n>\n> Secondly: Now what settings should I pay attention to when setting this up, if I wan't it to have optimal write performance (cache behavior, write back etc.)?\n>\n> THANKS!\n>\n> BTW i attached a screenshot of some of the settings I can alter:\n\nAFAIK, There isn't too much to do here (outside of mapping drives to\nthe array). The major question is about dedicating two drives for\nWAL. I would consider dedicating that much SSD storage for WAL to be\ngross overkill -- disk drives are better suited for that task. Which\nmodel SSD are you using?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Mar 2014 18:34:00 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for writes" } ]
[ { "msg_contents": "Hi,\n\nI’m kind of a noob when it comes to setting up RAID controllers and tweaking them so I need some advice here.\n\nI’m just about to setup my newly rented DELL R720 12. gen server. It’s running a single Intel Xeon E5-2620 v.2 processor and 64GB ECC ram. I have installed 8 300GB SSDs in it. It has an PERC H710 raid controller (Based on the LSI SAS 2208 dual core ROC). \n\nNow my database should be optimized for writing. UPDATEs are by far my biggest bottleneck.\n\nFirstly: Should I just put all 8 drives in one single RAID 10 array, or would it be better to have the 6 of them in one RAID 10 array, and then the remaining two in a separate RAID 1 array e.g. for having WAL log dir on it’s own drives?\n\nSecondly: Now what settings should I pay attention to when setting this up, if I wan’t it to have optimal write performance (cache behavior, write back etc.)?\n\nTHANKS!\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Feb 2014 16:03:38 +0100", "msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "Hi,\nI configured a similar architecture some months ago and this is the best\nchoice after some pgbench and Bonnie++ tests.\nServer: DELL R720d\nCPU: dual Xeon 8-core\nRAM: 32GB ECC\nController PERC H710\nDisks:\n2xSSD (MLC) Raid1 for Operating System (CentOS 6.4)\n4xSSD (SLC) Raid10 for WAL archive and a dedicated \"fast tablespace\", where\nwe have most UPDATE actions (+ Hot spare).\n10xHDD 15kRPM Raid5 for \"default tablespace\" (optimized for space, instead\nof Raid10) (+ Hot spare).\n\nOur application have above 200 UPDATE /sec. (on the \"fast tablespace\") and\nabove 15GB per die of records (on the \"default tablespace\").\n\nAfter the testing phase I had the following conclusion:\n4xSSD (SLC) RAID 10 vs. 10xHDD RAID 5 have comparable I/O performance in\nthe sequential Read and Write, but much more performance on the Random scan\n(obviously!!), BUT as far I know the postgresql I/O processes are not\nheavily involved in a random I/O, so at same price I will prefer to buy\n10xHDD instead of 4xSSD (SLC) using above 10x of available space at the\nsame price!!\n\n10xHDD RAID 10 vs. 10xHDD RAID 5 : with Bonnie++ I noticed a very small\ndifference in I/O performance so I decided to use RAID 5 + a dedicated Hot\nSpare instead of a RAID10.\n\nIf I could go back, I would have spent the money of the SLC in other HDDs.\n\nregards.\n\n\n\n2014-02-17 16:03 GMT+01:00 Niels Kristian Schjødt <\[email protected]>:\n\n> Hi,\n>\n> I’m kind of a noob when it comes to setting up RAID controllers and\n> tweaking them so I need some advice here.\n>\n> I’m just about to setup my newly rented DELL R720 12. gen server. It’s\n> running a single Intel Xeon E5-2620 v.2 processor and 64GB ECC ram. I have\n> installed 8 300GB SSDs in it. It has an PERC H710 raid controller (Based on\n> the LSI SAS 2208 dual core ROC).\n>\n> Now my database should be optimized for writing. UPDATEs are by far my\n> biggest bottleneck.\n>\n> Firstly: Should I just put all 8 drives in one single RAID 10 array, or\n> would it be better to have the 6 of them in one RAID 10 array, and then the\n> remaining two in a separate RAID 1 array e.g. 
\n\n\n\n2014-02-17 16:03 GMT+01:00 Niels Kristian Schjødt <\[email protected]>:\n\n> Hi,\n>\n> I’m kind of a noob when it comes to setting up RAID controllers and\n> tweaking them, so I need some advice here.\n>\n> I’m just about to set up my newly rented DELL R720 12. gen server. It’s\n> running a single Intel Xeon E5-2620 v.2 processor and 64GB ECC ram. I have\n> installed 8 300GB SSDs in it. It has a PERC H710 raid controller (based on\n> the LSI SAS 2208 dual core ROC).\n>\n> Now my database should be optimized for writing. UPDATEs are by far my\n> biggest bottleneck.\n>\n> Firstly: Should I just put all 8 drives in one single RAID 10 array, or\n> would it be better to have 6 of them in one RAID 10 array, and then the\n> remaining two in a separate RAID 1 array, e.g. for having the WAL log dir on\n> its own drives?\n>\n> Secondly: Now what settings should I pay attention to when setting this\n> up, if I want it to have optimal write performance (cache behavior, write\n> back etc.)?\n>\n> THANKS!\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>", "msg_date": "Mon, 17 Feb 2014 16:29:10 +0100", "msg_from": "DFE <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "On 17 Feb 2014, 16:03, Niels Kristian Schjødt wrote:\n> Hi,\n>\n> I’m kind of a noob when it comes to setting up RAID controllers and\n> tweaking them, so I need some advice here.\n>\n> I’m just about to set up my newly rented DELL R720 12. gen server. It’s\n> running a single Intel Xeon E5-2620 v.2 processor and 64GB ECC ram. I have\n> installed 8 300GB SSDs in it. It has a PERC H710 raid controller (based\n> on the LSI SAS 2208 dual core ROC).\n>\n> Now my database should be optimized for writing. 
UPDATEs are by far my\n> biggest bottleneck.\n\nI think it's pretty difficult to answer this without a clear idea of how\nmuch data the UPDATEs modify etc. Locating the data may require a lot of\nreads too.\n\n> Firstly: Should I just put all 8 drives in one single RAID 10 array, or\n> would it be better to have 6 of them in one RAID 10 array, and then\n> the remaining two in a separate RAID 1 array, e.g. for having the WAL log dir\n> on its own drives?\n\nThis used to be done to separate WAL and data files onto separate disks,\nas the workloads are very different (WAL is almost entirely sequential\nwrites, access to data files is often random). With spinning drives this\nmade a huge difference, as the WAL drives were doing just seq writes, but\nwith SSDs it's not that important anymore.\n\nIf you can do some testing, do it. I'd probably create a RAID10 on all 8\ndisks.\n\n> Secondly: Now what settings should I pay attention to when setting this\n> up, if I want it to have optimal write performance (cache behavior, write\n> back etc.)?\n\nI'm wondering whether the controller (H710) actually handles TRIM well or\nnot. I know a lot of hardware controllers tend not to pass TRIM to the\ndrives, which results in poor write performance after some time, but I've\nbeen unable to google anything about TRIM on the H710.\n\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Feb 2014 16:40:21 +0100", "msg_from": "\"Tomas Vondra\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "The thing is, it's difficult to transfer these experiences without a clear\nidea of the workloads.\n\nFor example I wouldn't say 200 updates / second is a write-heavy workload.\nA single 15k drive should handle that just fine, assuming the data fit\ninto RAM (which seems to be the case, but maybe I got that wrong).\n\nNiels, what amounts of data are we talking about? What is the total\ndatabase size? How much data are you updating? Are those updates random,\nor are you updating a lot of data in a sequential manner? How did you\ndetermine UPDATEs are the bottleneck?
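\n\nSomething like this would answer most of that (a rough sketch; run it in the database in question):\n\nSELECT pg_size_pretty(pg_database_size(current_database()));\n\nSELECT relname, n_tup_upd, n_tup_hot_upd,\n       pg_size_pretty(pg_relation_size(relid))\nFROM pg_stat_user_tables\nORDER BY n_tup_upd DESC LIMIT 10;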
\n\nTomas\n\nOn 17 Feb 2014, 16:29, DFE wrote:\n> Hi,\n> I configured a similar architecture some months ago, and this was the best\n> choice after some pgbench and Bonnie++ tests.\n> Server: DELL R720d\n> CPU: dual Xeon 8-core\n> RAM: 32GB ECC\n> Controller: PERC H710\n> Disks:\n> 2xSSD (MLC) Raid1 for Operating System (CentOS 6.4)\n> 4xSSD (SLC) Raid10 for the WAL archive and a dedicated \"fast tablespace\",\n> where\n> we have most UPDATE actions (+ Hot spare).\n> 10xHDD 15kRPM Raid5 for the \"default tablespace\" (optimized for space, instead\n> of Raid10) (+ Hot spare).\n>\n> Our application has about 200 UPDATEs/sec (on the \"fast tablespace\") and\n> above 15GB per day of records (on the \"default tablespace\").\n>\n> After the testing phase I came to the following conclusion:\n> 4xSSD (SLC) RAID 10 vs. 10xHDD RAID 5 have comparable I/O performance for\n> sequential Read and Write, but much better performance on Random scans\n> (obviously!!); BUT as far as I know the postgresql I/O processes are not\n> heavily involved in random I/O, so at the same price I would prefer to buy\n> 10xHDD instead of 4xSSD (SLC), getting above 10x the available space!!\n>\n> 10xHDD RAID 10 vs. 10xHDD RAID 5: with Bonnie++ I noticed a very small\n> difference in I/O performance, so I decided to use RAID 5 + a dedicated Hot\n> Spare instead of a RAID10.\n>\n> If I could go back, I would have spent the money for the SLC drives on more\n> HDDs.\n>\n> regards.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Feb 2014 16:54:27 +0100", "msg_from": "\"Tomas Vondra\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "Hi,\n\nI don't have a PERC H710 raid controller, but I think he would like to know which raid \nstriping/chunk size and which read/write cache ratio in the writeback-cache setting are \nbest. I'd like to know that, too :)\n\nRegards,\n--\nMitsumasa KONDO\nNTT Open Source Software Center\n\n(2014/02/18 0:54), Tomas Vondra wrote:\n> The thing is, it's difficult to transfer these experiences without a clear\n> idea of the workloads.\n>\n> For example I wouldn't say 200 updates / second is a write-heavy workload.\n> A single 15k drive should handle that just fine, assuming the data fit\n> into RAM (which seems to be the case, but maybe I got that wrong).\n>\n> Niels, what amounts of data are we talking about? What is the total\n> database size? How much data are you updating? Are those updates random,\n> or are you updating a lot of data in a sequential manner? How did you\n> determine UPDATEs are the bottleneck?\n>\n> Tomas\n>\n> On 17 Feb 2014, 16:29, DFE wrote:\n>> Hi,\n>> I configured a similar architecture some months ago, and this was the best\n>> choice after some pgbench and Bonnie++ tests.\n>> Server: DELL R720d\n>> CPU: dual Xeon 8-core\n>> RAM: 32GB ECC\n>> Controller: PERC H710\n>> Disks:\n>> 2xSSD (MLC) Raid1 for Operating System (CentOS 6.4)\n>> 4xSSD (SLC) Raid10 for the WAL archive and a dedicated \"fast tablespace\",\n>> where\n>> we have most UPDATE actions (+ Hot spare).\n>> 10xHDD 15kRPM Raid5 for the \"default tablespace\" (optimized for space, instead\n>> of Raid10) (+ Hot spare).\n>>\n>> Our application has about 200 UPDATEs/sec (on the \"fast tablespace\") and\n>> above 15GB per day of records (on the \"default tablespace\").\n>>\n>> After the testing phase I came to the following conclusion:\n>> 4xSSD (SLC) RAID 10 vs. 10xHDD RAID 5 have comparable I/O performance for\n>> sequential Read and Write, but much better performance on Random scans\n>> (obviously!!); BUT as far as I know the postgresql I/O processes are not\n>> heavily involved in random I/O, so at the same price I would prefer to buy\n>> 10xHDD instead of 4xSSD (SLC), getting above 10x the available space!!\n>>\n>> 10xHDD RAID 10 vs. 
10xHDD RAID 5: with Bonnie++ I noticed a very small\n>> difference in I/O performance, so I decided to use RAID 5 + a dedicated Hot\n>> Spare instead of a RAID10.\n>>\n>> If I could go back, I would have spent the money for the SLC drives on more\n>> HDDs.\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Feb 2014 10:23:09 +0900", "msg_from": "KONDO Mitsumasa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "On Mon, Feb 17, 2014 at 8:03 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> Hi,\n>\n> I'm kind of a noob when it comes to setting up RAID controllers and tweaking them, so I need some advice here.\n>\n> I'm just about to set up my newly rented DELL R720 12. gen server. It's running a single Intel Xeon E5-2620 v.2 processor and 64GB ECC ram. I have installed 8 300GB SSDs in it. It has a PERC H710 raid controller (based on the LSI SAS 2208 dual core ROC).\n>\n> Now my database should be optimized for writing. UPDATEs are by far my biggest bottleneck.\n>\n> Firstly: Should I just put all 8 drives in one single RAID 10 array, or would it be better to have 6 of them in one RAID 10 array, and then the remaining two in a separate RAID 1 array, e.g. for having the WAL log dir on its own drives?\n>\n> Secondly: Now what settings should I pay attention to when setting this up, if I want it to have optimal write performance (cache behavior, write back etc.)?\n\nPick a base configuration that's the simplest, i.e. all 8 in a\nRAID-10. Benchmark it to get a baseline, using a load similar to your\nown. You can use pgbench's ability to run scripts to make some pretty\nrealistic benchmarks.
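\n\nSomething along these lines is what I mean (a rough sketch; the script contents, scale and client counts are made-up examples, not a recommendation):\n\n-- update.sql\n\\setrandom aid 1 100000\nUPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;\n\npgbench -i -s 1000 testdb\npgbench -n -c 16 -T 300 -f update.sql testdb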
\n\nOnce you've got your baseline, start\nexperimenting. If you can't prove that moving two drives to RAID-1 for\nxlog makes it faster, then don't do it.\n\nRecently I was testing MLC FusionIO cards (1.2TB) and no matter how I\nsliced things up, one big partition was just as fast as or faster than\nany other configuration (separate spinners for xlog, etc) I could come\nup with. On this machine sequential IO to a RAID-1 pair of those was\n~1GB/s. Random access during various pgbench runs was limited to\n~200MB/s random throughput. Moving half of that (pg_xlog) onto other\nmedia didn't make things any faster and just made setup more\ncomplicated. I'll be testing 6x600GB SSDs in the next few weeks under\nan LSI card, and we'll likely have a spinning drive RAID-1 for pg_xlog\nthere, at least to compare. If you want, I can post what I see from\nthat benchmark next week.\n\nSo how many updates / second do you need to push through this thing?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Feb 2014 21:04:09 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "On Mon, Feb 17, 2014 at 04:29:10PM +0100, DFE wrote:\n>2xSSD (MLC) Raid1 for Operating System (CentOS 6.4)\n>4xSSD (SLC) Raid10 for the WAL archive and a dedicated \"fast tablespace\", where we\n>have most UPDATE actions (+ Hot spare).\n>10xHDD 15kRPM Raid5 for the \"default tablespace\" (optimized for space, instead of\n>Raid10) (+ Hot spare).\n\nThat's basically backwards. The WAL is basically a sequential write-only \nworkload, and there's generally no particular advantage to having it on \nan SSD. Unless you've got a workload that's writing WAL faster than the \nsequential write speed of your spinning disk array (fairly unusual), \nputting the indexes on the SSD and the WAL on the spinning disk would \nprobably result in more bang for the buck.\n\nOne thing I've found to help performance in some workloads is to move \nthe xlog to a simple ext2 partition. There's no reason for that data to\nbe on a journaled fs, and isolating it can keep the synchronous xlog \noperations from interfering with the async table operations. (E.g., it \nenforces separate per-filesystem queues, metadata flushes, etc.; note \nthat there will be essentially no metadata updates on the xlog if there \nare sufficient log segments allocated.)
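\n\nA minimal sketch of that setup (the device and mount point are examples; the server must be stopped while pg_xlog is moved):\n\nmkfs.ext2 /dev/sdb1\nmkdir /pg_xlog && mount -o noatime /dev/sdb1 /pg_xlog\nmv $PGDATA/pg_xlog/* /pg_xlog/\nrmdir $PGDATA/pg_xlog\nln -s /pg_xlog $PGDATA/pg_xlog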
\n\nMike Stone\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Feb 2014 09:25:04 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n> Hi,\n> \n> I don't have a PERC H710 raid controller, but I think he would like to\n> know which raid striping/chunk size and which read/write cache ratio in\n> the writeback-cache setting are best. I'd like to know that, too :)\n\nWe do have dozens of H710 controllers, but not with SSDs. I've been\nunable to find reliable answers on how it handles TRIM, and how that works\nwith wearout reporting (using SMART).\n\nThe stripe size is actually a very good question. On spinning drives it\nusually does not matter too much - unless you have a very specialized\nworkload, the 'medium size' is the right choice (AFAIK we're using 64kB\non the H710, which is the default).\n\nWith SSDs this might actually matter much more, as the SSDs work with\n\"erase blocks\" (mostly 512kB), and I suspect using a small stripe might\nresult in repeated writes to the same block - overwriting one block\nrepeatedly and thus increased wearout. But maybe the controller will\nhandle that just fine, e.g. by coalescing the writes and sending them to\nthe drive as a single write. Or maybe the drive can do that in its local\nwrite cache (all SSDs have that).\n\nThe other thing is filesystem alignment - a few years ago this was a\nmajor issue causing poor write performance. Nowadays this works fine,\nas most tools are able to create partitions properly aligned to the 512kB\nautomatically. But if the controller discards this information, it might\nbe worth messing with the stripe size a bit to get it right.\n\nBut those are mostly speculations / curious questions I've been asking\nmyself recently, as we've been considering SSDs with H710/H710p too.\n\nAs for the controller cache - my opinion is that using this for caching\nwrites is just plain wrong. If you need to cache reads, buy more RAM -\nit's much cheaper, so you can buy more of it. Cache on the controller (with\na BBU) is designed especially for caching writes safely. (And maybe it\ncould help with some of the erase-block issues too?)\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Feb 2014 21:41:23 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "(2014/02/19 5:41), Tomas Vondra wrote:\n> On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n>> Hi,\n>>\n>> I don't have a PERC H710 raid controller, but I think he would like to\n>> know which raid striping/chunk size and which read/write cache ratio in\n>> the writeback-cache setting are best. I'd like to know that, too :)\n>\n> The stripe size is actually a very good question. On spinning drives it\n> usually does not matter too much - unless you have a very specialized\n> workload, the 'medium size' is the right choice (AFAIK we're using 64kB\n> on the H710, which is the default).\nI am interested to hear that the raid stripe size of the PERC H710 is 64kB. In HP raid cards,\nthe default chunk size is 256kB. If we use two disks with raid 0, the stripe size will\nbe 512kB. I think that might be too big, but it might be optimized in the raid \ncard... In practice, it isn't bad with those settings.\n\nI'm interested in the raid card's internal behavior. Fortunately, the linux raid card \ndriver is open source, so we might have a look at the source code when we have time.\n\n> With SSDs this might actually matter much more, as the SSDs work with\n> \"erase blocks\" (mostly 512kB), and I suspect using a small stripe might\n> result in repeated writes to the same block - overwriting one block\n> repeatedly and thus increased wearout. But maybe the controller will\n> handle that just fine, e.g. by coalescing the writes and sending them to\n> the drive as a single write. Or maybe the drive can do that in its local\n> write cache (all SSDs have that).\nI have heard that genuine raid cards with genuine ssds are optimized for these \nssds. It is important for performance to use compatible ssds. In the worst case, the life \ntime of the ssd will be short, and performance will be bad.\n\n\n> But those are mostly speculations / curious questions I've been asking\n> myself recently, as we've been considering SSDs with H710/H710p too.\n>\n> As for the controller cache - my opinion is that using this for caching\n> writes is just plain wrong. If you need to cache reads, buy more RAM -\n> it's much cheaper, so you can buy more of it. Cache on the controller (with\n> a BBU) is designed especially for caching writes safely. (And maybe it\n> could help with some of the erase-block issues too?)\nI'm wondering about the effectiveness of readahead in the OS and in the raid card. In general, \ndata read ahead by the raid card is stored in the raid cache, and not stored in the OS cache. \nData read ahead by the OS is stored in the OS cache. I'd like to use all of the raid cache \nas write cache only, because fsync() becomes faster. But then, the raid card cannot do much \nreadahead. If we hope to use it more effectively, we have to 
But then, it cannot use \nreadahead very much by raid card.. If we hope to use more effectively, we have to \nclear it, but it seems difficult:(\n\nRegards,\n--\nMitsumasa KONDO\nNTT Open Source Software Center\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Feb 2014 11:45:32 +0900", "msg_from": "KONDO Mitsumasa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "On Tue, Feb 18, 2014 at 2:41 PM, Tomas Vondra <[email protected]> wrote:\n> On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n>> Hi,\n>>\n>> I don't have PERC H710 raid controller, but I think he would like to\n>> know raid striping/chunk size or read/write cache ratio in\n>> writeback-cache setting is the best. I'd like to know it, too:)\n>\n> We do have dozens of H710 controllers, but not with SSDs. I've been\n> unable to find reliable answers how it handles TRIM, and how that works\n> with wearout reporting (using SMART).\n\nAFAIK (I haven't looked for a few months), they don't support TRIM.\nThe only hardware RAID vendor that has even basic TRIM support Intel\nand that's no accident; I have a theory that enterprise storage\nvendors are deliberately holding back SSD: SSD (at least, the newer,\nbetter ones) destroy the business model for \"enterprise storage\nequipment\" in a large percentage of applications. A 2u server with,\nsay, 10 s3700 drives gives *far* superior performance to most SANs\nthat cost under 100k$. For about 1/10th of the price.\n\nIf you have a server that is i/o constrained as opposed to storage\nconstrained (AKA: a database) hard drives make zero economic sense.\nIf your vendor is jerking you around by charging large multiples of\nmarket rates for storage and/or disallowing drives that actually\nperform well in their storage gear, choose a new vendor. And consider\nusing software raid.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Feb 2014 09:13:17 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "On Wed, Feb 19, 2014 at 8:13 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Feb 18, 2014 at 2:41 PM, Tomas Vondra <[email protected]> wrote:\n>> On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n>>> Hi,\n>>>\n>>> I don't have PERC H710 raid controller, but I think he would like to\n>>> know raid striping/chunk size or read/write cache ratio in\n>>> writeback-cache setting is the best. I'd like to know it, too:)\n>>\n>> We do have dozens of H710 controllers, but not with SSDs. I've been\n>> unable to find reliable answers how it handles TRIM, and how that works\n>> with wearout reporting (using SMART).\n>\n> AFAIK (I haven't looked for a few months), they don't support TRIM.\n> The only hardware RAID vendor that has even basic TRIM support Intel\n> and that's no accident; I have a theory that enterprise storage\n> vendors are deliberately holding back SSD: SSD (at least, the newer,\n> better ones) destroy the business model for \"enterprise storage\n> equipment\" in a large percentage of applications. 
A 2u server with,\n> say, 10 s3700 drives gives *far* superior performance to most SANs\n> that cost under 100k$. For about 1/10th of the price.\n>\n> If you have a server that is i/o constrained as opposed to storage\n> constrained (AKA: a database), hard drives make zero economic sense.\n> If your vendor is jerking you around by charging large multiples of\n> market rates for storage and/or disallowing drives that actually\n> perform well in their storage gear, choose a new vendor. And consider\n> using software raid.\n\nYou can also do the old trick of underprovisioning and / or\nunderutilizing all the space on SSDs. I.e. put 10 600GB SSDs under a\nHW RAID controller in RAID-10, then only partition out 1/2 the storage\nyou get from that, so you get 1.5TB of storage and the drives are\nunderutilized enough to have spare space.\n\nRight now I'm testing on a machine with 2x Intel E5-2690s\n(http://ark.intel.com/products/64596/intel-xeon-processor-e5-2690-20m-cache-2_90-ghz-8_00-gts-intel-qpi)\n512GB RAM and 6x600GB Intel SSDs (not sure which ones) under an LSI\nMegaRAID 9266. I'm able to crank out 6500 to 7200 TPS under pgbench on\na scale 1000 db at 8 to 60 clients on that machine. It's not cheap,\nbut storage wise it's WAY cheaper than most SANs and very fast.\npg_xlog is on a pair of non-descript SATA spinners btw.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Feb 2014 11:09:05 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "Hi,\n\nOn 19.2.2014 03:45, KONDO Mitsumasa wrote:\n> (2014/02/19 5:41), Tomas Vondra wrote:\n>> On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n>>> Hi,\n>>>\n>>> I don't have a PERC H710 raid controller, but I think he would like to\n>>> know which raid striping/chunk size and which read/write cache ratio in\n>>> the writeback-cache setting are best. I'd like to know that, too :)\n>>\n>> The stripe size is actually a very good question. On spinning drives it\n>> usually does not matter too much - unless you have a very specialized\n>> workload, the 'medium size' is the right choice (AFAIK we're using 64kB\n>> on the H710, which is the default).\n>\n> I am interested to hear that the raid stripe size of the PERC H710 is 64kB. In HP\n> raid cards, the default chunk size is 256kB. If we use two disks with raid\n> 0, the stripe size will be 512kB. I think that might be too big, but it\n> might be optimized in the raid card... In practice, it isn't bad with those\n> settings.\n\nWith HP controllers this depends on the RAID level (and maybe even the\ncontroller). Which HP controller are you talking about? I have some\nbasic experience with P400/P800, and those have 16kB (RAID6), 64kB\n(RAID5) or 128kB (RAID10) defaults. None of them has 256kB.\n\nSee http://bit.ly/1bN3gIs (P800) and http://bit.ly/MdsEKN (P400).\n\n\n> I'm interested in the raid card's internal behavior. Fortunately, the linux raid\n> card driver is open source, so we might have a look at the source code\n> when we have time.\n\nWhat do you mean by \"linux raid card driver\"? 
AFAIK the admin tools may\nbe available, but the interesting stuff happens inside the controller,\nand that's still proprietary.\n\n>> With SSDs this might actually matter much more, as the SSDs work with\n>> \"erase blocks\" (mostly 512kB), and I suspect using small stripe might\n>> result in repeated writes to the same block - overwriting one block\n>> repeatedly and thus increased wearout. But maybe the controller will\n>> handle that just fine, e.g. by coalescing the writes and sending them to\n>> the drive as a single write. Or maybe the drive can do that in local\n>> write cache (all SSDs have that).\n>>\n>> I have heard that genuine raid card with genuine ssds are optimized in\n>> these ssds. It is important that using compatible with ssd for\n>> performance. If the worst case, life time of ssd is be short, and will\n>> be bad performance.\n\nWell, that's the main question here, right? Because if the \"worst case\"\nactually happens to be true, then what's the point of SSDs? You have a\ndisk that does not provide the performance you expected, died much\nsooner than you expected and maybe suddenly so it interrupted the operation.\n\nSo instead of paying more for higher performance, you paid more for bad\nperformance and much shorter life of the disk.\n\nCoincidentally we're currently trying to find the answer to this\nquestion too. That is - how long will the SSD endure in that particular\nRAID level? Does that pay off?\n\nBTW, what do you mean by \"genuine raid card\" and \"genuine ssds\"?\n\n> I'm wondering about effective of readahead in OS and raid card. In \n> general, readahead data by raid card is stored in raid cache, and\n> not stored in OS caches. Readahead data by OS is stored in OS cache.\n> I'd like to use all raid cache for only write cache, because fsync()\n> becomes faster. But then, it cannot use readahead very much by raid\n> card.. 
If we hope to use more effectively, we have to clear it, but\n> it seems difficult:(\n\nI've done a lot of testing of this on H710 in 2012 (~18 months ago),\nmeasuring combinations of\n\n * read-ahead on controller (adaptive, enabled, disabled)\n * read-ahead in kernel (with various sizes)\n * scheduler\n\nThe test was the simplest and most suitable workload for this - just\n\"dd\" with 1MB block size (AFAIK, would have to check the scripts).\n\nIn short, my findings are that:\n\n * read-ahead in kernel matters - tweak this\n * read-ahead on controller sucks - either makes no difference, or\n actually harms performance (adaptive with small values set for\n kernel read-ahead)\n * scheduler made no difference (at least for this workload)\n\nSo we disable readahead on the controller, use 24576 for kernel and it\nworks fine.\n\nI've done the same test with fusionio iodrive (attached to PCIe, not\nthrough controller) - absolutely no difference.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Feb 2014 01:13:25 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "On Wed, Feb 19, 2014 at 12:09 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Feb 19, 2014 at 8:13 AM, Merlin Moncure <[email protected]> wrote:\n>> On Tue, Feb 18, 2014 at 2:41 PM, Tomas Vondra <[email protected]> wrote:\n>>> On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n>>>> Hi,\n>>>>\n>>>> I don't have PERC H710 raid controller, but I think he would like to\n>>>> know raid striping/chunk size or read/write cache ratio in\n>>>> writeback-cache setting is the best. I'd like to know it, too:)\n>>>\n>>> We do have dozens of H710 controllers, but not with SSDs. I've been\n>>> unable to find reliable answers how it handles TRIM, and how that works\n>>> with wearout reporting (using SMART).\n>>\n>> AFAIK (I haven't looked for a few months), they don't support TRIM.\n>> The only hardware RAID vendor that has even basic TRIM support Intel\n>> and that's no accident; I have a theory that enterprise storage\n>> vendors are deliberately holding back SSD: SSD (at least, the newer,\n>> better ones) destroy the business model for \"enterprise storage\n>> equipment\" in a large percentage of applications. A 2u server with,\n>> say, 10 s3700 drives gives *far* superior performance to most SANs\n>> that cost under 100k$. For about 1/10th of the price.\n>>\n>> If you have a server that is i/o constrained as opposed to storage\n>> constrained (AKA: a database) hard drives make zero economic sense.\n>> If your vendor is jerking you around by charging large multiples of\n>> market rates for storage and/or disallowing drives that actually\n>> perform well in their storage gear, choose a new vendor. And consider\n>> using software raid.\n>\n> You can also do the old trick of underprovisioning and / or\n> underutilizing all the space on SSDs. I.e. put 10 600GB SSDs under a\n> HW RAID controller in RAID-10, then only parititon out 1/2 the storage\n> you get from that. 
So you get 1.5TB of storage and the drives are\n> underutilized enough to have spare space.\n>\n> Right now I'm testing on a machine with 2x Intel E5-2690s\n> (http://ark.intel.com/products/64596/intel-xeon-processor-e5-2690-20m-cache-2_90-ghz-8_00-gts-intel-qpi)\n> 512GB RAM and 6x600GB Intel SSDs (not sure which ones) under an LSI\n> MegaRAID 9266. I'm able to crank out 6500 to 7200 TPS under pgbench on\n> a scale 1000 db at 8 to 60 clients on that machine. It's not cheap,\n> but storage wise it's WAY cheaper than most SANs and very fast.\n> pg_xlog is on a pair of non-descript SATA spinners btw.\n\nYeah -- underprovisioning certainly helps but for any write heavy\nconfiguration, all else being equal, TRIM support will perform faster\nand have less wear. Those drives are likely the older 320 600gb. The\nnewer s3700 are much faster although they cost around twice as much.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Feb 2014 18:18:36 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "On 19.2.2014 16:13, Merlin Moncure wrote:\n> On Tue, Feb 18, 2014 at 2:41 PM, Tomas Vondra <[email protected]> wrote:\n>> On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n>>> Hi,\n>>>\n>>> I don't have PERC H710 raid controller, but I think he would like to\n>>> know raid striping/chunk size or read/write cache ratio in\n>>> writeback-cache setting is the best. I'd like to know it, too:)\n>>\n>> We do have dozens of H710 controllers, but not with SSDs. I've been\n>> unable to find reliable answers how it handles TRIM, and how that works\n>> with wearout reporting (using SMART).\n> \n> AFAIK (I haven't looked for a few months), they don't support TRIM.\n> The only hardware RAID vendor that has even basic TRIM support is Intel,\n> and that's no accident; I have a theory that enterprise storage\n> vendors are deliberately holding back SSD: SSD (at least, the newer,\n> better ones) destroy the business model for \"enterprise storage\n> equipment\" in a large percentage of applications. A 2u server with,\n> say, 10 s3700 drives gives *far* superior performance to most SANs\n> that cost under 100k$. For about 1/10th of the price.\n\nYeah, maybe. I'm generally a bit skeptical when it comes to conspiracy\ntheories like this, but for ~1 year we all know that it might easily\nhappen to be true. So maybe ...\n\nNevertheless, I'd guess this is another case of the \"Nobody ever got\nfired for buying X\", where X is a storage product based on spinning\ndrives, proven to be reliable, with known operational statistics and\npretty good understanding of how it works. While \"Y\" is a new thing\nbased on SSDs, that got a rather bad reputation initially because of\nhype and premature usage of consumer-grade products for unsuitable\nstuff. 
Also, each vendor of Y uses different tricks, which makes\napplication of experiences across vendors (or even various generations\nof drives of the same vendor) very difficult.\n\nFactor in how conservative DBAs happen to be, and I think it might be\nthis particular feedback loop, forcing the vendors not to push this.\n\n> If you have a server that is i/o constrained as opposed to storage\n> constrained (AKA: a database) hard drives make zero economic sense.\n> If your vendor is jerking you around by charging large multiples of\n> market rates for storage and/or disallowing drives that actually\n> perform well in their storage gear, choose a new vendor. And consider\n> using software raid.\n\nYeah, exactly.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Feb 2014 01:26:43 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "On 19.2.2014 19:09, Scott Marlowe wrote:\n> \n> You can also do the old trick of underprovisioning and / or\n> underutilizing all the space on SSDs. I.e. put 10 600GB SSDs under a\n> HW RAID controller in RAID-10, then only parititon out 1/2 the storage\n> you get from that. so you get 1.5TB os storage and the drives are\n> underutilized enough to have spare space.\n\nYeah. AFAIK that's basically what Intel did with S3500 -> S3700.\n\nWhat I'm trying to find is the 'sweet spot' considering lifespan,\ncapacity, performance and price. That's why I'm still wondering if there\nare some experiences with current generation of SSDs and RAID\ncontrollers, with RAID levels other than RAID-10.\n\nSay I have 8x 400GB SSD, 75k/32k read/write IOPS each (i.e. it's\nbasically the S3700 from Intel). Assuming the writes are ~25% of the\nworkload, this is what I get for RAID10 vs. RAID6 (math done using\nhttp://www.wmarow.com/strcalc/)\n\n | capacity GB | bandwidth MB/s | IOPS\n ---------------------------------------------------\n RAID-10 | 1490 | 2370 | 300k\n RAID-6 | 2230 | 1070 | 130k\n\nLet's say the app can't really generate 130k IOPS (we'll saturate CPU\nway before that), so even if the real-world numbers will be less than\n50% of this, we're not going to hit disks as the main bottleneck.\n\nSo let's assume there's no observable performance difference between\nRAID10 and RAID6 in our case. But we could put 1.5x the amount of data\non the RAID6, making it much cheaper (we're talking about non-trivial\nnumbers of such machines).\n\nThe question is - how long will it last before the SSDs die because of\nwearout? Will the RAID controller make it worse due to (not) handling\nTRIM? Will we know how much time we have left, i.e. will the controller\nprovide the info the drives provide through SMART?\n\n> Right now I'm testing on a machine with 2x Intel E5-2690s\n> (http://ark.intel.com/products/64596/intel-xeon-processor-e5-2690-20m-cache-2_90-ghz-8_00-gts-intel-qpi)\n> 512GB RAM and 6x600GB Intel SSDs (not sure which ones) under a LSI\n\nMost likely S3500. S3700 are not offered with 600GB capacity.\n\n> MegaRAID 9266. I'm able to crank out 6500 to 7200 TPS under pgbench on\n> a scale 1000 db at 8 to 60 clients on that machine. It's not cheap,\n> but storage wise it's WAY cheaper than most SANS and very fast.\n> pg_xlog is on a pair of non-descript SATA spinners btw.\n\nNice. 
I've done some testing with fusionio iodrive duo (2 devices in\nRAID0) ~ year ago, and I got 12k TPS (or ~15k with WAL on SAS RAID). So\nconsidering the price, the 7.2k TPS is really good IMHO.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Feb 2014 02:10:13 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "On Wed, Feb 19, 2014 at 6:10 PM, Tomas Vondra <[email protected]> wrote:\n> On 19.2.2014 19:09, Scott Marlowe wrote:\n\n>> Right now I'm testing on a machine with 2x Intel E5-2690s\n>> (http://ark.intel.com/products/64596/intel-xeon-processor-e5-2690-20m-cache-2_90-ghz-8_00-gts-intel-qpi)\n>> 512GB RAM and 6x600GB Intel SSDs (not sure which ones) under a LSI\n>\n> Most likely S3500. S3700 are not offered with 600GB capacity.\n>\n>> MegaRAID 9266. I'm able to crank out 6500 to 7200 TPS under pgbench on\n>> a scale 1000 db at 8 to 60 clients on that machine. It's not cheap,\n>> but storage wise it's WAY cheaper than most SANS and very fast.\n>> pg_xlog is on a pair of non-descript SATA spinners btw.\n>\n> Nice. I've done some testing with fusionio iodrive duo (2 devices in\n> RAID0) ~ year ago, and I got 12k TPS (or ~15k with WAL on SAS RAID). So\n> considering the price, the 7.2k TPS is really good IMHO.\n\nThe part number reported by the LSI is: SSDSC2BB600G4 so I'm assuming\nit's an SLC drive. Done some further testing, I keep well over 6k tps\nright up to 128 clients. At no time is there any IOWait under vmstat,\nand if I turn off fsync speed goes up by some tiny amount, so I'm\nguessing I'm CPU bound at this point. This machine has dual 8 core HT\nIntels CPUs.\n\n We have another class of machine running on FusionIO IODrive2 MLC\ncards in RAID-1 and 4 6 core non-HT CPUs. It's a bit slower (1366\nversus 1600MHz Memory, slower CPU clocks and interconects etc) and it\ncan do about 5k tps and again, like the ther machine, no IO Wait, all\nCPU bound. I'd say once you get to a certain level of IO Subsystem it\ngets harder and harder to max it out.\n\nI'd love to have a 64 core 4 socket AMD top of the line system to\ncompare here. But honestly both class of machines are more than fast\nenough for what we need, and our major load is from select statements\nso fitting the db into RAM is more important that IOPs for what we do.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Feb 2014 18:47:58 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for writes" }, { "msg_contents": "Hi,\n\n(2014/02/20 9:13), Tomas Vondra wrote:\n> Hi,\n>\n> On 19.2.2014 03:45, KONDO Mitsumasa wrote:\n>> (2014/02/19 5:41), Tomas Vondra wrote:\n>>> On 18.2.2014 02:23, KONDO Mitsumasa wrote:\n>>>> Hi,\n>>>>\n>>>> I don't have PERC H710 raid controller, but I think he would like to\n>>>> know raid striping/chunk size or read/write cache ratio in\n>>>> writeback-cache setting is the best. I'd like to know it, too:)\n>>>\n>>> The stripe size is actually a very good question. 
On spinning drives it\n>>> usually does not matter too much - unless you have a very specialized\n>>> workload, the 'medium size' is the right choice (AFAIK we're using 64kB\n>>> on H710, which is the default).\n>>\n>> I am interested that raid stripe size of PERC H710 is 64kB. In HP\n>> raid card, default chunk size is 256kB. If we use two disks with raid\n>> 0, stripe size will be 512kB. I think that it might too big, but it\n>> might be optimized in raid card... In actually, it isn't bad in that\n>> settings.\n>\n> With HP controllers this depends on RAID level (and maybe even\n> controller). Which HP controller are you talking about? I have some\n> basic experience with P400/P800, and those have 16kB (RAID6), 64kB\n> (RAID5) or 128kB (RAID10) defaults. None of them has 256kB.\n> See http://bit.ly/1bN3gIs (P800) and http://bit.ly/MdsEKN (P400).\nI use the P410 and P420, which are equipped in the DL360 gen7 and DL360 gen8.\nThey seem relatively recent. I checked the raid stripe size (RAID1+0) using the\nhpacucli tool, and it is indeed a 256kB chunk size. The P420 also allows setting\nsmaller or larger chunk sizes, in a range from 8KB to 1024kB or more. But I don't\nknow the best parameter for postgres:(\n\n>> I'm interested in raid card internal behavior. Fortunately, linux raid\n>> card driver is open souce, so we might good at looking the source code\n>> when we have time.\n>\n> What do you mean by \"linux raid card driver\"? AFAIK the admin tools may\n> be available, but the interesting stuff happens inside the controller,\n> and that's still proprietary.\nI meant the open source driver. The HP drivers are under the following url:\nhttp://cciss.sourceforge.net/\n\nHowever, from a rough read of the driver source code, the core part of the raid\ncard programming is in firmware, as you say. The driver seems to just drive the\ncard from the OS. I'm interested in the elevator algorithm when I read the\ndriver source code, but the detailed algorithm might be in firmware..\n\n>>> With SSDs this might actually matter much more, as the SSDs work with\n>>> \"erase blocks\" (mostly 512kB), and I suspect using small stripe might\n>>> result in repeated writes to the same block - overwriting one block\n>>> repeatedly and thus increased wearout. But maybe the controller will\n>>> handle that just fine, e.g. by coalescing the writes and sending them to\n>>> the drive as a single write. Or maybe the drive can do that in local\n>>> write cache (all SSDs have that).\n>>\n>> I have heard that genuine raid card with genuine ssds are optimized in\n>> these ssds. It is important that using compatible with ssd for\n>> performance. If the worst case, life time of ssd is be short, and will\n>> be bad performance.\n>\n> Well, that's the main question here, right? Because if the \"worst case\"\n> actually happens to be true, then what's the point of SSDs?\nSorry, this thread's topic is SSD striping size tuning. I'm especially\ninterested in magnetic disks, but also interested in SSDs.\n\n> You have a\n> disk that does not provide the performance you expected, died much\n> sooner than you expected and maybe suddenly so it interrupted the operation.\n> So instead of paying more for higher performance, you paid more for bad\n> performance and much shorter life of the disk.\nI'm interested in the point that changing the raid chunk size can shorten SSD\nlife. I had not considered this. It might be true, and I'd like to test it\nusing a SMART checker if we have time.\n\n\n> Coincidentally we're currently trying to find the answer to this\n> question too. 
That is - how long will the SSD endure in that particular\n> RAID level? Does that pay off?\n>\n> BTW, what do you mean by \"genuine raid card\" and \"genuine ssds\"?\nBy \"genuine\" I mean that they are from the same manufacturer or vendor.\n\n>> I'm wondering about effective of readahead in OS and raid card. In\n>> general, readahead data by raid card is stored in raid cache, and\n>> not stored in OS caches. Readahead data by OS is stored in OS cache.\n>> I'd like to use all raid cache for only write cache, because fsync()\n>> becomes faster. But then the raid card cannot use readahead very much.\n>> If we hope to use it more effectively, we have to clear the cache, but\n>> it seems difficult:(\n>\n> I've done a lot of testing of this on H710 in 2012 (~18 months ago),\n> measuring combinations of\n>\n> * read-ahead on controller (adaptive, enabled, disabled)\n> * read-ahead in kernel (with various sizes)\n> * scheduler\n>\n> The test was the simplest and most suitable workload for this - just\n> \"dd\" with 1MB block size (AFAIK, would have to check the scripts).\n>\n> In short, my findings are that:\n>\n> * read-ahead in kernel matters - tweak this\n> * read-ahead on controller sucks - either makes no difference, or\n> actually harms performance (adaptive with small values set for\n> kernel read-ahead)\n> * scheduler made no difference (at least for this workload)\n>\n> So we disable readahead on the controller, use 24576 for kernel and it\n> works fine.\n>\n> I've done the same test with fusionio iodrive (attached to PCIe, not\n> through controller) - absolutely no difference.\nI'd like to know the random access (8kB) performance; this test does not seem\nto show it. But this is interesting data. What command did you use to set the\nkernel readahead parameter?\nIf you use blockdev, a value of 256 indicates 256 * 512B (sector size) = 128kB\nof readahead.\nAnd you set 24576, which is 24576 * 512B = 12MB of readahead.\nI think it is big, but it is optimized for your environment.\nAt the end of the day, is a very big readahead better than a small one or\nnone at all? If we have big RAM, it seems true, but perhaps not in other\nsituations. It is a difficult problem.\n\nRegards,\n--\nMitsumasa KONDO\nNTT Open Source Software Center\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Feb 2014 17:29:12 +0900", "msg_from": "KONDO Mitsumasa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" }, { "msg_contents": "On 20.2.2014 02:47, Scott Marlowe wrote:\n> On Wed, Feb 19, 2014 at 6:10 PM, Tomas Vondra <[email protected]> wrote:\n>> On 19.2.2014 19:09, Scott Marlowe wrote:\n\n>>> Right now I'm testing on a machine with 2x Intel E5-2690s\n>>> (http://ark.intel.com/products/64596/intel-xeon-processor-e5-2690-20m-cache-2_90-ghz-8_00-gts-intel-qpi)\n>>> 512GB RAM and 6x600GB Intel SSDs (not sure which ones) under an LSI\n>>\n>> Most likely S3500. S3700 are not offered with 600GB capacity.\n>>\n>>> MegaRAID 9266. I'm able to crank out 6500 to 7200 TPS under pgbench on\n>>> a scale 1000 db at 8 to 60 clients on that machine. It's not cheap,\n>>> but storage wise it's WAY cheaper than most SANs and very fast.\n>>> pg_xlog is on a pair of non-descript SATA spinners btw.\n>>\n>> Nice. 
I've done some testing with fusionio iodrive duo (2 devices in\n>> RAID0) ~ year ago, and I got 12k TPS (or ~15k with WAL on SAS RAID). So\n>> considering the price, the 7.2k TPS is really good IMHO.\n> \n> The part number reported by the LSI is: SSDSC2BB600G4 so I'm assuming\n> it's an SLC drive. Done some further testing, I keep well over 6k tps\n> right up to 128 clients. At no time is there any IOWait under vmstat,\n> and if I turn off fsync speed goes up by some tiny amount, so I'm\n> guessing I'm CPU bound at this point. This machine has dual 8 core HT\n> Intels CPUs.\n\nNo, it's S3500, which is an MLC drive.\n\nhttp://ark.intel.com/products/74944/Intel-SSD-DC-S3500-Series-600GB-2_5in-SATA-6Gbs-20nm-MLC\n\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Feb 2014 00:58:38 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimal settings for RAID controller - optimized for\n writes" } ]
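A concrete footnote to the kernel read-ahead discussion in this thread: the value blockdev reports and sets is in 512-byte sectors, so the 24576 mentioned above corresponds to 24576 * 512B = 12MB. The thread does not say which tool was used to set it, so the following is only a minimal sketch, and /dev/sdb is a placeholder device name, not one taken from the thread:

    # show the current kernel read-ahead, in 512-byte sectors
    blockdev --getra /dev/sdb

    # set it to 24576 sectors = 24576 * 512B = 12MB, as in the H710 tests above
    blockdev --setra 24576 /dev/sdb

The setting does not survive a reboot, so it is typically reapplied from a boot script or a udev rule.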
[ { "msg_contents": "I have postgresql 8.4.15 on Ubuntu 10.04 and this query:\n\n SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID\n FROM ALARM_EVENT eventTable\n INNER JOIN ALARM_EVENT_PROBE probeTable\n ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID\n WHERE probeTable.PROBE_ID = 2\n\nwhich is running slower than it could. Table definitions and explain\nanalyze output below.\nThe first explain is the current plan (uses sequential scans).\nThe second is after I have disabled sequential scans, and is the plan\nI would prefer.\n\nI have vacuum analyzed both tables. In terms of relevant changes to\nthe default postgresql.conf, we have these:\n\n shared_buffers = 28MB\n constraint_exclusion = on\n\nI want to understand why the optimiser is choosing the plan with\nsequential table scans, rather than the plan with index scans.\nI am not sure how to interpret the predicted vs actual times/costs,\nand want to understand why the predicted cost for the index scan plan\nseems to be way off.\n\nI have read: https://wiki.postgresql.org/images/4/45/Explaining_EXPLAIN.pdf\n\n\n show server_version;\n\n server_version\n ----------------\n 8.4.15\n\n\n Table \"public.alarm_event_probe\"\n Column | Type | Modifiers\n ----------------------+---------+-----------\n alarm_event_id | bigint | not null\n probe_id | integer | not null\n probe_alarm_event_id | bigint | not null\n Indexes:\n \"alarm_event_probe_pkey\" PRIMARY KEY, btree (alarm_event_id)\n \"alarm_event_probe_unique\" UNIQUE, btree (probe_id,\nprobe_alarm_event_id)\n Foreign-key constraints:\n \"alarm_event_probe_fk\" FOREIGN KEY (probe_id) REFERENCES probe(probe_id)\n Triggers:\n alarm_event_probe_alarm_event_foreign_key_trigger BEFORE\nINSERT OR UPDATE ON alarm_event_probe FOR EACH ROW EXECUTE PROCEDURE\nalarm_event_foreign_key_function()\n\n Table \"public.alarm_event\"\n Column | Type |\n Modifiers\n ------------------------+-----------------------------+----------------------------------------------------------------------\n alarm_event_id | bigint | not null\ndefault nextval('alarm_event_alarm_event_id_seq'::regclass)\n timestamp_device_utc | timestamp without time zone |\n timestamp_manager_utc | timestamp without time zone |\n timestamp_proxy_utc | timestamp without time zone | not null\n timestamp_database_utc | timestamp without time zone | not null\ndefault systimestamp()\n device_name | character varying(32) | not null\n device_location | character varying(4) | not null\n device_type | character varying(6) | not null\n device_category | character varying(32) | not null\n device_instance | integer | not null\n proxy_name | character varying(128) | not null\n proxy_instance | character varying(32) | not null\n proxy_source | character varying(256) | not null\n native_alarm_id | character varying(32) |\n alarm_name | character varying(64) | not null\n alarm_severity | integer |\n alarm_description | character varying(1024) | not null\n org_code | integer | not null default 348\n domain_id | integer |\n service_state | integer | not null default 0\n proactive | boolean | not null\ndefault true\n alarm_id | bigint |\n Indexes:\n \"alarm_event_pkey\" PRIMARY KEY, btree (alarm_event_id)\n Check constraints:\n \"alarm_event_alarm_severity_range\" CHECK (alarm_severity >= 0\nAND alarm_severity <= 5)\n \"alarm_event_service_state_range\" CHECK (service_state >= 0\nAND service_state <= 2)\n Foreign-key constraints:\n \"alarm_event_domain_fk\" FOREIGN KEY (domain_id) REFERENCES\ndomain(domain_id)\n 
\"alarm_event_organisation_fk\" FOREIGN KEY (org_code)\nREFERENCES organisation(org_code)\n Triggers:\n alarm_event_1_raw_trigger AFTER INSERT ON alarm_event FOR EACH\nROW EXECUTE PROCEDURE alarm_event_raw_trigger_function()\n alarm_event_2_trigger_a AFTER INSERT OR DELETE OR UPDATE ON\nalarm_event FOR EACH ROW EXECUTE PROCEDURE\nalarm_event_trigger_function()\n alarm_event_probe_alarm_event_delete_cascade_trigger BEFORE\nDELETE ON alarm_event FOR EACH ROW EXECUTE PROCEDURE\nalarm_event_delete_alarm_event_probe_function()\n alarm_event_trigger_b BEFORE INSERT OR DELETE OR UPDATE ON\nalarm_event FOR EACH ROW EXECUTE PROCEDURE\nalarm_event_trigger_function()\n alarm_event_zz_insert_trigger AFTER INSERT ON alarm_event FOR\nEACH ROW EXECUTE PROCEDURE alarm_event_insert()\n Number of child tables: 33 (Use \\d+ to list them.)\n\n\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=451984.11..451984.12 rows=1 width=8) (actual\ntime=248693.886..248693.889 rows=1 loops=1)\n -> Hash Join (cost=107.06..443578.92 rows=3362075 width=8)\n(actual time=2521.275..248683.459 rows=1934 loops=1)\n Hash Cond: (eventtable.alarm_event_id = probetable.alarm_event_id)\n -> Append (cost=0.00..370989.07 rows=7772408 width=8)\n(actual time=14.364..220430.509 rows=7771865 loops=1)\n -> Seq Scan on alarm_event eventtable\n(cost=0.00..3.00 rows=1 width=8) (actual time=0.088..0.088 rows=0\nloops=1)\n -> Seq Scan on alarm_event_y2011m12 eventtable\n(cost=0.00..12638.54 rows=290254 width=8) (actual\ntime=14.267..11136.290 rows=290254 loops=1)\n -> Seq Scan on alarm_event_y2011m10 eventtable\n(cost=0.00..9146.57 rows=214457 width=8) (actual time=7.719..5820.900\nrows=214457 loops=1)\n -> Seq Scan on alarm_event_y2011m09 eventtable\n(cost=0.00..1183.60 rows=27660 width=8) (actual time=0.046..111.918\nrows=27660 loops=1)\n -> Seq Scan on alarm_event_y2011m11 eventtable\n(cost=0.00..6209.42 rows=149342 width=8) (actual time=0.055..662.708\nrows=149342 loops=1)\n -> Seq Scan on alarm_event_y2012m01 eventtable\n(cost=0.00..8661.84 rows=207184 width=8) (actual time=0.075..943.065\nrows=207184 loops=1)\n -> Seq Scan on alarm_event_y2012m02 eventtable\n(cost=0.00..7824.78 rows=182378 width=8) (actual time=0.051..6620.416\nrows=182378 loops=1)\n -> Seq Scan on alarm_event_y2012m03 eventtable\n(cost=0.00..16717.50 rows=386350 width=8) (actual\ntime=101.301..14018.406 rows=386350 loops=1)\n -> Seq Scan on alarm_event_y2012m04 eventtable\n(cost=0.00..10125.17 rows=238117 width=8) (actual\ntime=25.155..3013.045 rows=238117 loops=1)\n -> Seq Scan on alarm_event_y2012m05 eventtable\n(cost=0.00..10747.73 rows=251573 width=8) (actual time=0.058..1605.062\nrows=251573 loops=1)\n -> Seq Scan on alarm_event_y2012m06 eventtable\n(cost=0.00..16638.79 rows=387879 width=8) (actual time=0.122..4477.169\nrows=387879 loops=1)\n -> Seq Scan on alarm_event_y2012m07 eventtable\n(cost=0.00..9675.58 rows=222658 width=8) (actual time=85.891..8504.216\nrows=222658 loops=1)\n -> Seq Scan on alarm_event_y2012m08 eventtable\n(cost=0.00..9570.01 rows=234201 width=8) (actual\ntime=106.049..7978.829 rows=234201 loops=1)\n -> Seq Scan on alarm_event_y2012m09 eventtable\n(cost=0.00..12731.91 rows=300791 width=8) (actual time=5.516..5596.174\nrows=300791 loops=1)\n -> Seq Scan on alarm_event_y2012m10 eventtable\n(cost=0.00..11052.83 rows=266483 width=8) (actual time=0.064..1159.065\nrows=266483 loops=1)\n -> Seq Scan on 
alarm_event_y2012m11 eventtable\n(cost=0.00..9540.78 rows=226778 width=8) (actual time=0.045..878.269\nrows=226778 loops=1)\n -> Seq Scan on alarm_event_y2012m12 eventtable\n(cost=0.00..8110.64 rows=208464 width=8) (actual time=0.061..883.399\nrows=208464 loops=1)\n -> Seq Scan on alarm_event_y2013m01 eventtable\n(cost=0.00..13391.78 rows=272178 width=8) (actual\ntime=0.028..10847.953 rows=270708 loops=1)\n -> Seq Scan on alarm_event_y2013m02 eventtable\n(cost=0.00..15720.65 rows=297365 width=8) (actual\ntime=134.814..8192.080 rows=297204 loops=1)\n -> Seq Scan on alarm_event_y2013m03 eventtable\n(cost=0.00..41027.29 rows=810229 width=8) (actual\ntime=11.567..8419.318 rows=805929 loops=1)\n -> Seq Scan on alarm_event_y2013m04 eventtable\n(cost=0.00..13382.35 rows=253135 width=8) (actual time=0.072..9731.649\nrows=253329 loops=1)\n -> Seq Scan on alarm_event_y2013m05 eventtable\n(cost=0.00..9793.55 rows=175455 width=8) (actual time=4.434..9965.956\nrows=176525 loops=1)\n -> Seq Scan on alarm_event_y2013m06 eventtable\n(cost=0.00..9961.74 rows=184074 width=8) (actual\ntime=123.567..11228.522 rows=184335 loops=1)\n -> Seq Scan on alarm_event_y2013m07 eventtable\n(cost=0.00..10330.15 rows=190215 width=8) (actual time=4.728..4839.910\nrows=189743 loops=1)\n -> Seq Scan on alarm_event_y2013m08 eventtable\n(cost=0.00..8808.29 rows=160329 width=8) (actual time=8.731..2313.534\nrows=161477 loops=1)\n -> Seq Scan on alarm_event_y2013m09 eventtable\n(cost=0.00..14232.17 rows=273617 width=8) (actual time=0.035..1367.249\nrows=273621 loops=1)\n -> Seq Scan on alarm_event_y2013m10 eventtable\n(cost=0.00..17202.44 rows=320544 width=8) (actual time=0.105..1853.031\nrows=320310 loops=1)\n -> Seq Scan on alarm_event_y2013m11 eventtable\n(cost=0.00..15857.97 rows=278997 width=8) (actual time=0.044..7627.316\nrows=281997 loops=1)\n -> Seq Scan on alarm_event_y2013m12 eventtable\n(cost=0.00..15012.38 rows=278738 width=8) (actual\ntime=121.673..11397.349 rows=280114 loops=1)\n -> Seq Scan on alarm_event_y2014m01 eventtable\n(cost=0.00..17534.21 rows=331521 width=8) (actual\ntime=70.202..7098.822 rows=330846 loops=1)\n -> Seq Scan on alarm_event_y2014m02 eventtable\n(cost=0.00..8124.21 rows=151321 width=8) (actual time=22.929..1894.643\nrows=151158 loops=1)\n -> Seq Scan on alarm_event_y2014m03 eventtable\n(cost=0.00..10.40 rows=40 width=8) (actual time=0.003..0.003 rows=0\nloops=1)\n -> Seq Scan on alarm_event_y2014m04 eventtable\n(cost=0.00..10.40 rows=40 width=8) (actual time=0.004..0.004 rows=0\nloops=1)\n -> Seq Scan on alarm_event_y2014m05 eventtable\n(cost=0.00..10.40 rows=40 width=8) (actual time=0.003..0.003 rows=0\nloops=1)\n -> Hash (cost=82.89..82.89 rows=1934 width=16) (actual\ntime=16.402..16.402 rows=1934 loops=1)\n -> Seq Scan on alarm_event_probe probetable\n(cost=0.00..82.89 rows=1934 width=16) (actual time=0.403..8.985\nrows=1934 loops=1)\n Filter: (probe_id = 2)\n Total runtime: 248864.890 ms\n (42 rows)\n\n set enable_seqscan = false;\n SET\n\n\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=502051.06..502051.07 rows=1 width=8) (actual\ntime=11597.236..11597.239 rows=1 loops=1)\n -> Nested Loop (cost=0.00..493645.84 rows=3362085 width=8)\n(actual time=151.505..11587.339 rows=1934 loops=1)\n Join Filter: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using alarm_event_probe_pkey 
on\nalarm_event_probe probetable (cost=0.00..168.49 rows=1934 width=16)\n(actual time=143.484..159.889 rows=1934 loops=1)\n Filter: (probe_id = 2)\n -> Append (cost=0.00..254.73 rows=34 width=8) (actual\ntime=5.343..5.874 rows=1 loops=1934)\n -> Index Scan using alarm_event_pkey on\nalarm_event eventtable (cost=0.00..6.65 rows=1 width=8) (actual\ntime=0.008..0.008 rows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2011m12_alarm_event_id_ix on alarm_event_y2011m12\neventtable (cost=0.00..7.89 rows=1 width=8) (actual time=0.378..0.378\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2011m10_alarm_event_id_ix on alarm_event_y2011m10\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.008..0.008\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2011m09_alarm_event_id_ix on alarm_event_y2011m09\neventtable (cost=0.00..7.79 rows=1 width=8) (actual time=0.007..0.007\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2011m11_alarm_event_id_ix on alarm_event_y2011m11\neventtable (cost=0.00..7.85 rows=1 width=8) (actual time=0.008..0.008\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m01_alarm_event_id_ix on alarm_event_y2012m01\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.479..0.480\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m02_alarm_event_id_ix on alarm_event_y2012m02\neventtable (cost=0.00..7.86 rows=1 width=8) (actual time=0.015..0.015\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m03_alarm_event_id_ix on alarm_event_y2012m03\neventtable (cost=0.00..7.91 rows=1 width=8) (actual time=0.016..0.016\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m04_alarm_event_id_ix on alarm_event_y2012m04\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.014..0.014\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m05_alarm_event_id_ix on alarm_event_y2012m05\neventtable (cost=0.00..7.88 rows=1 width=8) (actual time=0.015..0.015\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m06_alarm_event_id_ix on alarm_event_y2012m06\neventtable (cost=0.00..7.91 rows=1 width=8) (actual time=0.071..0.071\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m07_alarm_event_id_ix on alarm_event_y2012m07\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.016..0.016\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m08_alarm_event_id_ix on alarm_event_y2012m08\neventtable (cost=0.00..7.88 rows=1 width=8) (actual time=0.027..0.027\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m09_alarm_event_id_ix on alarm_event_y2012m09\neventtable (cost=0.00..7.89 rows=1 
width=8) (actual time=0.664..0.670\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m10_alarm_event_id_ix on alarm_event_y2012m10\neventtable (cost=0.00..7.88 rows=1 width=8) (actual time=0.381..0.381\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m11_alarm_event_id_ix on alarm_event_y2012m11\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.308..0.309\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2012m12_alarm_event_id_ix on alarm_event_y2012m12\neventtable (cost=0.00..7.86 rows=1 width=8) (actual time=0.033..0.033\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m01_alarm_event_id_ix on alarm_event_y2013m01\neventtable (cost=0.00..7.89 rows=1 width=8) (actual time=0.032..0.032\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m02_alarm_event_id_ix on alarm_event_y2013m02\neventtable (cost=0.00..7.90 rows=1 width=8) (actual time=0.254..0.254\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m03_alarm_event_id_ix on alarm_event_y2013m03\neventtable (cost=0.00..8.03 rows=1 width=8) (actual time=0.030..0.030\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m04_alarm_event_id_ix on alarm_event_y2013m04\neventtable (cost=0.00..7.89 rows=1 width=8) (actual time=0.095..0.095\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m05_alarm_event_id_ix on alarm_event_y2013m05\neventtable (cost=0.00..7.86 rows=1 width=8) (actual time=0.023..0.023\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m06_alarm_event_id_ix on alarm_event_y2013m06\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.104..0.104\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m07_alarm_event_id_ix on alarm_event_y2013m07\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.179..0.180\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m08_alarm_event_id_ix on alarm_event_y2013m08\neventtable (cost=0.00..7.87 rows=1 width=8) (actual time=0.250..0.250\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m09_alarm_event_id_ix on alarm_event_y2013m09\neventtable (cost=0.00..7.89 rows=1 width=8) (actual time=0.626..0.627\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m10_alarm_event_id_ix on alarm_event_y2013m10\neventtable (cost=0.00..7.90 rows=1 width=8) (actual time=0.300..0.300\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m11_alarm_event_id_ix on alarm_event_y2013m11\neventtable (cost=0.00..7.90 rows=1 width=8) (actual time=0.201..0.201\nrows=0 loops=1934)\n Index Cond: 
(eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2013m12_alarm_event_id_ix on alarm_event_y2013m12\neventtable (cost=0.00..7.90 rows=1 width=8) (actual time=0.552..0.552\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2014m01_alarm_event_id_ix on alarm_event_y2014m01\neventtable (cost=0.00..7.91 rows=1 width=8) (actual time=0.323..0.324\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2014m02_alarm_event_id_ix on alarm_event_y2014m02\neventtable (cost=0.00..7.86 rows=1 width=8) (actual time=0.302..0.303\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2014m03_alarm_event_id_ix on alarm_event_y2014m03\neventtable (cost=0.00..3.87 rows=1 width=8) (actual time=0.005..0.005\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2014m04_alarm_event_id_ix on alarm_event_y2014m04\neventtable (cost=0.00..3.87 rows=1 width=8) (actual time=0.005..0.005\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n -> Index Scan using\nalarm_event_y2014m05_alarm_event_id_ix on alarm_event_y2014m05\neventtable (cost=0.00..3.87 rows=1 width=8) (actual time=0.005..0.005\nrows=0 loops=1934)\n Index Cond: (eventtable.alarm_event_id =\nprobetable.alarm_event_id)\n Total runtime: 11599.260 ms\n (75 rows)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Feb 2014 10:54:32 +1300", "msg_from": "Alistair Bayley <[email protected]>", "msg_from_op": true, "msg_subject": "Why is the optimiser choosing the slower query, or, understanding\n explain analyze output" }, { "msg_contents": "On Mon, Feb 17, 2014 at 1:54 PM, Alistair Bayley <[email protected]> wrote:\n\n> I have postgresql 8.4.15 on Ubuntu 10.04 and this query:\n>\n> SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID\n> FROM ALARM_EVENT eventTable\n> INNER JOIN ALARM_EVENT_PROBE probeTable\n> ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID\n> WHERE probeTable.PROBE_ID = 2\n>\n> which is running slower than it could. Table definitions and explain\n> analyze output below.\n> The first explain is the current plan (uses sequential scans).\n> The second is after I have disabled sequential scans, and is the plan\n> I would prefer.\n>\n> I have vacuum analyzed both tables. In terms of relevant changes to\n> the default postgresql.conf, we have these:\n>\n> shared_buffers = 28MB\n> constraint_exclusion = on\n>\n> I want to understand why the optimiser is choosing the plan with\n> sequential table scans, rather than the plan with index scans.\n> I am not sure how to interpret the predicted vs actual times/costs,\n> and want to understand why the predicted cost for the index scan plan\n> seems to be way off.\n>\n\nThe planner clamps the estimated number of rows from an index scan at 1\nrow, even if it actually believes the number will be 0. That makes the\nlogic simpler, avoiding the need to test for division by zero all over the\nplace, and probably makes it more robust to mis-estimation in most use\ncases. 
But in this case, that means it thinks it will find 34 rows, one\nfrom each partition, which is way too high.\n\nNow, there certainly is some cost to test an index and finding that no rows\nin it can match. But your query is probably probing the same spot in each\nindex for each negative match, which means all the blocks are already in\nmemory. But PostgreSQL doesn't know that, so even if it didn't do the\nclamp it would probably still not get the right answer.\n\nCheers,\n\nJeff\n", "msg_date": "Mon, 17 Feb 2014 14:48:44 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is the optimiser choosing the slower query, or,\n understanding explain analyze output" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Mon, Feb 17, 2014 at 1:54 PM, Alistair Bayley <[email protected]> wrote:\n>> I want to understand why the optimiser is choosing the plan with\n>> sequential table scans, rather than the plan with index scans.\n\n> The planner clamps the estimated number of rows from an index scan at 1\n> row, even if it actually believes the number will be 0. That makes the\n> logic simpler, avoiding the need to test for division by zero all over the\n> place, and probably makes it more robust to mis-estimation in most use\n> cases. 
But in this case, that means it thinks it will find 34 rows, one\n> from each partition, which is way too high.\n\nEven if it believed the zero row estimate it's probably getting\ninternally, the cost estimate wouldn't change much, because as you say\nit's still got to assume that the index will be traversed to verify that\nthere's no such row(s).\n\nI notice though that the cost estimate for the seqscan plan isn't all that\nmuch lower than that for the indexscan plan. Probably lowering\nrandom_page_cost a bit would change the planner's mind. We have no\ninformation about total size of database vs available RAM, but if it's\na mostly memory-resident database then such a change would be a good idea.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Feb 2014 20:40:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is the optimiser choosing the slower query, or,\n understanding explain analyze output" }, { "msg_contents": "On 18 February 2014 14:40, Tom Lane <[email protected]> wrote:\n> I notice though that the cost estimate for the seqscan plan isn't all that\n> much lower than that for the indexscan plan. Probably lowering\n> random_page_cost a bit would change the planner's mind. We have no\n> information about total size of database vs available RAM, but if it's\n> a mostly memory-resident database then such a change would be a good idea.\n\nselect pg_size_pretty(pg_database_size('fms'));\n\n pg_size_pretty\n----------------\n 3066 MB\n(1 row)\n\nThe DB sits on a dedicated VM with 2G RAM, of which only about 600M is\ncurrently used. So assuming it is mostly memory-resident seems pretty\nreasonable.\n\nI'm particularly interested in the massive difference between cost and\nactual for the index plan. The seq scan plan has 451984/248694 (ratio\n1.82) for cost/actual, while the index plan has 502051/11597 (ratio\n43.29). At least the seq scan plan is only out by a factor of ~2.\n\nThe row estimate for the Nested Loop op is 3362085 (vs 1934 actual).\nThe optimiser estimated 1934 rows (accurate!) for the\nalarm_event_probe scan. As this table is joined to alarm_event on the\nPK (alarm_event_id), each row in alarm_event_probe can match at most\none row from alarm_event, so the most rows you could expect from the\njoin would be 1934. The optimiser does not seem to realise that the\njoin is 1-to-1, or 1-to-0.\n\nFWIW set random_page_cost = 3.6 causes it to generate the preferred plan.\n\nI was under the impression that the best way to solve these kinds of\noptimiser problems was to ensure that the optimiser had good stats\ninformation etc. There don't seem to be too many ways to direct it\nwhen it makes poor choices.\n\nWhat's the best way to fix this?\n 1. set random_page_cost = 3.0\n 2. 
set enable_seqscan = false;\n\nOr something else?\n\nThanks,\nAlistair\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Feb 2014 15:18:19 +1300", "msg_from": "Alistair Bayley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is the optimiser choosing the slower query, or,\n understanding explain analyze output" }, { "msg_contents": "Alistair Bayley <[email protected]> writes:\n> On 18 February 2014 14:40, Tom Lane <[email protected]> wrote:\n>> I notice though that the cost estimate for the seqscan plan isn't all that\n>> much lower than that for the indexscan plan. Probably lowering\n>> random_page_cost a bit would change the planner's mind. We have no\n>> information about total size of database vs available RAM, but if it's\n>> a mostly memory-resident database then such a change would be a good idea.\n\n> [ database size is 3GB, RAM 2GB ]\n\nThe usual advice for database-in-RAM scenarios is to set random_page_cost\n= 1, or even to lower both random_page_cost and seq_page_cost below 1.\nIn this case, since it's not going to be entirely RAM-resident, a\ncompromise setting around 2 might be a good idea.\n\n> I'm particularly interested in the massive different between cost and\n> actual for the index plan. The seq scan plan has 451984/248694 (ratio\n> 1.82) for cost/actual, while the index plan has 502051/11597 (ratio\n> 43.29). At least the seq scan plan is only out by a factor of ~2.\n\nMost likely this means that the index plan is taking a lot more advantage\nof locality-of-reference than the planner is giving it credit for.\nI wouldn't put too much faith in those numbers by themselves though,\nbecause that's what nearly always happens if you run the same case\nthrough EXPLAIN more than once: all the data it needs is already in\ncache. It's a good idea to pay attention to what happens when the plan\ndoes have to read in some new data.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Feb 2014 21:30:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is the optimiser choosing the slower query, or,\n understanding explain analyze output" } ]
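An illustration of the advice in this thread: the cost parameters can be tried per session before touching postgresql.conf. A minimal sketch, reusing the query from this thread; the value 2.0 is just the compromise Tom suggests above, not a measured optimum:

    -- affects only the current session
    SET random_page_cost = 2.0;

    EXPLAIN ANALYZE
    SELECT MAX(probeTable.PROBE_ALARM_EVENT_ID) AS MAX_EVENT_ID
    FROM ALARM_EVENT eventTable
    INNER JOIN ALARM_EVENT_PROBE probeTable
       ON eventTable.ALARM_EVENT_ID = probeTable.ALARM_EVENT_ID
    WHERE probeTable.PROBE_ID = 2;

    -- back to the server default
    RESET random_page_cost;

If the index plan wins consistently under realistic cache conditions, the setting can then be made permanent in postgresql.conf, or per database with ALTER DATABASE ... SET random_page_cost.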
[ { "msg_contents": "I am running PG 9.2.4 and I am trying to figure out why my database size \nshows one value, but the sum of my total relation sizes is so much less.\n\nBasically, I'm told my database is 188MB, but the sum of my total \nrelation sizes adds up to just 8.7MB, which is 1/20th of the reported \ntotal. Where is the 19/20th of my data then? We do make significant \nuse of large objects, so I suspect it's in there. Is there a relation \nsize query that would include the large object data associated with any \nOIDs in those tables?\n\nHere's the data I am working off of:\n\nFirst, I run a query to get my total DB size (this is after a restore \nfrom a backup, so it should not have too many \"holes\"):\n\nbpn=# SELECT pg_size_pretty(pg_database_size('bpn'));\n pg_size_pretty\n----------------\n 188 MB\n(1 row)\n\nSecond, I run this query (from \nhttp://wiki.postgresql.org/wiki/Disk_Usage) to get the total relation \nsizes for the tables in that database:\n\nbpn=# SELECT nspname || '.' || relname AS \"relation\",\nbpn-# pg_size_pretty(pg_total_relation_size(C.oid)) AS \"total_size\"\nbpn-# FROM pg_class C\nbpn-# LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\nbpn-# WHERE nspname NOT IN ('pg_catalog', 'information_schema')\nbpn-# AND C.relkind <> 'i'\nbpn-# AND nspname !~ '^pg_toast'\nbpn-# ORDER BY pg_total_relation_size(C.oid) DESC;\n relation | total_size\n---------------------------------------------------------+------------\n public.esf_outbound_email_message | 1624 kB\n public.esf_transaction_activity_log | 968 kB\n public.esf_blob | 560 kB\n public.esf_outbound_email_message_attachment | 552 kB\n public.esf_tran_report_field_string | 232 kB\n public.esf_system_activity_log | 192 kB\n public.esf_permission_option_group | 184 kB\n public.esf_library_document_version_page | 176 kB\n public.esf_transaction | 152 kB\n public.esf_transaction_party | 136 kB\n public.esf_library_dropdown_version_option | 128 kB\n public.esf_signature_key | 112 kB\n public.esf_transaction_document | 104 kB\n public.esf_permission_option | 96 kB\n public.esf_library_email_template_version | 96 kB\n public.esf_transaction_party_document | 96 kB\n public.esf_field_template | 88 kB\n public.esf_transaction_party_assignment | 88 kB\n public.esf_outbound_email_message_response | 88 kB\n public.esf_user_activity_log | 88 kB\n public.esf_library_buttonmessage | 80 kB\n public.esf_library_email_template | 80 kB\n public.esf_library_documentstyle | 80 kB\n public.esf_package | 80 kB\n public.esf_package_version_party_template | 80 kB\n public.esf_permission | 80 kB\n public.esf_report_template | 80 kB\n public.esf_transaction_template | 80 kB\n public.esf_user | 80 kB\n public.esf_userlogin | 80 kB\n public.esf_group | 80 kB\n public.esf_library | 80 kB\n public.esf_library_document | 80 kB\n public.esf_library_dropdown | 80 kB\n public.esf_stats | 80 kB\n public.esf_library_image | 80 kB\n public.esf_library_propertyset | 80 kB\n public.esf_package_version | 72 kB\n public.esf_package_version_document | 72 kB\n public.esf_report_template_report_field | 72 kB\n public.esf_library_document_version | 72 kB\n public.esf_library_documentstyle_version | 72 kB\n public.esf_group_user | 72 kB\n public.esf_report_field_template | 72 kB\n public.esf_library_buttonmessage_version | 72 kB\n public.esf_library_propertyset_version | 72 kB\n public.esf_library_dropdown_version | 72 kB\n public.esf_party_template_field_template | 72 kB\n public.esf_library_image_version | 72 kB\n public.esf_party_template | 72 kB\n 
public.esf_label_template | 56 kB\n public.esf_library_document_version_page_field_template | 56 kB\n public.esf_package_version_report_field | 56 kB\n public.esf_package_version_party_document_party | 56 kB\n public.esf_session_key | 56 kB\n public.esf_userlogin_history | 56 kB\n public.esf_deployment | 56 kB\n public.esf_report_template_transaction_template | 56 kB\n public.esf_library_serial | 24 kB\n public.esf_http_send_request | 24 kB\n public.esf_library_file | 24 kB\n public.esf_tran_report_field_tranfileid | 16 kB\n public.esf_library_file_version | 16 kB\n public.esf_tran_report_field_long | 16 kB\n public.esf_http_send_response | 16 kB\n public.esf_tran_report_field_date | 16 kB\n public.esf_tran_report_field_numeric | 16 kB\n public.esf_library_serial_version | 16 kB\n public.esf_transaction_file | 16 kB\n public.esf_transaction_party_renotify | 16 kB\n public.esf_library_image_version_overlay_field | 8192 bytes\n(71 rows)\n\nBut when I add up those 71 rows, it's only 8,728,192 bytes (roughly 8.7MB).
|| relname AS \"relation\",\nbpn-#     pg_size_pretty(pg_total_relation_size(C.oid)) AS\n \"total_size\"\nbpn-#   FROM pg_class C\nbpn-#   LEFT JOIN pg_namespace N ON (N.oid =\n C.relnamespace)\nbpn-#   WHERE nspname NOT IN ('pg_catalog',\n 'information_schema')\nbpn-#     AND C.relkind <> 'i'\nbpn-#     AND nspname !~ '^pg_toast'\nbpn-#   ORDER BY pg_total_relation_size(C.oid) DESC;\n                        relation                         |\n total_size\n---------------------------------------------------------+------------\n public.esf_outbound_email_message                       |\n 1624 kB\n public.esf_transaction_activity_log                     |\n 968 kB\n public.esf_blob                                         |\n 560 kB\n public.esf_outbound_email_message_attachment            |\n 552 kB\n public.esf_tran_report_field_string                     |\n 232 kB\n public.esf_system_activity_log                          |\n 192 kB\n public.esf_permission_option_group                      |\n 184 kB\n public.esf_library_document_version_page                |\n 176 kB\n public.esf_transaction                                  |\n 152 kB\n public.esf_transaction_party                            |\n 136 kB\n public.esf_library_dropdown_version_option              |\n 128 kB\n public.esf_signature_key                                |\n 112 kB\n public.esf_transaction_document                         |\n 104 kB\n public.esf_permission_option                            |\n 96 kB\n public.esf_library_email_template_version               |\n 96 kB\n public.esf_transaction_party_document                   |\n 96 kB\n public.esf_field_template                               |\n 88 kB\n public.esf_transaction_party_assignment                 |\n 88 kB\n public.esf_outbound_email_message_response              |\n 88 kB\n public.esf_user_activity_log                            |\n 88 kB\n public.esf_library_buttonmessage                        |\n 80 kB\n public.esf_library_email_template                       |\n 80 kB\n public.esf_library_documentstyle                        |\n 80 kB\n public.esf_package                                      |\n 80 kB\n public.esf_package_version_party_template               |\n 80 kB\n public.esf_permission                                   |\n 80 kB\n public.esf_report_template                              |\n 80 kB\n public.esf_transaction_template                         |\n 80 kB\n public.esf_user                                         |\n 80 kB\n public.esf_userlogin                                    |\n 80 kB\n public.esf_group                                        |\n 80 kB\n public.esf_library                                      |\n 80 kB\n public.esf_library_document                             |\n 80 kB\n public.esf_library_dropdown                             |\n 80 kB\n public.esf_stats                                        |\n 80 kB\n public.esf_library_image                                |\n 80 kB\n public.esf_library_propertyset                          |\n 80 kB\n public.esf_package_version                              |\n 72 kB\n public.esf_package_version_document                     |\n 72 kB\n public.esf_report_template_report_field                 |\n 72 kB\n public.esf_library_document_version                     |\n 72 kB\n public.esf_library_documentstyle_version                |\n 72 kB\n public.esf_group_user                                   |\n 72 kB\n public.esf_report_field_template                        |\n 72 kB\n 
public.esf_library_buttonmessage_version                |\n 72 kB\n public.esf_library_propertyset_version                  |\n 72 kB\n public.esf_library_dropdown_version                     |\n 72 kB\n public.esf_party_template_field_template                |\n 72 kB\n public.esf_library_image_version                        |\n 72 kB\n public.esf_party_template                               |\n 72 kB\n public.esf_label_template                               |\n 56 kB\n public.esf_library_document_version_page_field_template |\n 56 kB\n public.esf_package_version_report_field                 |\n 56 kB\n public.esf_package_version_party_document_party         |\n 56 kB\n public.esf_session_key                                  |\n 56 kB\n public.esf_userlogin_history                            |\n 56 kB\n public.esf_deployment                                   |\n 56 kB\n public.esf_report_template_transaction_template         |\n 56 kB\n public.esf_library_serial                               |\n 24 kB\n public.esf_http_send_request                            |\n 24 kB\n public.esf_library_file                                 |\n 24 kB\n public.esf_tran_report_field_tranfileid                 |\n 16 kB\n public.esf_library_file_version                         |\n 16 kB\n public.esf_tran_report_field_long                       |\n 16 kB\n public.esf_http_send_response                           |\n 16 kB\n public.esf_tran_report_field_date                       |\n 16 kB\n public.esf_tran_report_field_numeric                    |\n 16 kB\n public.esf_library_serial_version                       |\n 16 kB\n public.esf_transaction_file                             |\n 16 kB\n public.esf_transaction_party_renotify                   |\n 16 kB\n public.esf_library_image_version_overlay_field          |\n 8192 bytes\n(71 rows)\n\n But when I add up those 71 rows, it's only 8,728,192 bytes (roughly\n 8.7MB).", "msg_date": "Mon, 17 Feb 2014 14:14:44 -0800", "msg_from": "David Wall <[email protected]>", "msg_from_op": true, "msg_subject": "DB size and TABLE sizes don't seem to add up" }, { "msg_contents": "On 02/18/2014 12:14 AM, David Wall wrote:\n> I am running PG 9.2.4 and I am trying to figure out why my database size\n> shows one value, but the sum of my total relation sizes is so much less.\n>\n> Basically, I'm told my database is 188MB, but the sum of my total\n> relation sizes adds up to just 8.7MB, which is 1/20th of the reported\n> total. Where is the 19/20th of my data then? We do make significant\n> use of large objects, so I suspect it's in there. Is there a relation\n> size query that would include the large object data associated with any\n> OIDs in those tables?\n\nYou can use \"select pg_total_relation_size('pg_largeobject')\" to get the \ntotal size of the large objects. Attributing large objects to the tables \nthat refer them is more difficult. 
For a single table, something like this:\n\nselect sum(pg_column_size(lo.data))\nfrom lotest_stash_values t, pg_largeobject lo\nwhere lo.loid = t.loid;\n\nReplace \"lotest_stash_values\" with the table's name and lo.loid with the \nname of the OID column.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Feb 2014 10:34:32 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB size and TABLE sizes don't seem to add up" }, { "msg_contents": "\nOn 2/18/2014 12:34 AM, Heikki Linnakangas wrote:\n> On 02/18/2014 12:14 AM, David Wall wrote:\n>> I am running PG 9.2.4 and I am trying to figure out why my database size\n>> shows one value, but the sum of my total relation sizes is so much less.\n>>\n>> Basically, I'm told my database is 188MB, but the sum of my total\n>> relation sizes adds up to just 8.7MB, which is 1/20th of the reported\n>> total. Where is the 19/20th of my data then? We do make significant\n>> use of large objects, so I suspect it's in there. Is there a relation\n>> size query that would include the large object data associated with any\n>> OIDs in those tables?\n>\n> You can use \"select pg_total_relation_size('pg_largeobject')\" to get \n> the total size of the large objects. Attributing large objects to the \n> tables that refer them is more difficult. For a single table, \n> something like this:\n>\n> select sum(pg_column_size(lo.data))\n> from lotest_stash_values t, pg_largeobject lo\n> where lo.loid = t.loid;\n>\n> Replace \"lotest_stash_values\" with the table's name and lo.loid with \n> the name of the OID column.\n\nThanks, Heikki. It's generally even trickier for us because we have a \nblob table that other components use for storing \nlarge/binary/unstructured objects (the code handles \ncompression/decompression and encryption/decryption options for us). So \nthose tables have an UUID that points to a row in that table that \ncontains the actual LOID. I'll use your technique to at least tell me \nthe size for specific tables where I can build the query like you've \ndescribed.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Feb 2014 09:48:50 -0800", "msg_from": "David Wall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DB size and TABLE sizes don't seem to add up" } ]
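A note following Heikki's suggestion in the thread above: the two measurements can be combined. This is only a sketch; "esf_blob" and its OID column "loid" are assumed stand-ins for whichever table actually holds the large-object OIDs.

-- Total size of all large objects in the current database:
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

-- Large-object bytes reachable from one referring table (hypothetical names):
SELECT pg_size_pretty(sum(pg_column_size(lo.data))::bigint)
FROM esf_blob b
JOIN pg_largeobject lo ON lo.loid = b.loid;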
[ { "msg_contents": "Hello Postgresql experts,\n\nWe are facing issues with our PostgreSQL databases running on Ubuntu\nserver, right after we shifted our databases from OpenSuse O/S.\n\nIt's a new database servers runs fine for most of the time (Avg. Load 0.5\nto 1.0) but suddenly spikes once/twice a day.This happens four times in\nlast three day and during this, simple update/select statements started\ntaking minutes (1 to 5 Minutes) instead of 5-50 mSec.\n\nAnd this max out database 250 connections. This event halt all processes\nfor about 15- 20 min and then everything back to normal. I verified\ncheckpoint and vacuum related activities but this isn't showing any problem\nto me. (attached logs)\n\nTop/vmstat output shows all resources were suddenly utilized by %us during\nsame time. iostat doesn't shows any IO related bottleneck. I have added\ncompleted logs for yesterday outage (13:45 to 14:15) .\n\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n44 0 0 201929344 345260 50775772 0 0 2 15 2 2 2\n0 98 0\n40 0 0 201919264 345260 50775864 0 0 0 224 9409 1663 98\n1 1 0\n40 0 0 201915344 345260 50775880 0 0 0 280 8803 1674 99\n0 0 0\n38 0 0 201911296 345260 50775888 0 0 0 156 8753 1469 99\n0 0 0\n40 0 0 201902416 345260 50775888 0 0 0 224 9060 2775 98\n1 1 0\n\nFree -m\n total used free shared buffers cached\nMem: 251 59 192 0 0 48\n-/+ buffers/cache: 10 241\nSwap: 29 0 29\n\nSystem information.\n\nConnections into our databases are coming from WebServer (running on PHP\nand Apache) and script servers (PHP).We have verified apache logs and we\ndidn't find connection traffic during same interval.\n\nHardware information:\n\nDELL PowerEdge R715\nIntel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz\nUbuntu 12.04.4 LTS\nkernel: 3.8.0-35-generic 64 bit\nPostgresql version: 9.0.13\nRAM: 256 GB 32 Cores CPU\nps_xlog : RAID 1\ndata folder : RAID10 (6 Strips)\nread:write ratio- 85:15\n\n\nPgbouncer configured on database side(250 allowed connections)\n\nPostgresql Configuration:\nDatabase Size: 28GB\nVaccum analyzed daily\ncheckpoint_completion_target = 0.9\nmaintenance_work_mem = 16GB\nshared_buffers = 8GB # we reduced this from 32 GB.\nmax_connections = 300\ncheckpoint_segments = 32\ncheckpoint_timeout = 8min\n\ndetailed postgresql configuration: http://pastie.org/8754957\ncheckpoint/vacuum information http://pastie.org/8754954\nTop command o/p: http://pastie.org/8755007\niostat o/p: http://pastie.org/8755009\nsysctl.configuration : http://pastie.org/8755197\n\nWe have recently upgraded O/S kernels to fix this issue but this it didn't\nhelp. We are tried to modify some O/S parameters based on some discussions-\n\nhttp://www.postgresql.org/message-id/[email protected]\n\nvm.dirty_background_bytes = 33554432 # I reduced this based on some forums.\nvm.dirty_bytes = 536870912\nvm.overcommit_memory=2\nkernel.sched_migration_cost = 5000000\nkernel.sched_autogroup_enabled = 0\n\nWe believe that our PostgreSQL configuration is not correct according to\navailable memory on machine and need some urgent tuning into it.\n\nCould you please guide me on troubleshooting this issue.\n\nThanks in advance.\n\nAshutosh.D\nPSI.\n\nHello Postgresql experts,We are facing issues with our PostgreSQL databases running on Ubuntu server, right after we shifted our databases from OpenSuse O/S.It's a new database servers runs fine for most of the time (Avg. 
Load 0.5 to 1.0) but suddenly spikes once/twice a day.This happens four times in last three day and during this, simple update/select statements started taking minutes (1 to 5 Minutes) instead of 5-50 mSec.\nAnd this max out database 250 connections. This event halt all processes for about 15- 20 min and then everything back to normal. I verified checkpoint and vacuum related activities but this isn't showing any problem to me. (attached logs)\nTop/vmstat output shows all resources were suddenly utilized by %us during same time. iostat doesn't shows any IO related bottleneck. I have added completed logs for yesterday outage (13:45 to 14:15) .\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa44  0      0 201929344 345260 50775772    0    0     2    15    2    2  2  0 98  0\n40  0      0 201919264 345260 50775864    0    0     0   224 9409 1663 98  1  1  040  0      0 201915344 345260 50775880    0    0     0   280 8803 1674 99  0  0  038  0      0 201911296 345260 50775888    0    0     0   156 8753 1469 99  0  0  0\n40  0      0 201902416 345260 50775888    0    0     0   224 9060 2775 98  1  1  0Free -m  total       used       free     shared    buffers     cachedMem:           251         59        192          0          0         48\n-/+ buffers/cache:         10        241Swap:           29          0         29System information.Connections into our databases are coming from WebServer (running on PHP and Apache) and script servers (PHP).We have verified apache logs and we didn't find connection traffic during same interval.\nHardware information:DELL PowerEdge R715 Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHzUbuntu 12.04.4 LTSkernel: 3.8.0-35-generic 64 bitPostgresql version: 9.0.13RAM: 256 GB 32 Cores CPU\nps_xlog : RAID 1data folder : RAID10 (6 Strips)read:write ratio- 85:15Pgbouncer configured on database side(250 allowed connections)Postgresql Configuration:Database Size: 28GB\nVaccum analyzed dailycheckpoint_completion_target = 0.9maintenance_work_mem = 16GBshared_buffers = 8GB # we reduced this from 32 GB.max_connections = 300 checkpoint_segments = 32checkpoint_timeout = 8min\ndetailed postgresql configuration: http://pastie.org/8754957checkpoint/vacuum information http://pastie.org/8754954Top command o/p: http://pastie.org/8755007\niostat o/p: http://pastie.org/8755009sysctl.configuration : http://pastie.org/8755197We have recently upgraded O/S kernels to fix this issue but this it didn't help. 
We are tried to modify some O/S parameters based on some discussions-\nhttp://www.postgresql.org/message-id/[email protected]_background_bytes = 33554432 # I reduced this based on some forums.\nvm.dirty_bytes = 536870912vm.overcommit_memory=2kernel.sched_migration_cost = 5000000 kernel.sched_autogroup_enabled = 0 We believe that our PostgreSQL configuration is not correct according to available memory on machine and need some urgent tuning into it.\nCould you please guide me on troubleshooting this issue.Thanks in advance.Ashutosh.DPSI.", "msg_date": "Fri, 21 Feb 2014 17:52:08 +0530", "msg_from": "Ashutosh Durugkar <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql tunning-- help needed" }, { "msg_contents": "On 21.2.2014 13:22, Ashutosh Durugkar wrote:\n> Hello Postgresql experts,\n> \n> We are facing issues with our PostgreSQL databases running on Ubuntu\n> server, right after we shifted our databases from OpenSuse O/S.\n>\n> It's a new database servers runs fine for most of the time (Avg.\n> Load 0.5 to 1.0) but suddenly spikes once/twice a day.This happens\n> four times in last three day and during this, simple update/select\n> statements started taking minutes (1 to 5 Minutes) instead of 5-50\n> mSec.\n> \n> And this max out database 250 connections. This event halt all processes\n> for about 15- 20 min and then everything back to normal. I verified\n> checkpoint and vacuum related activities but this isn't showing any\n> problem to me. (attached logs)\n\nThat is pretty high number of connections, considering the number of\nCPUs / spindles. That may easily turn into a big issue, considering the\nconfiguration (see below).\n\n> Top/vmstat output shows all resources were suddenly utilized by %us\n> during same time. iostat doesn't shows any IO related bottleneck. I have\n> added completed logs for yesterday outage (13:45 to 14:15) .\n> \n> \n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa\n> 44 0 0 201929344 345260 50775772 0 0 2 15 2 2 \n> 2 0 98 0\n> 40 0 0 201919264 345260 50775864 0 0 0 224 9409 1663\n> 98 1 1 0\n> 40 0 0 201915344 345260 50775880 0 0 0 280 8803 1674\n> 99 0 0 0\n> 38 0 0 201911296 345260 50775888 0 0 0 156 8753 1469\n> 99 0 0 0\n> 40 0 0 201902416 345260 50775888 0 0 0 224 9060 2775\n> 98 1 1 0\n\nWhat about top? Who's eating the CPU? Backend processes or something\nelse? 
Try to run \"perf top\" to see what functions are at the top (I\nsuspect it might be related to spinlocks).\n\n> Free -m\n> total used free shared buffers cached\n> Mem: 251 59 192 0 0 48\n> -/+ buffers/cache: 10 241\n> Swap: 29 0 29\n> \n> System information.\n> \n> Connections into our databases are coming from WebServer (running on PHP\n> and Apache) and script servers (PHP).We have verified apache logs and we\n> didn't find connection traffic during same interval.\n> \n> Hardware information:\n> \n> DELL PowerEdge R715\n> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz\n> Ubuntu 12.04.4 LTS\n> kernel: 3.8.0-35-generic 64 bit\n> Postgresql version: 9.0.13\n> RAM: 256 GB 32 Cores CPU\n> ps_xlog : RAID 1\n> data folder : RAID10 (6 Strips)\n> read:write ratio- 85:15\n> \n> Pgbouncer configured on database side(250 allowed connections)\n> \n> Postgresql Configuration:\n> Database Size: 28GB\n\nSo you have 28GB database with 256GB of RAM?\n\n> Vaccum analyzed daily\n> checkpoint_completion_target = 0.9\n> maintenance_work_mem = 16GB\n> shared_buffers = 8GB # we reduced this from 32 GB.\n\nGood.\n\n> max_connections = 300\n> checkpoint_segments = 32\n> checkpoint_timeout = 8min\n> \n> detailed postgresql configuration: http://pastie.org/8754957\n\nNothing really suspicious here, i.e. nothing I could point out as an\nobvious cause. A few notes, though\n\n- shared_buffers : 8GB seems about right\n\n- maintenance_work_mem=16GB : set it to 512MB-1GB, with just 28GB\n database there's no point in using 16GB (and I'm yet to see a\n database where a value this high actually improves anything)\n\n- max_connections=300 : With 32 cores, you can handle ~40-50\n connections tops, assuming all of them are active, so if you know\n most of the 300 connections are idle it's fine. Which is about the\n number of connections in the vmstat output you posted.\n\n- work_mem : 256MB seems ok, but depending on the queries you're\n executing (with 300 connections and moderately complex queries, this\n may easily explode into your face)\n\n- full_page_writes = off # we turned it off for increase performance.\n\n Seriously? That's nonsense and you may easily end up with corrupted\n database. Tweak it only if you have actual performance issues and\n if you know your storage won't cause torn pages. Also, this only\n helps with I/O problems, which is not your case.\n\n Using the same logic, you might set 'fsync=off' to \"fix\" performance\n issues (don't do that!).\n\n> checkpoint/vacuum information http://pastie.org/8754954\n\nSeems fine. The checkpoints are ~5% at most, i.e. ~400MB, which should\nnot be a big deal. The iostat log is fine so checkpoints are not the issue.\n\n> Top command o/p: http://pastie.org/8755007\n\nWell, it seems\n\n> iostat o/p: http://pastie.org/8755009\n> sysctl.configuration : http://pastie.org/8755197\n>\n> We have recently upgraded O/S kernels to fix this issue but this it\n> didn't help. We are tried to modify some O/S parameters based on some\n> discussions-\n> \n> http://www.postgresql.org/message-id/[email protected]\n\nNot sure how that's related, as the processes are spending time in \"user\ntime\". BTW, is that a NUMA machine?\n\n> vm.dirty_background_bytes = 33554432 # I reduced this based on some forums.\n> vm.dirty_bytes = 536870912\n\nI don't think you want to do this related to the issue, but are you sure\nyou want to do this?\n\n vm.dirty_background_bytes = 33554432\n vm.dirty_bytes = 536870912\n\nIMHO that's way low. 
I mean, forcing the processes to block on IO if\nthere's more than 512MB of dirty data in the page cache. What I usually do\nis something like\n\n vm.dirty_background_bytes = :write cache on controller:\n vm.dirty_bytes = 4-8x dirty_background_bytes\n\nAssuming you have a write cache with BBU, of course.\n\n> vm.overcommit_memory=2\n\nWell, so how much swap do you have? This together with\n\n vm.overcommit_ratio = 50\n\n(which you mentioned in sysctl.conf) means \"use only 50% of RAM, plus\nswap\". I assume you have just a few GB of swap, so you've just thrown\naway ~50% of RAM. Not the best idea, IMHO.\n\nAnyway, I'm not sure it's the cause, given you have a ~28GB database,\nwhich easily fits into RAM. Although you have a rather high number of\nconnections, which may cause issues.\n\n> kernel.sched_migration_cost = 5000000\n> kernel.sched_autogroup_enabled = 0\n> \n> We believe that our PostgreSQL configuration is not correct for the\n> available memory on this machine and needs some urgent tuning.\n\nNo, the configuration seems reasonable to me (with the exceptions\nmentioned above).\n\n> Could you please guide me on troubleshooting this issue?\n\nIf I had to guess, I'd expect it to be one of these issues:\n\n(1) locking issue - spinlocks or lwlocks\n\n - e.g. all sessions are trying to acquire the same lock (or a small\n number of locks), for example by updating the same row, or maybe\n it's about spinlocks (which is consistent with high CPU usage)\n\n - lwlocks: collect a snapshot from pg_locks (WHERE NOT granted)\n\n - spinlocks: run \"perf top\" (check CPU consumed by __spin__lock)\n\n(2) sudden change of application behavior\n\n - such issues happen when the application server suddenly\n reconnects all the connections and re-executes expensive tasks, etc.\n\n - there are ~20 sessions in \"authentication\" state (suspicious)\n\n - investigate what happens on the application side\n\n(3) sudden change of execution plans\n\n - assuming the executed queries remain the same, something else had\n to change - for example execution plans\n\n - also, this might be an issue with PREPARED statements, see\n\n\nhttp://www.postgresql.org/message-id/CAFj8pRDCingX=b42+FoMM+pk7JL63zUXc3d48OMpaqHxrhSpeA@mail.gmail.com\n\n - try to collect the \"bad\" execution plans and compare them to\n plans when the database is performing normally\n\n - consider using log_min_duration_statement / auto_explain\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 00:59:46 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql tunning-- help needed" } ]
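A sketch of the pg_locks snapshot suggested in the reply above, for spotting sessions blocked on ungranted locks during a spike. Column names vary by version: on the 9.0 server in this thread the join column is procpid and the query text column is current_query (on 9.2 and later they are pid and query):

SELECT l.pid, l.locktype, l.mode, l.relation::regclass, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;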
[ { "msg_contents": "Hello,\n\nWe had a problem with PostgreSQL not using an index scan in 2 similar\nqueries, the only difference between them is the array cast from text[] to\nlocation_type[] (array of enum values).\n\nThe execution plans are the following:\n\n1.\nHash Join (cost=1.68..64194.88 rows=962149 width=62) (actual\ntime=0.096..3580.542 rows=62 loops=1)\n Hash Cond: (location.topology_id = topology.t_id)\n -> Seq Scan on location (cost=0.00..34126.05 rows=962149 width=58)\n(actual time=0.031..3580.261 rows=62 loops=1)\n Filter: (type = ANY\n(('{CITY,VILLAGE,TOWN,ROOM}'::text[])::location_type[]))\n -> Hash (cost=1.30..1.30 rows=30 width=8) (actual time=0.041..0.041\nrows=31 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on topology (cost=0.00..1.30 rows=30 width=8) (actual\ntime=0.005..0.019 rows=31 loops=1)\nTotal runtime: 3580.604 ms\n\n2.\nHash Join (cost=29.91..3649.53 rows=1435 width=62) (actual\ntime=0.366..0.811 rows=62 loops=1)\n Hash Cond: (location.topology_id = topology.t_id)\n -> Bitmap Heap Scan on location (cost=28.24..3603.01 rows=1435\nwidth=58) (actual time=0.239..0.311 rows=62 loops=1)\n Recheck Cond: (type = ANY\n('{CITY,VILLAGE,TOWN,ROOM}'::location_type[]))\n -> Bitmap Index Scan on location_type_idx (cost=0.00..27.88\nrows=1435 width=0) (actual time=0.223..0.223 rows=62 loops=1)\n Index Cond: (type = ANY\n('{CITY,VILLAGE,TOWN,ROOM}'::location_type[]))\n -> Hash (cost=1.30..1.30 rows=30 width=8) (actual time=0.076..0.076\nrows=31 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on topology (cost=0.00..1.30 rows=30 width=8) (actual\ntime=0.019..0.041 rows=31 loops=1)\nTotal runtime: 0.934 ms\n\n\nThe problematic line is this one:\n\n -> Seq Scan on location (cost=0.00..34126.05 rows=962149 width=58)\n(actual time=0.031..3580.261 rows=62 loops=1)\n Filter: (type = ANY\n(('{CITY,VILLAGE,TOWN,ROOM}'::text[])::location_type[]))\n\nThe PostgreSQL version this query is running is 9.3.2.\n\nIs it expected that index is not used during such a cast? 
If so, what would\nbe the better way to force the index usage when doing array casts?\n\nSincerely,\n-- \nAlexey Klyukin", "msg_date": "Fri, 21 Feb 2014 19:01:09 +0100", "msg_from": "Alexey Klyukin <[email protected]>", "msg_from_op": true, "msg_subject": "Lack of index usage when doing array casts" }, { "msg_contents": "Alexey Klyukin <[email protected]> writes:\n> We had a problem with PostgreSQL not using an index scan in 2 similar\n> queries, the only difference between them is the array cast from text[] to\n> location_type[] (array of enum values).\n\nHmm. 
IIRC the text to enum cast is considered stable not immutable, which\n> is why that doesn't get folded to a Const on sight. However, it seems\n> like it'd be okay for scalararraysel() to reduce stable expressions for\n> estimation purposes, ie it should be using estimate_expression_value.\n\nI've committed a patch for this; it will be in 9.3.4.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Feb 2014 17:12:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lack of index usage when doing array casts" }, { "msg_contents": "Thank you!\n\nHopefully I'll be able to give it a spin next week and will let you know\nwhether the patch improved the execution plans in our environment.\n\nSincerely,\n--\nAlexey Klyukin", "msg_date": "Fri, 21 Feb 2014 23:37:47 +0100", "msg_from": "Alexey Klyukin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lack of index usage when doing array casts" }, { "msg_contents": "On Fri, Feb 21, 2014 at 2:37 PM, Alexey Klyukin <[email protected]> wrote:\n> Hopefully I'll be able to give it a spin next week and will let you know\n> whether the patch improved the execution plans in our environment.\n\n9.3.3 is out this week; you'll have to wait a few months for this if\nyou're using standard packages, I'm afraid.\n\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Feb 2014 14:45:17 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lack of index usage when doing array casts" } ]
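Until the 9.3.4 fix above, the practical workaround follows from Tom's explanation: write the array literal in the enum type directly, so the planner sees a Const of the right type, instead of casting a text[] literal. Both WHERE forms below are taken from the plans in this thread; only the estimation (and hence the chosen plan) differs:

-- Poorly estimated before 9.3.4 (the text[] to enum[] cast is stable, not folded):
SELECT * FROM location
WHERE type = ANY (('{CITY,VILLAGE,TOWN,ROOM}'::text[])::location_type[]);

-- Well estimated (the literal is already a Const of location_type[]):
SELECT * FROM location
WHERE type = ANY ('{CITY,VILLAGE,TOWN,ROOM}'::location_type[]);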
[ { "msg_contents": "Appears one of my bigger, but older DB's cored or other this morning and\nwhen it came back up the DB shows that it can't start and is possibly\ncorrupted. I've read this was actually due to a kernel bug sometime back\n(or at least tied to the kernel bug).\n\nI'm wondering if there was any other work arounds or \"tricks\" that I may\ntry to recover, vs doing a restore from backup?\n\n2014-02-23 03:46:08 PST LOG: aborting startup due to startup process\nfailure\n2014-02-23 11:10:09 PST LOG: database system was interrupted while in\nrecovery at 2014-02-23 03:46:04 PST\n2014-02-23 11:10:09 PST HINT: This probably means that some data is\ncorrupted and you will have to use the last backup for recovery.\n2014-02-23 11:10:09 PST LOG: database system was not properly shut\ndown; automatic recovery in progress\n2014-02-23 11:10:09 PST LOG: consistent recovery state reached at\n1493/24398AA8\n2014-02-23 11:10:09 PST LOG: redo starts at 1493/5306FC8\n*2014-02-23 11:10:09 PST PANIC: heap_update_redo: invalid lp*\n2014-02-23 11:10:09 PST CONTEXT: xlog redo hot_update: rel\n16399/868691025/959835680; tid 1180404/38; new 1180404/40\n2014-02-23 11:10:09 PST LOG: startup process (PID 3175) was terminated\nby signal 6: Aborted\n2014-02-23 11:10:09 PST LOG: aborting startup due to startup process\nfailure\n\n\nNot holding out hope, but maybe just maybe someone has some ideas/shortcuts\nto maybe get this DB back up\n\nThanks\nTory\n\nAppears one of my bigger, but older DB's cored or other this morning and when it came back up the DB shows that it can't start and is possibly corrupted. I've read this was actually due to a kernel bug sometime back (or at least tied to the kernel bug).\nI'm wondering if there was any other work arounds or \"tricks\" that I may try to recover, vs doing a restore from backup?2014-02-23 03:46:08 PST    LOG:  aborting startup due to startup process failure\n2014-02-23 11:10:09 PST    LOG:  database system was interrupted while in recovery at 2014-02-23 03:46:04 PST2014-02-23 11:10:09 PST    HINT:  This probably means that some data is corrupted and you will have to use the last backup for recovery.\n2014-02-23 11:10:09 PST    LOG:  database system was not properly shut down; automatic recovery in progress2014-02-23 11:10:09 PST    LOG:  consistent recovery state reached at 1493/24398AA82014-02-23 11:10:09 PST    LOG:  redo starts at 1493/5306FC8\n2014-02-23 11:10:09 PST    PANIC:  heap_update_redo: invalid lp2014-02-23 11:10:09 PST    CONTEXT:  xlog redo hot_update: rel 16399/868691025/959835680; tid 1180404/38; new 1180404/402014-02-23 11:10:09 PST    LOG:  startup process (PID 3175) was terminated by signal 6: Aborted\n2014-02-23 11:10:09 PST    LOG:  aborting startup due to startup process failureNot holding out hope, but maybe just maybe someone has some ideas/shortcuts to maybe get this DB back up\nThanksTory", "msg_date": "Sun, 23 Feb 2014 12:42:43 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "9.1.2 Postgres corruption, any way to recover?" }, { "msg_contents": "On 23.2.2014 21:42, Tory M Blue wrote:\n> Appears one of my bigger, but older DB's cored or other this morning and\n> when it came back up the DB shows that it can't start and is possibly\n> corrupted. 
I've read this was actually due to a kernel bug sometime back\n> (or at least tied to the kernel bug).\n...\n>\n> Not holding out hope, but maybe just maybe someone has some\n> ideas/shortcuts to maybe get this DB back up\n\n\nI think the first thing you should ask yourself is why you're running\n9.1.2, i.e. a 3-year-old revision, instead of the current 9.1.12. Maybe\nit's not the cause of the bug, but still ...\n\nAlso, it seems to me that the corruption happened some time ago and you\nonly discovered it now. Which is strange, because a corrupted page header\nshould kill every backup attempt. Are you sure you really have backups?\nI mean, tested and working backups?\n\nDo you have an idea how many blocks are actually corrupted? Is it just\nthis single one, or are there more? Are you sure it's actually due to\na kernel bug, and not a storage failure (for example)? And what kernel\ndo you have in mind?\n\nThere are certainly tricks to make it work (e.g. zeroing the block with\nthe corrupted header), but that means data loss (you won't have data from\nthe block) and it's tedious / time consuming. If you have a working\nbackup, and if it's acceptable to lose the data since then, you should\nprobably do that.\n\nThe only thing that might help you recover all the data is probably\nPITR, i.e. a base backup + WAL archive (or replication).\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 23 Feb 2014 22:57:55 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.1.2 Postgres corruption, any way to recover?" } ]
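For what it's worth, the identifiers in the PANIC context above can be mapped back to objects: "rel 16399/868691025/959835680" is tablespace/database/relfilenode. Once some copy of the cluster can be started, a sketch like this (using the OIDs from the log) shows what the damaged block belonged to:

-- Which database is 868691025?
SELECT datname FROM pg_database WHERE oid = 868691025;

-- Then, connected to that database: which relation has this relfilenode?
SELECT relname, relkind FROM pg_class WHERE relfilenode = 959835680;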
[ { "msg_contents": "So I recently had some corruption, that forced a rebuild. At that time i\nupgraded to my current release of 9.2.4 (stable in production for a time\nand I know it's not the latest greatest).\n\nSo reloaded data, started loading more, things were good, several reboots\netc no issues.\n\nThis morning the system crashed again (so there may be some hardware\nissues). But for now I need again ask if there is something in code I can\ndo vs a complete rebuild again.\n\n014-02-26 07:09:16 PST LOG: database system was interrupted while in\nrecovery at 2014-02-26 07:05:09 PST\n2014-02-26 07:09:16 PST HINT: This probably means that some data is\ncorrupted and you will have to use the last backup for recovery.\n2014-02-26 07:09:16 PST LOG: database system was not properly shut\ndown; automatic recovery in progress\n2014-02-26 07:09:16 PST LOG: redo starts at 9A/89505268\n*2014-02-26 07:09:16 PST WARNING: specified item offset is too large*\n2014-02-26 07:09:16 PST CONTEXT: xlog redo insert: rel\n16398/16384/12765884; tid 9943/137\n2014-02-26 07:09:16 PST PANIC: btree_insert_redo: failed to add item\n2014-02-26 07:09:16 PST CONTEXT: xlog redo insert: rel\n16398/16384/12765884; tid 9943/137\n2014-02-26 07:09:16 PST LOG: startup process (PID 2391) was terminated\nby signal 6: Aborted\n2014-02-26 07:09:16 PST LOG: aborting startup due to startup process\nfailure\nthanks again\nTory\n\nSo I recently had some corruption, that forced a rebuild. At that time i upgraded to my current release of 9.2.4 (stable in production for a time and I know it's not the latest greatest).\nSo reloaded data, started loading more, things were good, several reboots etc no issues.This morning the system crashed again (so there may be some hardware issues). But for now I need again ask if there is something in code I can do vs a complete rebuild again.\n014-02-26 07:09:16 PST    LOG:  database system was interrupted while in recovery at 2014-02-26 07:05:09 PST\n2014-02-26 07:09:16 PST    HINT:  This probably means that some data is corrupted and you will have to use the last backup for recovery.2014-02-26 07:09:16 PST    LOG:  database system was not properly shut down; automatic recovery in progress\n2014-02-26 07:09:16 PST    LOG:  redo starts at 9A/895052682014-02-26 07:09:16 PST    WARNING:  specified item offset is too large\n2014-02-26 07:09:16 PST    CONTEXT:  xlog redo insert: rel 16398/16384/12765884; tid 9943/1372014-02-26 07:09:16 PST    PANIC:  btree_insert_redo: failed to add item\n2014-02-26 07:09:16 PST    CONTEXT:  xlog redo insert: rel 16398/16384/12765884; tid 9943/1372014-02-26 07:09:16 PST    LOG:  startup process (PID 2391) was terminated by signal 6: Aborted\n2014-02-26 07:09:16 PST    LOG:  aborting startup due to startup process failurethanks againTory", "msg_date": "Wed, 26 Feb 2014 07:17:12 -0800", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "9.2.4 specified item offset is too large, now what?" } ]
[ { "msg_contents": "Hello to everybody and thanks in advance to take a look to this message.\nI'm new in this list and with PostgreSQL. \nMy queries are taking too much time to complete and I don't know what to do right now. I think I'm providing all the info required for you to help me. If you need extra info please tell me.\n\nI am using DQL included in the last version of symfony2 (2.4.2). This is the query, formed by DQL, but coppied-pasted to the psql client (9.1.11, server 8.3.8)\n\nexplain analyze SELECT e0_.id AS id0, e0_.name AS name1, e0_.qualifier AS qualifier2, e0_.\"tagMethod\" AS tagmethod3, e0_.curation AS curation4, e0_.created AS created5, e0_.updated AS updated6, d1_.id AS id7, d1_.kind AS kind8, d1_.uid AS uid9, d1_.\"sentenceId\" AS sentenceid10, d1_.text AS text11, d1_.hepval AS hepval12, d1_.cardval AS cardval13, d1_.nephval AS nephval14, d1_.phosval AS phosval15, d1_.\"patternCount\" AS patterncount16, d1_.\"ruleScore\" AS rulescore17, d1_.\"hepTermNormScore\" AS heptermnormscore18, d1_.\"hepTermVarScore\" AS heptermvarscore19, d1_.created AS created20, d1_.updated AS updated21, e0_.document_id AS document_id22 FROM Entity2Document e0_ INNER JOIN documentold d1_ ON e0_.document_id = d1_.id WHERE e0_.name ='ranitidine' AND e0_.qualifier = 'CompoundDict' AND d1_.hepval IS NOT NULL ORDER BY d1_.hepval DESC limit 10;\n\n\nlimtox=> \\d+ documentold;\n Table \"public.documentold\"\n Column | Type | Modifiers | Storage | Description \n------------------+--------------------------------+-----------+----------+-------------\n id | integer | not null | plain | \n kind | character varying(255) | not null | extended | \n uid | character varying(255) | not null | extended | \n sentenceId | character varying(255) | not null | extended | \n text | text | not null | extended | \n hepval | double precision | | plain | \n created | timestamp(0) without time zone | not null | plain | \n updated | timestamp(0) without time zone | | plain | \n cardval | double precision | | plain | \n nephval | double precision | | plain | \n phosval | double precision | | plain | \n patternCount | double precision | | plain | \n ruleScore | double precision | | plain | \n hepTermNormScore | double precision | | plain | \n hepTermVarScore | double precision | | plain | \nIndexes:\n \"DocumentOLD_pkey\" PRIMARY KEY, btree (id)\n \"document_cardval_index\" btree (cardval)\n \"document_heptermnorm_index\" btree (\"hepTermNormScore\" DESC NULLS LAST)\n \"document_heptermvar_index\" btree (\"hepTermVarScore\" DESC NULLS LAST)\n \"document_hepval_index\" btree (hepval DESC NULLS LAST)\n \"document_kind_index\" btree (kind)\n \"document_nephval_index\" btree (nephval DESC NULLS LAST)\n \"document_patterncount_index\" btree (\"patternCount\" DESC NULLS LAST)\n \"document_phosval_index\" btree (phosval DESC NULLS LAST)\n \"document_rulescore_index\" btree (\"ruleScore\" DESC NULLS LAST)\n \"document_sentenceid_index\" btree (\"sentenceId\")\n \"document_uid_index\" btree (uid)\nReferenced by:\n TABLE \"hepkeywordtermnorm2document\" CONSTRAINT \"fk_1c19bcd0c33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n TABLE \"cytochrome2document\" CONSTRAINT \"fk_21f7636fc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n TABLE \"hepkeywordtermvariant2document\" CONSTRAINT \"fk_a316e36bc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n TABLE \"entity2document\" CONSTRAINT \"fk_a6020c0dc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n TABLE \"specie2document\" 
CONSTRAINT \"fk_b6e551c8c33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\nHas OIDs: no\n\n\n\n\nlimtox=> \\d+ entity2document; Table \"public.entity2document\" Column | Type | Modifiers | Storage | Description -------------+--------------------------------+---------------------------------+----------+------------- id | integer | not null | plain | \n document_id | integer | | plain | \n name | character varying(255) | not null | extended | \n qualifier | character varying(255) | not null | extended | \n tagMethod | character varying(255) | default NULL::character varying | extended | \n created | timestamp(0) without time zone | not null | plain | \n updated | timestamp(0) without time zone | | plain | \n curation | integer | | plain | \nIndexes:\n \"Entity2Document_pkey\" PRIMARY KEY, btree (id)\n \"entity2Document_name_index\" btree (name)\n \"entity2document_name_qualifier_index\" btree (name, qualifier)\n \"idx_a6020c0dc33f7837\" btree (document_id)\n \"qualifier_index\" btree (qualifier)\nForeign-key constraints:\n \"fk_a6020c0dc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\nHas OIDs: no\n\n\n\n\n\n\nTable metadata:\n documentold: 124.515.592 of rows. It has several columns with a large proportion of NULLs(updated, patternCount, ruleScore, hepTermNormScore, hepTermVarScore)\n entity2document: 93.785.968 of rows. It has two columns with a large proportion of NULLs (updated, curation)\n \nNone of the tables receive updates or deletes regularly\n \n \n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=387929.02..387929.05 rows=10 width=313) (actual time=55980.472..55980.476 rows=10 loops=1)\n -> Sort (cost=387929.02..387966.75 rows=15090 width=313) (actual time=55980.471..55980.473 rows=10 loops=1)\n Sort Key: d1_.hepval\n Sort Method: top-N heapsort Memory: 28kB\n -> Nested Loop (cost=469.14..387602.93 rows=15090 width=313) (actual time=96.716..55974.004 rows=2774 loops=1)\n -> Bitmap Heap Scan on entity2document e0_ (cost=469.14..54851.25 rows=15090 width=59) (actual time=51.299..8452.592 rows=2774 loops=1)\n Recheck Cond: (((name)::text = 'Cimetidine'::text) AND ((qualifier)::text = 'CompoundDict'::text))\n -> Bitmap Index Scan on entity2document_name_qualifier_index (cost=0.00..465.36 rows=15090 width=0) (actual time=36.467..36.467 rows=2774 loops=1)\n Index Cond: (((name)::text = 'Cimetidine'::text) AND ((qualifier)::text = 'CompoundDict'::text))\n -> Index Scan using \"DocumentOLD_pkey\" on documentold d1_ (cost=0.00..22.04 rows=1 width=254) (actual time=17.113..17.129 rows=1 loops=2774)\n Index Cond: (d1_.id = e0_.document_id)\n Filter: (d1_.hepval IS NOT NULL)\n Total runtime: 55980.554 ms\n(13 rows)\n\n version \n-----------------------------------------------------------------------------------------------------\n PostgreSQL 8.3.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 10.4.1-3ubuntu3) 10.4.1\n \n This query has been always slow. It's fast only when it's cached. 
Vacuum and analyze have been done manually very recently\n \n \n \n \n SELECT name, current_setting(name), source\n FROM pg_settings\n WHERE source NOT IN ('default', 'override');\n \n name | current_setting | source \n----------------------------+--------------------+----------------------\n client_encoding | UTF8 | client\n DateStyle | ISO, DMY | configuration file\n default_text_search_config | pg_catalog.spanish | configuration file\n effective_cache_size | 7500MB | configuration file\n lc_messages | es_ES.UTF-8 | configuration file\n lc_monetary | es_ES.UTF-8 | configuration file\n lc_numeric | C | configuration file\n lc_time | es_ES.UTF-8 | configuration file\n listen_addresses | * | configuration file\n log_line_prefix | %t | configuration file\n log_timezone | localtime | command line\n maintenance_work_mem | 2000MB | configuration file\n max_connections | 100 | configuration file\n max_fsm_pages | 63217760 | configuration file\n max_stack_depth | 2MB | environment variable\n port | 5432 | configuration file\n shared_buffers | 1500MB | configuration file\n ssl | on | configuration file\n tcp_keepalives_count | 9 | configuration file\n tcp_keepalives_idle | 7200 | configuration file\n tcp_keepalives_interval | 75 | configuration file\n TimeZone | localtime | command line\n timezone_abbreviations | Default | command line\n work_mem | 50MB | configuration file\n \n Setting the work_mem to 3000MB doesn't change anything...\n \n Everything seems good to me but the Recheck Cond, because of the large amount of rows, is slowing the query too much. I have read that it is not a good idea to try to get rid of the Recheck Cond (maybe it is not even possible, I don't know, I'm new to PostgreSQL). I'd like to know what I am doing wrong and how I can solve it...\n \n Any help please?\n \n Thank you very much,\n \n Andrés\n**CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 26 Feb 2014 16:41:31 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query taking long time" }, { "msg_contents": "Hello,\n\nI have changed the multicolumn index from:\n\t\"entity2document_name_qualifier_index\" btree (name, qualifier)\n to:\n\t \"document_qualifier_name_index\" btree (qualifier, name)\n\nAnd now the planner doesn't \"Recheck cond:\" (there are only three different qualifiers vs. 
millions of names...)\n\nBut it is still taking a long time\n\n\nLimit (cost=384043.64..384043.66 rows=10 width=313) (actual time=80555.930..80555.934 rows=10 loops=1)\n -> Sort (cost=384043.64..384081.19 rows=15020 width=313) (actual time=80555.928..80555.931 rows=10 loops=1)\n Sort Key: d1_.hepval\n Sort Method: top-N heapsort Memory: 29kB\n -> Nested Loop (cost=0.00..383719.06 rows=15020 width=313) (actual time=223.778..80547.196 rows=3170 loops=1)\n -> Index Scan using document_qualifier_name_index on entity2document e0_ (cost=0.00..52505.40 rows=15020 width=59) (actual time=126.880..11549.392 rows=3170 loops=1)\n Index Cond: (((qualifier)::text = 'CompoundDict'::text) AND ((name)::text = 'galactosamine'::text))\n -> Index Scan using \"DocumentOLD_pkey\" on documentold d1_ (cost=0.00..22.04 rows=1 width=254) (actual time=21.747..21.764 rows=1 loops=3170)\n Index Cond: (d1_.id = e0_.document_id)\n Filter: (d1_.hepval IS NOT NULL)\n Total runtime: 80556.027 ms\n\n\n\nAny help or pointer in any direction would be very appreciated.\nThank you,\nAndrés\n\nOn Feb 26, 2014, at 4:41 PM, acanada wrote:\n\n> Hello to everybody and thanks in advance for taking a look at this message.\n> I'm new to this list and to PostgreSQL. \n> My queries are taking too much time to complete and I don't know what to do right now. I think I'm providing all the info required for you to help me. If you need extra info please tell me.\n> \n> I am using the DQL included in the latest version of symfony2 (2.4.2). This is the query, formed by DQL, but copied and pasted into the psql client (9.1.11, server 8.3.8)\n> \n> explain analyze SELECT e0_.id AS id0, e0_.name AS name1, e0_.qualifier AS qualifier2, e0_.\"tagMethod\" AS tagmethod3, e0_.curation AS curation4, e0_.created AS created5, e0_.updated AS updated6, d1_.id AS id7, d1_.kind AS kind8, d1_.uid AS uid9, d1_.\"sentenceId\" AS sentenceid10, d1_.text AS text11, d1_.hepval AS hepval12, d1_.cardval AS cardval13, d1_.nephval AS nephval14, d1_.phosval AS phosval15, d1_.\"patternCount\" AS patterncount16, d1_.\"ruleScore\" AS rulescore17, d1_.\"hepTermNormScore\" AS heptermnormscore18, d1_.\"hepTermVarScore\" AS heptermvarscore19, d1_.created AS created20, d1_.updated AS updated21, e0_.document_id AS document_id22 FROM Entity2Document e0_ INNER JOIN documentold d1_ ON e0_.document_id = d1_.id WHERE e0_.name ='ranitidine' AND e0_.qualifier = 'CompoundDict' AND d1_.hepval IS NOT NULL ORDER BY d1_.hepval DESC limit 10;\n> \n> \n> limtox=> \\d+ documentold;\n> Table \"public.documentold\"\n> Column | Type | Modifiers | Storage | Description \n> ------------------+--------------------------------+-----------+----------+-------------\n> id | integer | not null | plain | \n> kind | character varying(255) | not null | extended | \n> uid | character varying(255) | not null | extended | \n> sentenceId | character varying(255) | not null | extended | \n> text | text | not null | extended | \n> hepval | double precision | | plain | \n> created | timestamp(0) without time zone | not null | plain | \n> updated | timestamp(0) without time zone | | plain | \n> cardval | double precision | | plain | \n> nephval | double precision | | plain | \n> phosval | double precision | | plain | \n> patternCount | double precision | | plain | \n> ruleScore | double precision | | plain | \n> hepTermNormScore | double precision | | plain | \n> hepTermVarScore | double precision | | plain | \n> Indexes:\n> \"DocumentOLD_pkey\" PRIMARY KEY, btree (id)\n> \"document_cardval_index\" btree (cardval)\n> 
\"document_heptermnorm_index\" btree (\"hepTermNormScore\" DESC NULLS LAST)\n> \"document_heptermvar_index\" btree (\"hepTermVarScore\" DESC NULLS LAST)\n> \"document_hepval_index\" btree (hepval DESC NULLS LAST)\n> \"document_kind_index\" btree (kind)\n> \"document_nephval_index\" btree (nephval DESC NULLS LAST)\n> \"document_patterncount_index\" btree (\"patternCount\" DESC NULLS LAST)\n> \"document_phosval_index\" btree (phosval DESC NULLS LAST)\n> \"document_rulescore_index\" btree (\"ruleScore\" DESC NULLS LAST)\n> \"document_sentenceid_index\" btree (\"sentenceId\")\n> \"document_uid_index\" btree (uid)\n> Referenced by:\n> TABLE \"hepkeywordtermnorm2document\" CONSTRAINT \"fk_1c19bcd0c33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n> TABLE \"cytochrome2document\" CONSTRAINT \"fk_21f7636fc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n> TABLE \"hepkeywordtermvariant2document\" CONSTRAINT \"fk_a316e36bc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n> TABLE \"entity2document\" CONSTRAINT \"fk_a6020c0dc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n> TABLE \"specie2document\" CONSTRAINT \"fk_b6e551c8c33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n> Has OIDs: no\n> \n> \n> \n> \n> limtox=> \\d+ entity2document; Table \"public.entity2document\" Column | Type | Modifiers | Storage | Description -------------+--------------------------------+---------------------------------+----------+------------- id | integer | not null | plain | \n> document_id | integer | | plain | \n> name | character varying(255) | not null | extended | \n> qualifier | character varying(255) | not null | extended | \n> tagMethod | character varying(255) | default NULL::character varying | extended | \n> created | timestamp(0) without time zone | not null | plain | \n> updated | timestamp(0) without time zone | | plain | \n> curation | integer | | plain | \n> Indexes:\n> \"Entity2Document_pkey\" PRIMARY KEY, btree (id)\n> \"entity2Document_name_index\" btree (name)\n> \"entity2document_name_qualifier_index\" btree (name, qualifier)\n> \"idx_a6020c0dc33f7837\" btree (document_id)\n> \"qualifier_index\" btree (qualifier)\n> Foreign-key constraints:\n> \"fk_a6020c0dc33f7837\" FOREIGN KEY (document_id) REFERENCES documentold(id)\n> Has OIDs: no\n> \n> \n> \n> \n> \n> \n> Table metadata:\n> documentold: 124.515.592 of rows. It has several columns with a large proportion of NULLs(updated, patternCount, ruleScore, hepTermNormScore, hepTermVarScore)\n> entity2document: 93.785.968 of rows. 
It has two columns with a large proportion of NULLs (updated, curation)\n> \n> None of the tables receive updates or deletes regularly\n> \n> \n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=387929.02..387929.05 rows=10 width=313) (actual time=55980.472..55980.476 rows=10 loops=1)\n> -> Sort (cost=387929.02..387966.75 rows=15090 width=313) (actual time=55980.471..55980.473 rows=10 loops=1)\n> Sort Key: d1_.hepval\n> Sort Method: top-N heapsort Memory: 28kB\n> -> Nested Loop (cost=469.14..387602.93 rows=15090 width=313) (actual time=96.716..55974.004 rows=2774 loops=1)\n> -> Bitmap Heap Scan on entity2document e0_ (cost=469.14..54851.25 rows=15090 width=59) (actual time=51.299..8452.592 rows=2774 loops=1)\n> Recheck Cond: (((name)::text = 'Cimetidine'::text) AND ((qualifier)::text = 'CompoundDict'::text))\n> -> Bitmap Index Scan on entity2document_name_qualifier_index (cost=0.00..465.36 rows=15090 width=0) (actual time=36.467..36.467 rows=2774 loops=1)\n> Index Cond: (((name)::text = 'Cimetidine'::text) AND ((qualifier)::text = 'CompoundDict'::text))\n> -> Index Scan using \"DocumentOLD_pkey\" on documentold d1_ (cost=0.00..22.04 rows=1 width=254) (actual time=17.113..17.129 rows=1 loops=2774)\n> Index Cond: (d1_.id = e0_.document_id)\n> Filter: (d1_.hepval IS NOT NULL)\n> Total runtime: 55980.554 ms\n> (13 rows)\n> \n> version \n> -----------------------------------------------------------------------------------------------------\n> PostgreSQL 8.3.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 10.4.1-3ubuntu3) 10.4.1\n> \n> This query has always been slow. It's fast only when it's cached. Vacuum and analyze have been done manually very recently\n> \n> \n> \n> \n> SELECT name, current_setting(name), source\n> FROM pg_settings\n> WHERE source NOT IN ('default', 'override');\n> \n> name | current_setting | source \n> ----------------------------+--------------------+----------------------\n> client_encoding | UTF8 | client\n> DateStyle | ISO, DMY | configuration file\n> default_text_search_config | pg_catalog.spanish | configuration file\n> effective_cache_size | 7500MB | configuration file\n> lc_messages | es_ES.UTF-8 | configuration file\n> lc_monetary | es_ES.UTF-8 | configuration file\n> lc_numeric | C | configuration file\n> lc_time | es_ES.UTF-8 | configuration file\n> listen_addresses | * | configuration file\n> log_line_prefix | %t | configuration file\n> log_timezone | localtime | command line\n> maintenance_work_mem | 2000MB | configuration file\n> max_connections | 100 | configuration file\n> max_fsm_pages | 63217760 | configuration file\n> max_stack_depth | 2MB | environment variable\n> port | 5432 | configuration file\n> shared_buffers | 1500MB | configuration file\n> ssl | on | configuration file\n> tcp_keepalives_count | 9 | configuration file\n> tcp_keepalives_idle | 7200 | configuration file\n> tcp_keepalives_interval | 75 | configuration file\n> TimeZone | localtime | command line\n> timezone_abbreviations | Default | command line\n> work_mem | 50MB | configuration file\n> \n> Setting the work_mem to 3000MB doesn't change anything...\n> \n> Everything seems good to me but the Recheck Cond, because of the large amount of rows, is slowing the query too much. I have read that it is not a good idea to try to get rid of the Recheck Cond (maybe it is not even possible, I don't know, I'm new to PostgreSQL). 
> I'd like to know what I am doing wrong and how I can solve it...
>
> Any help, please?
>
> Thank you very much,
>
> Andrés
", "msg_date": "Thu, 27 Feb 2014 11:31:29 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "Hello,

Thank you for your answer.
I have made more changes than a simple re-indexing recently. I have moved the sorting field into the table itself in order to avoid the join. Now the schema is very simple.
The query now involves only one table:

x=> \d+ entity_compounddict2document;
                      Table "public.entity_compounddict2document"
      Column      |              Type              | Modifiers | Storage  | Description
------------------+--------------------------------+-----------+----------+-------------
 id               | integer                        | not null  | plain    |
 document_id      | integer                        |           | plain    |
 name             | character varying(255)         |           | extended |
 qualifier        | character varying(255)         |           | extended |
 tagMethod        | character varying(255)         |           | extended |
 created          | timestamp(0) without time zone |           | plain    |
 updated          | timestamp(0) without time zone |           | plain    |
 curation         | integer                        |           | plain    |
 hepval           | double precision               |           | plain    |
 cardval          | double precision               |           | plain    |
 nephval          | double precision               |           | plain    |
 phosval          | double precision               |           | plain    |
 patternCount     | double precision               |           | plain    |
 ruleScore        | double precision               |           | plain    |
 hepTermNormScore | double precision               |           | plain    |
 hepTermVarScore  | double precision               |           | plain    |
Indexes:
    "entity_compounddict2document_pkey" PRIMARY KEY, btree (id)
    "entity_compound2document_cardval" btree (cardval)
    "entity_compound2document_heptermnormscore" btree ("hepTermNormScore")
    "entity_compound2document_heptermvarscore" btree ("hepTermVarScore")
    "entity_compound2document_hepval" btree (hepval)
    "entity_compound2document_name" btree (name)
    "entity_compound2document_nephval" btree (nephval)
    "entity_compound2document_patterncount" btree ("patternCount")
    "entity_compound2document_phosval" btree (phosval)
    "entity_compound2document_rulescore" btree ("ruleScore")
Has OIDs: no

 tablename                    | indexname                                 | num_rows    | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched
 entity_compounddict2document | entity_compound2document_cardval          | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compound2document_heptermvarscore  | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compound2document_hepval           | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compound2document_name             | 5.42452e+07 | 6763 MB    | 1505 MB    | Y      | 24              | 178680      | 0
 entity_compounddict2document | entity_compound2document_nephval          | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compound2document_patterncount     | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compound2document_phosval          | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compound2document_rulescore        | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0
 entity_compounddict2document | entity_compounddict2document_pkey         | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      | 0               | 0           | 0

The table has approx. 54,000,000 rows.
There are no NULLs in the hepval field, and pg_settings haven't changed.
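A point worth checking at this stage, since the plans below spend their time fetching heap pages rather than sorting: whether the rows for a given name are physically clustered. This is a sketch against the stock pg_stats view (nothing beyond the table above is assumed):

    SELECT attname, correlation
    FROM pg_stats
    WHERE tablename = 'entity_compounddict2document'
      AND attname = 'name';

A correlation near zero would mean the matching rows for one compound are scattered over roughly as many heap pages as there are rows, which is consistent with the slow bitmap heap scans that follow.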
I have also run ANALYZE on this table.

I have simplified the query and applied the advice from your last reply:

Query:

	explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;
                                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)
   Sort Key: entity_compounddict2document.hepval
   Sort Method:  quicksort  Memory: 2301kB
   ->  Bitmap Heap Scan on entity_compounddict2document  (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)
         Recheck Cond: ((name)::text = 'ranitidine'::text)
         ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)
               Index Cond: ((name)::text = 'ranitidine'::text)
 Total runtime: 32717.548 ms

Another query:

	explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;
                                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)
   Sort Key: entity_compounddict2document.hepval
   Sort Method:  quicksort  Memory: 25622kB
   ->  Bitmap Heap Scan on entity_compounddict2document  (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)
         Recheck Cond: ((name)::text = 'progesterone'::text)
         ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)
               Index Cond: ((name)::text = 'progesterone'::text)
 Total runtime: 9296.815 ms

It has improved (I suppose because the join is gone), but it is still taking a lot of time... Is there anything I can do?

Any help would be much appreciated. Thank you very much.

Andrés.
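Both plans above spend essentially all of their time in the Bitmap Heap Scan, not in the sort. One way to confirm that heap access is the bottleneck is to temporarily disable bitmap scans and compare against a plain index scan; this is a diagnostic sketch only, using the standard enable_bitmapscan planner setting:

    SET enable_bitmapscan = off;
    EXPLAIN ANALYZE
    SELECT * FROM entity_compounddict2document
    WHERE name = 'ranitidine'
    ORDER BY hepval;
    RESET enable_bitmapscan;

If the plain index scan is no faster, the time is going into random heap I/O for the matching rows, and only better physical clustering of the data or fetching fewer rows (e.g. a LIMIT served by a suitable index) will help.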
On Mar 3, 2014, at 1:04 AM, Venkata Balaji Nagothi wrote:

> Was any re-indexing done recently?
>
> If the SELECT query without ORDER BY shows a low cost, then the query can be re-written as below to see if the performance improves:
>
> select * from (select query without order by clause) a order by a.hepval -- something like that.
>
> This should lower the cost of the query, because the planner then sorts only the resultant set of rows rather than sorting the table and getting the results.
>
> Please let us know if this helps!
>
> Venkata Balaji N
>
> Sr. Database Administrator
> Fujitsu Australia
>
>
> On Fri, Feb 28, 2014 at 8:55 PM, acanada <[email protected]> wrote:
> Thank you for your answer!
>
> Sizes of tables and indexes are:
>
> relname          | rows_in_bytes | num_rows    | number_of_indexes | unique | single_column | multi_column
> -----------------+---------------+-------------+-------------------+--------+---------------+--------------
> documentold      | 119 MB        | 1.24516e+08 | 12                | Y      | 12            | 0
> entity2document  | 89 MB         | 9.33666e+07 | 5                 | Y      | 4             | 1
>
> tablename       | indexname                     | num_rows    | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched
> ----------------+-------------------------------+-------------+------------+------------+--------+-----------------+-------------+----------------
> documentold     | DocumentOLD_pkey              | 1.24516e+08 | 33 GB      | 2708 MB    | Y      | 45812802        | 924462741   | 924084958
> documentold     | document_cardval_index        | 1.24516e+08 | 33 GB      | 2763 MB    | Y      | 0               | 0           | 0
> documentold     | document_heptermnorm_index    | 1.24516e+08 | 33 GB      | 2667 MB    | Y      | 0               | 0           | 0
> documentold     | document_heptermvar_index     | 1.24516e+08 | 33 GB      | 2667 MB    | Y      | 0               | 0           | 0
> documentold     | document_hepval_index         | 1.24516e+08 | 33 GB      | 2667 MB    | Y      | 0               | 0           | 0
> documentold     | document_kind_index           | 1.24516e+08 | 33 GB      | 2859 MB    | Y      | 0               | 0           | 0
> documentold     | document_nephval_index        | 1.24516e+08 | 33 GB      | 2667 MB    | Y      | 0               | 0           | 0
> documentold     | document_patterncount_index   | 1.24516e+08 | 33 GB      | 2667 MB    | Y      | 0               | 0           | 0
> documentold     | document_phosval_index        | 1.24516e+08 | 33 GB      | 2667 MB    | Y      | 0               | 0           | 0
> documentold     | document_rulescore_index      | 1.24516e+08 | 33 GB      | 2667 MB    | Y      | 0               | 0           | 0
> documentold     | document_sentenceid_index     | 1.24516e+08 | 33 GB      | 3867 MB    | Y      | 8089466         | 12669585    | 7597923
> documentold     | document_uid_index            | 1.24516e+08 | 33 GB      | 3889 MB    | Y      | 0               | 0           | 0
> entity2document | Entity2Document_pkey          | 9.33666e+07 | 7216 MB    | 2000 MB    | Y      | 2942            | 2942        | 2942
> entity2document | document_qualifier_name_index | 9.33666e+07 | 7216 MB    | 3557 MB    | Y      | 93              | 1091680     | 124525
> entity2document | entity2Document_name_index    | 9.33666e+07 | 7216 MB    | 2550 MB    | Y      | 4330            | 3320634     | 2
> entity2document | idx_a6020c0dc33f7837          | 9.33666e+07 | 7216 MB    | 2000 MB    | Y      | 2465927         | 1661666     | 1661666
> entity2document | qualifier_index               | 9.33666e+07 | 7216 MB    | 2469 MB    | Y      | 51              | 2333120186  | 0
>
> The explain plan shows a lower cost without ORDER BY!
> There are no NULLs in the hepval field...
>
> Thank you for your time!
>
> Andrés
>
> On Feb 28, 2014, at 2:28 AM, Venkata Balaji Nagothi wrote:
>
>> Hi Andres,
>>
>> Can you please help us with the below information.
>>
>> - Sizes of tables and indexes
>> - Does the explain plan show the same or a higher cost without the ORDER BY clause?
>>
>> I suspect a huge number of NULLs might be the problem. If you can please get us the above information, then we can probably know if the cost is genuine.
>>
>> Venkata Balaji N
>>
>> Sr. Database Administrator
>> Fujitsu Australia
>>
>>
>> On Thu, Feb 27, 2014 at 9:31 PM, acanada <[email protected]> wrote:
>> Hello,
>>
>> I have changed the multicolumn index from:
>>         "entity2document_name_qualifier_index" btree (name, qualifier)
>> to:
>>         "document_qualifier_name_index" btree (qualifier, name)
>>
>> And now the planner doesn't "Recheck cond:" (there are only three different qualifiers vs. millions of names...)
>>
>> But it is still taking a long time:
>>
>> Limit  (cost=384043.64..384043.66 rows=10 width=313) (actual time=80555.930..80555.934 rows=10 loops=1)
>>    ->  Sort  (cost=384043.64..384081.19 rows=15020 width=313) (actual time=80555.928..80555.931 rows=10 loops=1)
>>          Sort Key: d1_.hepval
>>          Sort Method:  top-N heapsort  Memory: 29kB
>>          ->  Nested Loop  (cost=0.00..383719.06 rows=15020 width=313) (actual time=223.778..80547.196 rows=3170 loops=1)
>>                ->  Index Scan using document_qualifier_name_index on entity2document e0_  (cost=0.00..52505.40 rows=15020 width=59) (actual time=126.880..11549.392 rows=3170 loops=1)
>>                      Index Cond: (((qualifier)::text = 'CompoundDict'::text) AND ((name)::text = 'galactosamine'::text))
>>                ->  Index Scan using "DocumentOLD_pkey" on documentold d1_  (cost=0.00..22.04 rows=1 width=254) (actual time=21.747..21.764 rows=1 loops=3170)
>>                      Index Cond: (d1_.id = e0_.document_id)
>>                      Filter: (d1_.hepval IS NOT NULL)
>> Total runtime: 80556.027 ms
>>
>> Any help, or a pointer in any direction, would be very appreciated.
>> Thank you,
>> Andrés
>>
>> On Feb 26, 2014, at 4:41 PM, acanada wrote:
>>
>> > [snip — the Feb 26 post is quoted in full in the previous message of this thread]
", "msg_date": "Mon, 3 Mar 2014 11:17:44 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "> I have simplified the query and applied the advice from your last reply:
>
> Query:
>
>  explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;
>
Do you need the full result?

If you need just the top-N rows, then an index on entity_compounddict2document(name, hepval) might help.

Regards,
Vladimir Sitnikov
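A concrete form of that suggestion might look like the following (the index name is illustrative, not from the thread):

    CREATE INDEX entity_compound2document_name_hepval
        ON entity_compounddict2document (name, hepval);

    -- with LIMIT, rows come back already ordered by hepval within the name,
    -- so only about ten heap tuples need to be fetched:
    SELECT *
    FROM entity_compounddict2document
    WHERE name = 'ranitidine'
    ORDER BY hepval DESC
    LIMIT 10;

The btree can be scanned backward to serve the DESC ordering, so no sort node is needed; without a LIMIT, though, every matching row must still be fetched from the heap, and the composite index changes little.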
\n\nRegards,\nVladimir Sitnikov", "msg_date": "Mon, 3 Mar 2014 21:17:40 +0400", "msg_from": "Vladimir Sitnikov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n\n> Hello,\n>\n> Thankyou for your answer.\n> I have made more changes than a simple re-indexing recently. I have moved\n> the sorting field to the table in order to avoid the join clause. Now the\n> schema is very simple. The query only implies one table:\n>\n> [table definition, per-column indexes and index statistics quoted in full earlier in the thread]\n>\n> The table has approx. 54,000,000 rows.\n> There are no NULLs in the hepval field and pg_settings haven't changed. I\n> have also run \"analyze\" on this table.\n>\n> explain analyze select * from (select * from entity_compounddict2document\n> where name='ranitidine') as a order by a.hepval;\n> [...]\n> Total runtime: 32717.548 ms\n>\n> explain analyze select * from (select * from entity_compounddict2document\n> where name='progesterone') as a order by a.hepval;\n> [...]\n> Total runtime: 9296.815 ms\n>\n> It has improved (I suppose because of the lack of the join table) but still\n> taking a lot of time... Anything I can do??\n>\n> Any help would be very appreciated. Thank you very much.\n>
\n\nGood to know performance has increased.\n\nDoes the \"entity_compounddict2document\" table go through heavy INSERTs?\n\nCan you help us know if the \"hepval\" column and the \"name\" column have many\nduplicate values? The \"n_distinct\" value from the pg_stats view would have\nthat info.\n\nBelow could be a possible workaround -\n\nAs mentioned earlier in this email, a composite index on the name and hepval\ncolumns might help. If the table does not go through a lot of INSERTs, then\nconsider performing a CLUSTER on the table using the same index.\n\nOther recommendations -\n\nPlease drop all the non-primary-key indexes which have 0 scans / hits;\nkeeping them only harms the DB and the DB server during maintenance and DML\noperations.
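A minimal sketch of that workaround, assuming the composite index from the earlier sketch exists (note that CLUSTER takes an exclusive lock and rewrites the whole table, so it is only practical on a table that sees few INSERTs):

-- Physically reorder the heap along the composite index, then refresh
-- the planner statistics:
CLUSTER entity_compounddict2document USING entity_compound2document_name_hepval;
ANALYZE entity_compounddict2document;

-- List the indexes that have never been scanned and are therefore
-- candidates for dropping (the primary key stays regardless):
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'entity_compounddict2document'
  AND idx_scan = 0
  AND indexrelname <> 'entity_compounddict2document_pkey';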
\n\nRegards,\nVenkata Balaji N\n\nFujitsu Australia", "msg_date": "Tue, 4 Mar 2014 10:28:22 +1100", "msg_from": "Venkata Balaji Nagothi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "Hello!\nThe table doesn't go through high INSERTs, so I'm taking your \"CLUSTER\" advice into account. 
Thanks.\nI'm afraid that I cannot drop the indexes that have no scans or hits yet,\nbecause they will be getting scans and hits very soon.\n\nDistinct values (n_distinct) for this table are:\n\ntablename                    | attname | n_distinct\nentity_compounddict2document | name    | 16635\nentity_compounddict2document | hepval  | 2.04444e+06
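(For reference, a query along these lines reads those numbers straight out of the statistics view; positive n_distinct values are absolute counts, negative ones are fractions of the row count:)

SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename = 'entity_compounddict2document'
  AND attname IN ('name', 'hepval');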
\n\nThank you very much for your help!!\nAndrés\n\nOn Mar 4, 2014, at 12:28 AM, Venkata Balaji Nagothi wrote:\n\n> As mentioned earlier in this email, a composite index on the name and hepval\n> columns might help. If the table does not go through a lot of INSERTs, then\n> consider performing a CLUSTER on the table using the same index.\n>\n> Please drop all the non-primary-key indexes which have 0 scans / hits;\n> keeping them only harms the DB and the DB server during maintenance and DML\n> operations.\n> [...]
", "msg_date": "Tue, 4 Mar 2014 11:10:27 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "Hello,\n\nI don't know if this helps to figure out what the problem is, but after adding\nthe multicolumn index on name and hepval, the performance is even worse (¿?).\nTen times worse...\n\nexplain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n                                  QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n   Sort Key: entity_compounddict2document.hepval\n   Sort Method: quicksort  Memory: 25622kB\n   ->  Bitmap Heap Scan on entity_compounddict2document  (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n         Recheck Cond: ((name)::text = 'progesterone'::text)\n         ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n               Index Cond: ((name)::text = 'progesterone'::text)\n Total runtime: 95811.838 ms\n(8 rows)\n\nAny ideas please?
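(A guess at where those 95 seconds go - fetching ~138,000 scattered heap rows - can be checked with EXPLAIN's BUFFERS option, available from PostgreSQL 9.0 on:)

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM entity_compounddict2document
WHERE name = 'progesterone'
ORDER BY hepval;

-- The "shared hit" vs. "shared read" counts on the Bitmap Heap Scan node
-- show how much of the runtime is spent reading table blocks from disk.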
\n\nThank you,\nAndrés.\n\nOn Mar 4, 2014, at 12:28 AM, Venkata Balaji Nagothi wrote:\n\n> As mentioned earlier in this email, a composite index on the name and hepval\n> columns might help. If the table does not go through a lot of INSERTs, then\n> consider performing a CLUSTER on the table using the same index.\n> [...]
", "msg_date": "Tue, 4 Mar 2014 12:23:48 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "After looking at the distinct values, yes, the composite index on \"name\" and\n\"hepval\" is not recommended - that it made things worse is expected.\n\nWe need to look for another possible workaround. Please drop the above index.\nLet me see if I can drill further into this.\n\nMeanwhile - can you tell us the memory parameters (work_mem, temp_buffers\netc.) that are set?\n\nDo you have any other processes affecting this query's performance?\n\nAny info about your disk, RAM and CPU would also help.
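(For instance, one query that pulls all of those settings in one go:)

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('work_mem', 'temp_buffers', 'shared_buffers',
               'effective_cache_size');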
\n\nRegards,\nVenkata Balaji N\nSr. Database Administrator\nFujitsu Australia\n\nOn Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n\n> Hello,\n>\n> I don't know if this helps to figure out what the problem is, but after\n> adding the multicolumn index on name and hepval, the performance is even\n> worse (¿?). Ten times worse...\n> [...]\n> Total runtime: 95811.838 ms\n>\n> Any ideas please?\n> [...]
\n\nAfter looking at the distinct values, yes the composite Index on \"name\" and \"hepval\" is not recommended. That would worsen - its expected.We need to look for other possible work around. Please drop off the above Index. Let me see if i can drill further into this.\nMeanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?Do you have any other processes effecting this query's performance ?\nAny info about your Disk, RAM, CPU would also help.Regards,Venkata Balaji NFujitsu Australia\nVenkata Balaji NSr. Database AdministratorFujitsu Australia\nOn Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\nHello,I don't know if this helps to figure out what is the problem but after adding the multicolumn index on name and hepval, the performance is even worse (¿?).  Ten times worse...\nexplain analyze select * from (select * from entity_compounddict2document  where name='progesterone') as a order by a.hepval;                                                                         QUERY PLAN                                                                          \n------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n   Sort Key: entity_compounddict2document.hepval   Sort Method:  quicksort  Memory: 25622kB   ->  Bitmap Heap Scan on entity_compounddict2document  (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n         Recheck Cond: ((name)::text = 'progesterone'::text)         ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n               Index Cond: ((name)::text = 'progesterone'::text) Total runtime: 95811.838 ms(8 rows)Any ideas please?Thank you Andrés.El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\nHello,Thankyou for your answer.\n\n\nI have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. 
",
    "msg_date": "Wed, 5 Mar 2014 10:35:46 +1100",
    "msg_from": "Venkata Balaji Nagothi <[email protected]>",
    "msg_from_op": false,
    "msg_subject": "Re: Query taking long time"
  },
  {
    "msg_contents": "Hi Venkata,\n\nIncreasing the work_mem doesn't improve the results. After raising it to 1GB:\n\nlimtox=> explain analyze select * from entity_compounddict2document where name='Troglitazone' order by hepval;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=11083.47..11090.54 rows=2828 width=133) (actual time=19679.354..19679.494 rows=1283 loops=1)\n   Sort Key: hepval\n   Sort Method: quicksort Memory: 238kB\n   -> Bitmap Heap Scan on entity_compounddict2document (cost=73.87..10921.34 rows=2828 width=133) (actual time=93.926..19677.110 rows=1283 loops=1)\n         Recheck Cond: ((name)::text = 'Troglitazone'::text)\n         -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.16 rows=2828 width=0) (actual time=78.005..78.005 rows=1283 loops=1)\n               Index Cond: ((name)::text = 'Troglitazone'::text)\n Total runtime: 19679.680 ms\n\nThere are no temp files in the data_directory... I have set log_temp_files to 1MB and ran this query again, but there is nothing related to temp files in the log...\n\nI cannot see the \"temp_files\" column in the pg_stat_database view (we are on version 8.3) :-(\n\n\nThanks for your help. Regards,\n\nAndrés\n\n\n\nOn Mar 6, 2014, at 2:36 AM, Venkata Balaji Nagothi wrote:\n\n> Hi Andres,\n> \n> The sorting cost is high !!\n> \n> This query must be going for a disk sort; do you see temp files getting created in the pg_stat_tmp directory in the data directory ?\n> \n> Or you can enable \"log_temp_files\", set to probably 1 MB or so. This will log information about temp files to the log file in the pg_log directory.\n> \n> Or you can check the \"temp_files\" column in the pg_stat_database view before and after the execution of the query.\n> \n> How much work_mem did you give ? You can increase work_mem to probably 1 GB at session level and run the query. That might give different results.\n> \n> Dropping the other Indexes will not affect this query's performance. I recommended them to be dropped to avoid maintenance and performance overhead in future.\n> \n> Venkata Balaji N\n> \n> Sr. Database Administrator\n> Fujitsu Australia
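For reference, the log_temp_files threshold mentioned above is given in kilobytes; a sketch of enabling it (in postgresql.conf, or per session as a superuser):

-- log every temporary file larger than 1 MB (value in kB)
SET log_temp_files = 1024;
-- 0 logs all temp files; -1 (the default) disables this logging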
> \n> \n> On Wed, Mar 5, 2014 at 10:14 PM, acanada <[email protected]> wrote:\n> Hi,\n> \n> These are the parameters of the server:\n> \n> SELECT name, current_setting(name), source\n> FROM pg_settings\n> WHERE source NOT IN ('default', 'override');\n> \n> name | current_setting | source \n> ----------------------------+--------------------+----------------------\n> client_encoding | UTF8 | client\n> DateStyle | ISO, DMY | configuration file\n> default_statistics_target | 100 | configuration file\n> default_text_search_config | pg_catalog.spanish | configuration file\n> effective_cache_size | 7500MB | configuration file\n> lc_messages | es_ES.UTF-8 | configuration file\n> lc_monetary | es_ES.UTF-8 | configuration file\n> lc_numeric | C | configuration file\n> lc_time | es_ES.UTF-8 | configuration file\n> listen_addresses | * | configuration file\n> log_line_prefix | %t | configuration file\n> log_timezone | localtime | command line\n> maintenance_work_mem | 2000MB | configuration file\n> max_connections | 100 | configuration file\n> max_fsm_pages | 63217760 | configuration file\n> max_stack_depth | 2MB | environment variable\n> port | 5432 | configuration file\n> shared_buffers | 1500MB | configuration file\n> ssl | on | configuration file\n> tcp_keepalives_count | 9 | configuration file\n> tcp_keepalives_idle | 7200 | configuration file\n> tcp_keepalives_interval | 75 | configuration file\n> TimeZone | localtime | command line\n> timezone_abbreviations | Default | command line\n> work_mem | 50MB | configuration file\n> \n> \n> The server has two quad-core processors and 10GB of RAM, and the data is located on a 2TB fiber disk.\n> Changing the work_mem parameter seems to have no effect on performance.\n> \n> Now I have a curious situation. I created a new table, the one we are querying against. This table, entity_compounddict2document, has fewer rows, approx. 50M, vs. the original table entity2document2, which has 94M rows.\n> Well, after dropping the indexes that were not in use, both tables have the same performance with this query... 
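The "not in use" check can be done from the statistics views; a minimal sketch (idx_scan = 0 marks indexes that have never been used by a scan since the counters were last reset):

SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'entity_compounddict2document'
ORDER BY idx_scan;

-- e.g. dropping one of the 0-scan indexes listed earlier in the thread
DROP INDEX entity_compound2document_cardval;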
\n> \n> \n> explain analyze select * from entity_compounddict2document where name='Troglitazone' order by hepval;\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=11083.47..11090.54 rows=2828 width=133) (actual time=19708.019..19708.136 rows=1283 loops=1)\n> Sort Key: hepval\n> Sort Method: quicksort Memory: 238kB\n> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.87..10921.34 rows=2828 width=133) (actual time=44.292..19705.954 rows=1283 loops=1)\n> Recheck Cond: ((name)::text = 'Troglitazone'::text)\n> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.16 rows=2828 width=0) (actual time=28.159..28.159 rows=1283 loops=1)\n> Index Cond: ((name)::text = 'Troglitazone'::text)\n> Total runtime: 19708.275 ms\n> (8 rows)\n> \n> \n> explain analyze select * from entity2document2 where name='Troglitazone' order by hepval;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=18237.75..18249.38 rows=4653 width=123) (actual time=18945.732..18945.869 rows=1283 loops=1)\n> Sort Key: hepval\n> Sort Method: quicksort Memory: 238kB\n> -> Bitmap Heap Scan on entity2document2 (cost=117.37..17954.29 rows=4653 width=123) (actual time=41.703..18943.720 rows=1283 loops=1)\n> Recheck Cond: ((name)::text = 'Troglitazone'::text)\n> -> Bitmap Index Scan on entity2document2_name (cost=0.00..116.20 rows=4653 width=0) (actual time=28.703..28.703 rows=1283 loops=1)\n> Index Cond: ((name)::text = 'Troglitazone'::text)\n> Total runtime: 18945.991 ms\n> (8 rows)\n> \n> The descriptions of the tables are:\n> \n> limtox=> \\d+ entity_compounddict2document;\n> Table \"public.entity_compounddict2document\"\n> Column | Type | Modifiers | Storage | Description \n> ------------------+--------------------------------+-----------+----------+-------------\n> id | integer | not null | plain | \n> document_id | integer | | plain | \n> name | character varying(255) | | extended | \n> qualifier | character varying(255) | | extended | \n> tagMethod | character varying(255) | | extended | \n> created | timestamp(0) without time zone | | plain | \n> updated | timestamp(0) without time zone | | plain | \n> curation | integer | | plain | \n> hepval | double precision | | plain | \n> cardval | double precision | | plain | \n> nephval | double precision | | plain | \n> phosval | double precision | | plain | \n> patternCount | double precision | | plain | \n> ruleScore | double precision | | plain | \n> hepTermNormScore | double precision | | plain | \n> hepTermVarScore | double precision | | plain | \n> Indexes:\n> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n> \"entity_compound2document_name\" btree (name)\n> \"entity_compound2document_nephval\" btree (nephval)\n> Has OIDs: no\n> \n> \n> limtox=> \\d+ entity2document2;\n> Table \"public.entity2document2\"\n> Column | Type | Modifiers | Storage | Description \n> ------------------+--------------------------------+-----------+----------+-------------\n> id | integer | not null | plain | \n> document_id | integer | | plain | \n> name | character varying(255) | | extended | \n> qualifier | character varying(255) | | extended | \n> tagMethod | character varying(255) | | extended | \n> created | timestamp(0) without time zone | | plain | \n> updated | timestamp(0) without time zone | | plain | \n> curation | integer | | plain | \n> hepval | double precision | | plain | \n> cardval | double precision | | plain | \n> nephval | double precision | | plain | \n> phosval | double precision | | plain | \n> patternCount | double precision | | plain | \n> ruleScore | double precision | | plain | \n> hepTermNormScore | double precision | | plain | \n> hepTermVarScore | double precision | | plain | \n> Indexes:\n> \"entity2document2_pkey\" PRIMARY KEY, btree (id)\n> \"entity2document2_hepval_index\" btree (hepval)\n> \"entity2document2_name\" btree (name)\n> \"entity2document2_qualifier_name_hepval\" btree (qualifier, name)\n> \"entity2document_qualifier_index\" btree (qualifier)\n> Has OIDs: no\n> \n> \n> \n> I really appreciate your help!!\n> \n> Regards,\n> Andrés\n> \n> \n> \n> \n> On Mar 5, 2014, at 12:35 AM, Venkata Balaji Nagothi wrote:\n> \n>> After looking at the distinct values, yes, the composite Index on \"name\" and \"hepval\" is not recommended. That would make it worse - it's expected.\n>> \n>> We need to look for another possible workaround. Please drop the above Index. Let me see if I can drill further into this.\n>> \n>> Meanwhile - can you let us know the memory parameters (work_mem, temp_buffers etc.) that are set ?\n>> \n>> Do you have any other processes affecting this query's performance ?\n>> \n>> Any info about your Disk, RAM and CPU would also help.\n>> \n>> Regards,\n>> Venkata Balaji N\n>> \n>> Fujitsu Australia\n>> \n>> \n>> \n>> \n>> Venkata Balaji N\n>> \n>> Sr. Database Administrator\n>> Fujitsu Australia\n>> \n>> \n>> On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>> Hello,\n>> \n>> I don't know if this helps to figure out what is the problem, but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). Ten times worse...\n>> \n>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n>> QUERY PLAN \n>> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>> Sort Key: entity_compounddict2document.hepval\n>> Sort Method: quicksort Memory: 25622kB\n>> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n>> Index Cond: ((name)::text = 'progesterone'::text)\n>> Total runtime: 95811.838 ms\n>> (8 rows)\n>> \n>> Any ideas please?\n>> \n>> Thank you \n>> Andrés.\n>> \n>> \n>> \n>> On Mar 4, 2014, at 12:28 AM, Venkata Balaji Nagothi wrote:\n>> \n>>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>> Hello,\n>>> \n>>> Thank you for your answer.\n>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. 
The query only implies one table:\n>>> \n>>> x=> \\d+ entity_compounddict2document;\n>>> Table \"public.entity_compounddict2document\"\n>>> Column | Type | Modifiers | Storage | Description \n>>> ------------------+--------------------------------+-----------+----------+-------------\n>>> id | integer | not null | plain | \n>>> document_id | integer | | plain | \n>>> name | character varying(255) | | extended | \n>>> qualifier | character varying(255) | | extended | \n>>> tagMethod | character varying(255) | | extended | \n>>> created | timestamp(0) without time zone | | plain | \n>>> updated | timestamp(0) without time zone | | plain | \n>>> curation | integer | | plain | \n>>> hepval | double precision | | plain | \n>>> cardval | double precision | | plain | \n>>> nephval | double precision | | plain | \n>>> phosval | double precision | | plain | \n>>> patternCount | double precision | | plain | \n>>> ruleScore | double precision | | plain | \n>>> hepTermNormScore | double precision | | plain | \n>>> hepTermVarScore | double precision | | plain | \n>>> Indexes:\n>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>> \"entity_compound2document_cardval\" btree (cardval)\n>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>> \"entity_compound2document_hepval\" btree (hepval)\n>>> \"entity_compound2document_name\" btree (name)\n>>> \"entity_compound2document_nephval\" btree (nephval)\n>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>> \"entity_compound2document_phosval\" btree (phosval)\n>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>> Has OIDs: no\n>>> \n>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> \n>>> The table has approx. 54,000,000 rows\n>>> There are no NULLs in the hepval field and pg_settings haven't changed. I also have done \"analyze\" to this table.\n>>> \n>>> I have simplified the query and added the last advice that you told me:\n>>> \n>>> Query: \n>>> \n>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n>>> QUERY PLAN \n>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>> Sort Key: entity_compounddict2document.hepval\n>>> Sort Method: quicksort Memory: 2301kB\n>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>> Total runtime: 32717.548 ms\n>>> \n>>> Another query:\n>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n>>> \n>>> QUERY PLAN \n>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>> Sort Key: entity_compounddict2document.hepval\n>>> Sort Method: quicksort Memory: 25622kB\n>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>> Total runtime: 9296.815 ms\n>>> \n>>> \n>>> It has improved (I suppose because of the lack of the join table) but it is still taking a lot of time... Anything I can do??\n>>> \n>>> Any help would be very much appreciated. Thank you very much.\n>>> \n>>> \n>>> Good to know performance has increased.\n>>> \n>>> Does the \"entity_compounddict2document\" table go through high INSERTS ?\n>>> \n>>> Can you help us know if the \"hepval\" column and \"name\" column have high duplicate values ? The \"n_distinct\" value from the pg_stats table would have that info. \n>>> \n>>> Below could be a possible workaround -\n>>> \n>>> As mentioned earlier in this email, a composite Index on the name and hepval columns might help. If the table does not go through a lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n>>> \n>>> Other recommendations -\n>>> \n>>> Please drop all the non-primary-key Indexes which have 0 scans / hits. Unused indexes only add overhead to the DB and the DB server during maintenance and DML operations.\n>>> \n>>> Regards,\n>>> Venkata Balaji N\n>>> \n>>> Fujitsu Australia
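For reference, the CLUSTER workaround recommended above would look roughly like the sketch below; the composite index name is hypothetical, and CLUSTER takes an exclusive lock and rewrites the whole table, so it only suits tables with few INSERTs:

CREATE INDEX entity_compound2document_name_hepval
    ON entity_compounddict2document (name, hepval);
-- physically reorder the table by that index (exclusive lock, full rewrite)
CLUSTER entity_compounddict2document USING entity_compound2document_name_hepval;
-- refresh planner statistics afterwards
ANALYZE entity_compounddict2document;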
",
    "msg_date": "Thu, 6 Mar 2014 11:47:41 +0100",
    "msg_from": "\"acanada\" <[email protected]>",
    "msg_from_op": true,
    "msg_subject": "Re: Query taking long time"
  },
  {
    "msg_contents": "On 05/Mar/2014 00:36, \"Venkata Balaji Nagothi\" <[email protected]> wrote:\n>\n> After looking at the distinct values, yes, the composite Index on \"name\"\n> and \"hepval\" is not recommended. That would make it worse - it's expected.\n>\n> We need to look for another possible workaround. Please drop the above\n> Index. Let me see if I can drill further into this.\n>\n> Meanwhile - can you let us know the memory parameters (work_mem,\n> temp_buffers etc.) that are set ?\n>\n> Do you have any other processes affecting this query's performance ?\n>\n> Any info about your Disk, RAM and CPU would also help.\n>\n> Regards,\n> Venkata Balaji N\n>\n> Fujitsu Australia\n>\n>\n>\n>\n> Venkata Balaji N\n>\n> Sr. Database Administrator\n> Fujitsu Australia\n>\n>\n> On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>>\n>> Hello,\n>>\n>> I don't know if this helps to figure out what is the problem, but after\n>> adding the multicolumn index on name and hepval, the performance is even\n>> worse (¿?). 
Ten times worse...\n>>\n>> explain analyze select * from (select * from\n>> entity_compounddict2document where name='progesterone') as a order by\n>> a.hepval;\n>>\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual\n>> time=95769.674..95797.943 rows=138165 loops=1)\n>> Sort Key: entity_compounddict2document.hepval\n>> Sort Method: quicksort Memory: 25622kB\n>> -> Bitmap Heap Scan on entity_compounddict2document\n>> (cost=3501.01..408999.90 rows=159104 width=133) (actual\n>> time=70.789..95519.258 rows=138165 loops=1)\n>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>> -> Bitmap Index Scan on entity_compound2document_name\n>> (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174\n>> rows=138165 loops=1)\n>> Index Cond: ((name)::text = 'progesterone'::text)\n>> Total runtime: 95811.838 ms\n>> (8 rows)\n>>\n>> Any ideas please?\n>>\n>> Thank you\n>> Andrés.\n>>\n>>\n>>\n>> On Mar 4, 2014, at 12:28 AM, Venkata Balaji Nagothi wrote:\n>>\n>>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>>>\n>>>> Hello,\n>>>>\n>>>> Thank you for your answer.\n>>>> I have made more changes than a simple re-indexing recently. I have\n>>>> moved the sorting field to the table in order to avoid the join clause. Now\n>>>> the schema is very simple. The query only implies one table:\n>>>>\n>>>> x=> \\d+ entity_compounddict2document;\n>>>> Table \"public.entity_compounddict2document\"\n>>>> Column | Type | Modifiers | Storage | Description \n>>>> ------------------+--------------------------------+-----------+----------+-------------\n>>>> id | integer | not null | plain | \n>>>> document_id | integer | | plain | \n>>>> name | character varying(255) | | extended | \n>>>> qualifier | character varying(255) | | extended | \n>>>> tagMethod | character varying(255) | | extended | \n>>>> created | timestamp(0) without time zone | | plain | \n>>>> updated | timestamp(0) without time zone | | plain | \n>>>> curation | integer | | plain | \n>>>> hepval | double precision | | plain | \n>>>> cardval | double precision | | plain | \n>>>> nephval | double precision | | plain | \n>>>> phosval | double precision | | plain | \n>>>> patternCount | double precision | | plain | \n>>>> ruleScore | double precision | | plain | \n>>>> hepTermNormScore | double precision | | plain | \n>>>> hepTermVarScore | double precision | | plain | \n>>>> Indexes:\n>>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>>> \"entity_compound2document_cardval\" btree (cardval)\n>>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>> \"entity_compound2document_hepval\" btree (hepval)\n>>>> \"entity_compound2document_name\" btree (name)\n>>>> \"entity_compound2document_nephval\" btree (nephval)\n>>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>> \"entity_compound2document_phosval\" btree (phosval)\n>>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>> Has OIDs: no\n>>>>\n>>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched\n>>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>\n>>>> The table has approx. 54,000,000 rows\n>>>> There are no NULLs in the hepval field and pg_settings haven't changed. I\n>>>> also have done \"analyze\" to this table.\n>>>>\n>>>> I have simplified the query and added the last advice that you told me:\n>>>>\n>>>> Query:\n>>>>\n>>>> explain analyze select * from (select * from\n>>>> entity_compounddict2document where name='ranitidine') as a order by\n>>>> a.hepval;\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual\n>>>> time=32715.097..32716.488 rows=13512 loops=1)\n>>>> Sort Key: entity_compounddict2document.hepval\n>>>> Sort Method: quicksort Memory: 2301kB\n>>>> -> Bitmap Heap Scan on entity_compounddict2document\n>>>> (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483\n>>>> rows=13512 loops=1)\n>>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>> -> Bitmap Index Scan on entity_compound2document_name\n>>>> (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512\n>>>> loops=1)\n>>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>>> Total runtime: 32717.548 ms\n>>>>\n>>>> Another query:\n>>>> explain analyze select * from (select * from\n>>>> entity_compounddict2document where name='progesterone' ) as a order by\n>>>> a.hepval;\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual\n>>>> time=9262.887..9287.046 rows=138165 loops=1)\n>>>> Sort Key: entity_compounddict2document.hepval\n>>>> Sort Method: quicksort Memory: 25622kB\n>>>> -> Bitmap Heap Scan on entity_compounddict2document\n>>>> (cost=2906.93..356652.81 rows=131997 width=133) (actual\n>>>> time=76.316..9038.485 rows=138165 loops=1)\n>>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>> -> Bitmap Index Scan on entity_compound2document_name\n>>>> (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913\n>>>> rows=138165 loops=1)\n>>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>> Total runtime: 9296.815 ms\n>>>>\n>>>>\n>>>> It has improved (I suppose because of the lack of the join table) but\n>>>> it is still taking a lot of time... Anything I can do??\n>>>>\n>>>> Any help would be very much appreciated. 
Thank you very much.\n>>>\n>>>\n>>>\n>>> Good to know performance has increased.\n>>>\n>>> Does the "entity_compounddict2document" table go through heavy INSERT activity ?\n>>>\n>>> Can you help us know if the "hepval" column and "name" column have high duplicate values ? The "n_distinct" value from the pg_stats table would have that info.\n>>>\n>>> Below could be a possible workaround -\n>>>\n>>> As mentioned earlier in this email, a composite index on the name and hepval columns might help. If the table does not go through a lot of INSERTs, then consider performing a CLUSTER on the table using the same index.\n>>>\n>>> Other recommendations -\n>>>\n>>> Please drop all the non-primary-key indexes which have 0 scans / hits. They only harm the DB and the DB server during maintenance and DML operations.\n>>>\n>>> Regards,\n>>> Venkata Balaji N\n>>>\n>>> Fujitsu Australia\n>>\n>\n\n\nHi, I think the problem is the heap scan of the table, which the backend has to do because the btree-to-bitmap conversion becomes lossy. Try disabling enable_bitmapscan for the current session and rerun the query.
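\n\nA minimal sketch of that test, reusing the progesterone query from your earlier mail (just an assumption on my side that you can run it in one session; EXPLAIN (ANALYZE, BUFFERS) needs 9.0 or later, and the buffers counters will show how much is coming from disk):\n\nSET enable_bitmapscan = off;\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT * FROM entity_compounddict2document\nWHERE name = 'progesterone'\nORDER BY hepval;\nRESET enable_bitmapscan;  -- restore the default planner setting afterwards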
\n\nMat Dba
(\"hepTermNormScore\")\n>>>>     \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>>     \"entity_compound2document_hepval\" btree (hepval)\n>>>>     \"entity_compound2document_name\" btree (name)\n>>>>     \"entity_compound2document_nephval\" btree (nephval)\n>>>>     \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>>     \"entity_compound2document_phosval\" btree (phosval)\n>>>>     \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>> Has OIDs: no\n>>>>\n>>>>            tablename            |                   indexname                                              |  num_rows    | table_size  | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n\n>>>>  entity_compounddict2document   | entity_compound2document_cardval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_heptermnormscore      | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_heptermvarscore       | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_hepval                | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_name                  | 5.42452e+07 | 6763 MB    | 1505 MB    | Y      |              24 |      178680 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_nephval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_patterncount          | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_phosval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_rulescore             | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compounddict2document_pkey              | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>\n>>>> The table has aprox. 54,000,000 rows\n>>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>>>>\n>>>> I have simplified the query and added the last advise that you told me:\n>>>>\n>>>> Query: \n>>>>\n>>>>  explain analyze select * from (select * from entity_compounddict2document  where name='ranitidine') as a order by a.hepval;\n>>>>                                                                       QUERY PLAN                                                                      \n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>  Sort  (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>>>    Sort Key: entity_compounddict2document.hepval\n>>>>    Sort Method:  quicksort  Memory: 2301kB\n>>>>    ->  Bitmap Heap Scan on entity_compounddict2document  (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>>>          Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>>          ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>>>                Index Cond: ((name)::text = 'ranitidine'::text)\n>>>>  Total runtime: 32717.548 ms\n>>>>\n>>>> Another query:\n>>>> explain analyze select * from (select * from entity_compounddict2document  where name='progesterone' ) as a  order by a.hepval;\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>  Sort  (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>>>    Sort Key: entity_compounddict2document.hepval\n>>>>    Sort Method:  quicksort  Memory: 25622kB\n>>>>    ->  Bitmap Heap Scan on entity_compounddict2document  (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>>>          Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>          ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>>>                Index Cond: ((name)::text = 'progesterone'::text)\n>>>>  Total runtime: 9296.815 ms\n>>>>\n>>>>\n>>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>>>\n>>>> Any help would be very appreciated. Thank you very much.\n>>>\n>>>\n>>>\n>>> Good to know performance has increased.\n>>>\n>>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>>\n>>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>>\n>>> Below could be a possible workaround -\n>>>\n>>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n\n>>>\n>>> Other recommendations -\n>>>\n>>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. 
", "msg_date": "Thu, 6 Mar 2014 14:11:32 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "Hello Mat,\n\nSetting enable_bitmapscan to off doesn't really help. It gets worse...\n\nx=> SET enable_bitmapscan=off; \nSET\nx=> explain analyze select * from (select * from entity2document2 where name='ranitidine' ) as a order by a.hepval;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=18789.21..18800.70 rows=4595 width=131) (actual time=79965.282..79966.657 rows=13512 loops=1)\n Sort Key: entity2document2.hepval\n Sort Method: quicksort Memory: 2301kB\n -> Index Scan using entity2document2_name on entity2document2 (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362 rows=13512 loops=1)\n Index Cond: ((name)::text = 'ranitidine'::text)\n Total runtime: 79967.705 ms\n(6 rows)\n\nAny other ideas? \n\nThank you very much for your help. Regards,\nAndrés\n\nEl Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n\n> \n> Il 05/mar/2014 00:36 "Venkata Balaji Nagothi" <[email protected]> ha scritto:\n> >\n> > After looking at the distinct values, yes the composite index on "name" and "hepval" is not recommended. That would worsen - it's expected.\n> >\n> > We need to look for other possible workarounds. Please drop the above index. Let me see if I can drill further into this.\n> >\n> > Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc.) set ?\n> >\n> > Do you have any other processes affecting this query's performance ?\n> >\n> > Any info about your disk, RAM, CPU would also help.\n> >\n> > Regards,\n> > Venkata Balaji N\n> >\n> > Fujitsu Australia\n> >\n> >\n> >\n> >\n> > Venkata Balaji N\n> >\n> > Sr. Database Administrator\n> > Fujitsu Australia\n> >\n> >\n> > On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n> >>\n> >> Hello,\n> >>\n> >> I don't know if this helps to figure out what the problem is, but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). 
Ten times worse...\n> >>\n> >> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n> >> QUERY PLAN \n> >> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n> >> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n> >> Sort Key: entity_compounddict2document.hepval\n> >> Sort Method: quicksort Memory: 25622kB\n> >> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n> >> Recheck Cond: ((name)::text = 'progesterone'::text)\n> >> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n> >> Index Cond: ((name)::text = 'progesterone'::text)\n> >> Total runtime: 95811.838 ms\n> >> (8 rows)\n> >>\n> >> Any ideas please?\n> >>\n> >> Thank you \n> >> Andrés.\n> >>\n> >>\n> >>\n> >> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n> >>\n> >>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n> >>>>\n> >>>> Hello,\n> >>>>\n> >>>> Thankyou for your answer.\n> >>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. The query only implies one table:\n> >>>>\n> >>>> x=> \\d+ entity_compounddict2document;\n> >>>> Table \"public.entity_compounddict2document\"\n> >>>> Column | Type | Modifiers | Storage | Description \n> >>>> ------------------+--------------------------------+-----------+----------+-------------\n> >>>> id | integer | not null | plain | \n> >>>> document_id | integer | | plain | \n> >>>> name | character varying(255) | | extended | \n> >>>> qualifier | character varying(255) | | extended | \n> >>>> tagMethod | character varying(255) | | extended | \n> >>>> created | timestamp(0) without time zone | | plain | \n> >>>> updated | timestamp(0) without time zone | | plain | \n> >>>> curation | integer | | plain | \n> >>>> hepval | double precision | | plain | \n> >>>> cardval | double precision | | plain | \n> >>>> nephval | double precision | | plain | \n> >>>> phosval | double precision | | plain | \n> >>>> patternCount | double precision | | plain | \n> >>>> ruleScore | double precision | | plain | \n> >>>> hepTermNormScore | double precision | | plain | \n> >>>> hepTermVarScore | double precision | | plain | \n> >>>> Indexes:\n> >>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n> >>>> \"entity_compound2document_cardval\" btree (cardval)\n> >>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n> >>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n> >>>> \"entity_compound2document_hepval\" btree (hepval)\n> >>>> \"entity_compound2document_name\" btree (name)\n> >>>> \"entity_compound2document_nephval\" btree (nephval)\n> >>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n> >>>> \"entity_compound2document_phosval\" btree (phosval)\n> >>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n> >>>> Has OIDs: no\n> >>>>\n> >>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n> >>>> entity_compounddict2document | entity_compound2document_cardval | 
5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n> >>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n> >>>>\n> >>>> The table has aprox. 54,000,000 rows\n> >>>> There are no NULLs in hepval field and pg_settings haven't changed. I also have done \"analyze\" to this table.\n> >>>>\n> >>>> I have simplified the query and added the last advise that you told me:\n> >>>>\n> >>>> Query: \n> >>>>\n> >>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n> >>>> QUERY PLAN \n> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> >>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n> >>>> Sort Key: entity_compounddict2document.hepval\n> >>>> Sort Method: quicksort Memory: 2301kB\n> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n> >>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n> >>>> Index Cond: ((name)::text = 'ranitidine'::text)\n> >>>> Total runtime: 32717.548 ms\n> >>>>\n> >>>> Another query:\n> >>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n> >>>>\n> >>>> QUERY PLAN\n> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> >>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n> >>>> Sort Key: entity_compounddict2document.hepval\n> >>>> Sort Method: quicksort Memory: 25622kB\n> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n> >>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n> >>>> Index Cond: ((name)::text = 'progesterone'::text)\n> >>>> Total runtime: 9296.815 ms\n> >>>>\n> >>>>\n> >>>> It has improved (I supose because of the lack of the 
join table) but still taking a lot of time... Anything I can do??\n> >>>>\n> >>>> Any help would be greatly appreciated. Thank you very much.\n> >>>\n> >>>\n> >>>\n> >>> Good to know performance has increased.\n> >>>\n> >>> Does the "entity_compounddict2document" table go through heavy INSERT activity ?\n> >>>\n> >>> Can you help us know if the "hepval" column and "name" column have high duplicate values ? The "n_distinct" value from the pg_stats table would have that info. \n> >>>\n> >>> Below could be a possible workaround -\n> >>>\n> >>> As mentioned earlier in this email, a composite index on the name and hepval columns might help. If the table does not go through a lot of INSERTs, then consider performing a CLUSTER on the table using the same index.\n> >>>\n> >>> Other recommendations -\n> >>>\n> >>> Please drop all the non-primary-key indexes which have 0 scans / hits. They only harm the DB and the DB server during maintenance and DML operations.\n> >>>\n> >>> Regards,\n> >>> Venkata Balaji N\n> >>>\n> >>> Fujitsu Australia\n> >>\n> >\n> \n> \n> \n> Hi, I think the problem is the heap scan of the table, which the backend has to do because the btree-to-bitmap conversion becomes lossy. Try disabling enable_bitmapscan for the current session and rerun the query.\n> \n> Mat Dba\n> 
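\n\nMeanwhile, this is how I plan to check the duplication you asked about; just a sketch, using the names from this thread (pg_stats only reflects the last ANALYZE, so the numbers are estimates):\n\nSELECT attname, n_distinct, null_frac\nFROM pg_stats\nWHERE schemaname = 'public'\n  AND tablename = 'entity_compounddict2document'\n  AND attname IN ('name', 'hepval');\n-- a negative n_distinct is a fraction of the row count, not an absolute value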
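\nIf the composite-index-plus-CLUSTER route is ever retried, I understand it would look roughly like this (the index name here is made up; CLUSTER takes an exclusive lock and rewrites all 6763 MB, so it needs a maintenance window):\n\nCREATE INDEX entity_compound2document_name_hepval\n    ON entity_compounddict2document (name, hepval);\nCLUSTER entity_compounddict2document\n    USING entity_compound2document_name_hepval;\nANALYZE entity_compounddict2document;  -- refresh planner statistics after the rewrite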
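\nFor the unused indexes, I would first confirm from the statistics views instead of guessing (idx_scan can have been reset at some point, so a zero deserves a second look):\n\nSELECT indexrelname, idx_scan, idx_tup_read\nFROM pg_stat_user_indexes\nWHERE relname = 'entity_compounddict2document'\n  AND idx_scan = 0;\n-- and then, for each index that is really unused, something like:\n-- DROP INDEX entity_compound2document_cardval;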
(\"hepTermNormScore\")\n>>>>     \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>>     \"entity_compound2document_hepval\" btree (hepval)\n>>>>     \"entity_compound2document_name\" btree (name)\n>>>>     \"entity_compound2document_nephval\" btree (nephval)\n>>>>     \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>>     \"entity_compound2document_phosval\" btree (phosval)\n>>>>     \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>> Has OIDs: no\n>>>>\n>>>>            tablename            |                   indexname                                              |  num_rows    | table_size  | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n\n>>>>  entity_compounddict2document   | entity_compound2document_cardval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_heptermnormscore      | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_heptermvarscore       | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_hepval                | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_name                  | 5.42452e+07 | 6763 MB    | 1505 MB    | Y      |              24 |      178680 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_nephval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_patterncount          | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_phosval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_rulescore             | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compounddict2document_pkey              | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>\n>>>> The table has aprox. 54,000,000 rows\n>>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>>>>\n>>>> I have simplified the query and added the last advise that you told me:\n>>>>\n>>>> Query: \n>>>>\n>>>>  explain analyze select * from (select * from entity_compounddict2document  where name='ranitidine') as a order by a.hepval;\n>>>>                                                                       QUERY PLAN                                                                      \n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>  Sort  (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>>>    Sort Key: entity_compounddict2document.hepval\n>>>>    Sort Method:  quicksort  Memory: 2301kB\n>>>>    ->  Bitmap Heap Scan on entity_compounddict2document  (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>>>          Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>>          ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>>>                Index Cond: ((name)::text = 'ranitidine'::text)\n>>>>  Total runtime: 32717.548 ms\n>>>>\n>>>> Another query:\n>>>> explain analyze select * from (select * from entity_compounddict2document  where name='progesterone' ) as a  order by a.hepval;\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>  Sort  (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>>>    Sort Key: entity_compounddict2document.hepval\n>>>>    Sort Method:  quicksort  Memory: 25622kB\n>>>>    ->  Bitmap Heap Scan on entity_compounddict2document  (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>>>          Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>          ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>>>                Index Cond: ((name)::text = 'progesterone'::text)\n>>>>  Total runtime: 9296.815 ms\n>>>>\n>>>>\n>>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>>>\n>>>> Any help would be very appreciated. Thank you very much.\n>>>\n>>>\n>>>\n>>> Good to know performance has increased.\n>>>\n>>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>>\n>>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>>\n>>> Below could be a possible workaround -\n>>>\n>>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n\n>>>\n>>> Other recommendations -\n>>>\n>>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. 
", "msg_date": "Thu, 6 Mar 2014 15:45:14 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "\n> Hello Mat,\n> \n> Setting enable_bitmapscan to off doesn't really help. It gets worse...\n> \n> x=> SET enable_bitmapscan=off; \n> SET\n> x=> explain analyze select * from (select * from entity2document2 where name='ranitidine' ) as a order by a.hepval;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=18789.21..18800.70 rows=4595 width=131) (actual time=79965.282..79966.657 rows=13512 loops=1)\n> Sort Key: entity2document2.hepval\n> Sort Method: quicksort Memory: 2301kB\n> -> Index Scan using entity2document2_name on entity2document2 (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362 rows=13512 loops=1)\n> Index Cond: ((name)::text = 'ranitidine'::text)\n> Total runtime: 79967.705 ms\n> (6 rows)\n> \n> Any other ideas? \n> \n\nPlease post your hardware configuration. I think your data is being read from disk, and your disks are slow.\n\n\n\n> Thank you very much for your help. Regards,\n> Andrés\n> \n> El Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n> \n>> \n>> Il 05/mar/2014 00:36 "Venkata Balaji Nagothi" <[email protected]> ha scritto:\n>> >\n>> > After looking at the distinct values, yes the composite index on "name" and "hepval" is not recommended. 
That would worsen - its expected.\n>> >\n>> > We need to look for other possible work around. Please drop off the above Index. Let me see if i can drill further into this.\n>> >\n>> > Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?\n>> >\n>> > Do you have any other processes effecting this query's performance ?\n>> >\n>> > Any info about your Disk, RAM, CPU would also help.\n>> >\n>> > Regards,\n>> > Venkata Balaji N\n>> >\n>> > Fujitsu Australia\n>> >\n>> >\n>> >\n>> >\n>> > Venkata Balaji N\n>> >\n>> > Sr. Database Administrator\n>> > Fujitsu Australia\n>> >\n>> >\n>> > On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>> >>\n>> >> Hello,\n>> >>\n>> >> I don't know if this helps to figure out what is the problem but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). Ten times worse...\n>> >>\n>> >> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n>> >> QUERY PLAN \n>> >> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> >> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>> >> Sort Key: entity_compounddict2document.hepval\n>> >> Sort Method: quicksort Memory: 25622kB\n>> >> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>> >> Recheck Cond: ((name)::text = 'progesterone'::text)\n>> >> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n>> >> Index Cond: ((name)::text = 'progesterone'::text)\n>> >> Total runtime: 95811.838 ms\n>> >> (8 rows)\n>> >>\n>> >> Any ideas please?\n>> >>\n>> >> Thank you \n>> >> Andrés.\n>> >>\n>> >>\n>> >>\n>> >> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n>> >>\n>> >>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>> >>>>\n>> >>>> Hello,\n>> >>>>\n>> >>>> Thankyou for your answer.\n>> >>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. 
The query only implies one table:\n>> >>>>\n>> >>>> x=> \\d+ entity_compounddict2document;\n>> >>>> Table \"public.entity_compounddict2document\"\n>> >>>> Column | Type | Modifiers | Storage | Description \n>> >>>> ------------------+--------------------------------+-----------+----------+-------------\n>> >>>> id | integer | not null | plain | \n>> >>>> document_id | integer | | plain | \n>> >>>> name | character varying(255) | | extended | \n>> >>>> qualifier | character varying(255) | | extended | \n>> >>>> tagMethod | character varying(255) | | extended | \n>> >>>> created | timestamp(0) without time zone | | plain | \n>> >>>> updated | timestamp(0) without time zone | | plain | \n>> >>>> curation | integer | | plain | \n>> >>>> hepval | double precision | | plain | \n>> >>>> cardval | double precision | | plain | \n>> >>>> nephval | double precision | | plain | \n>> >>>> phosval | double precision | | plain | \n>> >>>> patternCount | double precision | | plain | \n>> >>>> ruleScore | double precision | | plain | \n>> >>>> hepTermNormScore | double precision | | plain | \n>> >>>> hepTermVarScore | double precision | | plain | \n>> >>>> Indexes:\n>> >>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>> >>>> \"entity_compound2document_cardval\" btree (cardval)\n>> >>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>> >>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>> >>>> \"entity_compound2document_hepval\" btree (hepval)\n>> >>>> \"entity_compound2document_name\" btree (name)\n>> >>>> \"entity_compound2document_nephval\" btree (nephval)\n>> >>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>> >>>> \"entity_compound2document_phosval\" btree (phosval)\n>> >>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>> >>>> Has OIDs: no\n>> >>>>\n>> >>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n>> >>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>> >>>>\n>> >>>> The table has aprox. 54,000,000 rows\n>> >>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>> >>>>\n>> >>>> I have simplified the query and added the last advise that you told me:\n>> >>>>\n>> >>>> Query: \n>> >>>>\n>> >>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n>> >>>> QUERY PLAN \n>> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>> >>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>> >>>> Sort Key: entity_compounddict2document.hepval\n>> >>>> Sort Method: quicksort Memory: 2301kB\n>> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>> >>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>> >>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>> >>>> Total runtime: 32717.548 ms\n>> >>>>\n>> >>>> Another query:\n>> >>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n>> >>>>\n>> >>>> QUERY PLAN\n>> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> >>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>> >>>> Sort Key: entity_compounddict2document.hepval\n>> >>>> Sort Method: quicksort Memory: 25622kB\n>> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>> >>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>> >>>> Index Cond: ((name)::text = 'progesterone'::text)\n>> >>>> Total runtime: 9296.815 ms\n>> >>>>\n>> >>>>\n>> >>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>> >>>>\n>> >>>> Any help would be very appreciated. Thank you very much.\n>> >>>\n>> >>>\n>> >>>\n>> >>> Good to know performance has increased.\n>> >>>\n>> >>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>> >>>\n>> >>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>> >>>\n>> >>> Below could be a possible workaround -\n>> >>>\n>> >>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n>> >>>\n>> >>> Other recommendations -\n>> >>>\n>> >>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. This would harm the DB and the DB server whilst maintenance and DML operations.\n>> >>>\n>> >>> Regards,\n>> >>> Venkata Balaji N\n>> >>>\n>> >>> Fujitsu Australia\n>> >>\n>> >>\n>> >>\n>> >> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. 
Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>> >>\n>> >> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n>> >>\n>> >\n>> \n>> \n>> \n>> Hi, I think the problem is the heap scan of the table, which the backend has to do because the btree-to-bitmap conversion becomes lossy. Try disabling enable_bitmapscan for the current session and rerun the query.\n>> \n>> Mat Dba\n>> \n> \n> \n> \n> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n> \n> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n> \n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 7 Mar 2014 12:39:31 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "\nEl Mar 7, 2014, a las 10:39 AM, Evgeniy Shishkin escribió:\n\n> \n>> Hello Mat,\n>> \n>> Setting enable_bitmapscan to off doesn't really help. It gets worse...\n>> \n>> x=> SET enable_bitmapscan=off; \n>> SET\n>> x=> explain analyze select * from (select * from entity2document2 where name='ranitidine' ) as a order by a.hepval;\n>> QUERY PLAN \n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=18789.21..18800.70 rows=4595 width=131) (actual time=79965.282..79966.657 rows=13512 loops=1)\n>> Sort Key: entity2document2.hepval\n>> Sort Method: quicksort Memory: 2301kB\n>> -> Index Scan using entity2document2_name on entity2document2 (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362 rows=13512 loops=1)\n>> Index Cond: ((name)::text = 'ranitidine'::text)\n>> Total runtime: 79967.705 ms\n>> (6 rows)\n>> \n>> Any other ideas? \n>> \n> \n> Please post your hardware configuration. I think your data is being read from disk, and your disks are slow.\n\nThe server has two quad-core processors and 10 GB of RAM, and the data sits on a 2 TB fibre-attached disk. It doesn't seem to be the problem... \n\nThank you\n\nAndrés\n\n> \n> \n> \n>> Thank you very much for your help. 
Regards,\n>> Andrés\n>> \n>> El Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n>> \n>>> \n>>> Il 05/mar/2014 00:36 \"Venkata Balaji Nagothi\" <[email protected]> ha scritto:\n>>>> \n>>>> After looking at the distinct values, yes the composite Index on \"name\" and \"hepval\" is not recommended. That would worsen - its expected.\n>>>> \n>>>> We need to look for other possible work around. Please drop off the above Index. Let me see if i can drill further into this.\n>>>> \n>>>> Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?\n>>>> \n>>>> Do you have any other processes effecting this query's performance ?\n>>>> \n>>>> Any info about your Disk, RAM, CPU would also help.\n>>>> \n>>>> Regards,\n>>>> Venkata Balaji N\n>>>> \n>>>> Fujitsu Australia\n>>>> \n>>>> \n>>>> \n>>>> \n>>>> Venkata Balaji N\n>>>> \n>>>> Sr. Database Administrator\n>>>> Fujitsu Australia\n>>>> \n>>>> \n>>>> On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>>>>> \n>>>>> Hello,\n>>>>> \n>>>>> I don't know if this helps to figure out what is the problem but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). Ten times worse...\n>>>>> \n>>>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n>>>>> QUERY PLAN \n>>>>> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>> Sort Method: quicksort Memory: 25622kB\n>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>>>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n>>>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>>> Total runtime: 95811.838 ms\n>>>>> (8 rows)\n>>>>> \n>>>>> Any ideas please?\n>>>>> \n>>>>> Thank you \n>>>>> Andrés.\n>>>>> \n>>>>> \n>>>>> \n>>>>> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n>>>>> \n>>>>>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>>>>>> \n>>>>>>> Hello,\n>>>>>>> \n>>>>>>> Thankyou for your answer.\n>>>>>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. 
The query only implies one table:\n>>>>>>> \n>>>>>>> x=> \\d+ entity_compounddict2document;\n>>>>>>> Table \"public.entity_compounddict2document\"\n>>>>>>> Column | Type | Modifiers | Storage | Description \n>>>>>>> ------------------+--------------------------------+-----------+----------+-------------\n>>>>>>> id | integer | not null | plain | \n>>>>>>> document_id | integer | | plain | \n>>>>>>> name | character varying(255) | | extended | \n>>>>>>> qualifier | character varying(255) | | extended | \n>>>>>>> tagMethod | character varying(255) | | extended | \n>>>>>>> created | timestamp(0) without time zone | | plain | \n>>>>>>> updated | timestamp(0) without time zone | | plain | \n>>>>>>> curation | integer | | plain | \n>>>>>>> hepval | double precision | | plain | \n>>>>>>> cardval | double precision | | plain | \n>>>>>>> nephval | double precision | | plain | \n>>>>>>> phosval | double precision | | plain | \n>>>>>>> patternCount | double precision | | plain | \n>>>>>>> ruleScore | double precision | | plain | \n>>>>>>> hepTermNormScore | double precision | | plain | \n>>>>>>> hepTermVarScore | double precision | | plain | \n>>>>>>> Indexes:\n>>>>>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>>>>>> \"entity_compound2document_cardval\" btree (cardval)\n>>>>>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>>>>>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>>>>> \"entity_compound2document_hepval\" btree (hepval)\n>>>>>>> \"entity_compound2document_name\" btree (name)\n>>>>>>> \"entity_compound2document_nephval\" btree (nephval)\n>>>>>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>>>>> \"entity_compound2document_phosval\" btree (phosval)\n>>>>>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>>>>> Has OIDs: no\n>>>>>>> \n>>>>>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n>>>>>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>> \n>>>>>>> The table has aprox. 54,000,000 rows\n>>>>>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>>>>>>> \n>>>>>>> I have simplified the query and added the last advise that you told me:\n>>>>>>> \n>>>>>>> Query: \n>>>>>>> \n>>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n>>>>>>> QUERY PLAN \n>>>>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>>> Sort Method: quicksort Memory: 2301kB\n>>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>>>>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>>>>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>>>>>> Total runtime: 32717.548 ms\n>>>>>>> \n>>>>>>> Another query:\n>>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n>>>>>>> \n>>>>>>> QUERY PLAN\n>>>>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>>> Sort Method: quicksort Memory: 25622kB\n>>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>>>>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>>>>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>>>>> Total runtime: 9296.815 ms\n>>>>>>> \n>>>>>>> \n>>>>>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>>>>>> \n>>>>>>> Any help would be very appreciated. Thank you very much.\n>>>>>> \n>>>>>> \n>>>>>> \n>>>>>> Good to know performance has increased.\n>>>>>> \n>>>>>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>>>>> \n>>>>>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>>>>> \n>>>>>> Below could be a possible workaround -\n>>>>>> \n>>>>>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n>>>>>> \n>>>>>> Other recommendations -\n>>>>>> \n>>>>>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. This would harm the DB and the DB server whilst maintenance and DML operations.\n>>>>>> \n>>>>>> Regards,\n>>>>>> Venkata Balaji N\n>>>>>> \n>>>>>> Fujitsu Australia\n>>>>> \n>>>>> \n>>>>> \n>>>>> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. 
Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>>>>> \n>>>> \n>>> \n>>> Hi, I think the problem is the heap scan of the table, which the backend has to do because the btree to bitmap conversion becomes lossy. Try to disable enable_bitmapscan for the current session and rerun the query.\n>>> \n>>> Mat Dba\n>>> \n>> \n> \n", "msg_date": "Fri, 7 Mar 2014 10:46:38 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "\nOn 07 Mar 2014, at 12:46, acanada <[email protected]> wrote:\n\n> \n> El Mar 7, 2014, a las 10:39 AM, Evgeniy Shishkin escribió:\n> \n>> \n>>> Hello Mat,\n>>> \n>>> Setting enable_bitmapscan to off doesn't really help. 
It gets worse...\n>>> \n>>> x=> SET enable_bitmapscan=off; \n>>> SET\n>>> x=> explain analyze select * from (select * from entity2document2 where name='ranitidine' ) as a order by a.hepval;\n>>> QUERY PLAN \n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Sort (cost=18789.21..18800.70 rows=4595 width=131) (actual time=79965.282..79966.657 rows=13512 loops=1)\n>>> Sort Key: entity2document2.hepval\n>>> Sort Method: quicksort Memory: 2301kB\n>>> -> Index Scan using entity2document2_name on entity2document2 (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362 rows=13512 loops=1)\n>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>> Total runtime: 79967.705 ms\n>>> (6 rows)\n>>> \n>>> Any other idea? \n>>> \n>> \n>> Please post your hw configuration. I think your data is on disk and the disks are slow.\n> \n> The server has two quad-core processors, 10GB of RAM, and the data is located on a 2TB fiber-attached disk. It doesn't seem to be the problem… \n\nAnd your database size is?\n\nAlso, do these timings get better in consecutive runs? \n\n\n> \n\n> Thank you\n> \n> Andrés\n> \n>> \n>> \n>> \n>>> Thank you very much for your help. Regards,\n>>> Andrés\n>>> \n>>> El Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n>>> \n>>>> \n>>>> Il 05/mar/2014 00:36 \"Venkata Balaji Nagothi\" <[email protected]> ha scritto:\n>>>>> \n>>>>> After looking at the distinct values, yes, the composite index on \"name\" and \"hepval\" is not recommended. That it would make things worse is expected.\n>>>>> \n>>>>> We need to look for another possible workaround. Please drop the above index. Let me see if I can drill further into this.\n>>>>> \n>>>>> Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?\n>>>>> \n>>>>> Do you have any other processes affecting this query's performance ?\n>>>>> \n>>>>> Any info about your Disk, RAM, CPU would also help.\n>>>>> \n>>>>> Regards,\n>>>>> Venkata Balaji N\n>>>>> \n>>>>> Fujitsu Australia\n>>>>> \n>>>>> \n>>>>> \n>>>>> \n>>>>> Venkata Balaji N\n>>>>> \n>>>>> Sr. Database Administrator\n>>>>> Fujitsu Australia\n>>>>> \n>>>>> \n>>>>> On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>>>>>> \n>>>>>> Hello,\n>>>>>> \n>>>>>> I don't know if this helps to figure out what the problem is, but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). 
Ten times worse...\n>>>>>> \n>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n>>>>>> QUERY PLAN \n>>>>>> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>> Sort Method: quicksort Memory: 25622kB\n>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>>>>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n>>>>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>>>> Total runtime: 95811.838 ms\n>>>>>> (8 rows)\n>>>>>> \n>>>>>> Any ideas please?\n>>>>>> \n>>>>>> Thank you \n>>>>>> Andrés.\n>>>>>> \n>>>>>> \n>>>>>> \n>>>>>> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n>>>>>> \n>>>>>>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>>>>>>> \n>>>>>>>> Hello,\n>>>>>>>> \n>>>>>>>> Thankyou for your answer.\n>>>>>>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. The query only implies one table:\n>>>>>>>> \n>>>>>>>> x=> \\d+ entity_compounddict2document;\n>>>>>>>> Table \"public.entity_compounddict2document\"\n>>>>>>>> Column | Type | Modifiers | Storage | Description \n>>>>>>>> ------------------+--------------------------------+-----------+----------+-------------\n>>>>>>>> id | integer | not null | plain | \n>>>>>>>> document_id | integer | | plain | \n>>>>>>>> name | character varying(255) | | extended | \n>>>>>>>> qualifier | character varying(255) | | extended | \n>>>>>>>> tagMethod | character varying(255) | | extended | \n>>>>>>>> created | timestamp(0) without time zone | | plain | \n>>>>>>>> updated | timestamp(0) without time zone | | plain | \n>>>>>>>> curation | integer | | plain | \n>>>>>>>> hepval | double precision | | plain | \n>>>>>>>> cardval | double precision | | plain | \n>>>>>>>> nephval | double precision | | plain | \n>>>>>>>> phosval | double precision | | plain | \n>>>>>>>> patternCount | double precision | | plain | \n>>>>>>>> ruleScore | double precision | | plain | \n>>>>>>>> hepTermNormScore | double precision | | plain | \n>>>>>>>> hepTermVarScore | double precision | | plain | \n>>>>>>>> Indexes:\n>>>>>>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>>>>>>> \"entity_compound2document_cardval\" btree (cardval)\n>>>>>>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>>>>>>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>>>>>> \"entity_compound2document_hepval\" btree (hepval)\n>>>>>>>> \"entity_compound2document_name\" btree (name)\n>>>>>>>> \"entity_compound2document_nephval\" btree (nephval)\n>>>>>>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>>>>>> \"entity_compound2document_phosval\" btree (phosval)\n>>>>>>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>>>>>> Has OIDs: no\n>>>>>>>> \n>>>>>>>> tablename | indexname | num_rows | table_size | 
index_size | unique | number_of_scans | tuples_read | tuples_fetched \n>>>>>>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>> \n>>>>>>>> The table has aprox. 54,000,000 rows\n>>>>>>>> There are no NULLs in hepval field and pg_settings haven't changed. I also have done \"analyze\" to this table.\n>>>>>>>> \n>>>>>>>> I have simplified the query and added the last advise that you told me:\n>>>>>>>> \n>>>>>>>> Query: \n>>>>>>>> \n>>>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n>>>>>>>> QUERY PLAN \n>>>>>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>>>> Sort Method: quicksort Memory: 2301kB\n>>>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>>>>>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>>>>>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>>>>>>> Total runtime: 32717.548 ms\n>>>>>>>> \n>>>>>>>> Another query:\n>>>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n>>>>>>>> \n>>>>>>>> QUERY PLAN\n>>>>>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>>>> Sort Method: quicksort Memory: 25622kB\n>>>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>>>>>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 
rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>>>>>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>>>>>> Total runtime: 9296.815 ms\n>>>>>>>> \n>>>>>>>> \n>>>>>>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>>>>>>> \n>>>>>>>> Any help would be very appreciated. Thank you very much.\n>>>>>>> \n>>>>>>> \n>>>>>>> \n>>>>>>> Good to know performance has increased.\n>>>>>>> \n>>>>>>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>>>>>> \n>>>>>>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>>>>>> \n>>>>>>> Below could be a possible workaround -\n>>>>>>> \n>>>>>>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n>>>>>>> \n>>>>>>> Other recommendations -\n>>>>>>> \n>>>>>>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. This would harm the DB and the DB server whilst maintenance and DML operations.\n>>>>>>> \n>>>>>>> Regards,\n>>>>>>> Venkata Balaji N\n>>>>>>> \n>>>>>>> Fujitsu Australia\n>>>>>> \n>>>>>> \n>>>>>> \n>>>>>> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>>>>>> \n>>>>>> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n>>>>>> \n>>>>> \n>>>> \n>>>> \n>>>> \n>>>> Hi I think the problem is th heap scan of the table , that the backend have to do because the btree to bitmap conversion becomes lossy. Try to disable the enable_bitmapscan for the current session and rerun the query.\n>>>> \n>>>> Mat Dba\n>>>> \n>>> \n>>> \n>>> \n>>> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>>> \n>>> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n>>> \n>>> \n>> \n> \n> \n> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. 
Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>> \n>> \n> \n", "msg_date": "Fri, 7 Mar 2014 13:03:39 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "\nEl Mar 7, 2014, a las 11:03 AM, Evgeniy Shishkin escribió:\n\n> \n> On 07 Mar 2014, at 12:46, acanada <[email protected]> wrote:\n> \n>> \n>> El Mar 7, 2014, a las 10:39 AM, Evgeniy Shishkin escribió:\n>> \n>>> \n>>>> Hello Mat,\n>>>> \n>>>> Setting enable_bitmapscan to off doesn't really help. It gets worse...\n>>>> \n>>>> x=> SET enable_bitmapscan=off; \n>>>> SET\n>>>> x=> explain analyze select * from (select * from entity2document2 where name='ranitidine' ) as a order by a.hepval;\n>>>> QUERY PLAN \n>>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> Sort (cost=18789.21..18800.70 rows=4595 width=131) (actual time=79965.282..79966.657 rows=13512 loops=1)\n>>>> Sort Key: entity2document2.hepval\n>>>> Sort Method: quicksort Memory: 2301kB\n>>>> -> Index Scan using entity2document2_name on entity2document2 (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362 rows=13512 loops=1)\n>>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>>> Total runtime: 79967.705 ms\n>>>> (6 rows)\n>>>> \n>>>> Any other idea? \n>>>> \n>>> \n>>> Please post your hw configuration. I think your data is on disk and the disks are slow.\n>> \n>> The server has two quad-core processors, 10GB of RAM, and the data is located on a 2TB fiber-attached disk. It doesn't seem to be the problem… \n> \n> And your database size is?\n> \n> Also, do these timings get better in consecutive runs? \n> \n\nThe table entity2document2 has 30GB. In consecutive runs it gets much better... 30 ms approx.\n\n\n\n> \n>> \n> \n>> Thank you\n>> \n>> Andrés\n>> \n>>> \n>>> \n>>> \n>>>> Thank you very much for your help. Regards,\n>>>> Andrés\n>>>> \n>>>> El Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n>>>> \n>>>>> \n>>>>> Il 05/mar/2014 00:36 \"Venkata Balaji Nagothi\" <[email protected]> ha scritto:\n>>>>>> \n>>>>>> After looking at the distinct values, yes, the composite index on \"name\" and \"hepval\" is not recommended. That it would make things worse is expected.\n>>>>>> \n>>>>>> We need to look for another possible workaround. Please drop the above index. 
Let me see if i can drill further into this.\n>>>>>> \n>>>>>> Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?\n>>>>>> \n>>>>>> Do you have any other processes effecting this query's performance ?\n>>>>>> \n>>>>>> Any info about your Disk, RAM, CPU would also help.\n>>>>>> \n>>>>>> Regards,\n>>>>>> Venkata Balaji N\n>>>>>> \n>>>>>> Fujitsu Australia\n>>>>>> \n>>>>>> \n>>>>>> \n>>>>>> \n>>>>>> Venkata Balaji N\n>>>>>> \n>>>>>> Sr. Database Administrator\n>>>>>> Fujitsu Australia\n>>>>>> \n>>>>>> \n>>>>>> On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>>>>>>> \n>>>>>>> Hello,\n>>>>>>> \n>>>>>>> I don't know if this helps to figure out what is the problem but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). Ten times worse...\n>>>>>>> \n>>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n>>>>>>> QUERY PLAN \n>>>>>>> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>>> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>>> Sort Method: quicksort Memory: 25622kB\n>>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>>>>>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n>>>>>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>>>>> Total runtime: 95811.838 ms\n>>>>>>> (8 rows)\n>>>>>>> \n>>>>>>> Any ideas please?\n>>>>>>> \n>>>>>>> Thank you \n>>>>>>> Andrés.\n>>>>>>> \n>>>>>>> \n>>>>>>> \n>>>>>>> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n>>>>>>> \n>>>>>>>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>>>>>>>> \n>>>>>>>>> Hello,\n>>>>>>>>> \n>>>>>>>>> Thankyou for your answer.\n>>>>>>>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. 
The query only implies one table:\n>>>>>>>>> \n>>>>>>>>> x=> \\d+ entity_compounddict2document;\n>>>>>>>>> Table \"public.entity_compounddict2document\"\n>>>>>>>>> Column | Type | Modifiers | Storage | Description \n>>>>>>>>> ------------------+--------------------------------+-----------+----------+-------------\n>>>>>>>>> id | integer | not null | plain | \n>>>>>>>>> document_id | integer | | plain | \n>>>>>>>>> name | character varying(255) | | extended | \n>>>>>>>>> qualifier | character varying(255) | | extended | \n>>>>>>>>> tagMethod | character varying(255) | | extended | \n>>>>>>>>> created | timestamp(0) without time zone | | plain | \n>>>>>>>>> updated | timestamp(0) without time zone | | plain | \n>>>>>>>>> curation | integer | | plain | \n>>>>>>>>> hepval | double precision | | plain | \n>>>>>>>>> cardval | double precision | | plain | \n>>>>>>>>> nephval | double precision | | plain | \n>>>>>>>>> phosval | double precision | | plain | \n>>>>>>>>> patternCount | double precision | | plain | \n>>>>>>>>> ruleScore | double precision | | plain | \n>>>>>>>>> hepTermNormScore | double precision | | plain | \n>>>>>>>>> hepTermVarScore | double precision | | plain | \n>>>>>>>>> Indexes:\n>>>>>>>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>>>>>>>> \"entity_compound2document_cardval\" btree (cardval)\n>>>>>>>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>>>>>>>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>>>>>>> \"entity_compound2document_hepval\" btree (hepval)\n>>>>>>>>> \"entity_compound2document_name\" btree (name)\n>>>>>>>>> \"entity_compound2document_nephval\" btree (nephval)\n>>>>>>>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>>>>>>> \"entity_compound2document_phosval\" btree (phosval)\n>>>>>>>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>>>>>>> Has OIDs: no\n>>>>>>>>> \n>>>>>>>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n>>>>>>>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>>>>>>> \n>>>>>>>>> The table has aprox. 54,000,000 rows\n>>>>>>>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>>>>>>>>> \n>>>>>>>>> I have simplified the query and added the last advise that you told me:\n>>>>>>>>> \n>>>>>>>>> Query: \n>>>>>>>>> \n>>>>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n>>>>>>>>> QUERY PLAN \n>>>>>>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>>>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>>>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>>>>> Sort Method: quicksort Memory: 2301kB\n>>>>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>>>>>>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>>>>>>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>>>>>>>> Total runtime: 32717.548 ms\n>>>>>>>>> \n>>>>>>>>> Another query:\n>>>>>>>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n>>>>>>>>> \n>>>>>>>>> QUERY PLAN\n>>>>>>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>>>>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>>>>>>>> Sort Key: entity_compounddict2document.hepval\n>>>>>>>>> Sort Method: quicksort Memory: 25622kB\n>>>>>>>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>>>>>>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>>>>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>>>>>>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>>>>>>> Total runtime: 9296.815 ms\n>>>>>>>>> \n>>>>>>>>> \n>>>>>>>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>>>>>>>> \n>>>>>>>>> Any help would be very appreciated. Thank you very much.\n>>>>>>>> \n>>>>>>>> \n>>>>>>>> \n>>>>>>>> Good to know performance has increased.\n>>>>>>>> \n>>>>>>>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>>>>>>> \n>>>>>>>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>>>>>>> \n>>>>>>>> Below could be a possible workaround -\n>>>>>>>> \n>>>>>>>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n>>>>>>>> \n>>>>>>>> Other recommendations -\n>>>>>>>> \n>>>>>>>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. 
This would harm the DB and the DB server whilst maintenance and DML operations.\n>>>>>>>> \n>>>>>>>> Regards,\n>>>>>>>> Venkata Balaji N\n>>>>>>>> \n>>>>>>>> Fujitsu Australia\n>>>>>>> \n>>>>>>> \n>>>>>>> \n>>>>>>> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>>>>>>> \n>>>>>>> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n>>>>>>> \n>>>>>> \n>>>>> \n>>>>> \n>>>>> \n>>>>> Hi I think the problem is th heap scan of the table , that the backend have to do because the btree to bitmap conversion becomes lossy. Try to disable the enable_bitmapscan for the current session and rerun the query.\n>>>>> \n>>>>> Mat Dba\n>>>>> \n>>>> \n>>>> \n>>>> \n>>>> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>>>> \n>>>> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n>>>> \n>>>> \n>>> \n>> \n>> \n>> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n>> **CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n> \n\n\n**NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. 
Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n", "msg_date": "Fri, 7 Mar 2014 11:18:03 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "\nOn 07 Mar 2014, at 13:18, acanada <[email protected]> wrote:\n\n> The table entity2document2 has 30GB. In consecutive runs it gets much better... 30 ms approx.\n\nSo you are just benchmarking your hard drives with random iops.\n\nYou need more RAM and faster disks.\n", "msg_date": "Mon, 10 Mar 2014 17:45:06 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "Hello Evgeniy!\n\nI can move the database to another server...\nThis is the cat of /proc/cpuinfo. Does it have enough power or should I go for a better one??\n\n(It has 32 processors like this one):\n\ncat /proc/cpuinfo \nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 45\nmodel name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz\nstepping : 7\nmicrocode : 0x70d\ncpu MHz : 1200.000\ncache size : 20480 KB\nphysical id : 0\nsiblings : 16\ncore id : 0\ncpu cores : 8\napicid : 0\ninitial apicid : 0\nfpu : yes\nfpu_exception : yes\ncpuid level : 13\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid\nbogomips : 5187.62\nclflush size : 64\ncache_alignment : 64\naddress sizes : 46 bits physical, 48 bits virtual\npower management:\n\n\nfree\n total used free shared buffers cached\nMem: 65901148 32702336 33198812 0 264936 20625024\n-/+ buffers/cache: 11812376 54088772\nSwap: 134217724 413088 133804636\n\n\nThank you for your help,\nAndrés\n\n\nEl Mar 10, 2014, a las 3:45 PM, Evgeniy Shishkin escribió:\n\n> \n> On 07 Mar 2014, at 13:18, acanada <[email protected]> wrote:\n> \n>> The table entity2document2 has 30GB. In consecutive runs it gets much better... 30 ms approx.\n> \n> So you are just benchmarking your hard drives with random iops.\n> \n> You need more RAM and faster disks.
\n", "msg_date": "Mon, 10 Mar 2014 17:30:20 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "On Tue, Mar 11, 2014 at 3:30 AM, acanada <[email protected]> wrote:\n\n> Hello Evgeniy!\n>\n> I can move the database to another server...\n> This is the cat of /proc/cpuinfo. Does it have enough power or should I go\n> for a better one??\n>\n> (It has 32 processors like this one):\n> [...]\n>\n> free\n> total used free shared buffers cached\n> Mem: 65901148 32702336 33198812 0 264936 20625024\n> -/+ buffers/cache: 11812376 54088772\n> Swap: 134217724 413088 133804636\n>\n\nPlease let us know the disk configuration of the server. Also, do any other\nprocesses use this server heavily?\n\n\nVenkata Balaji N\n\nSr. Database Administrator\nFujitsu Australia
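\n\nP.S. Output along these lines would also help (just a rough sketch - only the table name entity2document2 is taken from this thread, adjust as needed). It tells us how big the table and its indexes really are, and therefore how much of the database is competing for those 64GB of RAM:\n\nSELECT pg_size_pretty(pg_database_size(current_database())) AS database_size,\n       pg_size_pretty(pg_total_relation_size('entity2document2')) AS table_plus_indexes,\n       pg_size_pretty(pg_relation_size('entity2document2')) AS heap_only;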
\n", "msg_date": "Tue, 11 Mar 2014 06:49:38 +1100", "msg_from": "Venkata Balaji Nagothi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "Hello Andres,\nwith enable_bitmapscan=off, could you do:\n\nexplain ( analyze , buffers ) select * from entity2document2 where\nname='ranitidine' ;\n\nI think it is important to understand how well clustered the table\nentity2document2 is.\nIn fact the query extracts 13512 rows in 79945.362 ms, around 6 ms per row,\nand I suspect the table is not well clustered on that column, so every time\nthe process is asking for a different page of the table, or the I/O system\nhas some problem.\n\nMoreover, another point: how big is it? The rows are around 94M, but how\nbig is the table on disk? The average row length is important.\n\n\nHave a nice day\n\n2014-03-06 15:45 GMT+01:00 acanada <[email protected]>:\n\n> Hello Mat,\n>\n> Setting enable_bitmapscan to off doesn't really help. It gets worse...\n>\n> x=> SET enable_bitmapscan=off;\n> SET\n> x=> explain analyze select * from (select * from entity2document2 where\n> name='ranitidine' ) as a order by a.hepval;\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=18789.21..18800.70 rows=4595 width=131) (actual\n> time=79965.282..79966.657 rows=13512 loops=1)\n> Sort Key: entity2document2.hepval\n> Sort Method: quicksort Memory: 2301kB\n> -> Index Scan using entity2document2_name on entity2document2\n> (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362\n> rows=13512 loops=1)\n> Index Cond: ((name)::text = 'ranitidine'::text)\n> Total runtime: 79967.705 ms\n> (6 rows)\n>\n> Any other idea?\n>\n> Thank you very much for your help. 
Regards,\n> Andrés\n>\n> El Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n>\n>\n> Il 05/mar/2014 00:36 \"Venkata Balaji Nagothi\" <[email protected]> ha\n> scritto:\n> >\n> > After looking at the distinct values, yes the composite Index on \"name\"\n> and \"hepval\" is not recommended. That would worsen - its expected.\n> >\n> > We need to look for other possible work around. Please drop off the\n> above Index. Let me see if i can drill further into this.\n> >\n> > Meanwhile - can you help us know the memory parameters (work_mem,\n> temp_buffers etc) set ?\n> >\n> > Do you have any other processes effecting this query's performance ?\n> >\n> > Any info about your Disk, RAM, CPU would also help.\n> >\n> > Regards,\n> > Venkata Balaji N\n> >\n> > Fujitsu Australia\n> >\n> >\n> >\n> >\n> > Venkata Balaji N\n> >\n> > Sr. Database Administrator\n> > Fujitsu Australia\n> >\n> >\n> > On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n> >>\n> >> Hello,\n> >>\n> >> I don't know if this helps to figure out what is the problem but after\n> adding the multicolumn index on name and hepval, the performance is even\n> worse (¿?). Ten times worse...\n> >>\n> >> explain analyze select * from (select * from\n> entity_compounddict2document where name='progesterone') as a order by\n> a.hepval;\n> >>\n> QUERY PLAN\n>\n> >>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n> >> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual\n> time=95769.674..95797.943 rows=138165 loops=1)\n> >> Sort Key: entity_compounddict2document.hepval\n> >> Sort Method: quicksort Memory: 25622kB\n> >> -> Bitmap Heap Scan on entity_compounddict2document\n> (cost=3501.01..408999.90 rows=159104 width=133) (actual\n> time=70.789..95519.258 rows=138165 loops=1)\n> >> Recheck Cond: ((name)::text = 'progesterone'::text)\n> >> -> Bitmap Index Scan on entity_compound2document_name\n> (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174rows=138165 loops=1)\n> >> Index Cond: ((name)::text = 'progesterone'::text)\n> >> Total runtime: 95811.838 ms\n> >> (8 rows)\n> >>\n> >> Any ideas please?\n> >>\n> >> Thank you\n> >> Andrés.\n> >>\n> >>\n> >>\n> >> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n> >>\n> >>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n> >>>>\n> >>>> Hello,\n> >>>>\n> >>>> Thankyou for your answer.\n> >>>> I have made more changes than a simple re-indexing recently. I have\n> moved the sorting field to the table in order to avoid the join clause. Now\n> the schema is very simple. 
The query only implies one table:\n> >>>>\n> >>>> x=> \\d+ entity_compounddict2document;\n> >>>> Table \"public.entity_compounddict2document\"\n> >>>> Column | Type | Modifiers |\n> Storage | Description\n> >>>>\n> ------------------+--------------------------------+-----------+----------+-------------\n> >>>> id | integer | not null |\n> plain |\n> >>>> document_id | integer | |\n> plain |\n> >>>> name | character varying(255) | |\n> extended |\n> >>>> qualifier | character varying(255) | |\n> extended |\n> >>>> tagMethod | character varying(255) | |\n> extended |\n> >>>> created | timestamp(0) without time zone | |\n> plain |\n> >>>> updated | timestamp(0) without time zone | |\n> plain |\n> >>>> curation | integer | |\n> plain |\n> >>>> hepval | double precision | |\n> plain |\n> >>>> cardval | double precision | |\n> plain |\n> >>>> nephval | double precision | |\n> plain |\n> >>>> phosval | double precision | |\n> plain |\n> >>>> patternCount | double precision | |\n> plain |\n> >>>> ruleScore | double precision | |\n> plain |\n> >>>> hepTermNormScore | double precision | |\n> plain |\n> >>>> hepTermVarScore | double precision | |\n> plain |\n> >>>> Indexes:\n> >>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n> >>>> \"entity_compound2document_cardval\" btree (cardval)\n> >>>> \"entity_compound2document_heptermnormscore\" btree\n> (\"hepTermNormScore\")\n> >>>> \"entity_compound2document_heptermvarscore\" btree\n> (\"hepTermVarScore\")\n> >>>> \"entity_compound2document_hepval\" btree (hepval)\n> >>>> \"entity_compound2document_name\" btree (name)\n> >>>> \"entity_compound2document_nephval\" btree (nephval)\n> >>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n> >>>> \"entity_compound2document_phosval\" btree (phosval)\n> >>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n> >>>> Has OIDs: no\n> >>>>\n> >>>> tablename | indexname\n> | num_rows | table_size |\n> index_size | unique | number_of_scans | tuples_read | tuples_fetched\n> >>>> entity_compounddict2document | entity_compound2document_cardval\n> | 5.42452e+07 | 6763 MB | 1162 MB | Y |\n> 0 | 0 | 0\n> >>>> entity_compounddict2document |\n> entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB |\n> 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document |\n> entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB |\n> 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_hepval\n> | 5.42452e+07 | 6763 MB | 1162 MB | Y |\n> 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_name\n> | 5.42452e+07 | 6763 MB | 1505 MB | Y |\n> 24 | 178680 | 0\n> >>>> entity_compounddict2document | entity_compound2document_nephval\n> | 5.42452e+07 | 6763 MB | 1162 MB | Y |\n> 0 | 0 | 0\n> >>>> entity_compounddict2document |\n> entity_compound2document_patterncount | 5.42452e+07 | 6763 MB |\n> 1162 MB | Y | 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_phosval\n> | 5.42452e+07 | 6763 MB | 1162 MB | Y |\n> 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compound2document_rulescore\n> | 5.42452e+07 | 6763 MB | 1162 MB | Y |\n> 0 | 0 | 0\n> >>>> entity_compounddict2document | entity_compounddict2document_pkey\n> | 5.42452e+07 | 6763 MB | 1162 MB | Y |\n> 0 | 0 | 0\n> >>>>\n> >>>> The table has aprox. 54,000,000 rows\n> >>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I\n> also have done \"analyze\" to this table.\n> >>>>\n> >>>> I have simplified the query and added the last advise that you told\n> me:\n> >>>>\n> >>>> Query:\n> >>>>\n> >>>> explain analyze select * from (select * from\n> entity_compounddict2document where name='ranitidine') as a order by\n> a.hepval;\n> >>>>\n> QUERY PLAN\n>\n> >>>>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> >>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual\n> time=32715.097..32716.488 rows=13512 loops=1)\n> >>>> Sort Key: entity_compounddict2document.hepval\n> >>>> Sort Method: quicksort Memory: 2301kB\n> >>>> -> Bitmap Heap Scan on entity_compounddict2document\n> (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483\n> rows=13512 loops=1)\n> >>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n> >>>> -> Bitmap Index Scan on entity_compound2document_name\n> (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512\n> loops=1)\n> >>>> Index Cond: ((name)::text = 'ranitidine'::text)\n> >>>> Total runtime: 32717.548 ms\n> >>>>\n> >>>> Another query:\n> >>>> explain analyze select * from (select * from\n> entity_compounddict2document where name='progesterone' ) as a order by\n> a.hepval;\n> >>>>\n> >>>> QUERY PLAN\n> >>>>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> >>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual\n> time=9262.887..9287.046 rows=138165 loops=1)\n> >>>> Sort Key: entity_compounddict2document.hepval\n> >>>> Sort Method: quicksort Memory: 25622kB\n> >>>> -> Bitmap Heap Scan on entity_compounddict2document\n> (cost=2906.93..356652.81 rows=131997 width=133) (actual\n> time=76.316..9038.485 rows=138165 loops=1)\n> >>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n> >>>> -> Bitmap Index Scan on entity_compound2document_name\n> (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913\n> rows=138165 loops=1)\n> >>>> Index Cond: ((name)::text = 'progesterone'::text)\n> >>>> Total runtime: 9296.815 ms\n> >>>>\n> >>>>\n> >>>> It has improved (I supose because of the lack of the join table) but\n> still taking a lot of time... Anything I can do??\n> >>>>\n> >>>> Any help would be very appreciated. Thank you very much.\n> >>>\n> >>>\n> >>>\n> >>> Good to know performance has increased.\n> >>>\n> >>> \"entity_compounddict2document\" table goes through high INSERTS ?\n> >>>\n> >>> Can you help us know if the \"helpval\" column and \"name\" column have\n> high duplicate values ? \"n_distinct\" value from pg_stats table would have\n> that info.\n> >>>\n> >>> Below could be a possible workaround -\n> >>>\n> >>> As mentioned earlier in this email, a composite Index on name and\n> hepval column might help. If the table does not go through lot of INSERTS,\n> then consider performing a CLUSTER on the table using the same INDEX.\n> >>>\n> >>> Other recommendations -\n> >>>\n> >>> Please drop off all the Non-primary key Indexes which have 0 scans /\n> hits. This would harm the DB and the DB server whilst maintenance and DML\n> operations.\n> >>>\n> >>> Regards,\n> >>> Venkata Balaji N\n> >>>\n> >>> Fujitsu Australia\n> >>\n> >>\n> >>\n> >> **NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los\n> ficheros adjuntos, pueden contener información protegida para el uso\n> exclusivo de su destinatario. 
Se prohíbe la distribución, reproducción o\n> cualquier otro tipo de transmisión por parte de otra persona que no sea el\n> destinatario. Si usted recibe por error este correo, se ruega comunicarlo\n> al remitente y borrar el mensaje recibido.\n> >>\n> >\n>\n>\n> Hi, I think the problem is the heap scan of the table, which the backend\n> has to do because the btree to bitmap conversion becomes lossy. Try to\n> disable enable_bitmapscan for the current session and rerun the query.\n>\n> Mat Dba\n>\n>
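\n\nP.S. A quick way to see the physical ordering (just a sketch, using only the\ncolumn and table names already in this thread) is the planner statistics:\n\nSELECT attname, n_distinct, correlation\nFROM pg_stats\nWHERE tablename = 'entity2document2' AND attname = 'name';\n\nIf correlation is close to 0, the 13512 'ranitidine' rows are scattered all\nover the heap and every fetch becomes a random read. In that case something\nlike\n\nCLUSTER entity2document2 USING entity2document2_name;\nANALYZE entity2document2;\n\ncould help, but remember that CLUSTER takes an exclusive lock and rewrites\nthe whole table, so try it on a copy first.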
Regards,
Andrés
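The average-row-length question above can be answered from the planner statistics; a rough sketch, assuming ANALYZE has been run recently on the new table:

    SELECT relpages,
           reltuples::bigint AS approx_rows,
           pg_size_pretty(pg_relation_size(oid)) AS heap_size,
           (pg_relation_size(oid) / NULLIF(reltuples, 0))::int AS avg_bytes_per_row
    FROM pg_class
    WHERE relname = 'entity2document2';

With roughly 11 GB of heap for ~94M rows this comes out near the width=131 that the plans report, i.e. around 60 rows per 8 kB page, so a badly clustered scan of 13512 rows can easily touch thousands of distinct pages.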
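One more note on the composite (name, hepval) index that was tried upthread: for these full-result queries it made things worse, but a btree on (name, hepval) can hand back the rows for one name already ordered by hepval, which mostly pays off when only the top of the sorted result is needed. A hypothetical sketch (the index name here is made up):

    CREATE INDEX entity2document2_name_hepval
        ON entity2document2 (name, hepval);

    -- the Sort node disappears and the scan can stop early
    SELECT *
    FROM entity2document2
    WHERE name = 'ranitidine'
    ORDER BY hepval
    LIMIT 100;

Without a LIMIT the dominant cost remains the heap fetches, which matches the timings reported in this thread.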
",
    "msg_date": "Mon, 10 Mar 2014 21:22:47 +0100",
    "msg_from": "desmodemone <[email protected]>",
    "msg_from_op": false,
    "msg_subject": "Re: Query taking long time"
  },
  {
    "msg_contents": "Hello,

I cannot do explain (analyze, buffers) since I am on Postgres version 8.3.
I am migrating to the new server and upgrading it.
Once it is ready I will post the explain output here.
The new disk is a 5TB SATA disk, RAID 0 or 1...
lspci | grep -i raid
00:1f.2 RAID bus controller: Intel Corporation C600/X79 series chipset SATA RAID Controller (rev 05)

The whole database is 200GB, and the table entity2document2:

x=> select pg_size_pretty(pg_relation_size('entity2document2'));
 pg_size_pretty
----------------
 11 GB
(1 row)

x=> select pg_size_pretty(pg_total_relation_size('entity2document2'));
 pg_size_pretty
----------------
 29 GB
(1 row)

The index on the name column:
x=> select pg_size_pretty(pg_relation_size('entity2document2_name'));
 pg_size_pretty
----------------
 2550 MB
(1 row)

I am tuning the new server with these parameters...
shared_buffers = 15000MB
work_mem = 1000MB
maintenance_work_mem = 2000MB

Any other parameter that should be modified?

Thank you for your help!
Andrés

El Mar 10, 2014, a las 9:22 PM, desmodemone escribió:

> Hello Andrés,
>
> With enable_bitmapscan=off, could you run:
>
> explain (analyze, buffers) select * from entity2document2 where name='ranitidine';
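A note on the syntax above: the parenthesized EXPLAIN options, including BUFFERS, only exist in PostgreSQL 9.0 and later, which is why this request has to wait for the migration. On 8.3 the closest available form is plain EXPLAIN ANALYZE:

    -- 9.0 and later:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM entity2document2 WHERE name = 'ranitidine';

    -- the 8.3 equivalent (no buffer-hit statistics):
    EXPLAIN ANALYZE
    SELECT * FROM entity2document2 WHERE name = 'ranitidine';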
",
    "msg_date": "Tue, 11 Mar 2014 16:56:37 +0100",
    "msg_from": "\"acanada\" <[email protected]>",
    "msg_from_op": true,
    "msg_subject": "Re: Query taking long time"
  },
  {
    "msg_contents": "Hello,

The new server with more RAM will definitely help to keep your working set in memory.
But if you want your queries to be fast on cold (on-disk) data, then you need more/faster disks.

Also, work_mem = 1000MB is too much; better to set it to 32MB so you don't get the OOM killer.
And maybe lower shared_buffers slightly.
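Put as a postgresql.conf sketch, the advice above might look like the following. This is only a starting point; the RAM size of the new server is not stated in the thread, so the figures marked as assumptions are illustrative:

    shared_buffers = 8GB             # somewhat lower than the proposed 15000MB (assumption)
    work_mem = 32MB                  # 1000MB per sort/hash per backend risks the OOM killer
    maintenance_work_mem = 2GB
    effective_cache_size = 24GB      # assumption: ~2/3 of a hypothetical 32GB of RAM

work_mem is allocated per sort or hash operation in each backend, so many concurrent queries multiply it; that is what makes 1000MB dangerous.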
\n>> \n>> Thank you very much for your help. Regards,\n>> Andrés\n>> \n>> El Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n>> \n>>> \n>>> Il 05/mar/2014 00:36 \"Venkata Balaji Nagothi\" <[email protected]> ha scritto:\n>>> >\n>>> > After looking at the distinct values, yes the composite Index on \"name\" and \"hepval\" is not recommended. That would worsen - its expected.\n>>> >\n>>> > We need to look for other possible work around. Please drop off the above Index. Let me see if i can drill further into this.\n>>> >\n>>> > Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?\n>>> >\n>>> > Do you have any other processes effecting this query's performance ?\n>>> >\n>>> > Any info about your Disk, RAM, CPU would also help.\n>>> >\n>>> > Regards,\n>>> > Venkata Balaji N\n>>> >\n>>> > Fujitsu Australia\n>>> >\n>>> >\n>>> >\n>>> >\n>>> > Venkata Balaji N\n>>> >\n>>> > Sr. Database Administrator\n>>> > Fujitsu Australia\n>>> >\n>>> >\n>>> > On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>>> >>\n>>> >> Hello,\n>>> >>\n>>> >> I don't know if this helps to figure out what is the problem but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). Ten times worse...\n>>> >>\n>>> >> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n>>> >> QUERY PLAN \n>>> >> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> >> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>>> >> Sort Key: entity_compounddict2document.hepval\n>>> >> Sort Method: quicksort Memory: 25622kB\n>>> >> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>>> >> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>> >> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n>>> >> Index Cond: ((name)::text = 'progesterone'::text)\n>>> >> Total runtime: 95811.838 ms\n>>> >> (8 rows)\n>>> >>\n>>> >> Any ideas please?\n>>> >>\n>>> >> Thank you \n>>> >> Andrés.\n>>> >>\n>>> >>\n>>> >>\n>>> >> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n>>> >>\n>>> >>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>> >>>>\n>>> >>>> Hello,\n>>> >>>>\n>>> >>>> Thankyou for your answer.\n>>> >>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. 
The query only implies one table:\n>>> >>>>\n>>> >>>> x=> \\d+ entity_compounddict2document;\n>>> >>>> Table \"public.entity_compounddict2document\"\n>>> >>>> Column | Type | Modifiers | Storage | Description \n>>> >>>> ------------------+--------------------------------+-----------+----------+-------------\n>>> >>>> id | integer | not null | plain | \n>>> >>>> document_id | integer | | plain | \n>>> >>>> name | character varying(255) | | extended | \n>>> >>>> qualifier | character varying(255) | | extended | \n>>> >>>> tagMethod | character varying(255) | | extended | \n>>> >>>> created | timestamp(0) without time zone | | plain | \n>>> >>>> updated | timestamp(0) without time zone | | plain | \n>>> >>>> curation | integer | | plain | \n>>> >>>> hepval | double precision | | plain | \n>>> >>>> cardval | double precision | | plain | \n>>> >>>> nephval | double precision | | plain | \n>>> >>>> phosval | double precision | | plain | \n>>> >>>> patternCount | double precision | | plain | \n>>> >>>> ruleScore | double precision | | plain | \n>>> >>>> hepTermNormScore | double precision | | plain | \n>>> >>>> hepTermVarScore | double precision | | plain | \n>>> >>>> Indexes:\n>>> >>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>> >>>> \"entity_compound2document_cardval\" btree (cardval)\n>>> >>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>>> >>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>> >>>> \"entity_compound2document_hepval\" btree (hepval)\n>>> >>>> \"entity_compound2document_name\" btree (name)\n>>> >>>> \"entity_compound2document_nephval\" btree (nephval)\n>>> >>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>> >>>> \"entity_compound2document_phosval\" btree (phosval)\n>>> >>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>> >>>> Has OIDs: no\n>>> >>>>\n>>> >>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n>>> >>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>> >>>>\n>>> >>>> The table has aprox. 54,000,000 rows\n>>> >>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>>> >>>>\n>>> >>>> I have simplified the query and added the last advise that you told me:\n>>> >>>>\n>>> >>>> Query: \n>>> >>>>\n>>> >>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n>>> >>>> QUERY PLAN \n>>> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> >>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>> >>>> Sort Key: entity_compounddict2document.hepval\n>>> >>>> Sort Method: quicksort Memory: 2301kB\n>>> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>> >>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>> >>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>> >>>> Total runtime: 32717.548 ms\n>>> >>>>\n>>> >>>> Another query:\n>>> >>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n>>> >>>>\n>>> >>>> QUERY PLAN\n>>> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> >>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>> >>>> Sort Key: entity_compounddict2document.hepval\n>>> >>>> Sort Method: quicksort Memory: 25622kB\n>>> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>> >>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>> >>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>> >>>> Total runtime: 9296.815 ms\n>>> >>>>\n>>> >>>>\n>>> >>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>> >>>>\n>>> >>>> Any help would be very appreciated. Thank you very much.\n>>> >>>\n>>> >>>\n>>> >>>\n>>> >>> Good to know performance has increased.\n>>> >>>\n>>> >>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>> >>>\n>>> >>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>> >>>\n>>> >>> Below could be a possible workaround -\n>>> >>>\n>>> >>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n>>> >>>\n>>> >>> Other recommendations -\n>>> >>>\n>>> >>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. 
Such unused indexes only harm the DB and the DB server during maintenance and DML operations.
>>> >>>
>>> >>> Regards,
>>> >>> Venkata Balaji N
>>> >>>
>>> >>> Fujitsu Australia
>>> >>
>>> >
>>> 
>>> Hi, I think the problem is the heap scan of the table, which the backend has to do because the btree-to-bitmap conversion becomes lossy. Try disabling enable_bitmapscan for the current session and rerun the query.
>>> 
>>> Mat Dba
>>> 
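
To put the work_mem advice in this message in concrete terms: work_mem is a per-sort, per-hash, per-backend allowance, not a server-wide cap, so with around a hundred allowed connections a global 1000MB setting can, in the worst case, ask for on the order of 100GB of RAM. A minimal sketch of keeping the global value small and raising it only for the one session that needs a big sort (the 256MB figure is an illustrative assumption, and the query is just an example taken from elsewhere in this thread):

begin;
set local work_mem = '256MB';  -- applies to this transaction only; reverts at commit/rollback
explain analyze
select * from entity2document2 where name='progesterone' order by hepval;
commit;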
",
    "msg_date": "Wed, 12 Mar 2014 02:12:13 +0300",
    "msg_from": "Evgeny Shishkin <[email protected]>",
    "msg_from_op": false,
    "msg_subject": "Re: Query taking long time"
  },
  {
    "msg_contents": "Hello,

First of all I'd like to thank all of you for taking your time to help me with this. Thank you very much.

I migrated the database to the new server, with 32 Intel(R) Xeon(R) E5-2670 0 @ 2.60GHz processors and 60GB of RAM. 
Evgeny pointed out that the disks I am using are not fast enough (for data: 00:1f.2 RAID bus controller: Intel Corporation C600/X79 series chipset SATA RAID Controller (rev 05); for logging, a SAS disk, but with only 240GB available, while the database is 365GB...). I cannot change the locations of the data and the log since there's not enough space for the data on the SAS disk. Sadly this is a problem that I cannot solve any time soon...

The migration has really improved performance.
Below are the before and after (the migration) EXPLAIN ANALYZE results, with BUFFERS where the server version supports it:

BEFORE:
explain analyze select * from (select * from entity2document2  where name='Acetaminophen' ) as a  order by a.hepval;
                                                                  QUERY PLAN                                                                  
----------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=18015.66..18027.15 rows=4595 width=139) (actual time=39755.942..39756.246 rows=2845 loops=1)
   Sort Key: entity2document2.hepval
   Sort Method:  quicksort  Memory: 578kB
   ->  Bitmap Heap Scan on entity2document2  (cost=116.92..17736.15 rows=4595 width=139) (actual time=45.682..39751.255 rows=2845 loops=1)
         Recheck Cond: ((name)::text = 'Acetaminophen'::text)
         ->  Bitmap Index Scan on entity2document2_name  (cost=0.00..115.77 rows=4595 width=0) (actual time=45.124..45.124 rows=2845 loops=1)
               Index Cond: ((name)::text = 'Acetaminophen'::text)
 Total runtime: 39756.507 ms

AFTER:
explain (analyze,buffers) select * from (select * from entity2document2  where name='Acetaminophen' ) as a  order by a.hepval;
                                                                    QUERY PLAN                                                                    
------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=18434.76..18446.51 rows=4701 width=131) (actual time=9196.634..9196.909 rows=2845 loops=1)
   Sort Key: entity2document2.hepval
   Sort Method: quicksort  Memory: 604kB
   Buffers: shared hit=4 read=1725
   ->  Bitmap Heap Scan on entity2document2  (cost=105.00..18148.03 rows=4701 width=131) (actual time=38.668..9190.318 rows=2845 loops=1)
         Recheck Cond: ((name)::text = 'Acetaminophen'::text)
         Buffers: shared hit=4 read=1725
         ->  Bitmap Index Scan on entity2documentnew_name  (cost=0.00..103.82 rows=4701 width=0) (actual time=30.905..30.905 rows=2845 loops=1)
               Index Cond: ((name)::text = 'Acetaminophen'::text)
               Buffers: shared hit=1 read=14
 Total runtime: 9197.186 ms

The improvement is definitely good!
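
Worth noting: in both plans almost all of the time is spent in the Bitmap Heap Scan, not the Sort, which fits the earlier suspicion that rows for a given name are scattered across the heap pages. A quick, approximate way to check that from the planner statistics (a sketch; pg_stats is the standard statistics view, and a correlation near +1 or -1 for name would mean matching rows sit on neighbouring pages, while a value near 0 means almost every matching row costs a random page read):

select attname, n_distinct, correlation
from pg_stats
where tablename = 'entity2document2'
  and attname in ('name', 'hepval');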
This is the table that I'm using:
\d+ entity2document2;
                                    Table "public.entity2document2"
      Column      |              Type              | Modifiers | Storage  | Stats target | Description 
------------------+--------------------------------+-----------+----------+--------------+-------------
 id               | integer                        | not null  | plain    |              | 
 document_id      | integer                        |           | plain    |              | 
 name             | character varying(255)         | not null  | extended |              | 
 qualifier        | character varying(255)         | not null  | extended |              | 
 tagMethod        | character varying(255)         |           | extended |              | 
 created          | timestamp(0) without time zone | not null  | plain    |              | 
 updated          | timestamp(0) without time zone |           | plain    |              | 
 curation         | integer                        |           | plain    |              | 
 hepval           | double precision               |           | plain    |              | 
 cardval          | double precision               |           | plain    |              | 
 nephval          | double precision               |           | plain    |              | 
 phosval          | double precision               |           | plain    |              | 
 patternCount     | double precision               |           | plain    |              | 
 ruleScore        | double precision               |           | plain    |              | 
 hepTermNormScore | double precision               |           | plain    |              | 
 hepTermVarScore  | double precision               |           | plain    |              | 
 svmConfidence    | double precision               |           | plain    |              | 
Indexes:
    "ent_pkey" PRIMARY KEY, btree (id)
    "ent_cardval" btree (cardval)
    "ent_document_id" btree (document_id)
    "ent_heptermnormscore" btree ("hepTermNormScore")
    "ent_heptermvarscore" btree ("hepTermVarScore")
    "ent_hepval" btree (hepval)
    "ent_name" btree (name)
    "ent_nephval" btree (nephval)
    "ent_patterncount" btree ("patternCount")
    "ent_phosval" btree (phosval)
    "ent_qualifier" btree (qualifier)
    "ent_qualifier_name" btree (qualifier, name)
    "ent_rulescore" btree ("ruleScore")
    "ent_svm_confidence_index" btree ("svmConfidence")

And these are my current settings:

            name            |  current_setting   |        source        
----------------------------+--------------------+----------------------
 application_name           | psql               | client
 client_encoding            | UTF8               | client
 DateStyle                  | ISO, MDY           | configuration file
 default_text_search_config | pg_catalog.english | configuration file
 effective_cache_size       | 45000MB            | configuration file
 lc_messages                | en_US.UTF-8        | configuration file
 lc_monetary                | en_US.UTF-8        | configuration file
 lc_numeric                 | en_US.UTF-8        | configuration file
 lc_time                    | en_US.UTF-8        | configuration file
 listen_addresses           | *                  | configuration file
 log_timezone               | Europe/Madrid      | configuration file
 logging_collector          | on                 | configuration file
 maintenance_work_mem       | 4000MB             | configuration file
 max_connections            | 100                | configuration file
 max_stack_depth            | 2MB                | environment variable
 shared_buffers             | 10000MB            | configuration file
 TimeZone                   | Europe/Madrid      | configuration file
 work_mem                   | 32MB               | configuration file

The size of the table is 41 GB. Some statistics:

     relname      | rows_in_bytes |  num_rows   | number_of_indexes | unique | single_column | multi_column 
 entity2document2 | 89 MB         | 9.33479e+07 |                14 | Y      |            13 |            1

I'm running CLUSTER on the table right now, using the composite name+hepval index as Venkata suggested, and I will post back whether it works.
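
For reference, a sketch of that CLUSTER step. The index name ent_name_hepval is hypothetical (no name+hepval composite index appears in the index list above, so it has to be created first), and note that CLUSTER takes an exclusive lock, rewrites the whole 41GB table, and does not maintain the ordering for rows inserted afterwards:

create index ent_name_hepval on entity2document2 (name, hepval);  -- hypothetical name for the composite index
cluster entity2document2 using ent_name_hepval;                   -- rewrites the table in index order; exclusive lock
analyze entity2document2;                                         -- refresh statistics, including correlation, after the rewrite

After this, rows sharing a name are physically adjacent, so the bitmap heap scan should touch far fewer pages.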
Anyway, even though the improvement is significant, I'd like to increase performance further. When the number of rows returned is high, performance still degrades too much...

If anyone has any ideas...

Best regards,
Andrés

On Mar 12, 2014, at 12:12 AM, Evgeny Shishkin wrote:

> Hello,
> 
> new server with more ram will definitely help to keep your working set in memory.
> But if you want your queries to be fast on cold (on-disk) data, then you need more/faster disks.
> 
> And work_mem = 1000MB is too much, better set it to 32MB so you don’t get the OOM Killer.
> And maybe slightly lower shared_buffers. 
> 
> On 11 Mar 2014, at 18:56, acanada <[email protected]> wrote:
> 
>> Hello,
>> 
>> I cannot do explain (analyze, buffers) since I am on the 8.3 postgres version.
>> I am migrating to the new server and upgrading it.
>> Once it is ready again I will post the explain query here.
>> The new disk is a SATA disk with 5TB, raid 0 or 1...
>> lspci | grep -i raid
>> 00:1f.2 RAID bus controller: Intel Corporation C600/X79 series chipset SATA RAID Controller (rev 05)
>> 
>> The whole database is 200GB and the table entity2document2 is 
>> 
>> x=> select pg_size_pretty(pg_relation_size('entity2document2'));
>>  pg_size_pretty 
>> ----------------
>>  11 GB
>> (1 row)
>> 
>> x=> select pg_size_pretty(pg_total_relation_size('entity2document2'));
>>  pg_size_pretty 
>> ----------------
>>  29 GB
>> (1 row)
>> 
>> The index of the name column:
>> x=> select pg_size_pretty(pg_relation_size('entity2document2_name'));
>>  pg_size_pretty 
>> ----------------
>>  2550 MB
>> (1 row)
>> 
>> I am tuning the new server with these parameters...
>> shared_buffers = 15000MB
>> work_mem = 1000MB
>> maintenance_work_mem = 2000MB
>> 
>> Any other parameter that should be modified?
>> 
>> Thank you for your help!
>> Andrés
>> 
>> On Mar 10, 2014, at 9:22 PM, desmodemone wrote:
>> 
>>> Hello Andres, 
>>> with enable_bitmapscan=off, could you run:
>>> 
>>> explain ( analyze , buffers ) select * from entity2document2  where name='ranitidine' ;
>>> 
>>> I think it's interesting to understand how well clustered the table entity2document2 is.
>>> In fact the query extracts 13512 rows in 79945.362 ms, around 4 ms per row, and I suspect the table is not well clustered on that column, so every time the 
>>> process is asking for a different page of the table, or the i/o system has some problem.
>>> 
>>> Moreover, another point: how big is it? The row count is around 94M, but how big is the table? The average row length is important.
>>> 
>>> Have a nice day
>>> 
>>> 2014-03-06 15:45 GMT+01:00 acanada <[email protected]>:
>>> Hello Mat,
>>> 
>>> Setting enable_bitmapscan to off doesn't really help. It gets worse...
>>> 
>>> x=> SET enable_bitmapscan=off; 
>>> SET
>>> x=> explain analyze select * from (select * from entity2document2  where name='ranitidine' ) as a  order by a.hepval;
>>>                                                                            QUERY PLAN                                                                           
>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>  Sort  (cost=18789.21..18800.70 rows=4595 width=131) (actual time=79965.282..79966.657 rows=13512 loops=1)
>>>    Sort Key: entity2document2.hepval
>>>    Sort Method:  quicksort  Memory: 2301kB
>>>    ->  Index Scan using entity2document2_name on entity2document2  (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362 rows=13512 loops=1)
>>>          Index Cond: ((name)::text = 'ranitidine'::text)
>>>  Total runtime: 79967.705 ms
>>> (6 rows)
>>> 
>>> Any other idea? 
>>> 
>>> Thank you very much for your help. Regards,
>>> Andrés
>>> 
>>> On Mar 6, 2014, at 2:11 PM, desmodemone wrote:
>>> 
>>>> On 05/Mar/2014 00:36, "Venkata Balaji Nagothi" <[email protected]> wrote:
>>>> >
>>>> > After looking at the distinct values, yes the composite Index on "name" and "hepval" is not recommended. That would worsen - it's expected.
>>>> >
>>>> > We need to look for other possible workarounds. Please drop off the above Index.
Let me see if i can drill further into this.\n>>>> >\n>>>> > Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?\n>>>> >\n>>>> > Do you have any other processes effecting this query's performance ?\n>>>> >\n>>>> > Any info about your Disk, RAM, CPU would also help.\n>>>> >\n>>>> > Regards,\n>>>> > Venkata Balaji N\n>>>> >\n>>>> > Fujitsu Australia\n>>>> >\n>>>> >\n>>>> >\n>>>> >\n>>>> > Venkata Balaji N\n>>>> >\n>>>> > Sr. Database Administrator\n>>>> > Fujitsu Australia\n>>>> >\n>>>> >\n>>>> > On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>>>> >>\n>>>> >> Hello,\n>>>> >>\n>>>> >> I don't know if this helps to figure out what is the problem but after adding the multicolumn index on name and hepval, the performance is even worse (¿?). Ten times worse...\n>>>> >>\n>>>> >> explain analyze select * from (select * from entity_compounddict2document where name='progesterone') as a order by a.hepval;\n>>>> >> QUERY PLAN \n>>>> >> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> >> Sort (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>>>> >> Sort Key: entity_compounddict2document.hepval\n>>>> >> Sort Method: quicksort Memory: 25622kB\n>>>> >> -> Bitmap Heap Scan on entity_compounddict2document (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>>>> >> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>> >> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n>>>> >> Index Cond: ((name)::text = 'progesterone'::text)\n>>>> >> Total runtime: 95811.838 ms\n>>>> >> (8 rows)\n>>>> >>\n>>>> >> Any ideas please?\n>>>> >>\n>>>> >> Thank you \n>>>> >> Andrés.\n>>>> >>\n>>>> >>\n>>>> >>\n>>>> >> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n>>>> >>\n>>>> >>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>>> >>>>\n>>>> >>>> Hello,\n>>>> >>>>\n>>>> >>>> Thankyou for your answer.\n>>>> >>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. 
The query only implies one table:\n>>>> >>>>\n>>>> >>>> x=> \\d+ entity_compounddict2document;\n>>>> >>>> Table \"public.entity_compounddict2document\"\n>>>> >>>> Column | Type | Modifiers | Storage | Description \n>>>> >>>> ------------------+--------------------------------+-----------+----------+-------------\n>>>> >>>> id | integer | not null | plain | \n>>>> >>>> document_id | integer | | plain | \n>>>> >>>> name | character varying(255) | | extended | \n>>>> >>>> qualifier | character varying(255) | | extended | \n>>>> >>>> tagMethod | character varying(255) | | extended | \n>>>> >>>> created | timestamp(0) without time zone | | plain | \n>>>> >>>> updated | timestamp(0) without time zone | | plain | \n>>>> >>>> curation | integer | | plain | \n>>>> >>>> hepval | double precision | | plain | \n>>>> >>>> cardval | double precision | | plain | \n>>>> >>>> nephval | double precision | | plain | \n>>>> >>>> phosval | double precision | | plain | \n>>>> >>>> patternCount | double precision | | plain | \n>>>> >>>> ruleScore | double precision | | plain | \n>>>> >>>> hepTermNormScore | double precision | | plain | \n>>>> >>>> hepTermVarScore | double precision | | plain | \n>>>> >>>> Indexes:\n>>>> >>>> \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>>> >>>> \"entity_compound2document_cardval\" btree (cardval)\n>>>> >>>> \"entity_compound2document_heptermnormscore\" btree (\"hepTermNormScore\")\n>>>> >>>> \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>> >>>> \"entity_compound2document_hepval\" btree (hepval)\n>>>> >>>> \"entity_compound2document_name\" btree (name)\n>>>> >>>> \"entity_compound2document_nephval\" btree (nephval)\n>>>> >>>> \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>> >>>> \"entity_compound2document_phosval\" btree (phosval)\n>>>> >>>> \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>> >>>> Has OIDs: no\n>>>> >>>>\n>>>> >>>> tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n>>>> >>>> entity_compounddict2document | entity_compound2document_cardval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_heptermnormscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_heptermvarscore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_hepval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_name | 5.42452e+07 | 6763 MB | 1505 MB | Y | 24 | 178680 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_nephval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_patterncount | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_phosval | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compound2document_rulescore | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>> entity_compounddict2document | entity_compounddict2document_pkey | 5.42452e+07 | 6763 MB | 1162 MB | Y | 0 | 0 | 0\n>>>> >>>>\n>>>> >>>> The table has aprox. 54,000,000 rows\n>>>> >>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>>>> >>>>\n>>>> >>>> I have simplified the query and added the last advise that you told me:\n>>>> >>>>\n>>>> >>>> Query: \n>>>> >>>>\n>>>> >>>> explain analyze select * from (select * from entity_compounddict2document where name='ranitidine') as a order by a.hepval;\n>>>> >>>> QUERY PLAN \n>>>> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> >>>> Sort (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>>> >>>> Sort Key: entity_compounddict2document.hepval\n>>>> >>>> Sort Method: quicksort Memory: 2301kB\n>>>> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>>> >>>> Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>>> >>>> Index Cond: ((name)::text = 'ranitidine'::text)\n>>>> >>>> Total runtime: 32717.548 ms\n>>>> >>>>\n>>>> >>>> Another query:\n>>>> >>>> explain analyze select * from (select * from entity_compounddict2document where name='progesterone' ) as a order by a.hepval;\n>>>> >>>>\n>>>> >>>> QUERY PLAN\n>>>> >>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> >>>> Sort (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>>> >>>> Sort Key: entity_compounddict2document.hepval\n>>>> >>>> Sort Method: quicksort Memory: 25622kB\n>>>> >>>> -> Bitmap Heap Scan on entity_compounddict2document (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>>> >>>> Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>> >>>> -> Bitmap Index Scan on entity_compound2document_name (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>>> >>>> Index Cond: ((name)::text = 'progesterone'::text)\n>>>> >>>> Total runtime: 9296.815 ms\n>>>> >>>>\n>>>> >>>>\n>>>> >>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>>> >>>>\n>>>> >>>> Any help would be very appreciated. Thank you very much.\n>>>> >>>\n>>>> >>>\n>>>> >>>\n>>>> >>> Good to know performance has increased.\n>>>> >>>\n>>>> >>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>>> >>>\n>>>> >>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>>> >>>\n>>>> >>> Below could be a possible workaround -\n>>>> >>>\n>>>> >>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n>>>> >>>\n>>>> >>> Other recommendations -\n>>>> >>>\n>>>> >>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. 
Such unused indexes only harm the DB and the DB server during maintenance and DML operations.
>>>> >>>
>>>> >>> Regards,
>>>> >>> Venkata Balaji N
>>>> >>>
>>>> >>> Fujitsu Australia
>>>> >>
>>>> >
>>>> 
>>>> Hi, I think the problem is the heap scan of the table, which the backend has to do because the btree-to-bitmap conversion becomes lossy. Try disabling enable_bitmapscan for the current session and rerun the query.
>>>> 
>>>> Mat Dba
>>>> 
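
On the quoted advice about indexes with 0 scans: the \d+ output above shows 14 indexes, each of which must be maintained on every INSERT and UPDATE against this 41GB table. A hedged way to list drop candidates from the statistics collector (pg_stat_user_indexes is the standard view; judge idx_scan only after a representative period of workload, and never drop the indexes backing primary-key or unique constraints):

select indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where relname = 'entity2document2'
order by idx_scan asc, pg_relation_size(indexrelid) desc;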
**NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.
**CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.
using: \\d+ entity2document2;                                    Table \"public.entity2document2\"      Column      |              Type              | Modifiers | Storage  | Stats target | Description ------------------+--------------------------------+-----------+----------+--------------+------------- id               | integer                        | not null  | plain    |              |  document_id      | integer                        |           | plain    |              |  name             | character varying(255)         | not null  | extended |              |  qualifier        | character varying(255)         | not null  | extended |              |  tagMethod        | character varying(255)         |           | extended |              |  created          | timestamp(0) without time zone | not null  | plain    |              |  updated          | timestamp(0) without time zone |           | plain    |              |  curation         | integer                        |           | plain    |              |  hepval           | double precision               |           | plain    |              |  cardval          | double precision               |           | plain    |              |  nephval          | double precision               |           | plain    |              |  phosval          | double precision               |           | plain    |              |  patternCount     | double precision               |           | plain    |              |  ruleScore        | double precision               |           | plain    |              |  hepTermNormScore | double precision               |           | plain    |              |  hepTermVarScore  | double precision               |           | plain    |              |  svmConfidence    | double precision               |           | plain    |              | Indexes:\"ent_pkey\" PRIMARY KEY, btree (id)    \"ent_cardval\" btree (cardval)    \"ent_document_id\" btree (document_id)    \"ent_heptermnormscore\" btree (\"hepTermNormScore\")    \"ent_heptermvarscore\" btree (\"hepTermVarScore\")    \"ent_hepval\" btree (hepval)    \"ent_name\" btree (name)    \"ent_nephval\" btree (nephval)    \"ent_patterncount\" btree (\"patternCount\")    \"ent_phosval\" btree (phosval)    \"ent_qualifier\" btree (qualifier)    \"ent_qualifier_name\" btree (qualifier, name)    \"ent_rulescore\" btree (\"ruleScore\")    \"ent_svm_confidence_index\" btree (\"svmConfidence\")And this are my current_settings            name            |  current_setting   |        source        ----------------------------+--------------------+---------------------- application_name           | psql               | client client_encoding            | UTF8               | client DateStyle                  | ISO, MDY           | configuration file default_text_search_config | pg_catalog.english | configuration file effective_cache_size       | 45000MB            | configuration file lc_messages                | en_US.UTF-8        | configuration file lc_monetary                | en_US.UTF-8        | configuration file lc_numeric                 | en_US.UTF-8        | configuration file lc_time                    | en_US.UTF-8        | configuration file listen_addresses           | *                  | configuration file log_timezone               | Europe/Madrid      | configuration file logging_collector          | on                 | configuration file maintenance_work_mem       | 4000MB             | configuration file max_connections            | 100                
| configuration file max_stack_depth            | 2MB                | environment variable shared_buffers             | 10000MB            | configuration file TimeZone                   | Europe/Madrid      | configuration file work_mem                   | 32MB               | configuration fileThe size  of the table is 41 GB and some statistics: relname             | rows_in_bytes |  num_rows   | number_of_indexes | unique | single_column | multi_column entity2document2               | 89 MB         | 9.33479e+07 |                14 | Y      |            13 |            1I'm doing right now the CLUSTER on the table using the name+hepval multiple index as Venkata told me and will post you if it works. Anyway, even though the improvement is important, I'd like an increase of the performance. When the number of rows returned is high, the performance decreases too much.. If anyone have any idea...Best regards,AndrésEl Mar 12, 2014, a las 12:12 AM, Evgeny Shishkin escribió:\nHello,new server with more ram will definitely help to keep your working set in memory.But if you want your queries be fast on cold (on disk) data, then you need more/faster disks.And work_mem = 1000MB is too much, better set to 32MB so you don’t get OOM Killer.And may be slightly lower shared_buffers. On 11 Mar 2014, at 18:56, acanada <[email protected]> wrote:Hello,I cannot do explain (analyze, buffers) since I am on 8.3 postgres version.I am migrating to the new server and upgrading it.Once it is ready again I will post the explain query here.The new disk is SATA disk with 5TB, raid 0 or 1...lspci | grep -i raid00:1f.2 RAID bus controller: Intel Corporation C600/X79 series chipset SATA RAID Controller (rev 05)All database is 200GB and the table entity2document2 is x=> select pg_size_pretty(pg_relation_size('entity2document2')); pg_size_pretty ---------------- 11 GB(1 row)x=> select pg_size_pretty(pg_total_relation_size('entity2document2')); pg_size_pretty ---------------- 29 GB(1 row)The index of the name column:x=> select pg_size_pretty(pg_relation_size('entity2document2_name')); pg_size_pretty ---------------- 2550 MB(1 row)I am tunning the new server with this parameters...shared_buffers = 15000MBwork_mem = 1000MBmaintenance_work_mem = 2000MBAny other parameter that should be modified?Thank you for your help!AndrésEl Mar 10, 2014, a las 9:22 PM, desmodemone escribió:Hello Andres,                        with enable_bitmapscan=off;   could you do :explain ( analyze , buffers ) select * from entity2document2  where name='ranitidine' ;\nI think it's interesting to understand how much it's clustered the table  entity2document2.infact the query extract 13512 rows in 79945.362 ms around 4 ms for row, and I suspect the table is not well clustered on that column, so every time the \nprocess is asking for a different page of the table or the i/o system have some problem.Moreover, another point it's : how much it's big ? the rows are arounf 94M , but how much it's big ?  it's important the average row length\nHave a nice day2014-03-06 15:45 GMT+01:00 acanada <[email protected]>:\nHello Mat,Setting enable_bitmapscan to off doesn't really helps. 
It gets worse...\nx=> SET enable_bitmapscan=off; SETx=> explain analyze select * from (select * from entity2document2  where name='ranitidine' ) as a  order by a.hepval;\n                                                                           QUERY PLAN                                                                           ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=18789.21..18800.70 rows=4595 width=131) (actual time=79965.282..79966.657 rows=13512 loops=1)   Sort Key: entity2document2.hepval   Sort Method:  quicksort  Memory: 2301kB\n   ->  Index Scan using entity2document2_name on entity2document2  (cost=0.00..18509.70 rows=4595 width=131) (actual time=67.507..79945.362 rows=13512 loops=1)         Index Cond: ((name)::text = 'ranitidine'::text)\n Total runtime: 79967.705 ms(6 rows)Any other idea? Thank you very much for your help. Regards,AndrésEl Mar 6, 2014, a las 2:11 PM, desmodemone escribió:\n\nIl 05/mar/2014 00:36 \"Venkata Balaji Nagothi\" <[email protected]> ha scritto:\n>\n> After looking at the distinct values, yes the composite Index on \"name\" and \"hepval\" is not recommended. That would worsen - its expected.\n>\n> We need to look for other possible work around. Please drop off the above Index. Let me see if i can drill further into this.\n>\n> Meanwhile - can you help us know the memory parameters (work_mem, temp_buffers etc) set ?\n>\n> Do you have any other processes effecting this query's performance ?\n>\n> Any info about your Disk, RAM, CPU would also help.\n>\n> Regards,\n> Venkata Balaji N\n>\n> Fujitsu Australia\n>\n>\n>\n>\n> Venkata Balaji N\n>\n> Sr. Database Administrator\n> Fujitsu Australia\n>\n>\n> On Tue, Mar 4, 2014 at 10:23 PM, acanada <[email protected]> wrote:\n>>\n>> Hello,\n>>\n>> I don't know if this helps to figure out what is the problem but after adding the multicolumn index on name and hepval, the performance is even worse (¿?).  
Ten times worse...\n>>\n>> explain analyze select * from (select * from entity_compounddict2document  where name='progesterone') as a order by a.hepval;\n>>                                                                          QUERY PLAN                                                                          \n>> -------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>  Sort  (cost=422746.18..423143.94 rows=159104 width=133) (actual time=95769.674..95797.943 rows=138165 loops=1)\n>>    Sort Key: entity_compounddict2document.hepval\n>>    Sort Method:  quicksort  Memory: 25622kB\n>>    ->  Bitmap Heap Scan on entity_compounddict2document  (cost=3501.01..408999.90 rows=159104 width=133) (actual time=70.789..95519.258 rows=138165 loops=1)\n>>          Recheck Cond: ((name)::text = 'progesterone'::text)\n>>          ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..3461.23 rows=159104 width=0) (actual time=35.174..35.174 rows=138165 loops=1)\n\n>>                Index Cond: ((name)::text = 'progesterone'::text)\n>>  Total runtime: 95811.838 ms\n>> (8 rows)\n>>\n>> Any ideas please?\n>>\n>> Thank you \n>> Andrés.\n>>\n>>\n>>\n>> El Mar 4, 2014, a las 12:28 AM, Venkata Balaji Nagothi escribió:\n>>\n>>> On Mon, Mar 3, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n>>>>\n>>>> Hello,\n>>>>\n>>>> Thankyou for your answer.\n>>>> I have made more changes than a simple re-indexing recently. I have moved the sorting field to the table in order to avoid the join clause. Now the schema is very simple. The query only implies one table:\n\n\n>>>>\n>>>> x=> \\d+ entity_compounddict2document;\n>>>>                       Table \"public.entity_compounddict2document\"\n>>>>       Column      |              Type              | Modifiers | Storage  | Description \n>>>> ------------------+--------------------------------+-----------+----------+-------------\n>>>>  id               | integer                        | not null  | plain    | \n>>>>  document_id      | integer                        |           | plain    | \n>>>>  name             | character varying(255)         |           | extended | \n>>>>  qualifier        | character varying(255)         |           | extended | \n>>>>  tagMethod        | character varying(255)         |           | extended | \n>>>>  created          | timestamp(0) without time zone |           | plain    | \n>>>>  updated          | timestamp(0) without time zone |           | plain    | \n>>>>  curation         | integer                        |           | plain    | \n>>>>  hepval           | double precision               |           | plain    | \n>>>>  cardval          | double precision               |           | plain    | \n>>>>  nephval          | double precision               |           | plain    | \n>>>>  phosval          | double precision               |           | plain    | \n>>>>  patternCount     | double precision               |           | plain    | \n>>>>  ruleScore        | double precision               |           | plain    | \n>>>>  hepTermNormScore | double precision               |           | plain    | \n>>>>  hepTermVarScore  | double precision               |           | plain    | \n>>>> Indexes:\n>>>>     \"entity_compounddict2document_pkey\" PRIMARY KEY, btree (id)\n>>>>     \"entity_compound2document_cardval\" btree (cardval)\n>>>>     \"entity_compound2document_heptermnormscore\" btree 
(\"hepTermNormScore\")\n>>>>     \"entity_compound2document_heptermvarscore\" btree (\"hepTermVarScore\")\n>>>>     \"entity_compound2document_hepval\" btree (hepval)\n>>>>     \"entity_compound2document_name\" btree (name)\n>>>>     \"entity_compound2document_nephval\" btree (nephval)\n>>>>     \"entity_compound2document_patterncount\" btree (\"patternCount\")\n>>>>     \"entity_compound2document_phosval\" btree (phosval)\n>>>>     \"entity_compound2document_rulescore\" btree (\"ruleScore\")\n>>>> Has OIDs: no\n>>>>\n>>>>            tablename            |                   indexname                                              |  num_rows    | table_size  | index_size | unique | number_of_scans | tuples_read | tuples_fetched \n\n\n>>>>  entity_compounddict2document   | entity_compound2document_cardval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_heptermnormscore      | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_heptermvarscore       | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_hepval                | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_name                  | 5.42452e+07 | 6763 MB    | 1505 MB    | Y      |              24 |      178680 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_nephval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_patterncount          | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_phosval               | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compound2document_rulescore             | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>  entity_compounddict2document   | entity_compounddict2document_pkey              | 5.42452e+07 | 6763 MB    | 1162 MB    | Y      |               0 |           0 |              0\n>>>>\n>>>> The table has aprox. 54,000,000 rows\n>>>> There are no NULLs in hepval field and pg_settings haven't changed. 
I also have done \"analyze\" to this table.\n>>>>\n>>>> I have simplified the query and added the last advise that you told me:\n>>>>\n>>>> Query: \n>>>>\n>>>>  explain analyze select * from (select * from entity_compounddict2document  where name='ranitidine') as a order by a.hepval;\n>>>>                                                                       QUERY PLAN                                                                      \n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>  Sort  (cost=11060.50..11067.55 rows=2822 width=133) (actual time=32715.097..32716.488 rows=13512 loops=1)\n>>>>    Sort Key: entity_compounddict2document.hepval\n>>>>    Sort Method:  quicksort  Memory: 2301kB\n>>>>    ->  Bitmap Heap Scan on entity_compounddict2document  (cost=73.82..10898.76 rows=2822 width=133) (actual time=6.034..32695.483 rows=13512 loops=1)\n>>>>          Recheck Cond: ((name)::text = 'ranitidine'::text)\n>>>>          ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..73.12 rows=2822 width=0) (actual time=3.221..3.221 rows=13512 loops=1)\n>>>>                Index Cond: ((name)::text = 'ranitidine'::text)\n>>>>  Total runtime: 32717.548 ms\n>>>>\n>>>> Another query:\n>>>> explain analyze select * from (select * from entity_compounddict2document  where name='progesterone' ) as a  order by a.hepval;\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>>  Sort  (cost=367879.25..368209.24 rows=131997 width=133) (actual time=9262.887..9287.046 rows=138165 loops=1)\n>>>>    Sort Key: entity_compounddict2document.hepval\n>>>>    Sort Method:  quicksort  Memory: 25622kB\n>>>>    ->  Bitmap Heap Scan on entity_compounddict2document  (cost=2906.93..356652.81 rows=131997 width=133) (actual time=76.316..9038.485 rows=138165 loops=1)\n>>>>          Recheck Cond: ((name)::text = 'progesterone'::text)\n>>>>          ->  Bitmap Index Scan on entity_compound2document_name  (cost=0.00..2873.93 rows=131997 width=0) (actual time=40.913..40.913 rows=138165 loops=1)\n>>>>                Index Cond: ((name)::text = 'progesterone'::text)\n>>>>  Total runtime: 9296.815 ms\n>>>>\n>>>>\n>>>> It has improved (I supose because of the lack of the join table) but still taking a lot of time... Anything I can do??\n>>>>\n>>>> Any help would be very appreciated. Thank you very much.\n>>>\n>>>\n>>>\n>>> Good to know performance has increased.\n>>>\n>>> \"entity_compounddict2document\" table goes through high INSERTS ?\n>>>\n>>> Can you help us know if the \"helpval\" column and \"name\" column have high duplicate values ? \"n_distinct\" value from pg_stats table would have that info. \n>>>\n>>> Below could be a possible workaround -\n>>>\n>>> As mentioned earlier in this email, a composite Index on name and hepval column might help. If the table does not go through lot of INSERTS, then consider performing a CLUSTER on the table using the same INDEX.\n\n\n>>>\n>>> Other recommendations -\n>>>\n>>> Please drop off all the Non-primary key Indexes which have 0 scans / hits. 
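\n>>>\n>>> (A minimal sketch of both suggestions, assuming a reasonably recent server; the composite index name is illustrative, and pg_stat_user_indexes is the standard per-index statistics view:)\n>>>\n>>> CREATE INDEX entity_compound2document_name_hepval\n>>>     ON entity_compounddict2document (name, hepval);\n>>> CLUSTER entity_compounddict2document USING entity_compound2document_name_hepval;\n>>> ANALYZE entity_compounddict2document;\n>>>\n>>> -- to spot the never-used indexes worth dropping:\n>>> SELECT relname, indexrelname, idx_scan\n>>> FROM pg_stat_user_indexes\n>>> WHERE relname = 'entity_compounddict2document' AND idx_scan = 0;\n>>>\n>>> 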
Keeping such unused indexes around would only harm the DB and the DB server during maintenance and DML operations.\n>>>\n>>> Regards,\n>>> Venkata Balaji N\n>>>\n>>> Fujitsu Australia\n>>\n>>\n>>\n>Hi, I think the problem is the heap scan of the table, which the backend has to do because the btree-to-bitmap conversion becomes lossy. Try disabling enable_bitmapscan for the current session and rerun the query.\n>\n>Mat Dba\n\n**CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. 
If you are not the intended recipient please contact the sender and delete all copies.", "msg_date": "Wed, 19 Mar 2014 12:09:12 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "On Wed, Mar 19, 2014 at 10:09 PM, acanada <[email protected]> wrote:\n\nHello,\n>\n> First of all I'd like to thank all of you for taking your time and help me\n> with this. Thank you very much.\n>\n> I did migrate the database to the new server with 32 processors Intel(R)\n> Xeon(R) CPU E5-2670 0 @ 2.60GHz and 60GB of RAM.\n> Evegeny pointed that the disks I am using are not fast enough (For\n> data: 00:1f.2 RAID bus controller: Intel Corporation C600/X79 series\n> chipset SATA RAID Controller (rev 05); and for logging a SAS disk but with\n> only 240GB available, database is 365GB...). I cannot change the locations\n> of data and log since there's not enough space for the data in the SAS\n> disk. Sadly this is a problem that I cannot solve any time soon...\n>\n> The migration had really improved the performance\n> I paste the before and after (the migration) explain analyze, buffers(if\n> aplicable due to server versions)\n>\n> BEFORE:\n> explain analyze select * from (select * from entity2document2 where\n> name='Acetaminophen' ) as a order by a.hepval;\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=18015.66..18027.15 rows=4595 width=139) (actual\n> time=39755.942..39756.246 rows=2845 loops=1)\n> Sort Key: entity2document2.hepval\n> Sort Method: quicksort Memory: 578kB\n> -> Bitmap Heap Scan on entity2document2 (cost=116.92..17736.15\n> rows=4595 width=139) (actual time=45.682..39751.255 rows=2845 loops=1)\n> Recheck Cond: ((name)::text = 'Acetaminophen'::text)\n> -> Bitmap Index Scan on entity2document2_name\n> (cost=0.00..115.77 rows=4595 width=0) (actual time=45.124..45.124\n> rows=2845 loops=1)\n> Index Cond: ((name)::text = 'Acetaminophen'::text)\n> Total runtime: 39756.507 ms\n>\n> AFTER:\n> explain (analyze,buffers) select * from (select * from entity2document2\n> where name='Acetaminophen' ) as a order by a.hepval;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=18434.76..18446.51 rows=4701 width=131) (actual\n> time=9196.634..9196.909 rows=2845 loops=1)\n> Sort Key: entity2document2.hepval\n> Sort Method: quicksort Memory: 604kB\n> Buffers: shared hit=4 read=1725\n> -> Bitmap Heap Scan on entity2document2 (cost=105.00..18148.03\n> rows=4701 width=131) (actual time=38.668..9190.318 rows=2845 loops=1)\n> Recheck Cond: ((name)::text = 'Acetaminophen'::text)\n> Buffers: shared hit=4 read=1725\n> -> Bitmap Index Scan on entity2documentnew_name\n> (cost=0.00..103.82 rows=4701 width=0) (actual time=30.905..30.905\n> rows=2845 loops=1)\n> Index Cond: ((name)::text = 'Acetaminophen'::text)\n> Buffers: shared hit=1 read=14\n> Total runtime: 9197.186 ms\n>\n> The improve is definitely good!!.\n> This is the table that I'm using:\n> \\d+ entity2document2;\n> Table \"public.entity2document2\"\n> Column | Type | Modifiers | Storage\n> | Stats target | Description\n>\n> ------------------+--------------------------------+-----------+----------+--------------+-------------\n> id | integer | not null | plain\n> | |\n> document_id | integer | | plain\n> | |\n> name | 
character varying(255) | not null | extended\n> | |\n> qualifier | character varying(255) | not null | extended\n> | |\n> tagMethod | character varying(255) | | extended\n> | |\n> created | timestamp(0) without time zone | not null | plain\n> | |\n> updated | timestamp(0) without time zone | | plain\n> | |\n> curation | integer | | plain\n> | |\n> hepval | double precision | | plain\n> | |\n> cardval | double precision | | plain\n> | |\n> nephval | double precision | | plain\n> | |\n> phosval | double precision | | plain\n> | |\n> patternCount | double precision | | plain\n> | |\n> ruleScore | double precision | | plain\n> | |\n> hepTermNormScore | double precision | | plain\n> | |\n> hepTermVarScore | double precision | | plain\n> | |\n> svmConfidence | double precision | | plain\n> | |\n> Indexes:\n> \"ent_pkey\" PRIMARY KEY, btree (id)\n> \"ent_cardval\" btree (cardval)\n> \"ent_document_id\" btree (document_id)\n> \"ent_heptermnormscore\" btree (\"hepTermNormScore\")\n> \"ent_heptermvarscore\" btree (\"hepTermVarScore\")\n> \"ent_hepval\" btree (hepval)\n> \"ent_name\" btree (name)\n> \"ent_nephval\" btree (nephval)\n> \"ent_patterncount\" btree (\"patternCount\")\n> \"ent_phosval\" btree (phosval)\n> \"ent_qualifier\" btree (qualifier)\n> \"ent_qualifier_name\" btree (qualifier, name)\n> \"ent_rulescore\" btree (\"ruleScore\")\n> \"ent_svm_confidence_index\" btree (\"svmConfidence\")\n>\n> And this are my current_settings\n>\n> name | current_setting | source\n> ----------------------------+--------------------+----------------------\n> application_name | psql | client\n> client_encoding | UTF8 | client\n> DateStyle | ISO, MDY | configuration file\n> default_text_search_config | pg_catalog.english | configuration file\n> effective_cache_size | 45000MB | configuration file\n> lc_messages | en_US.UTF-8 | configuration file\n> lc_monetary | en_US.UTF-8 | configuration file\n> lc_numeric | en_US.UTF-8 | configuration file\n> lc_time | en_US.UTF-8 | configuration file\n> listen_addresses | * | configuration file\n> log_timezone | Europe/Madrid | configuration file\n> logging_collector | on | configuration file\n> maintenance_work_mem | 4000MB | configuration file\n> max_connections | 100 | configuration file\n> max_stack_depth | 2MB | environment variable\n> shared_buffers | 10000MB | configuration file\n> TimeZone | Europe/Madrid | configuration file\n> work_mem | 32MB | configuration file\n>\n> The size of the table is 41 GB and some statistics:\n> relname | rows_in_bytes | num_rows | number_of_indexes |\n> unique | single_column | multi_column\n> entity2document2 | 89 MB | 9.33479e+07 |\n> 14 | Y | 13 | 1\n>\n>\n> I'm doing right now the CLUSTER on the table using the name+hepval\n> multiple index as Venkata told me and will post you if it works.\n> Anyway, even though the improvement is important, I'd like an increase of\n> the performance. When the number of rows returned is high, the performance\n> decreases too much..\n>\n\nSorry, i have not been following this since sometime now.\n\nHardware configuration is better now. You were running on 8.3.x, can you\nplease help us know what version of Postgres is this ?\n\nDid you collect latest statistics and performed VACUUM after migration ?\n\nCan you get us the EXPLAIN plan for \"select * from entity2document2 where\nname='Acetaminophen' ; \" ?\n\nVenkata Balaji N\n\nSr. 
Database Administrator\nFujitsu Australia
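\n\nP.S. A minimal sketch of what refreshing the statistics could look like after a migration like this (the table name is the one discussed above; VERBOSE is optional):\n\nVACUUM (ANALYZE, VERBOSE) entity2document2;\n\n-- or, database-wide (this can take a while on a 365GB database):\nVACUUM ANALYZE;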
", "msg_date": "Thu, 20 Mar 2014 10:30:38 +1100", "msg_from": "Venkata Balaji Nagothi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "Hello,\n\nNew server postgres version is 9.3. I'm not sure if I collected the latest statistics after migration; if you mean the current_settings or the analyze queries that I posted, yes, those were collected after migration (notice that there are analyze queries from before and after the migration, maybe I didn't illustrate that right).\nSorry for that. Reading the statistics collector manual, I see there are plenty of parameters, and I'm not sure which of them you are interested in, or if there's a query to collect them...\n\nThis is the explain for the query after clearing the cache (the name of the table has changed, it is not a mistake...):\n\nexplain analyze select * from entity2document where name='Acetaminophen';\n                                                                  QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on entity2document  (cost=104.47..17914.96 rows=4632 width=138) (actual time=62.811..12208.446 rows=2845 loops=1)\n   Recheck Cond: ((name)::text = 'Acetaminophen'::text)\n   ->  Bitmap Index Scan on entity2document_name_index  (cost=0.00..103.31 rows=4632 width=0) (actual time=34.357..34.357 rows=2845 loops=1)\n         Index Cond: ((name)::text = 'Acetaminophen'::text)\n Total runtime: 12216.115 ms\n(5 rows)\n\nIt's much better now than with the old server (39756.507 ms) but still high. I'd like to improve it...\nThank you very much.\n\nCheers,\n\nAndrés\n\nPS: Also notice that this is a query run after denormalizing the database to avoid joins of very big tables. Once the performance is good enough I'd like to normalize it again, if possible... :-)\n\nOn Mar 20, 2014, at 12:30 AM, Venkata Balaji Nagothi wrote:\n\n> Sorry, i have not been following this since sometime now.\n> \n> Hardware configuration is better now. You were running on 8.3.x, can you please help us know what version of Postgres is this ?\n> \n> Did you collect latest statistics and performed VACUUM after migration ?\n> \n> Can you get us the EXPLAIN plan for \"select * from entity2document2  where name='Acetaminophen' ; \" ?\n> \n> Venkata Balaji N\n> \n> Sr. Database Administrator\n> Fujitsu Australia
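\n\nPPS: In case it helps with the statistics question, a minimal sketch of reading the per-column statistics the planner keeps (pg_stats is the standard view; the column list is only a guess at what matters here):\n\nSELECT attname, null_frac, n_distinct, correlation\nFROM pg_stats\nWHERE tablename = 'entity2document'\n  AND attname IN ('name', 'hepval');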
\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Mar 2014 11:17:20 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking long time" }, { "msg_contents": "On Thu, Mar 20, 2014 at 9:17 PM, acanada <[email protected]> wrote:\n\n> Hello,\n>\n> New server postgres version is 9.3. I'm not sure if I collected latest\n> statistics after migration, if you mean if the current_settings or analyze\n> queries that I posted were collected after migration... yes (notice that\n> there are analyze query before migration and after migration, maybe I\n> didn't illustrate right)\n> Sorry for that. Reading the statistics collector manual, I see there are\n> plenty of parameters, and I'm not sure which one of them are you interested\n> in, or if there's a query to collect them...\n>\n\nHi Andres,\n\nIf we do not have statistics, it's hard to arrive at any conclusion\nregarding performance. The cost numbers we get without statistics are not\naccurate.\n\nAlso, you have migrated across 5 major versions since 8.3. It is very\nimportant to have the latest statistics in place.\n\nPlease perform VACUUM FULL and ANALYZE of the database.\n\nPlease post the EXPLAIN plan after that.\n\nThanks & Regards,\n\nVenkata Balaji N\nFujitsu Australia", "msg_date": "Sun, 23 Mar 2014 17:10:44 +1100", "msg_from": "Venkata Balaji Nagothi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking long time" } ]
[ { "msg_contents": "I am running the last version of PostgreSQL 9.3.3\nI have two tables detm and corm and a lot of datas in the column \ncormdata of corm table (1.4 GB).\n\nI have a GIN index on cormdata:\nCREATE INDEX ix_corm_fulltext_cormdata ON corm\n USING gin (to_tsvector('french'::regconfig, cormdata))\n WHERE cormishtml IS FALSE AND length(cormdata) < 20000;\n\nselect distinct b.detmmailid from corm b where \n(to_tsvector('french',b.cormdata) @@ to_tsquery('mauritanie') and \nb.cormishtml is false and length(b.cormdata) < 20000)\nis very fast and use the GIN index.\n\n\"HashAggregate (cost=2027.72..2031.00 rows=328 width=52)\"\n\" -> Bitmap Heap Scan on corm b (cost=24.25..2026.35 rows=548 width=52)\"\n\" Recheck Cond: ((to_tsvector('french'::regconfig, cormdata) @@ \nto_tsquery('mauritanie'::text)) AND (cormishtml IS FALSE) AND \n(length(cormdata) < 20000))\"\n\" -> Bitmap Index Scan on ix_corm_fulltext_cormdata \n(cost=0.00..24.11 rows=548 width=0)\"\n\" Index Cond: (to_tsvector('french'::regconfig, cormdata) \n@@ to_tsquery('mauritanie'::text))\"\n\n\nWith a join an another table detm, GIN index is not used\n\n\n explain select distinct a.detmmailid from detm a JOIN corm b on \na.detmmailid = b.detmmailid where ((to_tsvector('french',b.cormdata) @@ \nto_tsquery('mauritanie') and b.cormishtml is false and \nlength(b.cormdata) < 20000) OR ( detmobjet ~* 'mauritanie' ))\n\n\"HashAggregate (cost=172418.27..172423.98 rows=571 width=52)\"\n\" -> Hash Join (cost=28514.92..172416.85 rows=571 width=52)\"\n\" Hash Cond: (b.detmmailid = a.detmmailid)\"\n\" Join Filter: (((to_tsvector('french'::regconfig, b.cormdata) @@ \nto_tsquery('mauritanie'::text)) AND (b.cormishtml IS FALSE) AND \n(length(b.cormdata) < 20000)) OR (a.detmobjet ~* 'mauritanie'::text))\"\n\" -> Seq Scan on corm b (cost=0.00..44755.07 rows=449507 \nwidth=689)\"\n\" -> Hash (cost=19322.74..19322.74 rows=338574 width=94)\"\n\" -> Seq Scan on detm a (cost=0.00..19322.74 rows=338574 \nwidth=94)\"\n\n\nIf I remove OR ( detmobjet ~* 'mauritanie' ) in the select, the GIN \nindex is used\n explain select distinct a.detmmailid from detm a JOIN corm b on \na.detmmailid = b.detmmailid where ((to_tsvector('french',b.cormdata) @@ \nto_tsquery('mauritanie') and b.cormishtml is false and \nlength(b.cormdata) < 20000))\n\n\"HashAggregate (cost=4295.69..4301.17 rows=548 width=52)\"\n\" -> Nested Loop (cost=24.67..4294.32 rows=548 width=52)\"\n\" -> Bitmap Heap Scan on corm b (cost=24.25..2026.35 rows=548 \nwidth=52)\"\n\" Recheck Cond: ((to_tsvector('french'::regconfig, \ncormdata) @@ to_tsquery('mauritanie'::text)) AND (cormishtml IS FALSE) \nAND (length(cormdata) < 20000))\"\n\" -> Bitmap Index Scan on ix_corm_fulltext_cormdata \n(cost=0.00..24.11 rows=548 width=0)\"\n\" Index Cond: (to_tsvector('french'::regconfig, \ncormdata) @@ to_tsquery('mauritanie'::text))\"\n\" -> Index Only Scan using pkey_detm on detm a (cost=0.42..4.13 \nrows=1 width=52)\"\n\" Index Cond: (detmmailid = b.detmmailid)\"\n\nHow can i force the use of the GIN index ?\nthanks for your tips,\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source http://www.ckr-solutions.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 15:06:56 +0100", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "not using my GIN index in JOIN expression" }, { "msg_contents": "On 02/27/2014 04:06 PM, Jean-Max 
Reymond wrote:\n> I am running the last version of PostgreSQL 9.3.3\n> I have two tables detm and corm and a lot of datas in the column\n> cormdata of corm table (1.4 GB).\n>\n> I have a GIN index on cormdata:\n> CREATE INDEX ix_corm_fulltext_cormdata ON corm\n> USING gin (to_tsvector('french'::regconfig, cormdata))\n> WHERE cormishtml IS FALSE AND length(cormdata) < 20000;\n>\n> select distinct b.detmmailid from corm b where\n> (to_tsvector('french',b.cormdata) @@ to_tsquery('mauritanie') and\n> b.cormishtml is false and length(b.cormdata) < 20000)\n> is very fast and use the GIN index.\n>\n> \"HashAggregate (cost=2027.72..2031.00 rows=328 width=52)\"\n> \" -> Bitmap Heap Scan on corm b (cost=24.25..2026.35 rows=548 width=52)\"\n> \" Recheck Cond: ((to_tsvector('french'::regconfig, cormdata) @@\n> to_tsquery('mauritanie'::text)) AND (cormishtml IS FALSE) AND\n> (length(cormdata) < 20000))\"\n> \" -> Bitmap Index Scan on ix_corm_fulltext_cormdata\n> (cost=0.00..24.11 rows=548 width=0)\"\n> \" Index Cond: (to_tsvector('french'::regconfig, cormdata)\n> @@ to_tsquery('mauritanie'::text))\"\n>\n>\n> With a join an another table detm, GIN index is not used\n>\n>\n> explain select distinct a.detmmailid from detm a JOIN corm b on\n> a.detmmailid = b.detmmailid where ((to_tsvector('french',b.cormdata) @@\n> to_tsquery('mauritanie') and b.cormishtml is false and\n> length(b.cormdata) < 20000) OR ( detmobjet ~* 'mauritanie' ))\n>\n> \"HashAggregate (cost=172418.27..172423.98 rows=571 width=52)\"\n> \" -> Hash Join (cost=28514.92..172416.85 rows=571 width=52)\"\n> \" Hash Cond: (b.detmmailid = a.detmmailid)\"\n> \" Join Filter: (((to_tsvector('french'::regconfig, b.cormdata) @@\n> to_tsquery('mauritanie'::text)) AND (b.cormishtml IS FALSE) AND\n> (length(b.cormdata) < 20000)) OR (a.detmobjet ~* 'mauritanie'::text))\"\n> \" -> Seq Scan on corm b (cost=0.00..44755.07 rows=449507\n> width=689)\"\n> \" -> Hash (cost=19322.74..19322.74 rows=338574 width=94)\"\n> \" -> Seq Scan on detm a (cost=0.00..19322.74 rows=338574\n> width=94)\"\n>\n>\n> If I remove OR ( detmobjet ~* 'mauritanie' ) in the select, the GIN\n> index is used\n> explain select distinct a.detmmailid from detm a JOIN corm b on\n> a.detmmailid = b.detmmailid where ((to_tsvector('french',b.cormdata) @@\n> to_tsquery('mauritanie') and b.cormishtml is false and\n> length(b.cormdata) < 20000))\n>\n> \"HashAggregate (cost=4295.69..4301.17 rows=548 width=52)\"\n> \" -> Nested Loop (cost=24.67..4294.32 rows=548 width=52)\"\n> \" -> Bitmap Heap Scan on corm b (cost=24.25..2026.35 rows=548\n> width=52)\"\n> \" Recheck Cond: ((to_tsvector('french'::regconfig,\n> cormdata) @@ to_tsquery('mauritanie'::text)) AND (cormishtml IS FALSE)\n> AND (length(cormdata) < 20000))\"\n> \" -> Bitmap Index Scan on ix_corm_fulltext_cormdata\n> (cost=0.00..24.11 rows=548 width=0)\"\n> \" Index Cond: (to_tsvector('french'::regconfig,\n> cormdata) @@ to_tsquery('mauritanie'::text))\"\n> \" -> Index Only Scan using pkey_detm on detm a (cost=0.42..4.13\n> rows=1 width=52)\"\n> \" Index Cond: (detmmailid = b.detmmailid)\"\n>\n> How can i force the use of the GIN index ?\n> thanks for your tips,\n\nThe problem with the OR detmobject ~* 'mauritanie' restriction is that \nthe rows that match that condition cannot be found using the GIN index. \nI think you'd want the system to fetch all the rows that match the other \ncondition using the GIN index, and do something else to find the other \nrows. 
The planner should be able to do that if you rewrite the query as \na UNION:\n\nselect a.detmmailid from detm a JOIN corm b on\na.detmmailid = b.detmmailid\nwhere (to_tsvector('french',b.cormdata) @@ to_tsquery('mauritanie') and \nb.cormishtml is false and length(b.cormdata) < 20000)\nunion\nselect a.detmmailid from detm a JOIN corm b on\na.detmmailid = b.detmmailid\nwhere detmobjet ~* 'mauritanie'\n\nNote that that will not return rows in 'detm' that have no matching rows \nin 'corm' table, even if they match the \"detmobjet ~* 'mauritanie\" \ncondition. That's what your original query also did, but if that's not \nwhat you want, leave out the JOIN from the second part of the union.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 16:19:55 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not using my GIN index in JOIN expression" }, { "msg_contents": "On 27/02/2014 15:19, Heikki Linnakangas wrote:\n> On 02/27/2014 04:06 PM, Jean-Max Reymond wrote:\n>> I am running the last version of PostgreSQL 9.3.3\n>> I have two tables detm and corm and a lot of datas in the column\n>> cormdata of corm table (1.4 GB).\n>>\n>> I have a GIN index on cormdata:\n>> CREATE INDEX ix_corm_fulltext_cormdata ON corm\n>> USING gin (to_tsvector('french'::regconfig, cormdata))\n>> WHERE cormishtml IS FALSE AND length(cormdata) < 20000;\n>>\n>> select distinct b.detmmailid from corm b where\n>> (to_tsvector('french',b.cormdata) @@ to_tsquery('mauritanie') and\n>> b.cormishtml is false and length(b.cormdata) < 20000)\n>> is very fast and use the GIN index.\n>>\n>> \"HashAggregate (cost=2027.72..2031.00 rows=328 width=52)\"\n>> \" -> Bitmap Heap Scan on corm b (cost=24.25..2026.35 rows=548 width=52)\"\n>> \" Recheck Cond: ((to_tsvector('french'::regconfig, cormdata) @@\n>> to_tsquery('mauritanie'::text)) AND (cormishtml IS FALSE) AND\n>> (length(cormdata) < 20000))\"\n>> \" -> Bitmap Index Scan on ix_corm_fulltext_cormdata\n>> (cost=0.00..24.11 rows=548 width=0)\"\n>> \" Index Cond: (to_tsvector('french'::regconfig, cormdata)\n>> @@ to_tsquery('mauritanie'::text))\"\n>>\n>>\n>> With a join an another table detm, GIN index is not used\n>>\n>>\n>> explain select distinct a.detmmailid from detm a JOIN corm b on\n>> a.detmmailid = b.detmmailid where ((to_tsvector('french',b.cormdata) @@\n>> to_tsquery('mauritanie') and 
b.cormishtml is false and\n>> length(b.cormdata) < 20000))\n>>\n>> \"HashAggregate (cost=4295.69..4301.17 rows=548 width=52)\"\n>> \" -> Nested Loop (cost=24.67..4294.32 rows=548 width=52)\"\n>> \" -> Bitmap Heap Scan on corm b (cost=24.25..2026.35 rows=548\n>> width=52)\"\n>> \" Recheck Cond: ((to_tsvector('french'::regconfig,\n>> cormdata) @@ to_tsquery('mauritanie'::text)) AND (cormishtml IS FALSE)\n>> AND (length(cormdata) < 20000))\"\n>> \" -> Bitmap Index Scan on ix_corm_fulltext_cormdata\n>> (cost=0.00..24.11 rows=548 width=0)\"\n>> \" Index Cond: (to_tsvector('french'::regconfig,\n>> cormdata) @@ to_tsquery('mauritanie'::text))\"\n>> \" -> Index Only Scan using pkey_detm on detm a (cost=0.42..4.13\n>> rows=1 width=52)\"\n>> \" Index Cond: (detmmailid = b.detmmailid)\"\n>>\n>> How can i force the use of the GIN index ?\n>> thanks for your tips,\n>\n> The problem with the OR detmobject ~* 'mauritanie' restriction is that\n> the rows that match that condition cannot be found using the GIN index.\n> I think you'd want the system to fetch all the rows that match the other\n> condition using the GIN index, and do something else to find the other\n> rows. The planner should be able to do that if you rewrite the query as\n> a UNION:\n>\n> select a.detmmailid from detm a JOIN corm b on\n> a.detmmailid = b.detmmailid\n> where (to_tsvector('french',b.cormdata) @@ to_tsquery('mauritanie') and\n> b.cormishtml is false and length(b.cormdata) < 20000)\n> union\n> select a.detmmailid from detm a JOIN corm b on\n> a.detmmailid = b.detmmailid\n> where detmobjet ~* 'mauritanie'\n>\n> Note that that will not return rows in 'detm' that have no matching rows\n> in 'corm' table, even if they match the \"detmobjet ~* 'mauritanie\"\n> condition. That's what your original query also did, but if that's not\n> what you want, leave out the JOIN from the second part of the union.\n>\n> - Heikki\n\nIt works great: thanks a lot :-)\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source http://www.ckr-solutions.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 15:46:29 +0100", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: not using my GIN index in JOIN expression" } ]
[ { "msg_contents": "I'd like to understand why PostgreSQL is choosing to filter on the most\ninefficient predicate first in the query below.\n\nVersion: PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc\n(SUSE Linux) 4.7.1 20120723 [gcc-4_7-branch revision 189773], 64-bit\nHardware: 24 2.93GHz Xeon cores and 32GB RAM\n\nTable in question:\n------------------------------------------------------------\nCREATE TABLE audit_trail (\n id int PRIMARY KEY NOT NULL,\n model varchar(128) NOT NULL,\n model_id int NOT NULL,\n created bool,\n updated bool,\n deleted bool,\n change_set text,\n created_by varchar(64) NOT NULL,\n created_date timestamp NOT NULL,\n updated_by varchar(64) NOT NULL,\n updated_date timestamp NOT NULL\n);\nCREATE UNIQUE INDEX audit_trail_pkey ON audit_trail(id);\n\nRow count: 5,306,596\nMin / Avg / Max character length of change_set column values: 11 / 165 /\n12717859\nNumber of unique values in model column: 196\n\nQuery and plan:\n------------------------------------------------------------\nSET track_io_timing = on;\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT * FROM audit_trail WHERE model = 'User' AND model_id = 304 AND\nchange_set ILIKE '%test%';\n\nSeq Scan on audit_trail (cost=0.00..243427.98 rows=1 width=189) (actual\ntime=15509.722..17943.896 rows=6 loops=1)\n Filter: ((change_set ~~* '%test%'::text) AND ((model)::text =\n'User'::text) AND (model_id = 304))\n Rows Removed by Filter: 5306590\n Buffers: shared hit=10410 read=164384\n I/O Timings: read=310.189\nTotal runtime: 17943.930 ms\n\nObservations:\n------------------------------------------------------------\n1) Without the change_set predicate, the query runs in 1 second and returns\n461 rows.\n2) Turning statistics off for the change_set column has no effect on the\nplan or execution time:\n ALTER TABLE audit_trail ALTER COLUMN change_set SET STATISTICS 0;\n ANALYZE audit_trail(change_set);\n3) Setting statistics to the max value has no effect on the plan or\nexecution time:\n ALTER TABLE audit_trail ALTER COLUMN change_set SET STATISTICS 10000;\n ANALYZE audit_trail(change_set);\n4) Adding an index on (model, model_id) changes the plan so that it starts\nwith an index scan. Query time < 1s.\n CREATE INDEX audit_trail_model_idx ON audit_trail (model, model_id);\n\n Aggregate (cost=12.29..12.30 rows=1 width=4) (actual\ntime=1.455..1.456 rows=1 loops=1)\n -> Index Scan using audit_trail_model_idx on audit_trail this_\n(cost=0.56..12.29 rows=1 width=4) (actual time=1.446..1.446 rows=0 loops=1)\n Index Cond: (((model)::text = 'User'::text) AND (model_id =\n304))\n Filter: (change_set ~~* '%test%'::text)\n Rows Removed by Filter: 461\n\nAlthough adding the index will fix the performance problem, I'd like to\nunderstand why, in absence of the index, PostgreSQL would choose to filter\non the change_set value first. 
Since it is not specified first in the\npredicate and its column ordinality is higher than model and model_id, the\nplan generator must be choosing it first for some particular reason.\n\nAny insight is appreciated.
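\n\n(A hedged sketch of the cost-based workaround suggested in the replies that follow; the COST value is purely illustrative, and altering a pg_catalog function requires superuser:)\n\n-- inspect the planner cost currently assigned to the ILIKE support function\nSELECT proname, procost FROM pg_proc WHERE proname = 'texticlike';\n\n-- raise it so the planner prefers to evaluate the cheaper quals first\nALTER FUNCTION pg_catalog.texticlike(text, text) COST 100;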
", "msg_date": "Thu, 27 Feb 2014 11:57:42 -0500", "msg_from": "Tom Coogan <[email protected]>", "msg_from_op": true, "msg_subject": "Inefficient filter order in query plan" }, { "msg_contents": "Tom Coogan <[email protected]> writes:\n> I'd like to understand why PostgreSQL is choosing to filter on the most\n> inefficient predicate first in the query below.\n\nIt doesn't know that LIKE is any more expensive than the other operators,\nso there's no reason to do them in any particular order.\n\nYou could try increasing the cost attributed to the texticlike() function\nif you don't like the results you're getting here.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 12:04:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient filter order in query plan" }, { "msg_contents": "* Tom Lane ([email protected]) wrote:\n> Tom Coogan <[email protected]> writes:\n> > I'd like to understand why PostgreSQL is choosing to filter on the most\n> > inefficient predicate first in the query below.\n> \n> It doesn't know that LIKE is any more expensive than the other operators,\n> so there's no reason to do them in any particular order.\n> \n> You could try increasing the cost attributed to the texticlike() function\n> if you don't like the results you're getting here.\n\nPerhaps we should be attributing some additional cost to operations\nwhich (are likely to) require de-TOAST'ing a bunch of values? It's not\nobvious from the original email, but it's at least my suspicion that the\ndifference is amplified due to de-TOAST'ing of the values in that text\ncolumn, in addition to the straight-up function execution time\ndifferences.\n\nCosting integer (or anything that doesn't require pointer maniuplations)\noperations as cheaper than text-based operations also makes sense to me,\neven though of course there's more things happening when we do these\ncomparisons than the simple CPU-level act of doing the cmp.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 27 Feb 2014 13:24:11 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient filter order in query plan" }, { "msg_contents": "Stephen Frost <[email protected]> writes:\n> * Tom Lane ([email protected]) wrote:\n>> You could try increasing the cost attributed to the texticlike() function\n>> if you don't like the results you're getting here.\n\n> Perhaps we should be attributing some additional cost to operations\n> which (are likely to) require de-TOAST'ing a bunch of values? It's not\n> obvious from the original email, but it's at least my suspicion that the\n> difference is amplified due to de-TOAST'ing of the values in that text\n> column, in addition to the straight-up function execution time\n> differences.\n\nCould be. We've discussed adding some charge for touching\nlikely-to-be-toasted columns in the past, but nobody's done anything\nabout it. Note that I'd rather see that implemented as a nonzero cost\n
Note that I'd rather see that implemented as a nonzero cost\nfor Vars than as a charge for functions per se.\n\n> Costing integer (or anything that doesn't require pointer maniuplations)\n> operations as cheaper than text-based operations also makes sense to me,\n> even though of course there's more things happening when we do these\n> comparisons than the simple CPU-level act of doing the cmp.\n\nRight. We've bumped up the cost numbers for some extremely expensive\nfunctions, but I would like to have some actual data rather than just seat\nof the pants guesses before we go fooling with the standard CPU cost\nestimates for things at the level of regex matches.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 14:02:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient filter order in query plan" }, { "msg_contents": "On Thu, Feb 27, 2014 at 12:04 PM, Tom Lane <[email protected]> wrote:\n>\n> It doesn't know that LIKE is any more expensive than the other operators,\n> so there's no reason to do them in any particular order.\n>\n\nThanks Tom but why would strict equality checking (e.g. model =\n'User') have the same cost as LIKE operations which (may) have to do\npattern matching? I understand from the optimizer's perspective that\nthe actual cost could be equivalent depending upon what value is\nsearched (and the optimizer wouldn't know that value ahead of time).\nBut doesn't the potential for pattern matching warrant some difference\nin cost? From my experience, LIKE is almost always used with some\nform of pattern match in the supplied value.\n\nOn Thu, Feb 27, 2014 at 1:24 PM, Stephen Frost <[email protected]> wrote:\n>\n> Perhaps we should be attributing some additional cost to operations\n> which (are likely to) require de-TOAST'ing a bunch of values? It's not\n> obvious from the original email, but it's at least my suspicion that the\n> difference is amplified due to de-TOAST'ing of the values in that text\n> column, in addition to the straight-up function execution time\n> differences.\n\nLet me know if this is the wrong way to find this information but it\ndoesn't appear that any values in this particular table are TOAST'ed:\n\n SELECT oid, relname, reltoastrelid, relpages FROM pg_class WHERE\nrelname = 'audit_trail' OR oid = 7971231;\n\n oid | relname | reltoastrelid | relpages\n---------+------------------+---------------+----------\n 7971228 | audit_trail | 7971231 | 150502\n 7971231 | pg_toast_7971228 | 0 | 0\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 15:34:04 -0500", "msg_from": "Tom Coogan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient filter order in query plan" }, { "msg_contents": "Tom Coogan <[email protected]> writes:\n> Thanks Tom but why would strict equality checking (e.g. 
model =\n> 'User') have the same cost as LIKE operations which (may) have to do\n> pattern matching?\n\nA bit of consultation of pg_proc.procost will show you that just about\nthe only internal functions with costs different from 1X cpu_operator_cost\nare those that do some sort of database access (and, in consequence, have\ntrue costs a couple orders of magnitude higher than a simple comparison).\nWe may eventually get around to refining the cost model so that it can\ntell the difference between = and LIKE, but nobody's yet done the work\nto decide which functions ought to get assigned what costs. I'm\ndisinclined to single out LIKE for special treatment in the absence of\nsome sort of framework for deciding which functions are worth penalizing.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Feb 2014 15:47:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient filter order in query plan" }, { "msg_contents": "On Thu, Feb 27, 2014 at 3:47 PM, Tom Lane <[email protected]> wrote:\n> A bit of consultation of pg_proc.procost will show you that just about\n> the only internal functions with costs different from 1X cpu_operator_cost\n> are those that do some sort of database access (and, in consequence, have\n> true costs a couple orders of magnitude higher than a simple comparison).\n> We may eventually get around to refining the cost model so that it can\n> tell the difference between = and LIKE, but nobody's yet done the work\n> to decide which functions ought to get assigned what costs. I'm\n> disinclined to single out LIKE for special treatment in the absence of\n> some sort of framework for deciding which functions are worth penalizing.\n>\n> regards, tom lane\n\nI agree that LIKE should not be singled out for special treatment\nsince the cost values should be determined with respect to all other\noperators and not just one.\n\nMy original question still remained though. The clause order of my\npredicate is being re-arranged due to factors unrelated to cost:\n\nPredicate: model = 'User' AND model_id = 304 AND change_set ILIKE '%test%'\n-->\nPlan filter: (change_set ~~* '%test%'::text) AND ((model)::text =\n'User'::text) AND (model_id = 304)\n\nFor anyone looking for an answer to a similar question in the future,\nthe following is summary of why this appears to be happening.\n\nUsing PostgreSQL 9.1.12 source:\n\n/src/backend/optimizer/plan/planmain.c:193\n - A call to deconstruct_jointree(PlannerInfo *root) is made. The\njointree inside of root still has the clause order preserved.\n/src/backend/optimizer/plan/initsplan.c:259\n - A call to deconstruct_recurse(PlannerInfo *root, Node *jtnode,\n....) is made.\n/src/backend/optimizer/plan/initsplan.c:353\n - The quals are looped over and a call to\ndistribute_qual_to_rels(PlannerInfo *root, Node *clause, ....) is made\nfor each.\n/src/backend/optimizer/plan/initsplan.c:1099\n - LIKE and equivalence clauses are handled differently.\n - Equivalence clauses are encountered first in my scenario\nand get processed by process_equivalence() on line 1104. return is\nthen immediately called.\n - A comment explains that equivalence clauses will be\nadded to the restriction list at a later time:\n \"If it is a true equivalence clause, send it to the\nEquivalenceClass\n machinery. 
We do *not* attach it directly to any\nrestriction or join\n lists. The EC code will propagate it to the\nappropriate places later.\"\n - The LIKE clause is last and gets processed by\ndistribute_restrictinfo_to_rels on line 1152.\n - Unlike equivalences, this results in the LIKE clause\ngetting added to the restriction list immediately and therefore is\nfirst in the list.\n/src/backend/optimizer/plan/planmain.c:207\n - Equivalence clauses finally get added to the restriction list\nvia a call to generate_base_implied_equalities(PlannerInfo *root)\n\nI don't understand the reason for delaying the addition of equivalence\nclauses to the restriction list but, for whatever reason, it appears\nto be by design.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Feb 2014 13:29:29 -0500", "msg_from": "Tom Coogan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient filter order in query plan" } ]
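As a footnote to the thread above, and strictly as a sketch rather than anything the posters endorsed, there are two levers available on these versions. texteq and texticlike are the pg_proc entries behind = and ILIKE on text; the COST value below is a guess, not a measured number, and the OFFSET 0 subquery is the usual optimization fence that keeps the planner from flattening the subquery, so the cheap quals get applied before the pattern match:

    -- Inspect the default costs (both default to 1 unit of cpu_operator_cost):
    SELECT oid::regprocedure AS func, procost
    FROM pg_proc
    WHERE proname IN ('texteq', 'texticlike');

    -- A superuser could penalize ILIKE; the planner sorts the quals at a
    -- given plan node by estimated cost, so a higher procost pushes ILIKE last:
    ALTER FUNCTION texticlike(text, text) COST 10;

    -- Or pin the ordering by hand with an optimization fence, using the
    -- audit_trail example from the thread:
    SELECT *
    FROM (SELECT *
          FROM audit_trail
          WHERE model = 'User' AND model_id = 304
          OFFSET 0) AS cheap_quals_first
    WHERE change_set ILIKE '%test%';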
[ { "msg_contents": "Hi Everyone,\nWe have a data set and access pattern that is causing us some performance\nissues. The data set is hierarchical with about 2 million rows at the\nlowest level (object), followed by 500k at the next level (container) and\napproximately 10 at the highest level (category).\n\nThe way the data is structured objects live in one primary container but\ncan also reside in one or more secondary containers. The secondary\ncontainers have to be filtered by category ids for access control. This\ndoesn't really pose a problem on the primary containers because they are\nhomogeneous by category but it slows things down on the secondary\ncontainers.\n\nThe access pattern certainly complicates things. We need to order the data\nby a value (chronological, score, etc) and jump to an arbitrary position\nand window within the ordering. I have provided an example of the\nchronological and score based ordering and indexing in the script provided\nbelow. I'm proxying the chronological ordering by using the sequence\ngenerated id for the sample code. In the application we use a timestamp.\n\nI have created sample code <https://gist.github.com/drsnyder/9277054> (\nhttps://gist.github.com/drsnyder/9277054) that will build a dataset that is\nsimilar to the one that we have in production. The distributions aren't\nexactly the same but they should reproduce the behavior. I have also\nprovided examples of the queries that give us problems.\n\nThe code is documented inline and points out the queries that are causing\nproblems. You should be able to run the script on a 9.2.2 (we use 9.2.6)\ndatabase in about 10m on a development laptop (or 1-2m on production-like\nhardware) to experiment with the data and the SQL. The total database size\nis about 570MB.\n\nThe primary query that I'm trying to optimize executes in about 1600ms on\nmy laptop and about 800ms on production-like hardware (more for the score\nversion). My target is to get the data fetch down below 100ms if possible.\n\nIf you have any suggestions it would be greatly appreciated. Am I missing\nsomething obvious? Is there a logically equivalent alternative that would\nbe more efficient?\n\nI'm also open to creative ways to cache the data in PostgreSQL. Should we\nuse a rollup table or arrays? Those options aren't ideal because of the\nmaintenance required but if its the only option I'm ok with that.\n\nThanks,\nDamon\n\nHi Everyone,We have a data set and access pattern that is causing us some performance issues. The data set is hierarchical with about 2 million rows at the lowest level (object), followed by 500k at the next level (container) and approximately 10 at the highest level (category). \nThe way the data is structured objects live in one primary container but can also reside in one or more secondary containers. The secondary containers have to be filtered by category ids for access control. This doesn't really pose a problem on the primary containers because they are homogeneous by category but it slows things down on the secondary containers.\nThe access pattern certainly complicates things. We need to order the data by a value (chronological, score, etc) and jump to an arbitrary position and window within the ordering. I have provided an example of the chronological and score based ordering and indexing in the script provided below. I'm proxying the chronological ordering by using the sequence generated id for the sample code. 
In the application we use a timestamp.\nI have created sample code (https://gist.github.com/drsnyder/9277054) that will build a dataset that is similar to the one that we have in production. The distributions aren't exactly the same but they should reproduce the behavior. I have also provided examples of the queries that give us problems. \nThe code is documented inline and points out the queries that are causing problems. You should be able to run the script on a 9.2.2 (we use 9.2.6) database in about 10m on a development laptop (or 1-2m on production-like hardware) to experiment with the data and the SQL. The total database size is about 570MB.\nThe primary query that I'm trying to optimize executes in about 1600ms on my laptop and about 800ms on production-like hardware (more for the score version). My target is to get the data fetch down below 100ms if possible. \nIf you have any suggestions it would be greatly appreciated. Am I missing something obvious? Is there a logically equivalent alternative that would be more efficient?I'm also open to creative ways to cache the data in PostgreSQL. Should we use a rollup table or arrays? Those options aren't ideal because of the maintenance required but if its the only option I'm ok with that.\nThanks,Damon", "msg_date": "Fri, 28 Feb 2014 12:01:52 -0800", "msg_from": "Damon Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "Help with optimizing a query over hierarchical data" }, { "msg_contents": "On Fri, Feb 28, 2014 at 5:01 PM, Damon Snyder <[email protected]> wrote:\n> The primary query that I'm trying to optimize executes in about 1600ms on my\n> laptop and about 800ms on production-like hardware (more for the score\n> version). My target is to get the data fetch down below 100ms if possible.\n\nCould you post some explain analyze of those particular queries?\n\n> If you have any suggestions it would be greatly appreciated. Am I missing\n> something obvious? Is there a logically equivalent alternative that would be\n> more efficient?\n\nI'd suggest de-normalizing a bit. For instance, why don't you put the\nscore right into the object? I'm sure the indirection is hurting.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 1 Mar 2014 22:02:24 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a query over hierarchical data" }, { "msg_contents": "Hi Claudio,\nThanks for responding. Here is the explain (http://explain.depesz.com/s/W3W)\nfor the ordering by meta container starting on line 192 (\nhttps://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L192\n).\n\nHere is the explain (http://explain.depesz.com/s/d1O) for the ordering by\nscore starting on line 192 (\nhttps://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L216\n).\n\nBoth of the explains were done with (ANALYZE, BUFFERS).\n\nThanks for the suggestion regarding de-normalizing. I'll consider that\napproach for the score based query.\n\nI've also included the server config changes made from updates to\npostgresql.conf on the box that I'm testing on. 
See below.\n\nThanks,\nDamon\n\n version\n\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.2.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-3), 64-bit\n(1 row)\n\n name | current_setting | source\n------------------------------+--------------------+----------------------\n application_name | psql | client\n checkpoint_completion_target | 0.9 | configuration file\n checkpoint_segments | 16 | configuration file\n DateStyle | ISO, MDY | configuration file\n default_tablespace | ssd2 | user\n default_text_search_config | pg_catalog.english | configuration file\n effective_cache_size | 5632MB | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n listen_addresses | * | configuration file\n log_destination | stderr | configuration file\n log_directory | pg_log | configuration file\n log_filename | postgresql-%a.log | configuration file\n log_line_prefix | %d %m %c %x: | configuration file\n log_min_duration_statement | 500ms | configuration file\n log_min_error_statement | error | configuration file\n log_min_messages | error | configuration file\n log_rotation_age | 1d | configuration file\n log_rotation_size | 0 | configuration file\n log_timezone | UTC | configuration file\n log_truncate_on_rotation | on | configuration file\n logging_collector | on | configuration file\n maintenance_work_mem | 480MB | configuration file\n max_connections | 80 | configuration file\n max_stack_depth | 2MB | environment variable\n port | 5432 | command line\n shared_buffers | 1920MB | configuration file\n TimeZone | UTC | configuration file\n wal_buffers | 16MB | configuration file\n work_mem | 8MB | configuration file\n(32 rows)\n\n\n\nOn Sat, Mar 1, 2014 at 5:02 PM, Claudio Freire <[email protected]>wrote:\n\n> On Fri, Feb 28, 2014 at 5:01 PM, Damon Snyder <[email protected]>\n> wrote:\n> > The primary query that I'm trying to optimize executes in about 1600ms\n> on my\n> > laptop and about 800ms on production-like hardware (more for the score\n> > version). My target is to get the data fetch down below 100ms if\n> possible.\n>\n> Could you post some explain analyze of those particular queries?\n>\n> > If you have any suggestions it would be greatly appreciated. Am I missing\n> > something obvious? Is there a logically equivalent alternative that\n> would be\n> > more efficient?\n>\n> I'd suggest de-normalizing a bit. For instance, why don't you put the\n> score right into the object?\n>\n", "msg_date": "Mon, 3 Mar 2014 09:55:41 -0800", "msg_from": "Damon Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with optimizing a query over hierarchical data" }, { "msg_contents": "Um... I think your problem is a misuse of CTE. Your CTE is building an\nintermediate of several thousands of rows only to select a dozen\nafterwards. You may want to consider a view or subquery, though I'm\nnot sure pg will be able to optimize much given your use of window\nfunctions, which forces a materialization of that intermediate result.\n\nI think you need to re-think your queries to be smarter about that.\n\nOn Mon, Mar 3, 2014 at 2:55 PM, Damon Snyder <[email protected]> wrote:\n> Hi Claudio,\n> Thanks for responding. Here is the explain (http://explain.depesz.com/s/W3W)\n> for the ordering by meta container starting on line 192\n> (https://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L192).\n>\n> Here is the explain (http://explain.depesz.com/s/d1O) for the ordering by\n> score starting on line 192\n> (https://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L216).\n>\n> Both of the explains were done with (ANALYZE, BUFFERS).\n>\n> Thanks for the suggestion regarding de-normalizing. I'll consider that\n> approach for the score based query.\n>\n> I've also included the server config changes made from updates to\n> postgresql.conf on the box that I'm testing on. See below.\n>\n> Thanks,\n> Damon\n>\n> version\n> --------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.2.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-3), 64-bit\n> (1 row)\n>\n> name | current_setting | source\n> ------------------------------+--------------------+----------------------\n> application_name | psql | client\n> checkpoint_completion_target | 0.9 | configuration file\n> checkpoint_segments | 16 | configuration file\n> DateStyle | ISO, MDY | configuration file\n> default_tablespace | ssd2 | user\n> default_text_search_config | pg_catalog.english | configuration file\n> effective_cache_size | 5632MB | configuration file\n> lc_messages | en_US.UTF-8 | configuration file\n> lc_monetary | en_US.UTF-8 | configuration file\n> lc_numeric | en_US.UTF-8 | configuration file\n> lc_time | en_US.UTF-8 | configuration file\n> listen_addresses | * | configuration file\n> log_destination | stderr | configuration file\n> log_directory | pg_log | configuration file\n> log_filename | postgresql-%a.log | configuration file\n> log_line_prefix | %d %m %c %x: | configuration file\n> log_min_duration_statement | 500ms | configuration file\n> log_min_error_statement | error | configuration file\n> log_min_messages | error | configuration file\n> log_rotation_age | 1d | configuration file\n> log_rotation_size | 0 | configuration file\n> log_timezone | UTC | configuration file\n> log_truncate_on_rotation | on | configuration file\n> logging_collector | on | configuration file\n> maintenance_work_mem | 480MB | configuration file\n> max_connections | 80 | configuration file\n> max_stack_depth | 2MB | environment variable\n> port | 5432 | command line\n> shared_buffers | 1920MB | configuration file\n> TimeZone | UTC | configuration file\n> wal_buffers | 16MB | configuration file\n> work_mem | 8MB | configuration file\n> (32 rows)\n>\n>\n>\n> On Sat, Mar 1, 2014 at 5:02 PM, Claudio Freire <[email protected]>\n> wrote:\n>>\n>> On Fri, Feb 28, 2014 at 
5:01 PM, Damon Snyder <[email protected]>\n>> wrote:\n>> > The primary query that I'm trying to optimize executes in about 1600ms\n>> > on my\n>> > laptop and about 800ms on production-like hardware (more for the score\n>> > version). My target is to get the data fetch down below 100ms if\n>> > possible.\n>>\n>> Could you post some explain analyze of those particular queries?\n>>\n>> > If you have any suggestions it would be greatly appreciated. Am I\n>> > missing\n>> > something obvious? Is there a logically equivalent alternative that\n>> > would be\n>> > more efficient?\n>>\n>> I'd suggest de-normalizing a bit. For instance, why don't you put the\n>> score right into the object? I'm sure the indirection is hurting.\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 3 Mar 2014 18:52:06 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a query over hierarchical data" }, { "msg_contents": "Hi Claudio,\nSee my comments inline below.\n\n> Um... I think your problem is a misuse of CTE. Your CTE is building an\nintermediate of several thousands of rows only to select a dozen\nafterwards. You may want to consider a view or subquery, though I'm\nnot sure pg will be able to optimize much given your use of window\nfunctions, which forces a materialization of that intermediate result.\n\nThe application requires that we find an element and it's neighbors within\na sorted set at a given offset after filtering by category and status. In\nthe examples provided, we need position 50000, 6 above, and 6 below. Is\nthere a way do to that more efficiently without first determining the\nposition of each element within the set using a window function? How would\na subquery help?\n\nThe only solution I could come up with was to materialize the intermediate\nresult with the CTE (since you don't know ahead of time how many objects\nmatch the status and category criteria) then use the window to include the\nposition or index.\n\nThe only alternative that I can think of would be to materialize the\nelements of the set with an index on the lookup attributes and the\nattribute used to order them. That is, make what the CTE is doing\nmaterialized. Even in that case you will still need a window to determine\nthe position after filtering but you won't have any joins.\n\n> I think you need to re-think your queries to be smarter about that.\n\nAs I mentioned above, we need the position and it's neighbors to support a\nfeature. Do you have any suggestions as to how we might re-think them?\n\nThanks,\nDamon\n\n\n\nOn Mon, Mar 3, 2014 at 1:52 PM, Claudio Freire <[email protected]>wrote:\n\n> Um... I think your problem is a misuse of CTE. Your CTE is building an\n> intermediate of several thousands of rows only to select a dozen\n> afterwards. You may want to consider a view or subquery, though I'm\n> not sure pg will be able to optimize much given your use of window\n> functions, which forces a materialization of that intermediate result.\n>\n> I think you need to re-think your queries to be smarter about that.\n>\n> On Mon, Mar 3, 2014 at 2:55 PM, Damon Snyder <[email protected]>\n> wrote:\n> > Hi Claudio,\n> > Thanks for responding. 
Here is the explain (\n> http://explain.depesz.com/s/W3W)\n> > for the ordering by meta container starting on line 192\n> > (\n> https://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L192\n> ).\n> >\n> > Here is the explain (http://explain.depesz.com/s/d1O) for the ordering\n> by\n> > score starting on line 192\n> > (\n> https://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L216\n> ).\n> >\n> > Both of the explains were done with (ANALYZE, BUFFERS).\n> >\n> > Thanks for the suggestion regarding de-normalizing. I'll consider that\n> > approach for the score based query.\n> >\n> > I've also included the server config changes made from updates to\n> > postgresql.conf on the box that I'm testing on. See below.\n> >\n> > Thanks,\n> > Damon\n> >\n> > version\n> >\n> --------------------------------------------------------------------------------------------------------------\n> > PostgreSQL 9.2.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n> 4.4.7\n> > 20120313 (Red Hat 4.4.7-3), 64-bit\n> > (1 row)\n> >\n> > name | current_setting | source\n> >\n> ------------------------------+--------------------+----------------------\n> > application_name | psql | client\n> > checkpoint_completion_target | 0.9 | configuration file\n> > checkpoint_segments | 16 | configuration file\n> > DateStyle | ISO, MDY | configuration file\n> > default_tablespace | ssd2 | user\n> > default_text_search_config | pg_catalog.english | configuration file\n> > effective_cache_size | 5632MB | configuration file\n> > lc_messages | en_US.UTF-8 | configuration file\n> > lc_monetary | en_US.UTF-8 | configuration file\n> > lc_numeric | en_US.UTF-8 | configuration file\n> > lc_time | en_US.UTF-8 | configuration file\n> > listen_addresses | * | configuration file\n> > log_destination | stderr | configuration file\n> > log_directory | pg_log | configuration file\n> > log_filename | postgresql-%a.log | configuration file\n> > log_line_prefix | %d %m %c %x: | configuration file\n> > log_min_duration_statement | 500ms | configuration file\n> > log_min_error_statement | error | configuration file\n> > log_min_messages | error | configuration file\n> > log_rotation_age | 1d | configuration file\n> > log_rotation_size | 0 | configuration file\n> > log_timezone | UTC | configuration file\n> > log_truncate_on_rotation | on | configuration file\n> > logging_collector | on | configuration file\n> > maintenance_work_mem | 480MB | configuration file\n> > max_connections | 80 | configuration file\n> > max_stack_depth | 2MB | environment variable\n> > port | 5432 | command line\n> > shared_buffers | 1920MB | configuration file\n> > TimeZone | UTC | configuration file\n> > wal_buffers | 16MB | configuration file\n> > work_mem | 8MB | configuration file\n> > (32 rows)\n> >\n> >\n> >\n> > On Sat, Mar 1, 2014 at 5:02 PM, Claudio Freire <[email protected]>\n> > wrote:\n> >>\n> >> On Fri, Feb 28, 2014 at 5:01 PM, Damon Snyder <[email protected]>\n> >> wrote:\n> >> > The primary query that I'm trying to optimize executes in about 1600ms\n> >> > on my\n> >> > laptop and about 800ms on production-like hardware (more for the score\n> >> > version). My target is to get the data fetch down below 100ms if\n> >> > possible.\n> >>\n> >> Could you post some explain analyze of those particular queries?\n> >>\n> >> > If you have any suggestions it would be greatly appreciated. Am I\n> >> > missing\n> >> > something obvious? 
Is there a logically equivalent alternative that\n> >> > would be\n> >> > more efficient?\n> >>\n> >> I'd suggest de-normalizing a bit. For instance, why don't you put the\n> >> score right into the object? I'm sure the indirection is hurting.\n> >\n> >\n", "msg_date": "Mon, 3 Mar 2014 17:12:16 -0800", "msg_from": "Damon Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with optimizing a query over hierarchical data" }, { "msg_contents": "On Mon, Mar 3, 2014 at 10:12 PM, Damon Snyder <[email protected]> wrote:\n>\n>> Um... I think your problem is a misuse of CTE. Your CTE is building an\n> intermediate of several thousands of rows only to select a dozen\n> afterwards. You may want to consider a view or subquery, though I'm\n> not sure pg will be able to optimize much given your use of window\n> functions, which forces a materialization of that intermediate result.\n>\n> The application requires that we find an element and it's neighbors within a\n> sorted set at a given offset after filtering by category and status. In the\n> examples provided, we need position 50000, 6 above, and 6 below. 
Is\n> there a\n> way do to that more efficiently without first determining the position of\n> each element within the set using a window function? How would a subquery\n> help?\n>\n> The only solution I could come up with was to materialize the intermediate\n> result with the CTE (since you don't know ahead of time how many objects\n> match the status and category criteria) then use the window to include the\n> position or index.\n\n\nYou're materializing on a per-query basis. That's no good (as your\ntimings show). Try to find a way to materialize on a more permanent\nbasis.\n\nI cannot give you a specific solution without investing way more time\nthan I would. But consider this: all your queries costs are CPU costs.\nYou need a better algorithm, or better hardware. I doubt you'll find\nhardware that performs 16 times faster, so you have to concentrate on\na better algorithm.\n\nAnd it's unlikely you'll find a better algorithm without a better data\nstructure. So you need to reorganize your database to make it easier\nto query. I don't think simple SQL optimizations will get you to your\nperformance goal.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Mar 2014 01:20:18 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with optimizing a query over hierarchical data" }, { "msg_contents": "Hi Claudio,\nThanks for the help!\n\nDamon\n\n\nOn Mon, Mar 3, 2014 at 8:20 PM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Mar 3, 2014 at 10:12 PM, Damon Snyder <[email protected]>\n> wrote:\n> >\n> >> Um... I think your problem is a misuse of CTE. Your CTE is building an\n> > intermediate of several thousands of rows only to select a dozen\n> > afterwards. You may want to consider a view or subquery, though I'm\n> > not sure pg will be able to optimize much given your use of window\n> > functions, which forces a materialization of that intermediate result.\n> >\n> > The application requires that we find an element and it's neighbors\n> within a\n> > sorted set at a given offset after filtering by category and status. In\n> the\n> > examples provided, we need position 50000, 6 above, and 6 below. Is\n> there a\n> > way do to that more efficiently without first determining the position of\n> > each element within the set using a window function? How would a subquery\n> > help?\n> >\n> > The only solution I could come up with was to materialize the intermediate\n> > result with the CTE (since you don't know ahead of time how many objects\n> > match the status and category criteria) then use the window to include the\n> > position or index.\n>\n>\n> You're materializing on a per-query basis. That's no good (as your\n> timings show). Try to find a way to materialize on a more permanent\n> basis.\n>\n> I cannot give you a specific solution without investing way more time\n> than I would. But consider this: all your queries costs are CPU costs.\n> You need a better algorithm, or better hardware. I doubt you'll find\n> hardware that performs 16 times faster, so you have to concentrate on\n> a better algorithm.\n>\n> And it's unlikely you'll find a better algorithm without a better data\n> structure. So you need to reorganize your database to make it easier\n> to query. I don't think simple SQL optimizations will get you to your\n> performance goal.\n>\n", "msg_date": "Tue, 4 Mar 2014 09:06:12 -0800", "msg_from": "Damon Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with optimizing a query over hierarchical data" } ]
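One concrete reading of Claudio's "materialize on a more permanent basis" advice, sketched with invented names since the real schema lives in Damon's gist: on 9.2 a plain rollup table stands in for a materialized view (CREATE MATERIALIZED VIEW only arrives in 9.3), rebuilt out of band whenever scores change. The category/status filters would have to be baked into the rollup, which is exactly the maintenance cost Damon was worried about.

    -- Hypothetical rollup: precompute each object's rank per container
    -- so the window function runs at build time, not per query.
    CREATE TABLE object_positions AS
    SELECT container_id,
           object_id,
           row_number() OVER (PARTITION BY container_id
                              ORDER BY score DESC, object_id) AS pos
    FROM container_object;

    CREATE UNIQUE INDEX object_positions_container_pos_idx
        ON object_positions (container_id, pos);

    -- "Position 50000 plus 6 neighbors either side" becomes a 13-row
    -- index range scan instead of numbering ~50k rows on every request:
    SELECT object_id, pos
    FROM object_positions
    WHERE container_id = 42
      AND pos BETWEEN 50000 - 6 AND 50000 + 6;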
[ { "msg_contents": "Hello,\n\nI have two versions of essentially the same query; one using nested joins,\nthe other using subselects. The version using the subselect is roughly an\norder of magnitude faster (~70ms on my box and data vs ~900ms for the\nnested joins). Of course the obvious answer here is just to use the faster\nversion, but I'd like to understand why the other version is so slow. These\nqueries are automatically generated by our code and I'd like to feel more\ninformed when deciding what style of query it should be generating (and to\nknow whether there is a way to write the nested-join queries that will more\nclosely approach the performance of the subselect).\n\n(The table aliasing is an artifact of the code that is generating this\nquery--I assume there is no big performance impact there, but perhaps that\nassumption is mistaken.)\n\nThe join version:\n\n(SELECT DISTINCT resource_type_1.*\n FROM resource_type AS resource_type_1\n LEFT JOIN group_authorization AS group_authorization_2\n INNER JOIN group_member AS group_member_4\n ON ( ( group_authorization_2.person_oid =\n group_member_4.person_oid )\n AND ( group_authorization_2.group_oid =\n group_member_4.group_oid ) )\n INNER JOIN wco_group AS group_5\n ON ( group_authorization_2.group_oid =\n group_5.obj_oid )\n ON ( resource_type_1.obj_oid =\ngroup_authorization_2.rtype_oid )\n WHERE ( ( ( ( ( group_5.end_date IS NULL )\n OR ( group_5.end_date >= '2014-03-03T18:08:23.543001Z' ) )\n AND ( ( group_member_4.expire IS NULL )\n OR ( group_member_4.expire >=\n'2014-03-03T18:08:23.543001Z'\n ) )\n AND ( ( group_authorization_2.expire IS NULL )\n OR ( group_authorization_2.expire >=\n '2014-03-03T18:08:23.543001Z'\n )\n )\n )\n AND ( group_authorization_2.person_oid = 1 ) )\n OR ( resource_type_1.authorized = false ) ))\n\n(explain (analyze, buffers) output is at http://explain.depesz.com/s/wPZL)\n\nThe subselect version:\n\n(SELECT DISTINCT resource_type_1.*\n FROM resource_type AS resource_type_1\n WHERE ( ( resource_type_1.authorized = false )\n OR ( resource_type_1.obj_oid IN (SELECT rtype_oid\n FROM group_authorization\n INNER JOIN group_member\n ON ( (\n group_member.group_oid\n =\n group_authorization.group_oid )\n AND ( group_member.person_oid =\n group_authorization.person_oid ) )\n INNER JOIN wco_group\n ON ( group_member.group_oid = wco_group.obj_oid )\n WHERE ( ( group_member.person_oid = 1 )\n AND ( ( group_authorization.expire >\n '2014-03-03T18:11:20.553844Z' )\n OR ( group_authorization.expire IS NULL ) )\n AND ( ( group_member.expire > '2014-03-03T18:11:20.553844Z'\n)\n OR ( group_member.expire IS NULL ) )\n AND ( ( wco_group.end_date > '2014-03-03T18:11:20.553844Z'\n)\n OR ( wco_group.end_date IS NULL ) ) )) ) ))\n\n(explain (analyze, buffers) output is at http://explain.depesz.com/s/70dd)\n\nThis is using Postgres 9.3.3. The table wco_group has ~5000 rows,\ngroup_member has ~15000 rows, and group_authorization is the big one with\n~385000 rows.\n\nI noticed that the nested join version was doing a lot of seq scans and not\nusing the indexes. I tried setting enable_seqscan to off to force index\nuse, and it was a bit slower that way, so the query optimizer is definitely\ndoing the right thing.\n\nAny thoughts would be much appreciated.\n\nThank you,\n-Eli\n\nHello,I have two versions of essentially the same query; one using nested joins, the other using subselects. The version using the subselect is roughly an order of magnitude faster (~70ms on my box and data vs ~900ms for the nested joins). 
Of course the obvious answer here is just to use the faster version, but I'd like to understand why the other version is so slow. These queries are automatically generated by our code and I'd like to feel more informed when deciding what style of query it should be generating (and to know whether there is a way to write the nested-join queries that will more closely approach the performance of the subselect).\n(The table aliasing is an artifact of the code that is generating this query--I assume there is no big performance impact there, but perhaps that assumption is mistaken.)The join version:(SELECT DISTINCT resource_type_1.* \n FROM   resource_type AS resource_type_1         LEFT JOIN group_authorization AS group_authorization_2                   INNER JOIN group_member AS group_member_4                           ON ( ( group_authorization_2.person_oid = \n                                 group_member_4.person_oid )                                AND ( group_authorization_2.group_oid =                             group_member_4.group_oid ) )                   INNER JOIN wco_group AS group_5 \n                          ON ( group_authorization_2.group_oid =                              group_5.obj_oid )                ON ( resource_type_1.obj_oid = group_authorization_2.rtype_oid ) WHERE  ( ( ( ( ( group_5.end_date IS NULL ) \n                 OR ( group_5.end_date >= '2014-03-03T18:08:23.543001Z' ) )               AND ( ( group_member_4.expire IS NULL )                      OR ( group_member_4.expire >= '2014-03-03T18:08:23.543001Z' \n                        ) )               AND ( ( group_authorization_2.expire IS NULL )                      OR ( group_authorization_2.expire >=                           '2014-03-03T18:08:23.543001Z' \n                        )                   )             )             AND ( group_authorization_2.person_oid = 1 ) )            OR ( resource_type_1.authorized = false ) ))(explain (analyze, buffers) output is at http://explain.depesz.com/s/wPZL)\nThe subselect version:(SELECT DISTINCT resource_type_1.*  FROM   resource_type AS resource_type_1  WHERE  ( ( resource_type_1.authorized = false )            OR ( resource_type_1.obj_oid IN (SELECT rtype_oid \n                                            FROM   group_authorization                                                   INNER JOIN group_member                                                           ON ( ( \n                                                   group_member.group_oid                                                   =                 group_authorization.group_oid )                 AND ( group_member.person_oid = \n                    group_authorization.person_oid ) )                 INNER JOIN wco_group                 ON ( group_member.group_oid = wco_group.obj_oid )                 WHERE  ( ( group_member.person_oid = 1 ) \n                AND ( ( group_authorization.expire >                         '2014-03-03T18:11:20.553844Z' )                 OR ( group_authorization.expire IS NULL ) )                 AND ( ( group_member.expire > '2014-03-03T18:11:20.553844Z' ) \n                OR ( group_member.expire IS NULL ) )                 AND ( ( wco_group.end_date > '2014-03-03T18:11:20.553844Z' )                 OR ( wco_group.end_date IS NULL ) ) )) ) )) (explain (analyze, buffers) output is at http://explain.depesz.com/s/70dd)\nThis is using Postgres 9.3.3. 
The table wco_group has ~5000 rows, group_member has ~15000 rows, and group_authorization is the big one with ~385000 rows.I noticed that the nested join version was doing a lot of seq scans and not using the indexes. I tried setting enable_seqscan to off to force index use, and it was a bit slower that way, so the query optimizer is definitely doing the right thing.\nAny thoughts would be much appreciated.Thank you,-Eli", "msg_date": "Mon, 3 Mar 2014 12:24:58 -0600", "msg_from": "Eli Naeher <[email protected]>", "msg_from_op": true, "msg_subject": "Help me understand why my subselect is an order of magnitude faster\n than my nested joins" }, { "msg_contents": "On 03-03-14 19:24, Eli Naeher wrote:\n> Hello,\n>\n> I have two versions of essentially the same query; one using nested \n> joins, the other using subselects. The version using the subselect is \n> roughly an order of magnitude faster (~70ms on my box and data vs \n> ~900ms for the nested joins). Of course the obvious answer here is \n> just to use the faster version, but I'd like to understand why the \n> other version is so slow. These queries are automatically generated by \n> our code and I'd like to feel more informed when deciding what style \n> of query it should be generating (and to know whether there is a way \n> to write the nested-join queries that will more closely approach the \n> performance of the subselect).\n>\n> (The table aliasing is an artifact of the code that is generating this \n> query--I assume there is no big performance impact there, but perhaps \n> that assumption is mistaken.)\n>\n> The join version:\n>\n> (SELECT DISTINCT resource_type_1.*\n> FROM resource_type AS resource_type_1\n> LEFT JOIN group_authorization AS group_authorization_2\n> INNER JOIN group_member AS group_member_4\n> ON ( ( group_authorization_2.person_oid =\n> group_member_4.person_oid )\n> AND ( group_authorization_2.group_oid =\n> group_member_4.group_oid ) )\n> INNER JOIN wco_group AS group_5\n> ON ( group_authorization_2.group_oid =\n> group_5.obj_oid )\n> ON ( resource_type_1.obj_oid = \n> group_authorization_2.rtype_oid )\n> WHERE ( ( ( ( ( group_5.end_date IS NULL )\n> OR ( group_5.end_date >= \n> '2014-03-03T18:08:23.543001Z' ) )\n> AND ( ( group_member_4.expire IS NULL )\n> OR ( group_member_4.expire >= \n> '2014-03-03T18:08:23.543001Z'\n> ) )\n> AND ( ( group_authorization_2.expire IS NULL )\n> OR ( group_authorization_2.expire >=\n> '2014-03-03T18:08:23.543001Z'\n> )\n> )\n> )\n> AND ( group_authorization_2.person_oid = 1 ) )\n> OR ( resource_type_1.authorized = false ) ))\n>\n> (explain (analyze, buffers) output is at http://explain.depesz.com/s/wPZL)\n>\n> The subselect version:\n>\n> (SELECT DISTINCT resource_type_1.*\n> FROM resource_type AS resource_type_1\n> WHERE ( ( resource_type_1.authorized = false )\n> OR ( resource_type_1.obj_oid IN (SELECT rtype_oid\n> FROM group_authorization\n> INNER JOIN group_member\n> ON ( (\n> group_member.group_oid\n> =\n> group_authorization.group_oid )\n> AND ( group_member.person_oid =\n> group_authorization.person_oid ) )\n> INNER JOIN wco_group\n> ON ( group_member.group_oid = wco_group.obj_oid )\n> WHERE ( ( group_member.person_oid = 1 )\n> AND ( ( group_authorization.expire >\n> '2014-03-03T18:11:20.553844Z' )\n> OR ( group_authorization.expire IS NULL ) )\n> AND ( ( group_member.expire > \n> '2014-03-03T18:11:20.553844Z' )\n> OR ( group_member.expire IS NULL ) )\n> AND ( ( wco_group.end_date > \n> '2014-03-03T18:11:20.553844Z' )\n> OR ( wco_group.end_date IS NULL ) ) )) ) 
))\n>\n> (explain (analyze, buffers) output is at http://explain.depesz.com/s/70dd)\n>\n> This is using Postgres 9.3.3. The table wco_group has ~5000 rows, \n> group_member has ~15000 rows, and group_authorization is the big one \n> with ~385000 rows.\n>\n> I noticed that the nested join version was doing a lot of seq scans \n> and not using the indexes. I tried setting enable_seqscan to off to \n> force index use, and it was a bit slower that way, so the query \n> optimizer is definitely doing the right thing.\n>\n> Any thoughts would be much appreciated.\n>\n> Thank you,\n> -Eli\n\n\nThe explains show that the join version builds up an ever larger set of \nrows before finally filtering,\nwhile the subselect manages to reduce the number of rows to 2500 and \navoids the large set.\n\nThis may be as simple as the order in which you join; inner joins \nshould preferably eliminate as many rows\nas possible as quickly as possible.\n\nAlso, DISTINCT on * does not help; why are you getting duplicates, and \nwhy can't you filter them out before doing the final select?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Mar 2014 22:20:26 +0100", "msg_from": "Vincent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me understand why my subselect is an order of\n magnitude faster than my nested joins" } ]
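For what it's worth, one way to act on both of Vincent's points -- the row blow-up that forces the DISTINCT, and the join order -- is to phrase the authorization lookup as a correlated EXISTS; each resource_type row then matches at most once, so the DISTINCT can be dropped. This is only a sketch against the schema quoted above (untested; timestamps are copied from the original query, and the OR may still keep the planner from producing a true semi-join plan, leaving a per-row subplan with index probes instead):

    SELECT resource_type_1.*
    FROM resource_type AS resource_type_1
    WHERE resource_type_1.authorized = false
       OR EXISTS (SELECT 1
                  FROM group_authorization ga
                  JOIN group_member gm
                    ON gm.group_oid = ga.group_oid
                   AND gm.person_oid = ga.person_oid
                  JOIN wco_group g
                    ON g.obj_oid = gm.group_oid
                  WHERE ga.rtype_oid = resource_type_1.obj_oid
                    AND gm.person_oid = 1
                    AND (ga.expire IS NULL
                         OR ga.expire > '2014-03-03T18:11:20.553844Z')
                    AND (gm.expire IS NULL
                         OR gm.expire > '2014-03-03T18:11:20.553844Z')
                    AND (g.end_date IS NULL
                         OR g.end_date > '2014-03-03T18:11:20.553844Z'));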
[ { "msg_contents": "Hello,\n\nI have two versions of essentially the same query; one using nested joins,\nthe other using subselects. The version using the subselect is roughly an\norder of magnitude faster (~70ms on my box and data vs ~900ms for the\nnested joins). Of course the obvious answer here is just to use the faster\nversion, but I'd like to understand why the other version is so slow. These\nqueries are automatically generated by our code and I'd like to feel more\ninformed when deciding what style of query it should be generating (and to\nknow whether there is a way to write the nested-join queries that will more\nclosely approach the performance of the subselect).\n\n(The table aliasing is an artifact of the code that is generating this\nquery--I assume there is no big performance impact there, but perhaps that\nassumption is mistaken.)\n\nThe join version:\n\n(SELECT DISTINCT resource_type_1.*\n FROM resource_type AS resource_type_1\n LEFT JOIN group_authorization AS group_authorization_2\n INNER JOIN group_member AS group_member_4\n ON ( ( group_authorization_2.person_oid =\n group_member_4.person_oid )\n AND ( group_authorization_2.group_oid =\n group_member_4.group_oid ) )\n INNER JOIN wco_group AS group_5\n ON ( group_authorization_2.group_oid =\n group_5.obj_oid )\n ON ( resource_type_1.obj_oid = group_authorization_2.rtype_oid\n)\n WHERE ( ( ( ( ( group_5.end_date IS NULL )\n OR ( group_5.end_date >= '2014-03-03T18:08:23.543001Z' ) )\n AND ( ( group_member_4.expire IS NULL )\n OR ( group_member_4.expire >=\n'2014-03-03T18:08:23.543001Z'\n ) )\n AND ( ( group_authorization_2.expire IS NULL )\n OR ( group_authorization_2.expire >=\n '2014-03-03T18:08:23.543001Z'\n )\n )\n )\n AND ( group_authorization_2.person_oid = 1 ) )\n OR ( resource_type_1.authorized = false ) ))\n\n(explain (analyze, buffers) output is at http://explain.depesz.com/s/wPZL)\n\nThe subselect version:\n\n(SELECT DISTINCT resource_type_1.*\n FROM resource_type AS resource_type_1\n WHERE ( ( resource_type_1.authorized = false )\n OR ( resource_type_1.obj_oid IN (SELECT rtype_oid\n FROM group_authorization\n INNER JOIN group_member\n ON ( (\n group_member.group_oid\n =\n group_authorization.group_oid )\n AND ( group_member.person_oid =\n group_authorization.person_oid ) )\n INNER JOIN wco_group\n ON ( group_member.group_oid = wco_group.obj_oid )\n WHERE ( ( group_member.person_oid = 1 )\n AND ( ( group_authorization.expire >\n '2014-03-03T18:11:20.553844Z' )\n OR ( group_authorization.expire IS NULL ) )\n AND ( ( group_member.expire > '2014-03-03T18:11:20.553844Z'\n)\n OR ( group_member.expire IS NULL ) )\n AND ( ( wco_group.end_date > '2014-03-03T18:11:20.553844Z'\n)\n OR ( wco_group.end_date IS NULL ) ) )) ) ))\n\n(explain (analyze, buffers) output is at http://explain.depesz.com/s/70dd)\n\nThis is using Postgres 9.3.3. The table wco_group has ~5000 rows,\ngroup_member has ~15000 rows, and group_authorization is the big one with\n~385000 rows. Relevant DDL information is here:\nhttp://paste.lisp.org/display/141466.\n\nI noticed that the nested join version was doing a lot of seq scans and not\nusing the indexes. I tried setting enable_seqscan to off to force index\nuse, and it was a bit slower that way, so the query optimizer is definitely\ndoing the right thing.\n\nAny thoughts would be much appreciated.\n\nThank you,\n-Eli\n\nHello,\nI have two versions of essentially the same query; one using nested joins, the other using subselects. 
The version using the subselect is roughly an order of magnitude faster (~70ms on my box and data vs ~900ms for the nested joins). Of course the obvious answer here is just to use the faster version, but I'd like to understand why the other version is so slow. These queries are automatically generated by our code and I'd like to feel more informed when deciding what style of query it should be generating (and to know whether there is a way to write the nested-join queries that will more closely approach the performance of the subselect).\n(The table aliasing is an artifact of the code that is generating this query--I assume there is no big performance impact there, but perhaps that assumption is mistaken.)\nThe join version:\n(SELECT DISTINCT resource_type_1.* \n FROM   resource_type AS resource_type_1         LEFT JOIN group_authorization AS group_authorization_2 \n                  INNER JOIN group_member AS group_member_4                           ON ( ( group_authorization_2.person_oid = \n                                 group_member_4.person_oid )                                AND ( group_authorization_2.group_oid =\n                             group_member_4.group_oid ) )                   INNER JOIN wco_group AS group_5 \n                          ON ( group_authorization_2.group_oid = \n                             group_5.obj_oid )                ON ( resource_type_1.obj_oid = group_authorization_2.rtype_oid )\n WHERE  ( ( ( ( ( group_5.end_date IS NULL )                  OR ( group_5.end_date >= '2014-03-03T18:08:23.543001Z' ) ) \n              AND ( ( group_member_4.expire IS NULL )                      OR ( group_member_4.expire >= '2014-03-03T18:08:23.543001Z' \n                        ) )               AND ( ( group_authorization_2.expire IS NULL ) \n                     OR ( group_authorization_2.expire >=                           '2014-03-03T18:08:23.543001Z' \n                        )                   ) \n            )             AND ( group_authorization_2.person_oid = 1 ) ) \n           OR ( resource_type_1.authorized = false ) ))\n(explain (analyze, buffers) output is at http://explain.depesz.com/s/wPZL)\nThe subselect version:\n(SELECT DISTINCT resource_type_1.* \n FROM   resource_type AS resource_type_1  WHERE  ( ( resource_type_1.authorized = false ) \n           OR ( resource_type_1.obj_oid IN (SELECT rtype_oid                                             FROM   group_authorization\n                                                   INNER JOIN group_member\n                                                           ON ( (                                                    group_member.group_oid\n                                                   =                 group_authorization.group_oid ) \n                AND ( group_member.person_oid =                     group_authorization.person_oid ) ) \n                INNER JOIN wco_group                 ON ( group_member.group_oid = wco_group.obj_oid ) \n                WHERE  ( ( group_member.person_oid = 1 )                 AND ( ( group_authorization.expire > \n                        '2014-03-03T18:11:20.553844Z' )                 OR ( group_authorization.expire IS NULL ) ) \n                AND ( ( group_member.expire > '2014-03-03T18:11:20.553844Z' ) \n                OR ( group_member.expire IS NULL ) )                 AND ( ( wco_group.end_date > '2014-03-03T18:11:20.553844Z' ) \n                OR ( wco_group.end_date IS NULL ) ) )) ) )) \n(explain (analyze, buffers) output is at 
http://explain.depesz.com/s/70dd)\nThis is using Postgres 9.3.3. The table wco_group has ~5000 rows, group_member has ~15000 rows, and group_authorization is the big one with ~385000 rows. Relevant DDL information is here: http://paste.lisp.org/display/141466.\nI noticed that the nested join version was doing a lot of seq scans and not using the indexes. I tried setting enable_seqscan to off to force index use, and it was a bit slower that way, so the query optimizer is definitely doing the right thing.\nAny thoughts would be much appreciated.Thank you,-Eli", "msg_date": "Mon, 3 Mar 2014 12:55:12 -0600", "msg_from": "Eli Naeher <[email protected]>", "msg_from_op": true, "msg_subject": "Subselect an order of magnitude faster than nested joins" } ]
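A note on the two shapes above: the LEFT JOIN version first multiplies every resource_type row by all of its matching authorization/membership rows and only then collapses the duplicates with DISTINCT, while the IN form computes the small set of authorized ids once and probes it per row. A third shape that sometimes helps is to split the OR into a UNION, so that the IN branch is free of the OR and can be planned as a plain semi-join. This is only a sketch against the table and column names quoted in the thread, not something from the original post, and whether it beats the plain subselect depends on the data:

(SELECT resource_type_1.*
 FROM   resource_type AS resource_type_1
 WHERE  resource_type_1.authorized = false)
UNION
(SELECT resource_type_1.*
 FROM   resource_type AS resource_type_1
 WHERE  resource_type_1.obj_oid IN
        (SELECT ga.rtype_oid
         FROM   group_authorization ga
                INNER JOIN group_member gm
                        ON gm.group_oid = ga.group_oid
                       AND gm.person_oid = ga.person_oid
                INNER JOIN wco_group g
                        ON g.obj_oid = gm.group_oid
         WHERE  ga.person_oid = 1
           AND  ( ga.expire IS NULL OR ga.expire > '2014-03-03T18:11:20.553844Z' )
           AND  ( gm.expire IS NULL OR gm.expire > '2014-03-03T18:11:20.553844Z' )
           AND  ( g.end_date IS NULL OR g.end_date > '2014-03-03T18:11:20.553844Z' )));

UNION (as opposed to UNION ALL) also removes duplicate rows, so it plays the role that DISTINCT plays in the original queries.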
[ { "msg_contents": "Hi,\n\nI have a view which joins multiple tables to give me a result. It takes more than a minute to give me the result on psql prompt when I select all the data from that view.\nThe single CPU which is used to run this query is utilized 100%.Even if I fire a count(*) it takes 10 Sec. I wanted to know if there is anything we can do to speedup this query below 1 sec.\n\nCREATE OR REPLACE VIEW wh_rbtmapdetails_test AS\n SELECT map_tab.binarymapid,\n map_tab.whsongid,\n map_tab.binaryid,\n map_tab.previewbinaryid,\n map_tab.created,\n map_tab.createdby,\n map_tab.lastmodified,\n map_tab.lastmodifiedby,\n map_tab.stateon,\n map_tab.stateby,\n map_tab.statename,\n map_tab.statereason,\n bin_tab.filepath,\n bin_tab.filesizeinbytes,\n bin_tab.contenttypename,\n bin_tab.fileextension,\n bin_tab.filepath AS previewfilepath,\n bin_tab.contenttypename AS previewcontenttypename,\n bin_tab.fileextension AS previewfileextension,\n bin_tab.lastmodified AS binlastmodified,\n bin_tab.created AS bincreated,\n md_tab.whsongname,\n md_tab.whsongnamerx,\n md_tab.whmoviename,\n md_tab.whmovienamerx,\n md_tab.languagename,\n md_tab.contentproviderid,\n md_tab.rightsbodyid,\n md_tab.labelid,\n md_tab.isrc,\n md_tab.keywords,\n md_tab.cpcontentid,\n md_tab.songreleasedate,\n md_tab.moviereleasedate,\n md_tab.actor,\n md_tab.singer,\n md_tab.musicdirector,\n md_tab.moviedirector,\n md_tab.movieproducer,\n md_tab.rightsbodyname,\n md_tab.labelname,\n md_tab.contentprovidername,\n md_tab.categoryid,\n md_tab.categoryname,\n md_tab.subcategoryname,\n md_tab.genrename,\n md_tab.promocode,\n md_tab.cptransferdate,\n md_tab.lastmodified AS mdlastmodified,\n md_tab.created AS mdcreated,\n map_tab.holdercontentsubtypeid,\n NULL::unknown AS holdercontentsubtypename,\n md_tab.statename AS metadatastatename,\n md_tab.statereason AS metadatastatereason,\n bin_tab.statename AS binarystatename,\n bin_tab.statereason AS binarystatereason,\n md_tab.isbranded,\n md_tab.brandname,\n md_tab.aliaspromocode,\n md_tab.songreleaseyear,\n md_tab.moviereleaseyear,\n md_tab.songid,\n map_tab.holderstartdate,\n map_tab.holderenddate,\n md_tab.comments,\n md_tab.iprs,\n md_tab.lyricist,\n md_tab.workssociety,\n md_tab.publisher,\n md_tab.iswc,\n songartwork.stateon as artworkstateon,\n '' AS airtelvcode,\n '' AS airtelccode\n FROM songbinarymap map_tab\n inner join wh_songmetadatadetails_sukruth md_tab on map_tab.whsongid = md_tab.whsongid\n inner join songbinarywarehouse_rbt bin_tab on map_tab.binaryid = bin_tab.binaryid\n inner join contentprovider cp_tab on md_tab.contentproviderid = cp_tab.contentproviderid\n left join songartwork on songartwork.whsongid = map_tab.whsongid\n WHERE cp_tab.hide <> 1;\n\n--##############################################################################################################################\n\natlantisindia=# Select count(*) from wh_songmetadatadetails_sukruth;\n count\n---------\n2756891\natlantisindia=# Select count(*) from songbinarywarehouse_rbt;\n count\n---------\n3507188\natlantisindia=# Select count(*) from contentprovider;\ncount\n-------\n 446\natlantisindia=# Select count(*) from songartwork;\ncount\n--------\n292457\natlantisindia=# Select count(*) from songbinarymap;\n count\n---------\n3460677\n\nObjects used in the query:\nwh_songmetadatadetails_sukruth -- view\nsongbinarywarehouse_rbt -- Table\nsongartwork -- Table\nsongbinarymap -- Table\n\n/*\nCREATE OR REPLACE VIEW wh_songmetadatadetails_sukruth AS\n SELECT md_tab.whsongid,\n md_tab.whsongname,\n 
md_tab.whsongnamerx,\n md_tab.whmoviename,\n md_tab.whmovienamerx,\n md_tab.languagename,\n md_tab.contentproviderid,\n md_tab.rightsbodyid,\n md_tab.labelid,\n md_tab.isrc,\n md_tab.songreleasedate,\n md_tab.moviereleasedate,\n md_tab.actor,\n md_tab.singer,\n md_tab.musicdirector,\n md_tab.moviedirector,\n md_tab.movieproducer,\n md_tab.keywords,\n md_tab.cpcontentid,\n md_tab.created,\n md_tab.createdby,\n md_tab.lastmodified,\n md_tab.lastmodifiedby,\n md_tab.stateon,\n md_tab.stateby,\n md_tab.statename,\n md_tab.statereason,\n md_tab.comments,\n md_tab.oldcategoryname,\n md_tab.oldsubcategoryname,\n md_tab.oldgenrename,\n rightsbody.rightsbodyname,\n label.labelname,\n cp_tab.contentprovidername,\n cp_tab.alias_contentproviderid,\n md_tab.promocode,\n md_tab.isbranded,\n md_tab.brandname,\n 'now'::text::timestamp without time zone - md_tab.stateon AS sysdatestateondiff,\n md_tab.aliaspromocode,\n md_tab.songreleaseyear,\n md_tab.moviereleaseyear,\n md_tab.costrc,\n md_tab.categoryid,\n cpcategoryforselect.categoryname,\n md_tab.subcategoryname,\n md_tab.genrename,\n md_tab.metadatacorrection,\n md_tab.songid,\n songartwork.artworkbinaryid_1,\n md_tab.iprs,\n md_tab.lyricist,\n md_tab.workssociety,\n md_tab.publisher,\n md_tab.iswc,\n md_tab.umdbid,\n md_tab.umdbmodified,\n md_tab.cptransferdate,\n CAST('' as varchar(1)) airtelvcode,\n cast('' as varchar(1)) airtelccode\n FROM songmetadatawarehouse md_tab\n inner join contentprovider cp_tab on md_tab.contentproviderid::text = cp_tab.contentproviderid::text\n left join songartwork on songartwork.whsongid::text = md_tab.whsongid::text\n left join cpcategoryforselect on cpcategoryforselect.categoryid::text = md_tab.categoryid::text\n left join label on label.labelid::text = md_tab.labelid::text\n left join rightsbody on rightsbody.rightsbodyid::text = md_tab.rightsbodyid::text\n WHERE cp_tab.hide <> 1;\n*/\n--########################################### Exolain Plan ######################################################\nexplain analyze select * from wh_rbtmapdetails_test;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n--------------------\nHash Left Join (cost=597862.86..1042112.62 rows=3322444 width=1714) (actual time=11015.390..29802.769 rows=2615033 loops=1)\n Hash Cond: ((map_tab.whsongid)::text = (songartwork.whsongid)::text)\n -> Hash Join (cost=581788.58..976448.68 rows=3322444 width=1706) (actual time=10828.076..25597.518 rows=2615033 loops=1)\n Hash Cond: ((md_tab.contentproviderid)::text = (cp_tab.contentproviderid)::text)\n -> Hash Left Join (cost=581761.54..930481.44 rows=3390870 width=1734) (actual time=10827.746..24577.167 rows=2615033 loops=1)\n Hash Cond: ((md_tab.categoryid)::text = (cpcategoryforselect.categoryid)::text)\n -> Hash Join (cost=581759.59..883858.42 rows=3390870 width=1702) (actual time=10827.690..23549.201 rows=2615033 loops=1)\n Hash Cond: ((map_tab.whsongid)::text = (md_tab.whsongid)::text)\n -> Hash Join (cost=192309.86..425892.93 rows=3460705 width=538) (actual time=2883.529..10130.740 rows=2802042 loops=1)\n Hash Cond: ((bin_tab.binaryid)::text = (map_tab.binaryid)::text)\n -> Seq Scan on songbinarywarehouse_rbt bin_tab (cost=0.00..159538.54 rows=3505554 width=135) (actual time=0.002..362.037 rows=3507188 loops\n=1)\n -> Hash (cost=149051.05..149051.05 rows=3460705 width=436) (actual time=2881.792..2881.792 rows=3460677 loops=1)\n Buckets: 524288 Batches: 1 Memory 
Usage: 888911kB\n -> Seq Scan on songbinarymap map_tab (cost=0.00..149051.05 rows=3460705 width=436) (actual time=0.004..1129.036 rows=3460677 loops=1)\n -> Hash (cost=353389.46..353389.46 rows=2884822 width=1197) (actual time=7941.814..7941.814 rows=2756891 loops=1)\n Buckets: 524288 Batches: 1 Memory Usage: 1339511kB\n -> Hash Left Join (cost=307.32..353389.46 rows=2884822 width=1197) (actual time=2.585..4859.733 rows=2756891 loops=1)\n Hash Cond: ((md_tab.rightsbodyid)::text = (rightsbody.rightsbodyid)::text)\n -> Hash Left Join (cost=234.82..313650.65 rows=2884822 width=1183) (actual time=1.978..3756.064 rows=2756891 loops=1)\n Hash Cond: ((md_tab.labelid)::text = (label.labelid)::text)\n -> Hash Join (cost=27.04..262958.49 rows=2884822 width=1167) (actual time=0.257..2703.354 rows=2756891 loops=1)\n Hash Cond: ((md_tab.contentproviderid)::text = (cp_tab_1.contentproviderid)::text)\n -> Seq Scan on songmetadatawarehouse md_tab (cost=0.00..223042.35 rows=2944235 width=1125) (actual time=0.002..363.384 ro\nws=2944240 loops=1)\n -> Hash (cost=21.57..21.57 rows=437 width=42) (actual time=0.239..0.239 rows=431 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 32kB\n -> Seq Scan on contentprovider cp_tab_1 (cost=0.00..21.57 rows=437 width=42) (actual time=0.003..0.141 rows=431 loo\nps=1)\n Filter: (hide <> 1)\n Rows Removed by Filter: 15\n -> Hash (cost=160.68..160.68 rows=3768 width=49) (actual time=1.708..1.708 rows=3768 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 301kB\n -> Seq Scan on label (cost=0.00..160.68 rows=3768 width=49) (actual time=0.003..0.797 rows=3768 loops=1)\n -> Hash (cost=55.00..55.00 rows=1400 width=47) (actual time=0.596..0.596 rows=1400 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 109kB\n -> Seq Scan on rightsbody (cost=0.00..55.00 rows=1400 width=47) (actual time=0.002..0.258 rows=1400 loops=1)\n -> Hash (cost=1.70..1.70 rows=20 width=65) (actual time=0.033..0.033 rows=20 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Subquery Scan on cpcategoryforselect (cost=0.00..1.70 rows=20 width=65) (actual time=0.013..0.025 rows=20 loops=1)\n -> Seq Scan on cpcategory (cost=0.00..1.50 rows=20 width=53) (actual time=0.012..0.022 rows=20 loops=1)\n -> Hash (cost=21.57..21.57 rows=437 width=28) (actual time=0.306..0.306 rows=431 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 26kB\n -> Seq Scan on contentprovider cp_tab (cost=0.00..21.57 rows=437 width=28) (actual time=0.027..0.212 rows=431 loops=1)\n Filter: (hide <> 1)\n Rows Removed by Filter: 15\n -> Hash (cost=12418.57..12418.57 rows=292457 width=41) (actual time=187.181..187.181 rows=292457 loops=1)\n Buckets: 32768 Batches: 1 Memory Usage: 20849kB\n -> Seq Scan on songartwork (cost=0.00..12418.57 rows=292457 width=41) (actual time=0.024..102.322 rows=292457 loops=1)\nTotal runtime: 29966.975 ms\n(47 rows)\n\n--##############################################################################################################################\n\nSystem Info:\n# lscpu\n\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 16\nOn-line CPU(s) list: 0-15\nThread(s) per core: 2\nCore(s) per socket: 4\nSocket(s): 2\nNUMA node(s): 2\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 45\nStepping: 7\nCPU MHz: 1200.000\nBogoMIPS: 6599.09\nVirtualization: VT-x\nL1d cache: 32K\nL1i cache: 32K\nL2 cache: 256K\nL3 cache: 10240K\nNUMA node0 CPU(s): 0-3,8-11\nNUMA node1 CPU(s): 
4-7,12-15\n\n--##############################################################################################################################\n\n#free\n\n total used free shared buffers cached\nMem: 65932076 34444056 31488020 0 307152 31947732\n-/+ buffers/cache: 2189172 63742904\nSwap: 33038328 352928 32685400\n\n--##############################################################################################################################\n\n[root@localhost ~]# top\ntop - 00:42:59 up 113 days, 8:58, 3 users, load average: 0.08, 0.02, 0.01\nTasks: 568 total, 2 running, 565 sleeping, 0 stopped, 1 zombie\nCpu(s): 5.9%us, 0.5%sy, 0.0%ni, 93.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 65932076k total, 40638080k used, 25293996k free, 311728k buffers\nSwap: 33038328k total, 352844k used, 32685484k free, 35244776k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n36725 postgres 20 0 17.0g 5.9g 3.5g R 99.8 9.4 0:35.91 postmaster\n30038 oracle -2 0 19.1g 16m 14m S 1.3 0.0 2:29.59 oracle\n41270 root 20 0 15436 1640 956 R 0.7 0.0 0:00.12 top\n30141 oracle 20 0 19.1g 16m 14m S 0.3 0.0 0:14.72 oracle\n 1 root 20 0 19356 796 584 S 0.0 0.0 3:01.96 init\n 2 root 20 0 0 0 0 S 0.0 0.0 0:04.82 kthreadd\n 3 root RT 0 0 0 0 S 0.0 0.0 0:15.46 migration/0\n 4 root 20 0 0 0 0 S 0.0 0.0 0:16.53 ksoftirqd/0\n 5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0\n 6 root RT 0 0 0 0 S 0.0 0.0 0:09.89 watchdog/0\n 7 root RT 0 0 0 0 S 0.0 0.0 0:19.86 migration/1\n 8 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/1\n 9 root 20 0 0 0 0 S 0.0 0.0 0:21.03 ksoftirqd/1\n 10 root RT 0 0 0 0 S 0.0 0.0 0:08.66 watchdog/1\n 11 root RT 0 0 0 0 S 0.0 0.0 0:05.25 migration/2\n 12 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/2\n 13 root 20 0 0 0 0 S 0.0 0.0 0:04.83 ksoftirqd/2\n 14 root RT 0 0 0 0 S 0.0 0.0 0:08.87 watchdog/2\n 15 root RT 0 0 0 0 S 0.0 0.0 0:04.10 migration/3\n 16 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/3\n 17 root 20 0 0 0 0 S 0.0 0.0 0:03.56 ksoftirqd/3\n 18 root RT 0 0 0 0 S 0.0 0.0 0:08.68 watchdog/3\n 19 root RT 0 0 0 0 S 0.0 0.0 0:17.71 migration/4\n 20 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/4\n 21 root 20 0 0 0 0 S 0.0 0.0 0:30.47 ksoftirqd/4\n 22 root RT 0 0 0 0 S 0.0 0.0 0:11.47 watchdog/4\n 23 root RT 0 0 0 0 S 0.0 0.0 0:25.19 migration/5\n 24 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/5\n 25 root 20 0 0 0 0 S 0.0 0.0 0:35.14 ksoftirqd/5\n 26 root RT 0 0 0 0 S 0.0 0.0 0:10.34 watchdog/5\n 27 root RT 0 0 0 0 S 0.0 0.0 0:06.08 migration/6\n 28 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/6\n 29 root 20 0 0 0 0 S 0.0 0.0 0:14.71 ksoftirqd/6\n 30 root RT 0 0 0 0 S 0.0 0.0 0:09.54 watchdog/6\n 31 root RT 0 0 0 0 S 0.0 0.0 0:02.36 migration/7\n 32 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/7\n 33 root 20 0 0 0 0 S 0.0 0.0 0:06.70 ksoftirqd/7\n\n--##############################################################################################################################\n\n[root@localhost ~]# vmstat -a 1\nprocs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----\nr b swpd free inact active si so bi bo in cs us sy id wa st\n1 0 352844 19137128 16133388 29410720 0 0 2 8 0 0 0 0 100 0 0\n1 0 352844 20462344 16133388 28085988 0 0 0 16 2524 19813 6 1 92 0 0\n1 0 352844 20984756 16133412 27564440 0 0 0 32 2276 2711 4 3 94 0 0\n1 0 352844 20984524 16133412 27564760 0 0 0 0 2197 2572 6 0 94 0 0\n2 0 352844 20984544 16133412 27564760 0 0 0 20 2231 2634 6 0 94 0 0\n3 0 352844 20983176 16133416 27564752 0 0 0 216 2342 2690 6 0 93 0 0\n2 0 352844 20983308 16133416 27564808 0 0 0 0 
2227 2634 6 0 94 0 0\n1 0 352844 20983556 16133416 27564808 0 0 0 0 2227 2597 6 0 94 0 0\n1 0 352844 20983664 16133416 27564808 0 0 0 32 2263 2643 6 0 93 0 0\n1 0 352844 20983400 16133416 27564920 0 0 0 4 2217 2616 6 0 94 0 0\n2 0 352844 20987048 16133428 27560248 0 0 0 108 3092 3163 7 1 92 0 0\n3 0 352844 20987172 16133416 27560248 0 0 0 32 2316 2716 6 0 94 0 0\n2 0 352844 20987188 16133412 27559624 0 0 0 0 2226 2651 6 0 94 0 0\n1 0 352844 20987304 16133412 27559624 0 0 0 0 2198 2614 6 0 94 0 0\n1 0 352844 20983808 16133416 27563632 0 0 0 32 2422 2839 7 0 93 0 0\n1 0 352844 20987420 16133416 27559956 0 0 0 0 2307 2638 6 0 94 0 0\n3 0 352844 20987712 16133416 27559700 0 0 0 140 2278 2704 6 0 94 0 0\n2 0 352844 20987712 16133412 27559752 0 0 0 88 2306 2706 6 0 94 0 0\n2 0 352844 20987844 16133416 27559660 0 0 0 0 2233 2600 6 0 94 0 0\n1 0 352844 20988000 16133416 27559660 0 0 0 0 2202 2582 6 0 94 0 0\n1 0 352844 20988124 16133416 27559664 0 0 0 44 2278 2654 6 0 94 0 0\n1 0 352844 20988240 16133416 27559664 0 0 0 0 2193 2571 6 0 94 0 0\n2 0 352844 20988248 16133416 27559664 0 0 0 64 2230 2656 6 0 94 0 0\n3 0 352844 20988356 16133416 27559668 0 0 0 32 2275 2637 6 0 94 0 0\n2 0 352844 20988668 16133416 27559668 0 0 0 12 2233 2664 6 0 94 0 0\n1 0 352844 20988660 16133420 27559672 0 0 0 0 2238 2614 6 0 94 0 0\n1 0 352844 20988784 16133420 27559672 0 0 0 44 2243 2658 6 0 94 0 0\n2 0 352844 20988040 16133420 27560036 0 0 0 12 2256 2664 6 0 93 0 0\n1 0 352844 20987312 16133416 27560752 0 0 0 88 2314 2706 6 0 93 0 0\n1 0 352844 20986600 16133468 27561040 0 0 0 64 2343 2688 6 0 94 0 0\n2 0 352844 20987212 16133460 27560668 0 0 0 12 2255 2658 6 0 94 0 0\n2 0 352844 20987336 16133460 27560724 0 0 0 12 2211 2593 6 0 94 0 0\n2 0 352844 20987352 16133464 27560604 0 0 0 32 2295 2732 6 0 94 0 0\n1 0 352844 20987148 16133464 27560832 0 0 0 0 3041 3197 7 1 92 0 0\n1 0 352844 20986848 16133464 27560728 0 0 0 108 2267 2670 6 0 94 0 0\n1 0 352844 20986948 16133468 27560608 0 0 0 32 2261 2652 6 0 94 0 0\n2 0 352844 20987064 16133468 27560608 0 0 0 0 2208 2610 6 0 94 0 0\n2 0 352844 20987064 16133468 27560608 0 0 0 0 2192 2559 6 0 94 0 0\n1 0 352844 20987236 16133468 27560608 0 0 0 32 2256 2630 6 0 94 0 0\n1 0 352844 20987368 16133468 27560608 0 0 0 0 2223 2624 6 0 94 0 0\n1 0 352844 20980036 16133552 27565904 0 0 0 208 2457 2855 7 0 93 0 0\n\n--##############################################################################################################################\n\n[root@localhost ~]# iostat -xn 1\nLinux 2.6.32-358.el6.x86_64 (localhost.localdomain) 03/07/2014 _x86_64_ (16 CPU)\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.05 28.93 0.44 4.63 65.60 268.49 65.88 0.04 8.28 0.32 0.16\ndm-0 0.00 0.00 0.05 0.09 0.37 0.74 8.00 0.00 6.07 0.16 0.00\ndm-1 0.00 0.00 0.08 18.97 4.65 151.78 8.21 0.24 12.71 0.06 0.11\ndm-2 0.00 0.00 0.09 5.97 11.55 47.77 9.78 0.43 71.51 0.05 0.03\ndm-3 0.00 0.00 0.27 8.52 49.04 68.19 13.33 0.03 3.81 0.05 0.04\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 12.00 0.00 14.00 0.00 208.00 14.86 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 26.00 0.00 208.00 8.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.00 3.00 3.00 0.30\ndm-0 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.00 3.00 3.00 0.30\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 24.00 0.00 11.00 0.00 280.00 25.45 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 35.00 0.00 280.00 8.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.00 3.00 3.00 0.30\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.00 3.00 3.00 0.30\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 6.00 0.00 5.00 0.00 88.00 17.60 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 8.00 0.00 64.00 8.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 3.00 0.00 24.00 8.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 1.00 0.00 1.00 0.00 16.00 16.00 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 2.00 0.00 16.00 8.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\n\n\nThanks,\nBikram\n\n________________________________\n\nDISCLAIMER: The information in this message is confidential and may be legally privileged. It is intended solely for the addressee. Access to this message by anyone else is unauthorized. If you are not the intended recipient, any disclosure, copying, or distribution of the message, or any action or omission taken by you in reliance on it, is prohibited and may be unlawful. Please immediately contact the sender if you have received this message in error. 
Further, this e-mail may contain viruses and all reasonable precaution to minimize the risk arising there from is taken by OnMobile. OnMobile is not liable for any damage sustained by you as a result of any virus in this e-mail. 
All applicable virus checks should be carried out by you before opening this e-mail or any attachment thereto.\nThank you - OnMobile Global Limited.", "msg_date": "Fri, 7 Mar 2014 06:05:12 +0000", "msg_from": "Bikram Kesari Naik <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query" }, { "msg_contents": "Bikram Kesari Naik wrote\n> Hi,\n> \n> I have a view which joins multiple tables to give me a result. It takes\n> more than a minute to give me the result on the psql prompt when I select\n> all the data from that view.\n> The single CPU which is used to run this query is utilized 100%. Even if I\n> fire a count(*) it takes 10 sec. I wanted to know if there is anything we\n> can do to speed up this query to below 1 sec.\n\nIn all likelihood you need to index your foreign keys, and possibly other\nfields, but as you haven't provided table and index definitions it is hard\nto say for sure.\n\nDepending on how many rows are hidden I'm not sure an unqualified query on\nthis view can run in 1/60th of the time even with indexes present - the\nsequential scans are efficient if the proportion of the tables being\nreturned is high.\n\nDavid J.", "msg_date": "Thu, 6 Mar 2014 22:23:08 -0800 (PST)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query" }, { "msg_contents": "Hi David,\n\nWe have indexes on all the columns which are used in the where clause, and these tables are linked by foreign key constraints.\n\nThanks,\nBikram\n\n-----Original Message-----\nFrom: David Johnston\nSent: Friday, March 07, 2014 11:53 AM\nSubject: Re: [PERFORM] Slow query\n\nIn all likelihood you need to index your foreign keys, and possibly other fields, but as you haven't provided table and index definitions it is hard to say for sure.\n\nDepending on how many rows are hidden I'm not sure an unqualified query on this view can run in 1/60th of the time even with indexes present - the sequential scans are efficient if the proportion of the tables being returned is high.\n\nDavid J.", "msg_date": "Fri, 7 Mar 2014 06:43:23 +0000", "msg_from": "Bikram Kesari Naik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query" }, { "msg_contents": "Bikram Kesari Naik wrote\n> Hi David,\n> \n> We have indexes on all the columns which are used in the where clause, and\n> these tables are linked by foreign key constraints.\n\nRead these.\n\nhttps://wiki.postgresql.org/wiki/Using_EXPLAIN\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nAnd note, while the FK thing is likely not relevant in this situation,\ndefining a constraint does not cause an index to be created. Depending on\nyour usage patterns defining those indexes can be helpful.\n\nOne last thought: not only are your row counts high but it seems like your\nrow sizes may also be large due to them containing binary content. 
You\nlikely need to take a different approach to solving whatever unspecified\nproblem this query is intended to solve if you need sub-second performance.\n\nThat all said, the main area of improvement for this is system memory\nconcerns, so, as noted in the links above, play with that and see what\nhappens.\n\nDavid J.", "msg_date": "Thu, 6 Mar 2014 23:18:29 -0800 (PST)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query" } ]
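On the point above that a foreign key constraint does not cause an index to be created: the supporting indexes have to be created explicitly. A minimal sketch, using the join columns named in the view definition earlier in the thread (the actual table DDL was never posted, so these column choices are an assumption, not the poster's schema):

-- assumption: these are the columns the view joins on
CREATE INDEX ON songbinarymap (whsongid);
CREATE INDEX ON songbinarymap (binaryid);
CREATE INDEX ON songartwork (whsongid);
CREATE INDEX ON songmetadatawarehouse (contentproviderid);

For an unqualified SELECT * over millions of rows, though, the hash joins over sequential scans in the plan above are already close to the cheapest strategy, so indexes like these mostly pay off once the view is queried with a selective WHERE clause.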
[ { "msg_contents": "Hello folks,\n\nI have a table of about 700k rows in Postgres 9.3.3, which has the\nfollowing structure:\n\nColumns:\n content_body - text\n publish_date - timestamp without time zone\n published - boolean\n\nIndexes:\n \"articles_pkey\" PRIMARY KEY, btree (id)\n \"article_text_gin\" gin (article_text)\n \"articles_publish_date_id_index\" btree (publish_date DESC NULLS\nLAST, id DESC)\n\nThe query that I am making has a full text search query and a limit, as follows:\n\nWhen I search for a string which is in my index with a limit and order\nin the query it is fast:\n\nexplain analyze select * from \"articles\" where article_text @@\nplainto_tsquery('pg_catalog.simple', 'in_index') order by id limit 10;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.42..1293.88 rows=10 width=1298) (actual\ntime=2.073..9.837 rows=10 loops=1)\n -> Index Scan using articles_pkey on articles\n(cost=0.42..462150.49 rows=3573 width=1298) (actual time=2.055..9.711\nrows=10 loops=1)\n Filter: (article_text @@ '''in_index'''::tsquery)\n Rows Removed by Filter: 611\n Total runtime: 9.952 ms\n\nHowever if the string is not in the index it takes much longer:\n\nexplain analyze select * from \"articles\" where article_text @@\nplainto_tsquery('pg_catalog.simple', 'not_in_index') order by id limit\n10;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.42..1293.88 rows=10 width=1298) (actual\ntime=5633.684..5633.684 rows=0 loops=1)\n -> Index Scan using articles_pkey on articles\n(cost=0.42..462150.49 rows=3573 width=1298) (actual\ntime=5633.672..5633.672 rows=0 loops=1)\n Filter: (article_text @@ '''not_in_index'''::tsquery)\n Rows Removed by Filter: 796146\n Total runtime: 5633.745 ms\n\nHowever if I remove the order clause it is fast for either case:\n\nexplain analyze select * from \"articles\" where article_text @@\nplainto_tsquery('pg_catalog.simple', 'in_index') limit 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=55.69..90.22 rows=10 width=1298) (actual\ntime=7.748..7.853 rows=10 loops=1)\n -> Bitmap Heap Scan on articles (cost=55.69..12390.60 rows=3573\nwidth=1298) (actual time=7.735..7.781 rows=10 loops=1)\n Recheck Cond: (article_text @@ '''in_index'''::tsquery)\n -> Bitmap Index Scan on article_text_gin (cost=0.00..54.80\nrows=3573 width=0) (actual time=5.977..5.977 rows=8910 loops=1)\n Index Cond: (article_text @@ '''in_index'''::tsquery)\n Total runtime: 7.952 ms\n\n\nexplain analyze select * from \"articles\" where article_text @@\nplainto_tsquery('pg_catalog.simple', 'not_in_index') limit 10;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=55.69..90.22 rows=10 width=1298) (actual\ntime=0.083..0.083 rows=0 loops=1)\n -> Bitmap Heap Scan on articles (cost=55.69..12390.60 rows=3573\nwidth=1298) (actual time=0.065..0.065 rows=0 loops=1)\n Recheck Cond: (article_text @@ '''not_in_index'''::tsquery)\n -> Bitmap Index Scan on article_text_gin (cost=0.00..54.80\nrows=3573 width=0) (actual time=0.047..0.047 rows=0 loops=1)\n Index Cond: (article_text @@ '''not_in_index'''::tsquery)\n Total runtime: 0.163 ms\n\nRemoving the 
limit clause has the same effect, although the in-index\nquery is noticeably slower:\n\nexplain analyze select * from \"articles\" where article_text @@\nplainto_tsquery('pg_catalog.simple', 'in_index') order by id;\n                                                              QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=12601.46..12610.40 rows=3573 width=1298) (actual\ntime=106.347..140.481 rows=8910 loops=1)\n   Sort Key: id\n   Sort Method: external merge  Disk: 12288kB\n   ->  Bitmap Heap Scan on articles  (cost=55.69..12390.60 rows=3573\nwidth=1298) (actual time=5.618..50.329 rows=8910 loops=1)\n         Recheck Cond: (article_text @@ '''in_index'''::tsquery)\n         ->  Bitmap Index Scan on article_text_gin  (cost=0.00..54.80\nrows=3573 width=0) (actual time=4.243..4.243 rows=8910 loops=1)\n               Index Cond: (article_text @@ '''in_index'''::tsquery)\n Total runtime: 170.987 ms\n\nexplain analyze select * from \"articles\" where article_text @@\nplainto_tsquery('pg_catalog.simple', 'not_in_index') order by id;\n                                                            QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Sort  (cost=12601.46..12610.40 rows=3573 width=1298) (actual\ntime=0.067..0.067 rows=0 loops=1)\n   Sort Key: id\n   Sort Method: quicksort  Memory: 25kB\n   ->  Bitmap Heap Scan on articles  (cost=55.69..12390.60 rows=3573\nwidth=1298) (actual time=0.044..0.044 rows=0 loops=1)\n         Recheck Cond: (article_text @@ '''not_in_index'''::tsquery)\n         ->  Bitmap Index Scan on article_text_gin  (cost=0.00..54.80\nrows=3573 width=0) (actual time=0.026..0.026 rows=0 loops=1)\n               Index Cond: (article_text @@ '''not_in_index'''::tsquery)\n Total runtime: 0.148 ms\n\nThe little I can deduce is that, overall, a bitmap index scan + bitmap\nheap scan is better for my queries than an index scan. How can\nI tell the query planner to do that though?\n\n\n-- \nMohan\n", "msg_date": "Sat, 8 Mar 2014 09:46:31 +0700", "msg_from": "Mohan Krishnan <[email protected]>", "msg_from_op": true, "msg_subject": "How can I get the query planner to use a bitmap index scan instead of\n an index scan?" }, { "msg_contents": "On Fri, Mar 7, 2014 at 6:46 PM, Mohan Krishnan <[email protected]> wrote:\n\n> Hello folks,\n>\n> I have a table of about 700k rows in Postgres 9.3.3, which has the\n> following structure:\n>\n> Columns:\n>  content_body - text\n>  publish_date - timestamp without time zone\n>  published    - boolean\n>\n> Indexes:\n>    \"articles_pkey\" PRIMARY KEY, btree (id)\n>    \"article_text_gin\" gin (article_text)\n>    \"articles_publish_date_id_index\" btree (publish_date DESC NULLS\n> LAST, id DESC)\n>\n\nYour indexes are on columns that are not in the list of columns you gave.\n Can you show us the actual table and index definitions?\n\n\n   -> Index Scan using articles_pkey on articles\n> (cost=0.42..462150.49 rows=3573 width=1298) (actual time=2.055..9.711\n> rows=10 loops=1)\n>          Filter: (article_text @@ '''in_index'''::tsquery)\n>\n...\n\n\n>    -> Index Scan using articles_pkey on articles\n> (cost=0.42..462150.49 rows=3573 width=1298) (actual\n> time=5633.672..5633.672 rows=0 loops=1)\n>          Filter: (article_text @@ '''not_in_index'''::tsquery)\n>\n\nThose estimates are way off, and it is not clear why they would be. 
Have\nyou analyzed your table recently?\n\nCheers,\n\nJeff\n", "msg_date": "Sun, 9 Mar 2014 14:46:58 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I get the query planner to use a bitmap index\n scan instead of an index scan?" }, { "msg_contents": "On Mon, Mar 10, 2014 at 4:46 AM, Jeff Janes <[email protected]> wrote:\n>\n> On Fri, Mar 7, 2014 at 6:46 PM, Mohan Krishnan <[email protected]> wrote:\n>>\n>> Hello folks,\n>>\n>> I have a table of about 700k rows in Postgres 9.3.3, which has the\n>> following structure:\n>>\n>> Columns:\n>> content_body - text\n>> publish_date - timestamp without time zone\n>> published - boolean\n>>\n>> Indexes:\n>> \"articles_pkey\" PRIMARY KEY, btree (id)\n>> \"article_text_gin\" gin (article_text)\n>> \"articles_publish_date_id_index\" btree (publish_date DESC NULLS\n>> LAST, id DESC)\n>\n>\n> Your indexes are on columns that are not in the list of columns you gave.\n> Can you show us the actual table and index definitions?\n\n\nSorry about that, here are the table and index definitions:\n\n                              Table \"public.articles\"\n        Column        |            Type             |                       Modifiers\n----------------------+-----------------------------+-------------------------------------------------------\n id                   | integer                     | not null default nextval('articles_id_seq'::regclass)\n title                | text                        | not null\n content_body         | text                        |\n publish_date         | timestamp without time zone |\n created_at           | timestamp without time zone | not null\n published            | boolean                     |\n updated_at           | timestamp without time zone | not null\n category_id          | integer                     | not null\n article_text         | tsvector                    |\n\nIndexes:\n \"articles_pkey\" PRIMARY KEY, btree (id)\n \"article_text_gin\" gin (article_text)\n \"articles_category_id_index\" btree (category_id)\n \"articles_created_at\" btree (created_at)\n \"articles_publish_date_id_index\" btree (publish_date DESC NULLS LAST, id DESC)\n \"articles_published_index\" btree (published)\n\n>\n>> -> Index Scan using articles_pkey on articles\n>> (cost=0.42..462150.49 rows=3573 width=1298) (actual time=2.055..9.711\n>> rows=10 loops=1)\n>> Filter: (article_text @@ '''in_index'''::tsquery)\n>\n> ...\n>\n>>\n>> -> Index Scan using articles_pkey on articles\n>> (cost=0.42..462150.49 rows=3573 width=1298) (actual\n>> time=5633.672..5633.672 rows=0 loops=1)\n>> Filter: (article_text @@ '''not_in_index'''::tsquery)\n>\n>\n> Those 
estimates are way off, and it is not clear why they would be.  Have\n> you analyzed your table recently?\n\nYes, I have analyzed them and rerun the queries - there is no\ndifference. What more debugging information should I look at to\ndetermine why the estimates are way off?\n\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nMohan\n", "msg_date": "Mon, 10 Mar 2014 15:14:51 +0700", "msg_from": "Mohan Krishnan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I get the query planner to use a bitmap index\n scan instead of an index scan?" } ]
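
One generic way to get the plan Mohan is asking for - sketched here as an assumption, not advice given in the thread: on 9.3 a subquery with OFFSET 0 acts as an optimization fence, so the full-text predicate is planned on its own (GIN bitmap scan) and only the matching rows are sorted. 'some_term' stands in for the real search string:

    SELECT *
    FROM (SELECT *
          FROM articles
          WHERE article_text @@ plainto_tsquery('pg_catalog.simple', 'some_term')
          OFFSET 0   -- optimization fence on 9.3: the subquery is not flattened
         ) AS matched
    ORDER BY id
    LIMIT 10;

The trade-off is that the fence also rules out the fast 'in_index' plan that walked articles_pkey and stopped after 10 rows; for common terms this version pays for the full bitmap scan plus a sort (~170 ms in the plans above rather than ~10 ms).
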
[ { "msg_contents": "Synopsis: 8-table join with one \"WHERE foo IN (...)\" condition; works OK with fewer\nthan 5 items in the IN list, but at N=5, the planner starts using a compound index \nfor the first time that completely kills performance (5-6 minutes versus 0-12 seconds).\nI'm interested in learning what plays a role in this switch of plans (or the\nunanticipated relative slowness of the N=5 plan). TIA for any wisdom; I've finally\nmade a commitment to really delve into PG. -Kevin\n \n1. Queries and plans\n2. Answers to standard questions as per\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n3. Tables\n\n1. Queries and plans\n\nThe \"fast\" query, with 4 elements in the IN list.\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT\n COUNT(DISTINCT \"core_person\".\"id\")\nFROM \"core_person\"\n INNER JOIN \"core_sample\" ON (\"core_person\".\"id\" = \"core_sample\".\"person_id\")\n INNER JOIN \"sample\" ON (\"core_sample\".\"varify_sample_id\" = \"sample\".\"id\")\n INNER JOIN \"sample_result\" ON (\"sample\".\"id\" = \"sample_result\".\"sample_id\")\n INNER JOIN \"variant\" ON (\"sample_result\".\"variant_id\" = \"variant\".\"id\")\n INNER JOIN \"variant_effect\"\n ON (\"variant\".\"id\" = \"variant_effect\".\"variant_id\")\n INNER JOIN \"transcript\"\n ON (\"variant_effect\".\"transcript_id\" = \"transcript\".\"id\")\n INNER JOIN \"gene\" ON (\"transcript\".\"gene_id\" = \"gene\".\"id\")\nWHERE \"gene\".\"symbol\" IN ('CFC1', 'PROSIT240', 'ZFPM2/FOG2', 'NKX2.5');\n\nhttp://explain.depesz.com/s/Wul\n\nAggregate (cost=287383.44..287383.45 rows=1 width=4) (actual time=674.434..674.434 rows=1 loops=1)\n Buffers: shared hit=908 read=412\n -> Nested Loop (cost=3530.40..287383.44 rows=1 width=4) (actual time=674.414..674.414 rows=0 loops=1)\n Buffers: shared hit=908 read=412\n -> Nested Loop (cost=3530.40..287379.14 rows=1 width=12) (actual time=674.413..674.413 rows=0 loops=1)\n Buffers: shared hit=908 read=412\n -> Hash Join (cost=3530.40..287375.56 rows=1 width=12) (actual time=674.413..674.413 rows=0 loops=1)\n Hash Cond: (sample_result.sample_id = core_sample.varify_sample_id)\n Buffers: shared hit=908 read=412\n -> Nested Loop (cost=4.32..283545.98 rows=80929 width=12) (actual time=163.609..571.237 rows=102 loops=1)\n Buffers: shared hit=419 read=63\n -> Nested Loop (cost=4.32..3426.09 rows=471 width=4) (actual time=93.595..112.404 rows=85 loops=1)\n Buffers: shared hit=19 read=21\n -> Nested Loop (cost=4.32..140.18 rows=17 width=4) (actual time=28.280..46.051 rows=4 loops=1)\n Buffers: shared hit=5 read=10\n -> Index Scan using gene_symbol on gene (cost=0.00..30.79 rows=4 width=4) (actual time=28.210..45.938 rows=1 loops=1)\n Index Cond: ((symbol)::text = ANY ('{CFC1,PROSIT240,ZFPM2/FOG2,NKX2.5}'::text[]))\n Buffers: shared hit=3 read=7\n -> Bitmap Heap Scan on transcript (cost=4.32..27.29 rows=6 width=8) (actual time=0.066..0.106 rows=4 loops=1)\n Recheck Cond: (gene_id = gene.id)\n Buffers: shared hit=2 read=3\n -> Bitmap Index Scan on transcript_gene_id (cost=0.00..4.32 rows=6 width=0) (actual time=0.049..0.049 rows=4 loops=1)\n Index Cond: (gene_id = gene.id)\n Buffers: shared hit=2 read=1\n -> Index Scan using variant_effect_transcript_id on variant_effect (cost=0.00..191.83 rows=146 width=8) (actual time=16.345..16.582 rows=21 loops=4)\n Index Cond: (transcript_id = transcript.id)\n Buffers: shared hit=14 read=11\n -> Index Scan using sample_result_variant_id on sample_result (cost=0.00..593.01 rows=172 width=8) (actual time=5.147..5.397 rows=1 loops=85)\n Index Cond: 
(variant_id = variant_effect.variant_id)\n Buffers: shared hit=400 read=42\n -> Hash (cost=3525.76..3525.76 rows=26 width=12) (actual time=103.125..103.125 rows=1129 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 49kB\n Buffers: shared hit=489 read=349\n -> Merge Join (cost=228.11..3525.76 rows=26 width=12) (actual time=57.236..102.752 rows=1129 loops=1)\n Merge Cond: (core_sample.varify_sample_id = sample.id)\n Buffers: shared hit=489 read=349\n -> Index Scan using core_sample_varify_sample_id on core_sample (cost=0.00..347661.45 rows=119344 width=8) (actual time=0.005..44.699 rows=1130 loops=1)\n Buffers: shared hit=484 read=312\n -> Sort (cost=220.25..227.02 rows=2705 width=4) (actual time=56.997..57.214 rows=2701 loops=1)\n Sort Key: sample.id\n Sort Method: quicksort Memory: 223kB\n Buffers: shared hit=5 read=37\n -> Seq Scan on sample (cost=0.00..66.05 rows=2705 width=4) (actual time=0.549..56.245 rows=2705 loops=1)\n Buffers: shared hit=2 read=37\n -> Index Only Scan using core_person_pkey on core_person (cost=0.00..3.58 rows=1 width=4) (never executed)\n Index Cond: (id = core_sample.person_id)\n Heap Fetches: 0\n -> Index Only Scan using variant_pkey on variant (cost=0.00..4.29 rows=1 width=4) (never executed)\n Index Cond: (id = sample_result.variant_id)\n Heap Fetches: 0\nTotal runtime: 674.797 ms\n\nThe \"slow\" query with 5 elements in IN list:\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT\n COUNT(DISTINCT \"core_person\".\"id\")\nFROM \"core_person\"\n INNER JOIN \"core_sample\" ON (\"core_person\".\"id\" = \"core_sample\".\"person_id\")\n INNER JOIN \"sample\" ON (\"core_sample\".\"varify_sample_id\" = \"sample\".\"id\")\n INNER JOIN \"sample_result\" ON (\"sample\".\"id\" = \"sample_result\".\"sample_id\")\n INNER JOIN \"variant\" ON (\"sample_result\".\"variant_id\" = \"variant\".\"id\")\n INNER JOIN \"variant_effect\"\n ON (\"variant\".\"id\" = \"variant_effect\".\"variant_id\")\n INNER JOIN \"transcript\"\n ON (\"variant_effect\".\"transcript_id\" = \"transcript\".\"id\")\n INNER JOIN \"gene\" ON (\"transcript\".\"gene_id\" = \"gene\".\"id\")\nWHERE \"gene\".\"symbol\" IN ('CFC1', 'PROSIT240', 'ZFPM2/FOG2', 'NKX2.5', 'ZIC3');\n\nhttp://explain.depesz.com/s/BikZ\n\n                                                                                              QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=293669.97..293669.98 rows=1 width=4) (actual time=404443.253..404443.253 rows=1 loops=1)\n Buffers: shared hit=95972873 read=1888636\n -> Nested Loop (cost=4341.32..293669.97 rows=1 width=4) (actual time=1270.642..404431.172 rows=19193 loops=1)\n Buffers: shared hit=95972867 read=1888636\n -> Nested Loop (cost=4341.32..293665.67 rows=1 width=12) (actual time=1243.095..403775.844 rows=19193 loops=1)\n Buffers: shared hit=95915300 read=1888623\n -> Hash Join (cost=4341.32..293662.08 rows=1 width=12) (actual time=1227.121..403667.061 rows=19193 loops=1)\n Hash Cond: (sample_result.variant_id = variant_effect.variant_id)\n Buffers: shared hit=95876819 read=1888598\n -> Nested Loop (cost=99.86..289414.83 rows=1542 width=8) (actual time=94.839..340982.730 rows=690103508 loops=1)\n Buffers: shared hit=95876766 read=1888588\n -> Hash Join (cost=99.86..3605.46 rows=26 width=12) (actual time=1.483..323.089 rows=1129 loops=1)\n Hash Cond: (core_sample.varify_sample_id = sample.id)\n Buffers: shared hit=351 read=1254\n -> Seq Scan on core_sample (cost=0.00..2759.44 rows=119344 width=8) (actual time=0.009..309.402 rows=119344 loops=1)\n Buffers: shared hit=312 read=1254\n -> Hash (cost=66.05..66.05 rows=2705 width=4) (actual time=1.227..1.227 rows=2705 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 96kB\n Buffers: shared hit=39\n -> Seq Scan on sample (cost=0.00..66.05 rows=2705 width=4) (actual time=0.008..0.691 rows=2705 loops=1)\n Buffers: shared hit=39\n -> Index Only Scan using sample_variant_idx on sample_result (cost=0.00..8220.58 rows=277209 width=8) (actual time=3.469..218.524 rows=611252 loops=1129)\n Index Cond: (sample_id = core_sample.varify_sample_id)\n Heap Fetches: 0\n Buffers: shared hit=95876415 read=1887334\n -> Hash (cost=4234.10..4234.10 rows=589 width=4) (actual time=326.003..326.003 rows=140 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n Buffers: shared hit=53 read=10\n -> Nested Loop (cost=4.32..4234.10 rows=589 width=4) (actual time=0.083..325.953 rows=140 loops=1)\n Buffers: shared hit=53 read=10\n -> Nested Loop (cost=4.32..175.03 rows=21 width=4) (actual time=0.052..234.362 rows=8 loops=1)\n Buffers: shared hit=18 read=5\n -> Index Scan using gene_symbol on gene (cost=0.00..38.29 rows=5 width=4) (actual time=0.023..0.097 rows=2 loops=1)\n Index Cond: ((symbol)::text = ANY ('{CFC1,PROSIT240,ZFPM2/FOG2,NKX2.5,ZIC3}'::text[]))\n Buffers: shared hit=12 read=1\n -> Bitmap Heap Scan on transcript (cost=4.32..27.29 rows=6 width=8) (actual time=106.303..117.126 rows=4 loops=2)\n Recheck Cond: (gene_id = gene.id)\n Buffers: shared hit=6 read=4\n -> Bitmap Index Scan on transcript_gene_id (cost=0.00..4.32 rows=6 width=0) (actual time=93.564..93.564 rows=4 loops=2)\n Index Cond: (gene_id = gene.id)\n Buffers: shared hit=4 read=2\n -> Index Scan using variant_effect_transcript_id on variant_effect (cost=0.00..191.83 rows=146 width=8) (actual time=7.285..11.445 rows=18 loops=8)\n Index Cond: (transcript_id = transcript.id)\n Buffers: shared hit=35 read=5\n -> Index Only Scan using core_person_pkey on core_person (cost=0.00..3.58 rows=1 width=4) (actual time=0.004..0.004 rows=1 loops=19193)\n Index Cond: (id = core_sample.person_id)\n Heap Fetches: 0\n Buffers: shared hit=38481 read=25\n -> Index Only Scan using variant_pkey on variant (cost=0.00..4.29 rows=1 width=4) (actual time=0.033..0.033 rows=1 loops=19193)\n Index Cond: (id = sample_result.variant_id)\n Heap Fetches: 0\n Buffers: shared hit=57567 read=13\n Total runtime: 404443.608 ms\n\n\n2. Answers to standard questions as per\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nA description of what you are trying to achieve and what results you\nexpect:\n\n Ideally, I'd like this query to be usable for a couple dozen terms.\n (This may not be realistic given the current table layout and\n hardware?) If I drop the problem index, the query finishes in 1.5\n minutes for 17 gene symbols, which is ... better.\n \n FWIW, my observations:\n 1. The disk is slow on this system (60-75 MB/sec dd seq read time to\n /dev/null with bs=8k); I'm not sure if the cost constants need\n adjusting.\n 2. The plan changes at N=5 to introduce an index-only scan on\n sample_variant_idx which is 16 GB (box has 32 GB RAM). This index on\n the sample_result table is a compound index on foreign keys to the\n sample and variant tables that are often joined to the sample_result\n table (as in this query).\n 3. If I run the query in a transaction that drops the\n sample_variant_idx first, a fast plan is chosen. 
It's almost as if\n the planner is so pleased with itself for having noticed that it can\n use that compound index instead of the individual foreign key indexes\n that it throws caution to the winds ;-)\n 4. The sample_result table is large-ish (748M rows; 145 GB; 312\n GB incl extras) and sits in the middle of this join.\n\n What I tried so far: \n 1. Changed statistics target. At first this query was unusable even\n for N=1 because n_distinct was 264,475 on an involved column when it\n should have been 4,356,805. I increased the statistics target from\n 1,000 to 5,000, which brought n_distinct for that column up to\n 653,662. (I understand that an overly large statistics target can\n negatively affect plan times, and those are indeed in the vicinity of\n 400 msec now for typical queries. I should probably decrease.)\n 2. Learned how to coerce n_distinct, at which point the\n query started running much faster. As an experiment, I have coerced\n n_distinct for all the foreign key columns involved in the join.\n 3. Increased effective_cache_size to larger than memory and decreased it\n to 12GB, neither of which caused a good plan to be used.\n 4. Tried GEQO, which never came up with the dud plan involving\n sample_variant_idx; it doesn't seem quite kosher to plan all queries\n with GEQO, though, and our queries are automatically constructed by a\n query builder, so at the moment I don't have the ability to apply\n custom tweaks for individual queries ....\n 5. Temporarily dropped the sample_variant_idx, as mentioned above.\n I'm not sure yet if it's a good idea to do away with this altogether.\n\nPostgreSQL version number you are running:\n\n PostgreSQL 9.2.7 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n\nHow you installed PostgreSQL:\n\n PGDG yum repo\n \nChanges made to the settings in the postgresql.conf file: see Server\nConfiguration for a quick way to list them all.\n\n checkpoint_segments | 32 | configuration file\n default_statistics_target | 5000 | configuration file\n effective_cache_size | 24GB | configuration file\n log_planner_stats | on | configuration file\n shared_buffers | 8GB | configuration file\n work_mem | 150MB | configuration file\n\nOperating system and version:\n\n RHEL 6.4 - VM with kind of crappy SAN disk storage Linux\n resrhvardb01d.research.chop.edu 2.6.32-358.6.2.el6.x86_64 #1 SMP Tue\n May 14 15:48:21 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux\n\nWhat program you're using to connect to PostgreSQL:\n\n psql for my tests; psycopg2 Python driver for app\n\nIs there anything relevant or unusual in the PostgreSQL server logs?:\n\n No\n\nCPU manufacturer and model, eg \"AMD Athlon X2\" or \"Intel Core 2 Duo\"\n\n VM, but /proc/cpuinfo shows two Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\n\nAmount and size of RAM installed, eg \"2GB RAM\"\n\n 32 GB RAM\n\nStorage details (important for performance and corruption questions)\n\n Don't know (yet). Some kind of SAN. Our IT people manage this VM. \n We will be getting dedicated hardware in the near future.\n\n Using dd with an 8k blocksize, I see sequential read performance on\n uncached files of typically 60-74 MB/s.\n\n\n3. 
Tables\n\ngene table: 51,254 rows\n Table \"public.gene\"\n Column | Type | Modifiers | Storage | Stats target | Description\n---------+------------------------+---------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('gene_id_seq'::regclass) | plain | |\n chr_id | integer | not null | plain | |\n symbol | character varying(255) | not null | extended | 10000 |\n name | text | not null | extended | |\n hgnc_id | integer | | plain | |\nIndexes:\n \"gene_pkey\" PRIMARY KEY, btree (id)\n \"symbol_unique\" UNIQUE CONSTRAINT, btree (symbol)\n \"gene_chr_id\" btree (chr_id)\n \"gene_symbol\" btree (symbol)\n \"gene_symbol_like\" btree (symbol varchar_pattern_ops)\nForeign-key constraints:\n \"gene_chr_id_fkey\" FOREIGN KEY (chr_id) REFERENCES chromosome(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n TABLE \"exon\" CONSTRAINT \"exon_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"gene_detail\" CONSTRAINT \"gene_detail_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"gene_phenotype\" CONSTRAINT \"gene_id_refs_id_1a19729a\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"gene_pubmed\" CONSTRAINT \"gene_id_refs_id_8e5839cd\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"gene_families\" CONSTRAINT \"gene_id_refs_id_9de0e4fb\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"gene_synonym\" CONSTRAINT \"gene_id_refs_id_b2bbb6ef\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"geneset_setobject\" CONSTRAINT \"geneset_setobject_object_id_fkey\" FOREIGN KEY (object_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"transcript\" CONSTRAINT \"transcript_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"variant_effect\" CONSTRAINT \"variant_effect_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"variant\" CONSTRAINT \"variant_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\ntranscript table: 215,533 rows\n Table \"public.transcript\"\n Column | Type | Modifiers | Storage | Stats target | Description\n---------------------+------------------------+---------------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('transcript_id_seq'::regclass) | plain | |\n strand | character varying(1) | | extended | |\n start | integer | | plain | |\n end | integer | | plain | |\n coding_start | integer | | plain | |\n coding_end | integer | | plain | |\n coding_start_status | character varying(20) | | extended | |\n coding_end_status | character varying(20) | | extended | |\n exon_count | integer | | plain | |\n refseq_id | character varying(100) | not null | extended | |\n gene_id | integer | | plain | |\nIndexes:\n \"transcript_pkey\" PRIMARY KEY, btree (id)\n \"transcript_gene_id\" btree (gene_id)\n \"transcript_pkey_gene\" btree (id, gene_id)\n \"transcript_refseq_gene\" btree (refseq_id, gene_id)\nForeign-key constraints:\n \"transcript_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n TABLE \"transcript_exon\" CONSTRAINT \"transcript_id_refs_id_e2bf7f41\" FOREIGN KEY (transcript_id) REFERENCES transcript(id) DEFERRABLE 
INITIALLY DEFERRED\n TABLE \"variant_effect\" CONSTRAINT \"variant_effect_transcript_id_fkey\" FOREIGN KEY (transcript_id) REFERENCES transcript(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\nvariant_effect table: 8,140,067 rows\n Table \"public.variant_effect\"\n Column | Type | Modifiers | Storage | Stats target | Description\n---------------------+------------------------+-------------------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('variant_effect_id_seq'::regclass) | plain | |\n variant_id | integer | | plain | |\n codon_change | text | | extended | |\n amino_acid_change | text | | extended | |\n exon_id | integer | | plain | |\n transcript_id | integer | | plain | |\n gene_id | integer | | plain | |\n effect_id | integer | | plain | |\n functional_class_id | integer | | plain | |\n hgvs_c | character varying(200) | | extended | |\n hgvs_p | character varying(200) | | extended | |\n segment | character varying(200) | | extended | |\nIndexes:\n \"variant_effect_pkey\" UNIQUE, btree (id)\n \"variant_effect_effect_id\" btree (effect_id)\n \"variant_effect_exon_id\" btree (exon_id)\n \"variant_effect_functional_class_id\" btree (functional_class_id)\n \"variant_effect_gene_id\" btree (gene_id)\n \"variant_effect_hgvs_c\" btree (hgvs_c)\n \"variant_effect_hgvs_c_like\" btree (hgvs_c varchar_pattern_ops)\n \"variant_effect_hgvs_p\" btree (hgvs_p)\n \"variant_effect_hgvs_p_like\" btree (hgvs_p varchar_pattern_ops)\n \"variant_effect_transcript_id\" btree (transcript_id)\n \"variant_effect_variant_id\" btree (variant_id)\n \"variant_effect_variant_transcript\" btree (variant_id, transcript_id)\nForeign-key constraints:\n \"variant_effect_effect_id_fkey\" FOREIGN KEY (effect_id) REFERENCES effect(id) DEFERRABLE INITIALLY DEFERRED\n \"variant_effect_exon_id_fkey\" FOREIGN KEY (exon_id) REFERENCES exon(id) DEFERRABLE INITIALLY DEFERRED\n \"variant_effect_functional_class_id_fkey\" FOREIGN KEY (functional_class_id) REFERENCES variant_functional_class(id) DEFERRABLE INITIALLY DEFERRED\n \"variant_effect_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n \"variant_effect_transcript_id_fkey\" FOREIGN KEY (transcript_id) REFERENCES transcript(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\nvariant table: 6,132,722 rows; actually not used because of variant_effect.variant_id and sample_result.variant_id\n\n Table \"public.variant\"\n Column | Type | Modifiers | Storage | Stats target | Description\n----------+-----------------------+------------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('variant_id_seq'::regclass) | plain | |\n chr_id | integer | not null | plain | |\n pos | integer | not null | plain | |\n ref | text | not null | extended | |\n alt | text | not null | extended | |\n md5 | character varying(32) | not null | extended | |\n rsid | text | | extended | |\n type_id | integer | | plain | |\n liftover | boolean | | plain | |\n gene_id | integer | | plain | |\nIndexes:\n \"variant_chr_id_pos_ref_alt_key\" UNIQUE, btree (chr_id, pos, ref, alt)\n \"variant_pkey\" UNIQUE, btree (id)\n \"variant_alt\" btree (alt)\n \"variant_alt_like\" btree (alt text_pattern_ops)\n \"variant_chr_id\" btree (chr_id)\n \"variant_md5\" btree (md5)\n \"variant_ref\" btree (ref)\n \"variant_ref_like\" btree (ref text_pattern_ops)\n \"variant_rsid\" btree (rsid)\n \"variant_type_id\" btree (type_id)\nForeign-key 
constraints:\n \"variant_chr_id_fkey\" FOREIGN KEY (chr_id) REFERENCES chromosome(id) DEFERRABLE INITIALLY DEFERRED\n \"variant_gene_id_fkey\" FOREIGN KEY (gene_id) REFERENCES gene(id) DEFERRABLE INITIALLY DEFERRED\n \"variant_type_id_fkey\" FOREIGN KEY (type_id) REFERENCES variant_type(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n TABLE \"sample_result\" CONSTRAINT \"variant_id_refs_id_313c30dea59a86e8\" FOREIGN KEY (variant_id) REFERENCES variant(id)\nHas OIDs: no\n\nsample_result table: 748,183,031 rows; 145 GB; 312 GB incl indexes and toast)\n Table \"public.sample_result\"\n Column | Type | Modifiers | Storage | Stats target | Description\n-------------------------+--------------------------+------------------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('sample_result_id_seq'::regclass) | plain | |\n notes | text | | extended | |\n created | timestamp with time zone | not null | plain | |\n modified | timestamp with time zone | not null | plain | |\n sample_id | integer | not null | plain | |\n variant_id | integer | not null | plain | |\n quality | double precision | | plain | |\n read_depth | integer | | plain | |\n genotype_id | integer | | plain | |\n coverage_ref | integer | | plain | |\n coverage_alt | integer | | plain | |\n phred_scaled_likelihood | text | | extended | |\n downsampling | boolean | | plain | |\n spanning_deletions | double precision | | plain | |\n mq | double precision | | plain | |\n mq0 | double precision | | plain | |\n baseq_rank_sum | double precision | | plain | |\n mq_rank_sum | double precision | | plain | |\n read_pos_rank_sum | double precision | | plain | |\n strand_bias | double precision | | plain | |\n homopolymer_run | integer | | plain | |\n haplotype_score | double precision | | plain | |\n quality_by_depth | double precision | | plain | |\n fisher_strand | double precision | | plain | |\n genotype_quality | double precision | | plain | |\n in_dbsnp | boolean | | plain | |\n base_counts | character varying(100) | | extended | |\n raw_read_depth | integer | | plain | |\nIndexes:\n \"sample_result_pkey1\" PRIMARY KEY, btree (id)\n \"sample_result_pkey\" UNIQUE, btree (id)\n \"sample_result_genotype_id\" btree (genotype_id)\n \"sample_result_quality\" btree (quality)\n \"sample_result_raw_read_depth\" btree (raw_read_depth)\n \"sample_result_read_depth\" btree (read_depth)\n \"sample_result_sample_id\" btree (sample_id)\n \"sample_result_variant_id\" btree (variant_id)\n \"sample_variant_idx\" btree (sample_id, variant_id)\nForeign-key constraints:\n \"sample_id_refs_id_6fa6b6cc5d0f2984\" FOREIGN KEY (sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n \"sample_result_genotype_id_fkey\" FOREIGN KEY (genotype_id) REFERENCES genotype(id) DEFERRABLE INITIALLY DEFERRED\n \"variant_id_refs_id_313c30dea59a86e8\" FOREIGN KEY (variant_id) REFERENCES variant(id)\nReferenced by:\n TABLE \"assessment\" CONSTRAINT \"sample_result_id_refs_id_5831a8ec3d1e4e0a\" FOREIGN KEY (sample_result_id) REFERENCES sample_result(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\nsample table: 2705 rows\n Table \"public.sample\"\n Column | Type | Modifiers | Storage | Stats target | Description\n-------------+--------------------------+-----------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('sample_id_seq'::regclass) | plain | |\n notes | text | | extended | |\n created | timestamp with time zone | 
not null | plain | |\n modified | timestamp with time zone | not null | plain | |\n label | character varying(100) | not null default 'placholder'::character varying | extended | |\n batch_id | integer | not null | plain | |\n version | integer | not null | plain | |\n person_id | integer | | plain | |\n count | integer | not null | plain | |\n bio_sample | integer | | plain | |\n published | boolean | not null | plain | |\n md5 | character varying(32) | | extended | |\n name | character varying(100) | not null default 'placeholder'::character varying | extended | |\n project_id | integer | not null | plain | |\n tissue_id | integer | | plain | |\n vcf_colname | character varying(200) | | extended | |\nIndexes:\n \"sample_pkey\" PRIMARY KEY, btree (id)\n \"sample_version_c71a9c06ef358ed_uniq\" UNIQUE CONSTRAINT, btree (version, batch_id, name)\n \"sample_batch_id\" btree (batch_id)\n \"sample_cohort_id\" btree (batch_id)\n \"sample_label_like\" btree (label varchar_pattern_ops)\n \"sample_person_id\" btree (person_id)\n \"sample_project_id\" btree (project_id)\n \"sample_tissue_id\" btree (tissue_id)\nForeign-key constraints:\n \"cohort_id_refs_id_6c74dcea40694064\" FOREIGN KEY (batch_id) REFERENCES batch(id) DEFERRABLE INITIALLY DEFERRED\n \"project_id_refs_id_78e0c8fcf52a265d\" FOREIGN KEY (project_id) REFERENCES project(id) DEFERRABLE INITIALLY DEFERRED\n \"sample_person_id_fkey\" FOREIGN KEY (person_id) REFERENCES person(id) DEFERRABLE INITIALLY DEFERRED\n \"tissue_id_refs_id_2f16a55811371f5a\" FOREIGN KEY (tissue_id) REFERENCES tissue(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n TABLE \"metrics_sample_load\" CONSTRAINT \"metrics_sample_load_sample_id_fkey\" FOREIGN KEY (sample_id) REFERENCES sample(id)\n TABLE \"sample_phenotype\" CONSTRAINT \"sample_id_refs_id_2723d8269859c3bc\" FOREIGN KEY (sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"cohort_sample\" CONSTRAINT \"sample_id_refs_id_435beca7ea3fecae\" FOREIGN KEY (sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"sample_qc\" CONSTRAINT \"sample_id_refs_id_437acf3032c46c2b\" FOREIGN KEY (sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"sample_manifest\" CONSTRAINT \"sample_id_refs_id_6dad1d60e5a86f62\" FOREIGN KEY (sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"sample_result\" CONSTRAINT \"sample_id_refs_id_6fa6b6cc5d0f2984\" FOREIGN KEY (sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"sample_run\" CONSTRAINT \"sample_run_sample_id_fkey\" FOREIGN KEY (sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"pcgc.core_sample\" CONSTRAINT \"varify_sample_id_refs_id_75c34db2\" FOREIGN KEY (varify_sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"big_sample\" CONSTRAINT \"vsample_id_refs_id_3ad233dd6a3f695e\" FOREIGN KEY (vsample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\ncore_sample table: 119,344 rows\n Table \"pcgc.core_sample\"\n Column | Type | Modifiers | Storage | Stats target | Description\n------------------------+------------------------+----------------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('core_sample_id_seq'::regclass) | plain | |\n sample_id | character varying(20) | | extended | |\n person_id | integer | | plain | |\n sample_type | character varying(11) | | extended | |\n source_type | character varying(100) | | extended | |\n status | 
character varying(100) | | extended | |\n disposal_status | character varying(100) | | extended | |\n dna_qc_status | character varying(100) | | extended | |\n sample_identifier_type | character varying(20) | | extended | |\n varify_sample_id | integer | | plain | 10000 |\nIndexes:\n \"core_sample_pkey\" PRIMARY KEY, btree (id)\n \"core_sample_sample_id_uniq\" UNIQUE CONSTRAINT, btree (sample_id)\n \"core_sample_disposal_status\" btree (disposal_status)\n \"core_sample_disposal_status_like\" btree (disposal_status varchar_pattern_ops)\n \"core_sample_dna_qc_status\" btree (dna_qc_status)\n \"core_sample_dna_qc_status_like\" btree (dna_qc_status varchar_pattern_ops)\n \"core_sample_person_id\" btree (person_id)\n \"core_sample_sample_identifier_type\" btree (sample_identifier_type)\n \"core_sample_sample_identifier_type_like\" btree (sample_identifier_type varchar_pattern_ops)\n \"core_sample_sample_type\" btree (sample_type)\n \"core_sample_sample_type_like\" btree (sample_type varchar_pattern_ops)\n \"core_sample_source_type\" btree (source_type)\n \"core_sample_source_type_like\" btree (source_type varchar_pattern_ops)\n \"core_sample_status\" btree (status)\n \"core_sample_status_like\" btree (status varchar_pattern_ops)\n \"core_sample_varify_sample_id\" btree (varify_sample_id)\nForeign-key constraints:\n \"person_id_refs_id_56d51ee2\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n \"varify_sample_id_refs_id_75c34db2\" FOREIGN KEY (varify_sample_id) REFERENCES sample(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n TABLE \"core_samplefile\" CONSTRAINT \"sample_id_refs_id_185ff8c9\" FOREIGN KEY (sample_id) REFERENCES core_sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_cnvconfirmation\" CONSTRAINT \"sample_id_refs_id_1c83b6a0\" FOREIGN KEY (sample_id) REFERENCES core_sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_variantcallconfirmation\" CONSTRAINT \"sample_id_refs_id_3beffe04\" FOREIGN KEY (sample_id) REFERENCES core_sample(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"dashboard_request\" CONSTRAINT \"sample_id_refs_id_46e54337\" FOREIGN KEY (sample_id) REFERENCES core_sample(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\ncore_person: 15,746 rows\n Table \"pcgc.core_person\"\n Column | Type | Modifiers | Storage | Stats target | Description\n-----------------------+-------------------------+----------------------------------------------------------+----------+--------------+-------------\n id | integer | not null default nextval('core_person_id_seq'::regclass) | plain | |\n blinded_id | character varying(20) | not null | extended | |\n is_subject | boolean | not null | plain | |\n working_group_summary | character varying(100) | | extended | |\n consent_group | integer | | plain | |\n mendelian_consistent | boolean | not null | plain | |\n comments | character varying(100) | | extended | |\n relatives | character varying(1000) | | extended | |\nIndexes:\n \"core_person_pkey\" PRIMARY KEY, btree (id)\n \"core_person_blinded_id_key\" UNIQUE CONSTRAINT, btree (blinded_id)\n \"core_person_comments\" btree (comments)\n \"core_person_comments_like\" btree (comments varchar_pattern_ops)\n \"core_person_consent_group\" btree (consent_group)\n \"core_person_is_subject\" btree (is_subject)\n \"core_person_mendelian_consistent\" btree (mendelian_consistent)\n \"core_person_working_group_summary\" btree (working_group_summary)\n \"core_person_working_group_summary_like\" btree (working_group_summary varchar_pattern_ops)\nReferenced by:\n 
TABLE \"core_familymember\" CONSTRAINT \"person_id_refs_id_1b8249c8\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_variantcallconfirmation\" CONSTRAINT \"person_id_refs_id_1b95d861\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_othergenetictestresults\" CONSTRAINT \"person_id_refs_id_3322dca8\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_karyotypeformula\" CONSTRAINT \"person_id_refs_id_4b0cf1ae\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_tissuesample\" CONSTRAINT \"person_id_refs_id_54349bf1\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_subject\" CONSTRAINT \"person_id_refs_id_55696453\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_sample\" CONSTRAINT \"person_id_refs_id_56d51ee2\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_guid\" CONSTRAINT \"person_id_refs_id_5c945e79\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_mutationresults\" CONSTRAINT \"person_id_refs_id_65047092\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_persondiagnosis\" CONSTRAINT \"person_id_refs_id_67b5236f\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_microarrayresults\" CONSTRAINT \"person_id_refs_id_6bf9527a\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_fishresults\" CONSTRAINT \"person_id_refs_id_a4734aaf\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_copynumberresults\" CONSTRAINT \"person_id_refs_id_ae0ddf0d\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_karyotypeabnormalitiesfather\" CONSTRAINT \"person_id_refs_id_b04014a\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_blindfile\" CONSTRAINT \"person_id_refs_id_b24509e6\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_probandformcompletion\" CONSTRAINT \"person_id_refs_id_b4ca4d2d\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_workinggroupmembership\" CONSTRAINT \"person_id_refs_id_da4a9bc6\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_karyotypeabnormalitiesproband\" CONSTRAINT \"person_id_refs_id_e3eb5c6b\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_genetictesting\" CONSTRAINT \"person_id_refs_id_ed9fd34b\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_karyotypeabnormalitiesmother\" CONSTRAINT \"person_id_refs_id_f6f61751\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"core_cnvconfirmation\" CONSTRAINT \"person_id_refs_id_ff18d483\" FOREIGN KEY (person_id) REFERENCES core_person(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Mar 2014 20:02:39 +0000", "msg_from": "\"Murphy, 
Kevin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Ye olde slow query" }, { "msg_contents": "\"Murphy, Kevin\" <[email protected]> writes:\n> Synopsis: 8-table join with one \"WHERE foo IN (...)\" condition; works OK with fewer\n> than 5 items in the IN list, but at N=5, the planner starts using a compound index \n> for the first time that completely kills performance (5-6 minutes versus 0-12 seconds).\n> I'm interested in learning what plays a role in this switch of plans (or the\n> unanticipated relative slowness of the N=5 plan). TIA for any wisdom; I've finally\n> made a commitment to really delve into PG. -Kevin\n\nFWIW, I think the right question here is not \"why is the slow query\nslow?\", but \"why is the fast query fast?\". The planner is estimating\nthem both at nearly the same cost, and since that cost is quite high,\nI'd say it's not too wrong about the slow query. What it's wrong about\nis the fast query; so you need to look at where its estimates are way\noff base in that plan.\n\nIt looks like the trouble spot is this intermediate nested loop:\n\n> -> Nested Loop (cost=4.32..283545.98 rows=80929 width=12) (actual time=163.609..571.237 rows=102 loops=1)\n> Buffers: shared hit=419 read=63\n> -> Nested Loop (cost=4.32..3426.09 rows=471 width=4) (actual time=93.595..112.404 rows=85 loops=1)\n> ...\n> -> Index Scan using sample_result_variant_id on sample_result (cost=0.00..593.01 rows=172 width=8) (actual time=5.147..5.397 rows=1 loops=85)\n> Index Cond: (variant_id = variant_effect.variant_id)\n> Buffers: shared hit=400 read=42\n\nwhich is creating the bulk of the estimated cost for the whole plan,\nbut execution is actually pretty cheap. There seem to be two components\nto the misestimation: one is that the sub-nested loop is producing about a\nfifth as many rows as expected, and the other is that the probes into\nsample_result are producing (on average) 1 row, not the 172 rows the\nplanner expects. If you could get the latter estimate to be even within\none order of magnitude of reality, the planner would certainly see this\nplan as way cheaper than the other.\n\nSo I'm wondering if the stats on sample_result and variant_effect are up\nto date. 
If they are, you might try increasing the stats targets for the\nvariant_id columns.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Mar 2014 18:23:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ye olde slow query" }, { "msg_contents": "Sorry for the delay; back on this, and thanks for the response.\n\nOn Mar 11, 2014, at 6:23 PM, Tom Lane <[email protected]> wrote:\n> \"Murphy, Kevin\" <[email protected]> writes:\n>> Synopsis: 8-table join with one \"WHERE foo IN (...)\" condition; works OK with fewer\n>> than 5 items in the IN list, but at N=5, the planner starts using a compound index \n>> for the first time that completely kills performance (5-6 minutes versus 0-12 seconds).\n>> […]\n> \n> FWIW, I think the right question here is not \"why is the slow query\n> slow?\", but \"why is the fast query fast?”.\n> […]\n> It looks like the trouble spot is this intermediate nested loop:\n> \n>> -> Nested Loop (cost=4.32..283545.98 rows=80929 width=12) (actual time=163.609..571.237 rows=102 loops=1)\n>> Buffers: shared hit=419 read=63\n>> -> Nested Loop (cost=4.32..3426.09 rows=471 width=4) (actual time=93.595..112.404 rows=85 loops=1)\n>> ...\n>> -> Index Scan using sample_result_variant_id on sample_result (cost=0.00..593.01 rows=172 width=8) (actual time=5.147..5.397 rows=1 loops=85)\n>> Index Cond: (variant_id = variant_effect.variant_id)\n>> Buffers: shared hit=400 read=42\n> \n> which is creating the bulk of the estimated cost for the whole plan,\n> but execution is actually pretty cheap. There seem to be two components\n> to the misestimation: one is that the sub-nested loop is producing about a\n> fifth as many rows as expected,\n\nThis may be because 3 out of the 4 user-supplied gene symbols were not present in the gene table at all. Restricting to valid genes prior to the query is probably a good idea.\n\n> and the other is that the probes into\n> sample_result are producing (on average) 1 row, not the 172 rows the\n> planner expects. If you could get the latter estimate to be even within\n> one order of magnitude of reality, the planner would certainly see this\n> plan as way cheaper than the other.\n\nI’m not sure about how to improve this. The stats were 5K globally and up to date, and I made them better, with no change. I tried increasing the stats on the foreign keys involved to 10K (and analyzing), but the same costs and plan are in play. I know the stats are updated now because I dumped and restored on new hardware and did a vacuum analyze. I previously mentioned that some of the vanilla n_distinct values were way off for the (790M row) sample_result table, so I have taken to coercing n_distinct using negative multipliers. This data doesn’t change very often (it hasn’t in many weeks).\n\nThere are 6M variants, but only 7.5% of them map to the sample_result table. Presumably the planner knows this because of the n_distinct value on sample_result.variant_id? Each variant maps to zero or sample_result records, but often very few, and never more than the number of samples (currently 1129).\n\n> \n> So I'm wondering if the stats on sample_result and variant_effect are up\n> to date. If they are, you might try increasing the stats targets for the\n> variant_id columns.\n\nThe stats were up to date and were at 5K globally. 
I tried increasing the stats on the foreign keys involved to 10K (and analyzing!), but the same costs and plan are in play, as described above.\n\nRegards,\nKevin\n", "msg_date": "Mon, 14 Apr 2014 13:55:59 +0000", "msg_from": "\"Murphy, Kevin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ye olde slow query" } ]
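
For reference, the n_distinct coercion Kevin mentions but never spells out is done with per-column attribute options; a minimal sketch, with the 10000 target and the -0.05 multiplier chosen purely for illustration:

    -- Collect a larger sample / more detailed histogram for the join column:
    ALTER TABLE sample_result ALTER COLUMN variant_id SET STATISTICS 10000;

    -- A negative n_distinct is a multiplier on the row count: -0.05 tells
    -- the planner that about 5% of the rows carry distinct variant_id values.
    ALTER TABLE sample_result ALTER COLUMN variant_id SET (n_distinct = -0.05);

    -- Overrides only take effect at the next ANALYZE:
    ANALYZE sample_result;
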
[ { "msg_contents": "PostgreSQL 9.3.3 RHEL 6.4Total db Server memory 64GB# -----------------------------# PostgreSQL configuration file# -----------------------------max_connections = 100shared_buffers = 16GBwork_mem = 32MB                         maintenance_work_mem = 1GBseq_page_cost = 1.0                   random_page_cost = 2.0                cpu_tuple_cost = 0.03                  #cpu_index_tuple_cost = 0.005           #cpu_operator_cost = 0.0025             effective_cache_size = 48MBdefault_statistics_target = 100       constraint_exclusion = partition Partition table Setup---------------------CREATE TABLE measurement (    id              bigint not null,    city_id         bigint not null,    logdate         date not null,    peaktemp        bigint,    unitsales       bigint,    type            bigint,    uuid            uuid,    geom            geometry);CREATE TABLE measurement_y2006m02 (    CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )) INHERITS (measurement);CREATE TABLE measurement_y2006m03 (    CHECK ( logdate >= DATE '2006-03-01' AND logdate < DATE '2006-04-01' )) INHERITS (measurement);...CREATE TABLE measurement_y2007m11 (    CHECK ( logdate >= DATE '2007-11-01' AND logdate < DATE '2007-12-01' )) INHERITS (measurement);CREATE TABLE measurement_y2007m12 (    CHECK ( logdate >= DATE '2007-12-01' AND logdate < DATE '2008-01-01' )) INHERITS (measurement);CREATE TABLE measurement_y2008m01 (    CHECK ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )) INHERITS (measurement);Partition measurement_y2007m12 contains 38,261,732 rowsIndexes on partition measurement_y2007m12:    \"pkey_measurement_y2007m12\" PRIMARY KEY, btree (id), tablespace \"measurement_y2007\"    \"idx_measurement_uuid_y2003m12\" btree (uuid), tablespace \"measurement_y2007\"    \"idx_measurement_type_y2003m12\" btree (type), tablespace \"measurement_y2007\"    \"idx_measurement_city_y2003m12\" btree (city_id), tablespace \"measurement_y2007\"    \"idx_measurement_logdate_y2003m12\" btree (logdate), tablespace \"measurement_y2007\"    \"sidx_measurement_geom_y2003m12\" gist (geom), tablespace \"measurement_y2007\"*** Problem Query *** explain (analyze on, buffers on) Select * from measurement this_                                   where this_.logdate between '2007-12-19 23:38:41.22'::timestamp and '2007-12-20 08:01:04.22'::timestamp                                    and this_.city_id=25183 order by this_.logdate asc, this_.peaktemp asc, this_.unitsales asc limit 10000;                                                                                              QUERY PLAN                                                                                               ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=33849.98..33855.15 rows=2068 width=618) (actual time=51710.803..51714.266 rows=10000 loops=1)   Buffers: shared hit=25614 read=39417   ->  Sort  (cost=33849.98..33855.15 rows=2068 width=618) (actual time=51710.799..51712.924 rows=10000 loops=1)         Sort Key: this_.logdate, this_.unitsales         Sort Method: top-N heapsort  Memory: 15938kB         Buffers: shared hit=25614 read=39417         ->  Append  (cost=0.00..33736.09 rows=2068 width=618) (actual time=50.210..50793.589 rows=312046 loops=1)               Buffers: shared hit=25608 read=39417               ->  Seq Scan on measurement this_  
(cost=0.00..0.00 rows=1 width=840) (actual time=0.002..0.002 rows=0 loops=1)                     Filter: ((logdate >= '2007-12-19 23:38:41.22'::timestamp without time zone) AND (logdate <= '2007-12-20 08:01:04.22'::timestamp without time zone) AND (city_id = 25183))               ->  Index Scan using idx_measurement_city_y2007m12 on measurement_y2007m12 this__1  (cost=0.56..33736.09 rows=2067 width=618) (actual time=50.206..50731.637 rows=312046 loops=1)                     Index Cond: (city_id = 25183)                     Filter:  ((logdate >= '2007-12-19 23:38:41.22'::timestamp without time zone) AND (logdate <= '2007-12-20 08:01:04.22'::timestamp without time zone))                     Buffers: shared hit=25608 read=39417 Total runtime: 51717.639 ms   <--- *** unacceptable ***(15 rows)  Total Rows meeting query criteria---------------------------------Select count(*) from measurement this_ where this_.logdate between '2007-12-19 23:38:41.22'::timestamp and '2007-12-20 08:01:04.22'::timestamp and this_.city_id=25183;count------312046Total Rows in the partition table referenced------------------------------------------Select\n count(*) from measurement_y2007m12;  count---------38261732Does anyone know how to speed up this query? I removed the order by clause and that significantly reduced the run time to approx. 2000-3000 ms. This query is being recorded repeatedly in our logs and executes very slowly for our UI users from 12000 ms thru 68000 msAny suggestions would be appreciated.thanks\n", "msg_date": "Thu, 13 Mar 2014 12:26:54 -0700", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Very slow query in PostgreSQL 9.3.3" }, { "msg_contents": "2014-03-13 20:26 GMT+01:00 <[email protected]>:\n\n> PostgreSQL 9.3.3 RHEL 6.4\n>\n> Total db Server memory 64GB\n>\n>\n> # -----------------------------\n> # PostgreSQL configuration file\n> # -----------------------------\n> max_connections = 100\n> shared_buffers = 16GB\n> work_mem = 32MB\n> maintenance_work_mem = 1GB\n> seq_page_cost = 1.0\n> random_page_cost = 2.0\n> cpu_tuple_cost = 0.03\n> #cpu_index_tuple_cost = 0.005\n> #cpu_operator_cost = 0.0025\n> effective_cache_size = 48MB\n> default_statistics_target = 100\n> constraint_exclusion = partition\n>\n> Partition table Setup\n> ---------------------\n>\n> CREATE TABLE measurement (\n> id bigint not null,\n> city_id bigint not null,\n> logdate date not null,\n> peaktemp bigint,\n> unitsales bigint,\n> type bigint,\n> uuid uuid,\n> geom geometry\n> );\n>\n>\n> CREATE TABLE measurement_y2006m02 (\n> CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )\n> ) INHERITS (measurement);\n> CREATE TABLE measurement_y2006m03 (\n> CHECK ( logdate >= DATE '2006-03-01' AND logdate < DATE '2006-04-01' )\n> ) INHERITS (measurement);\n> ...\n> CREATE TABLE measurement_y2007m11 (\n> CHECK ( logdate >= DATE '2007-11-01' AND logdate < DATE '2007-12-01' )\n> ) INHERITS (measurement);\n> CREATE TABLE measurement_y2007m12 (\n> CHECK ( logdate >= DATE '2007-12-01' AND logdate < DATE '2008-01-01' )\n> ) INHERITS (measurement);\n> CREATE TABLE measurement_y2008m01 (\n> CHECK ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )\n> ) INHERITS (measurement);\n>\n> Partition measurement_y2007m12 contains 38,261,732 rows\n>\n> Indexes on partition measurement_y2007m12:\n> \"pkey_measurement_y2007m12\" PRIMARY KEY, btree (id), tablespace\n> \"measurement_y2007\"\n> \"idx_measurement_uuid_y2003m12\" btree (uuid), tablespace\n> \"measurement_y2007\"\n> 
\"idx_measurement_type_y2003m12\" btree (type), tablespace\n> \"measurement_y2007\"\n> \"idx_measurement_city_y2003m12\" btree (city_id), tablespace\n> \"measurement_y2007\"\n> \"idx_measurement_logdate_y2003m12\" btree (logdate), tablespace\n> \"measurement_y2007\"\n> \"sidx_measurement_geom_y2003m12\" gist (geom), tablespace\n> \"measurement_y2007\"\n>\n> **** Problem Query *** *\n>\n> explain (analyze on, buffers on) Select * from measurement this_\n> where this_.logdate between '2007-12-19\n> 23:38:41.22'::timestamp and '2007-12-20 08:01:04.22'::timestamp\n> and this_.city_id=25183 order by\n> this_.logdate asc, this_.peaktemp asc, this_.unitsales asc limit 10000;\n>\n>\n> QUERY\n> PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=33849.98..33855.15 rows=2068 width=618) (actual\n> time=51710.803..51714.266 rows=10000 loops=1)\n> Buffers: shared hit=25614 read=39417\n> -> Sort (cost=33849.98..33855.15 rows=2068 width=618) (actual\n> time=51710.799..51712.924 rows=10000 loops=1)\n> Sort Key: this_.logdate, this_.unitsales\n> Sort Method: top-N heapsort Memory: 15938kB\n> Buffers: shared hit=25614 read=39417\n> -> Append (cost=0.00..33736.09 rows=2068 width=618) (actual\n> time=50.210..50793.589 rows=312046 loops=1)\n> Buffers: shared hit=25608 read=39417\n> -> Seq Scan on measurement this_ (cost=0.00..0.00 rows=1\n> width=840) (actual time=0.002..0.002 rows=0 loops=1)\n> Filter: ((logdate >= '2007-12-19\n> 23:38:41.22'::timestamp without time zone) AND (logdate <= '2007-12-20\n> 08:01:04.22'::timestamp without time zone) AND (city_id = 25183))\n> -> Index Scan using idx_measurement_city_y2007m12 on\n> measurement_y2007m12 this__1 (cost=0.56..33736.09 rows=2067 width=618)\n> (actual time=50.206..50731.637 rows=312046 loops=1)\n> Index Cond: (city_id = 25183)\n> Filter: ((logdate >= '2007-12-19\n> 23:38:41.22'::timestamp without time zone) AND (logdate <= '2007-12-20\n> 08:01:04.22'::timestamp without time zone))\n> Buffers: shared hit=25608 read=39417\n>\n> Total runtime: *51717.639 ms* <--- *** unacceptable ***\n>\n> (15 rows)\n>\n> Total Rows meeting query criteria\n> ---------------------------------\n>\n> Select count(*) from measurement this_ where this_.logdate between\n> '2007-12-19 23:38:41.22'::timestamp and '2007-12-20 08:01:04.22'::timestamp\n> and this_.city_id=25183;\n>\n> count\n> ------\n> 312046\n>\n> Total Rows in the partition table referenced\n> ------------------------------------------\n>\n> Select count(*) from measurement_y2007m12;\n>\n> count\n> ---------\n> 38261732\n>\n>\n>\n>\n> *Does anyone know how to speed up this query? I removed the order by\n> clause and that significantly reduced the run time to approx. 2000-3000 ms.\n> This query is being recorded repeatedly in our logs and executes very\n> slowly for our UI users from 12000 ms thru 68000 msAny suggestions would be\n> appreciated.*\n>\n\nsort (ORDER BY clause) enforce a reading of complete partitions. And it is\nslow - it is strange so reading 300K rows needs a 5K sec. 
Probably your IO\nis overloaded.\n\nRegards\n\nPavel Stehule\n\n\n>\n> thanks\n>\n\n2014-03-13 20:26 GMT+01:00 <[email protected]>:\nPostgreSQL 9.3.3 RHEL 6.4Total db Server memory 64GB\n# -----------------------------# PostgreSQL configuration file# -----------------------------max_connections = 100shared_buffers = 16GBwork_mem = 32MB                         maintenance_work_mem = 1GB\r\n\r\nseq_page_cost = 1.0                   random_page_cost = 2.0                cpu_tuple_cost = 0.03                  #cpu_index_tuple_cost = 0.005           #cpu_operator_cost = 0.0025             effective_cache_size = 48MB\r\n\r\ndefault_statistics_target = 100       constraint_exclusion = partition Partition table Setup---------------------CREATE TABLE measurement (    id              bigint not null,    city_id         bigint not null,\r\n\r\n    logdate         date not null,    peaktemp        bigint,    unitsales       bigint,    type            bigint,    uuid            uuid,    geom            geometry);CREATE TABLE measurement_y2006m02 (\r\n\r\n    CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )) INHERITS (measurement);CREATE TABLE measurement_y2006m03 (    CHECK ( logdate >= DATE '2006-03-01' AND logdate < DATE '2006-04-01' )\r\n\r\n) INHERITS (measurement);...CREATE TABLE measurement_y2007m11 (    CHECK ( logdate >= DATE '2007-11-01' AND logdate < DATE '2007-12-01' )) INHERITS (measurement);CREATE TABLE measurement_y2007m12 (\r\n\r\n    CHECK ( logdate >= DATE '2007-12-01' AND logdate < DATE '2008-01-01' )) INHERITS (measurement);CREATE TABLE measurement_y2008m01 (    CHECK ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )\r\n\r\n) INHERITS (measurement);Partition measurement_y2007m12 contains 38,261,732 rowsIndexes on partition measurement_y2007m12:    \"pkey_measurement_y2007m12\" PRIMARY KEY, btree (id), tablespace \"measurement_y2007\"\r\n\r\n    \"idx_measurement_uuid_y2003m12\" btree (uuid), tablespace \"measurement_y2007\"    \"idx_measurement_type_y2003m12\" btree (type), tablespace \"measurement_y2007\"    \"idx_measurement_city_y2003m12\" btree (city_id), tablespace \"measurement_y2007\"\r\n\r\n    \"idx_measurement_logdate_y2003m12\" btree (logdate), tablespace \"measurement_y2007\"    \"sidx_measurement_geom_y2003m12\" gist (geom), tablespace \"measurement_y2007\"*** Problem Query *** \nexplain (analyze on, buffers on) Select * from measurement this_                                   where this_.logdate between '2007-12-19 23:38:41.22'::timestamp and '2007-12-20 08:01:04.22'::timestamp\r\n\r\n                                    and this_.city_id=25183 order by this_.logdate asc, this_.peaktemp asc, this_.unitsales asc limit 10000;                                                                                              QUERY PLAN                                                                                               \r\n\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=33849.98..33855.15 rows=2068 width=618) (actual time=51710.803..51714.266 rows=10000 loops=1)\r\n\r\n   Buffers: shared hit=25614 read=39417   ->  Sort  (cost=33849.98..33855.15 rows=2068 width=618) (actual time=51710.799..51712.924 rows=10000 loops=1)         Sort Key: this_.logdate, this_.unitsales         Sort Method: top-N heapsort  Memory: 15938kB\r\n\r\n         Buffers: shared hit=25614 read=39417 
        ->  Append  (cost=0.00..33736.09 rows=2068 width=618) (actual time=50.210..50793.589 rows=312046 loops=1)               Buffers: shared hit=25608 read=39417\r\n\r\n               ->  Seq Scan on measurement this_  (cost=0.00..0.00 rows=1 width=840) (actual time=0.002..0.002 rows=0 loops=1)                     Filter: ((logdate >= '2007-12-19 23:38:41.22'::timestamp without time zone) AND (logdate <= '2007-12-20 08:01:04.22'::timestamp without time zone) AND (city_id = 25183))\r\n\r\n               ->  Index Scan using idx_measurement_city_y2007m12 on measurement_y2007m12 this__1  (cost=0.56..33736.09 rows=2067 width=618) (actual time=50.206..50731.637 rows=312046 loops=1)                     Index Cond: (city_id = 25183)\r\n\r\n                     Filter:  ((logdate >= '2007-12-19 23:38:41.22'::timestamp without time zone) AND (logdate <= '2007-12-20 08:01:04.22'::timestamp without time zone))                     Buffers: shared hit=25608 read=39417\n Total runtime: 51717.639 ms   <--- *** unacceptable ***(15 rows)  Total Rows meeting query criteria---------------------------------Select count(*) from measurement this_ where this_.logdate between '2007-12-19 23:38:41.22'::timestamp and '2007-12-20 08:01:04.22'::timestamp and this_.city_id=25183;\ncount------312046Total Rows in the partition table referenced------------------------------------------Select\r\n count(*) from measurement_y2007m12;  count---------38261732Does anyone know how to speed up this query? I removed the order by clause and that significantly reduced the run time to approx. 2000-3000 ms. This query is being recorded repeatedly \r\n\r\nin our logs and executes very slowly for our UI users from 12000 ms thru 68000 msAny suggestions would be appreciated.sort (ORDER BY clause) enforce a reading of complete partitions. And it is slow - it is strange so reading 300K rows needs a 5K sec. Probably your IO is overloaded.\nRegardsPavel Stehule \nthanks", "msg_date": "Thu, 13 Mar 2014 20:36:15 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Very slow query in PostgreSQL 9.3.3" }, { "msg_contents": "On Thu, Mar 13, 2014 at 12:26 PM, <[email protected]> wrote:\n> *** Problem Query ***\n>\n> explain (analyze on, buffers on) Select * from measurement this_\n> where this_.logdate between '2007-12-19\n> 23:38:41.22'::timestamp and '2007-12-20 08:01:04.22'::timestamp\n> and this_.city_id=25183 order by\n> this_.logdate asc, this_.peaktemp asc, this_.unitsales asc limit 10000;\n>\n[...]\n> Total runtime: 51717.639 ms <--- *** unacceptable ***\n\nTry to create a multi-column index on the partition by (city_id,\nlogdate). Then run the original query and the query without peaktemp\nand nitsales on the order by. Compare the results, and if the first\none will not be satisfying try to add these two columns to the end of\nthe column list of your multi-column index on the order as they appear\nin your query. It should do the trick. 
If it wont, please, show the\nplans.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Thu, 13 Mar 2014 15:12:25 -0700", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Very slow query in PostgreSQL 9.3.3" }, { "msg_contents": "2014-03-14 4:26 GMT+09:00 <[email protected]>:\n> PostgreSQL 9.3.3 RHEL 6.4\n>\n> Total db Server memory 64GB\n\n(...)\n> effective_cache_size = 48MB\n\nI'm not sure if this will help directly, but is the value for\n'effective_cache_size' intentional? 48 *GB* would be a more likely\nsetting.\n\nRegards\n\nIan Barwick\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n", "msg_date": "Fri, 14 Mar 2014 11:19:10 +0900", "msg_from": "Ian Lawrence Barwick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Very slow query in PostgreSQL 9.3.3" } ]
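A concrete sketch of Sergey's suggestion above, using the partition and column names from this thread (the index names are made up for illustration; CONCURRENTLY avoids blocking writes while the index builds):

CREATE INDEX CONCURRENTLY idx_measurement_city_logdate_y2007m12
    ON measurement_y2007m12 (city_id, logdate);

-- and, if that alone does not make the ORDER BY ... LIMIT cheap enough,
-- the extended variant with the remaining sort keys appended:
CREATE INDEX CONCURRENTLY idx_measurement_city_order_y2007m12
    ON measurement_y2007m12 (city_id, logdate, peaktemp, unitsales);

With equality on city_id, the second index returns rows already ordered by (logdate, peaktemp, unitsales), so the planner can stop after the first 10000 rows instead of fetching all 312046 matches and top-N sorting them.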
[ { "msg_contents": "Hello,\n\nI'm having time issues when adding new fields to a big table. I hope you can point me some hints to speed up the updates of a table with 124 million rows...\n\nThis is what I do:\n\t\nFirst I create a tmp_table with the data that will be added to the big table:\n\n\\d+ svm_confidence_id_tmp\n Table \"public.svm_confidence_id_tmp\"\n Column | Type | Modifiers | Storage | Stats target | Description \n---------------+------------------+-----------+---------+--------------+-------------\n id | integer | not null | plain | | \n svmconfidence | double precision | | plain | | \nIndexes:\n \"svm_confidence_id_tmp_pkey\" PRIMARY KEY, btree (id)\n\n\n\nThe table where this data will be added has the svmConfidence field\n\n\\d+ document;\n Table \"public.document\"\n Column | Type | Modifiers | Storage | Stats target | Description \n------------------+--------------------------------+-----------+----------+--------------+-------------\n id | integer | not null | plain | | \n kind | character varying(255) | not null | extended | | \n uid | character varying(255) | not null | extended | | \n sentenceId | character varying(255) | not null | extended | | \n text | text | not null | extended | | \n hepval | double precision | | plain | | \n created | timestamp(0) without time zone | not null | plain | | \n updated | timestamp(0) without time zone | | plain | | \n cardval | double precision | | plain | | \n nephval | double precision | | plain | | \n phosval | double precision | | plain | | \n patternCount | double precision | | plain | | \n ruleScore | double precision | | plain | | \n hepTermNormScore | double precision | | plain | | \n hepTermVarScore | double precision | | plain | | \n svm | double precision | | plain | | \n svmConfidence | double precision | | plain | | \nIndexes:\n \"DocumentOLD_pkey\" PRIMARY KEY, btree (id)\n \"document_cardval_index\" btree (cardval)\n \"document_heptermnorm_index\" btree (\"hepTermNormScore\" DESC NULLS LAST)\n \"document_heptermvar_index\" btree (\"hepTermVarScore\" DESC NULLS LAST)\n \"document_hepval_index\" btree (hepval DESC NULLS LAST)\n \"document_kind_index\" btree (kind)\n \"document_nephval_index\" btree (nephval DESC NULLS LAST)\n \"document_patterncount_index\" btree (\"patternCount\" DESC NULLS LAST)\n \"document_phosval_index\" btree (phosval DESC NULLS LAST)\n \"document_rulescore_index\" btree (\"ruleScore\" DESC NULLS LAST)\n \"document_sentenceid_index\" btree (\"sentenceId\")\n \"document_svm_index\" btree (svm)\n \"document_uid_index\" btree (uid)\n\nThen I update the svmConfidence field of the document table like this:\n\n update document as d set \"svmConfidence\" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;\n\nBut it takes too much time.\nIs there something to take into account? Any hints?\nShould I do it in a different way?\n\nThanks for your time...\n\nAndrés\n**NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n**CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. 
Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Mar 2014 12:30:03 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Adding new field to big table" }, { "msg_contents": "On Fri, Mar 14, 2014 at 4:30 AM, acanada <[email protected]> wrote:\n\n> Hello,\n>\n> I'm having time issues when adding new fields to a big table. I hope you\n> can point me some hints to speed up the updates of a table with 124 million\n> rows...\n>\n> This is what I do:\n>\n> First I create a tmp_table with the data that will be added to the big\n> table:\n>\n> \\d+ svm_confidence_id_tmp\n> Table \"public.svm_confidence_id_tmp\"\n> Column | Type | Modifiers | Storage | Stats target |\n> Description\n>\n> ---------------+------------------+-----------+---------+--------------+-------------\n> id | integer | not null | plain | |\n> svmconfidence | double precision | | plain | |\n> Indexes:\n> \"svm_confidence_id_tmp_pkey\" PRIMARY KEY, btree (id)\n>\n>\n>\n....\n\n\n\n> Then I update the svmConfidence field of the document table like this:\n>\n> update document as d set \"svmConfidence\" = st.svmconfidence from\n> svm_confidence_id_tmp as st where st.id = d.id;\n>\n> But it takes too much time.\n> Is there something to take into account? Any hints?\n> Should I do it in a different way?\n>\n\nIf your concern is how much time it has the rows locked for, you can break\nit into a series of shorter transactions:\n\nwith t as (delete from svm_confidence_id_tmp where id in (select id from\nsvm_confidence_id_tmp limit 10000) returning * )\nupdate document as d set \"svmConfidence\" = t.svmconfidence from t where t.id\n=d.id;\n\nCheers,\n\nJeff\n\nOn Fri, Mar 14, 2014 at 4:30 AM, acanada <[email protected]> wrote:\nHello,\n\nI'm having time issues when adding new fields to a big table. I hope you can point me some hints to speed up the updates of a table with 124 million rows...\n\nThis is what I do:\n\nFirst I create a tmp_table with the data that will be added to the big table:\n\n\\d+ svm_confidence_id_tmp\n                        Table \"public.svm_confidence_id_tmp\"\n    Column     |       Type       | Modifiers | Storage | Stats target | Description\n---------------+------------------+-----------+---------+--------------+-------------\n id            | integer          | not null  | plain   |              |\n svmconfidence | double precision |           | plain   |              |\nIndexes:\n    \"svm_confidence_id_tmp_pkey\" PRIMARY KEY, btree (id)\n\n....\n\nThen I update the svmConfidence field of the document table like this:\n\n update document as d set \"svmConfidence\" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;\n\nBut it takes too much time.\nIs there something to take into account? 
Any hints?\nShould I do it in a different way?If your concern is how much time it has the rows locked for, you can break it into a series of shorter transactions:with t as (delete from svm_confidence_id_tmp where id in (select id from svm_confidence_id_tmp limit 10000) returning * )\nupdate document as d set \"svmConfidence\" = t.svmconfidence from t where t.id=d.id;Cheers,\nJeff", "msg_date": "Fri, 14 Mar 2014 09:49:00 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding new field to big table" }, { "msg_contents": "Hello Jeff,\n\nThe lock time is not a problem. The problem is that takes too much time. I will need to add more fields to this table in the near future and I'd like to know if the process can be accelerated by any parameter, workaround or whatever...\n\nThank you for your answer.\n\nCheers,\nAndrés\n\nEl Mar 14, 2014, a las 5:49 PM, Jeff Janes escribió:\n\n> On Fri, Mar 14, 2014 at 4:30 AM, acanada <[email protected]> wrote:\n> Hello,\n> \n> I'm having time issues when adding new fields to a big table. I hope you can point me some hints to speed up the updates of a table with 124 million rows...\n> \n> This is what I do:\n> \n> First I create a tmp_table with the data that will be added to the big table:\n> \n> \\d+ svm_confidence_id_tmp\n> Table \"public.svm_confidence_id_tmp\"\n> Column | Type | Modifiers | Storage | Stats target | Description\n> ---------------+------------------+-----------+---------+--------------+-------------\n> id | integer | not null | plain | |\n> svmconfidence | double precision | | plain | |\n> Indexes:\n> \"svm_confidence_id_tmp_pkey\" PRIMARY KEY, btree (id)\n> \n> \n> \n> ....\n> \n> \n> \n> Then I update the svmConfidence field of the document table like this:\n> \n> update document as d set \"svmConfidence\" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;\n> \n> But it takes too much time.\n> Is there something to take into account? Any hints?\n> Should I do it in a different way?\n> \n> If your concern is how much time it has the rows locked for, you can break it into a series of shorter transactions:\n> \n> with t as (delete from svm_confidence_id_tmp where id in (select id from svm_confidence_id_tmp limit 10000) returning * )\n> update document as d set \"svmConfidence\" = t.svmconfidence from t where t.id=d.id;\n> \n> Cheers,\n> \n> Jeff\n\n\n\n**NOTA DE CONFIDENCIALIDAD** Este correo electrÿnico, y en su caso los ficheros adjuntos, pueden contener informaciÿn protegida para el uso exclusivo de su destinatario. Se prohÿbe la distribuciÿn, reproducciÿn o cualquier otro tipo de transmisiÿn por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.\n**CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.\n\n\n\nHello Jeff,The lock time is not a problem. The problem is that takes too much time. 
I will need to add more fields to this table in the near future and I'd like to know if the process can be accelerated by any parameter, workaround or whatever...Thank you for your answer.Cheers,AndrésEl Mar 14, 2014, a las 5:49 PM, Jeff Janes escribió:On Fri, Mar 14, 2014 at 4:30 AM, acanada <[email protected]> wrote:\nHello,\n\nI'm having time issues when adding new fields to a big table. I hope you can point me some hints to speed up the updates of a table with 124 million rows...\n\nThis is what I do:\n\nFirst I create a tmp_table with the data that will be added to the big table:\n\n\\d+ svm_confidence_id_tmp\n                        Table \"public.svm_confidence_id_tmp\"\n    Column     |       Type       | Modifiers | Storage | Stats target | Description\n---------------+------------------+-----------+---------+--------------+-------------\n id            | integer          | not null  | plain   |              |\n svmconfidence | double precision |           | plain   |              |\nIndexes:\n    \"svm_confidence_id_tmp_pkey\" PRIMARY KEY, btree (id)\n\n....\n\nThen I update the svmConfidence field of the document table like this:\n\n update document as d set \"svmConfidence\" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;\n\nBut it takes too much time.\nIs there something to take into account? Any hints?\nShould I do it in a different way?If your concern is how much time it has the rows locked for, you can break it into a series of shorter transactions:with t as (delete from svm_confidence_id_tmp where id in (select id from svm_confidence_id_tmp limit 10000) returning * )\nupdate document as d set \"svmConfidence\" = t.svmconfidence from t where t.id=d.id;Cheers,\nJeff\n\n**NOTA DE CONFIDENCIALIDAD** Este correo electrónico, y en su caso los ficheros adjuntos, pueden contener información protegida para el uso exclusivo de su destinatario. Se prohíbe la distribución, reproducción o cualquier otro tipo de transmisión por parte de otra persona que no sea el destinatario. Si usted recibe por error este correo, se ruega comunicarlo al remitente y borrar el mensaje recibido.**CONFIDENTIALITY NOTICE** This email communication and any attachments may contain confidential and privileged information for the sole use of the designated recipient named above. Distribution, reproduction or any other use of this transmission by any party other than the intended recipient is prohibited. If you are not the intended recipient please contact the sender and delete all copies.", "msg_date": "Fri, 14 Mar 2014 18:06:10 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding new field to big table" }, { "msg_contents": "On Fri, Mar 14, 2014 at 10:06 AM, acanada <[email protected]> wrote:\n\n> Hello Jeff,\n>\n> The lock time is not a problem. The problem is that takes too much time. I\n> will need to add more fields to this table in the near future and I'd like\n> to know if the process can be accelerated by any parameter, workaround or\n> whatever...\n>\n> Thank you for your answer.\n>\n\nOK. 
Can you provide an explain (analyze, buffers), and the other\ninformation described here:\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nIt may be faster to make a new table by selecting a join on the existing\ntables and then replace the master table with it.\n\nAlso, if you are going to be doing a lot of bulk updates like this,\nlowering the fillfactor to below 50% might be helpful.\n\nCheers,\n\nJeff\n", "msg_date": "Fri, 14 Mar 2014 12:29:07 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding new field to big table" }, { "msg_contents": "Hello,\n\nJeff and Jeffrey, thank you for your tips.\nThis is the explain of the query:\nx=> explain update document as d set \"svmConfidence\" = st.svmconfidence from svm_confidence_id_tmp as st where st.id = d.id;\n                                               QUERY PLAN                                                \n---------------------------------------------------------------------------------------------------------\n Update on document d  (cost=4204242.82..61669685.86 rows=124515592 width=284)\n   ->  Hash Join  (cost=4204242.82..61669685.86 rows=124515592 width=284)\n         Hash Cond: (d.id = st.id)\n         ->  Seq Scan on document d  (cost=0.00..8579122.97 rows=203066697 width=270)\n         ->  Hash  (cost=1918213.92..1918213.92 rows=124515592 width=18)\n               ->  Seq Scan on svm_confidence_id_tmp st  (cost=0.00..1918213.92 rows=124515592 width=18)\n(6 rows)\n\nIt's not using the index; most of the rows are being updated.\nI'm trying with the CTAS solution.\n\nCheers,\nAndrés.\n\nEl Mar 14, 2014, a las 8:29 PM, Jeff Janes escribió:\n\n> [...]\n> It may be faster to make a new table by selecting a join on the existing tables and then replace the master table with it.\n> \n> Also, if you are going to be doing a lot of bulk updates like this, lowering the fillfactor to below 50% might be helpful.\n> \n> Cheers,\n> \n> Jeff\n", "msg_date": "Mon, 17 Mar 2014 10:39:09 +0100", "msg_from": "\"acanada\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding new field to big table" }, { "msg_contents": "On Monday, March 17, 2014, acanada <[email protected]> wrote:\n\n> Hello,\n>\n> Jeff and Jeffrey, thank you for your tips.\n> This is the explain of the query:\n> x=> explain update document as d set \"svmConfidence\" = st.svmconfidence\n> from svm_confidence_id_tmp as st where st.id = d.id;\n> [...]\n>\n> It's not using the index; most of the rows are being updated.\n> I'm trying with the CTAS solution.\n>\n\nOnce this hash join spills to disk, the performance is going to get very\nbad.  The problem is that the outer table is going to get split into\nbatches and written to disk.  If this were just a select, that would not be\na problem because when it reads each batch back in, that is all it needs to\ndo as the temp file contains all the necessary info.  But with an update,\nfor each batch that it reads back in and matches to the inner side, it then\nneeds to go back to the physical table to do the update, using the ctid saved\nin the batches to find the table tuple.  So in effect this adds a nested\nloop from the hashed copy of the table to the real copy, and the locality\nof reference between those is poor when there are many batches.  I'm pretty\nsure that the extra cost of doing this look-up is not taken into account by\nthe planner.  But, if it chooses a different plan than a hash join, that\nother plan might also have the same problem.\n\nSome things for you to consider, other than CTAS:\n\n1) Are you analyzing your temporary table before you do the update?  That\nmight switch it to a different plan.\n\n2) Make work_mem as large as you can stand, just for the one session that\nruns the update, to try to avoid spilling to disk.\n\n3) If you set enable_hashjoin off temporarily in this session, what plan do\nyou get?\n\n0) Why are you creating the temporary table?  You must have some process\nthat comes up with the value for the new column to put in the temporary\ntable, why not just stick it directly into the original table?\n\nSome things for the PostgreSQL hackers to consider:\n\n1) When the hash spills to disk, it seems to write to disk the entire row\nthat is going to be updated (except for the one column which is going to be\noverwritten) plus that tuple's ctid.  It doesn't seem like this is\nnecessary, it should only need to write the ctid and the join key (and\nperhaps any quals?).  Since it has to visit the old row anyway to set its\ncmax, it can pull out the rest of the data to make the new tuple while it\nis there.  If it wrote a lot less data to the temp tables it could make a\nlot fewer batches for the same work_mem, and here the cost is directly\nproportional to the number of batches. 
(Also, for the table not being\nupdated, it writes the ctid to temp space when there seems to be no use for\nit.)\n\n2) Should the planner account for the scattered reads needed to join to the\noriginal table on ctid for update from whatever materialized version of the\ntable is created?  Of course all other plans would also need to be\nsimilarly instrumented.  I think most of them would have the same problem\nas the hash join.  The one type I can think of that doesn't would be a\nmerge join in which there is strong correlation between the merge key and\nthe ctid order on the table to be updated.\n\n3) It seems like the truly efficient way to run such an update on a very\nlarge data set would be to join the two tables (hash or merge), then sort the\nresult on the ctid of the updated table, then do a \"merge join\" between\nthat sorted result and the physical table.  I don't think such a method\ncurrently is known to the planner, is it?  How hard would it be to make it?\n\nThe example I have been using is:\n\nalter table pgbench_accounts add filler2 text;\ncreate table foo as select aid, md5(aid::text) from pgbench_accounts;\nanalyze;\nexplain (verbose) update pgbench_accounts set filler2 =md5 from foo where\npgbench_accounts.aid=foo.aid;\n\nCheers,\n\nJeff\n", "msg_date": "Mon, 17 Mar 2014 23:04:23 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding new field to big table" }, { "msg_contents": "Jeff, I think adding the new table is the best way to handle this issue.\n\n\nFrom: Jeff Janes <[email protected]>\nTo: acanada <[email protected]>\nCc: postgres performance list <[email protected]>\nDate: 03/18/2014 02:05 AM\nSubject: Re: [PERFORM] Adding new field to big table\nSent by: [email protected]\n\n[...]\n", "msg_date": "Wed, 23 Apr 2014 13:59:07 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Adding new field to big table" } ]
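A minimal sketch of the CTAS approach discussed in this thread, spelled out against the \d+ document schema shown above (the document_new/document_old names are illustrative; the renames take a brief exclusive lock, and the primary key plus the other indexes must be recreated on the new table afterwards):

BEGIN;
CREATE TABLE document_new AS
SELECT d.id, d.kind, d.uid, d."sentenceId", d.text, d.hepval, d.created,
       d.updated, d.cardval, d.nephval, d.phosval, d."patternCount",
       d."ruleScore", d."hepTermNormScore", d."hepTermVarScore", d.svm,
       st.svmconfidence AS "svmConfidence"
FROM document d
LEFT JOIN svm_confidence_id_tmp st ON st.id = d.id;  -- LEFT JOIN keeps rows with no new value
ALTER TABLE document RENAME TO document_old;
ALTER TABLE document_new RENAME TO document;
COMMIT;
-- then recreate the indexes, for example:
-- ALTER TABLE document ADD CONSTRAINT document_pkey PRIMARY KEY (id);
-- CREATE INDEX document_hepval_index ON document (hepval DESC NULLS LAST);

Writing a fresh table this way is mostly sequential I/O and leaves no dead tuples behind, which is why it can beat an in-place UPDATE that rewrites most of a 124-million-row table.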
[ { "msg_contents": "In PostgreSQL 9.3.3 Documentation 11.8. Partial Indexes Example 11-2\n(http://www.postgresql.org/docs/9.3/interactive/indexes-partial.html),\nthe partial index is created\n\nCREATE INDEX orders_unbilled_index ON orders (order_nr) WHERE billed\nis not true;\n\nAnd the suggested use mode is\n\nSELECT * FROM orders WHERE billed is not true AND order_nr < 10000;\n\nMy question is after an update to the billed column is done, will PG\nautomatically add or remove records whose billed are just set to false\nor true to/from the b-tree?\n\nThanks in advance.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Mar 2014 14:26:27 -0700", "msg_from": "Yu Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "question about partial index" }, { "msg_contents": "On 18 March 2014 22:26, Yu Zhao <[email protected]> wrote:\n\n> In PostgreSQL 9.3.3 Documentation 11.8. Partial Indexes Example 11-2\n> (http://www.postgresql.org/docs/9.3/interactive/indexes-partial.html),\n> the partial index is created\n>\n> CREATE INDEX orders_unbilled_index ON orders (order_nr) WHERE billed\n> is not true;\n>\n> And the suggested use mode is\n>\n> SELECT * FROM orders WHERE billed is not true AND order_nr < 10000;\n>\n> My question is after an update to the billed column is done, will PG\n> automatically add or remove records whose billed are just set to false\n> or true to/from the b-tree?\n>\n> Thanks in advance.\n>\n\n\nHi,\nthe short answer is: yes, it will work as you expect.\n\nThe long answer is: no, it will not simply add/remove because postgres\nkeeps many different versions of the same row, so when you change the\ncolumn from false to true, the new row version will be added to the index,\nwhen you change from true to false, the previous rows will be still stored\nin the index as well, because there could be some older transaction which\nshould see some older version of the row.\n\nThe mechanism is quite internal, and you shouldn't bother. As a database\nuser you should just see, that the index is updated automatically, and it\nwill store all rows where billed = true.\n\nregards,\nSzymon\n\nOn 18 March 2014 22:26, Yu Zhao <[email protected]> wrote:\nIn PostgreSQL 9.3.3 Documentation 11.8. Partial Indexes Example 11-2\n(http://www.postgresql.org/docs/9.3/interactive/indexes-partial.html),\nthe partial index is created\n\nCREATE INDEX orders_unbilled_index ON orders (order_nr) WHERE billed\nis not true;\n\nAnd the suggested use mode is\n\nSELECT * FROM orders WHERE billed is not true AND order_nr < 10000;\n\nMy question is after an update to the billed column is done, will PG\nautomatically add or remove records whose billed are just set to false\nor true to/from the b-tree?\n\nThanks in advance.\nHi,the short answer is: yes, it will work as you expect.The long answer is: no, it will not simply add/remove because postgres keeps many different versions of the same row, so when you change the column from false to true, the new row version will be added to the index, when you change from true to false, the previous rows will be still stored in the index as well, because there could be some older transaction which should see some older version of the row.\nThe mechanism is quite internal, and you shouldn't bother. 
As a database user you should just see, that the index is updated automatically, and it will store all rows where billed = true.\nregards,Szymon", "msg_date": "Tue, 18 Mar 2014 22:51:26 +0100", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about partial index" } ]
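A small experiment makes Szymon's answer easy to verify (a sketch only; the row counts are invented for illustration):

CREATE TABLE orders (order_nr integer PRIMARY KEY, billed boolean);
INSERT INTO orders
    SELECT g, g % 100 <> 0 FROM generate_series(1, 100000) AS g;
CREATE INDEX orders_unbilled_index ON orders (order_nr) WHERE billed IS NOT TRUE;
ANALYZE orders;

UPDATE orders SET billed = true  WHERE order_nr = 500;  -- row leaves the indexed set
UPDATE orders SET billed = false WHERE order_nr = 501;  -- row enters the indexed set

EXPLAIN SELECT * FROM orders WHERE billed IS NOT TRUE AND order_nr < 10000;

The plan should still show an Index Scan using orders_unbilled_index, order_nr 500 should no longer be returned, and 501 should be, with no manual index maintenance; the superseded row versions are only removed physically by VACUUM.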
[ { "msg_contents": "Hello, \n\nwe have 3 servers with postgresql 9.3.3. One is master and two slaves.\nWe run synchronous_replication and fsync, synchronous_commit and full_page_writes are on.\n\nSuddenly master hang up with hardware failure, it is a strange bug in iLo which we investigate with HP.\n\nBefore master was rebooted, i ran ps aux on slave\npostgres: wal receiver process streaming 12/F1031DF8\n\nLast messages in slaves logs was\n2014-03-19 02:41:29.005 GMT,,,7389,,53108c69.1cdd,16029,,2014-02-28 13:17:29 GMT,,0,LOG,00000,\"recovery restart point at 12/DFFBB3E8\",\"last completed transaction was at log time 2014-03-19 02:41:28.886869+00\",,,,,,,,\"\"\n\nand then there was silence, because master hang.\n\nThen master was rebooted and slave wrote in log\n2014-03-19 15:36:39.176 GMT,,,7392,,53108c69.1ce0,2,,2014-02-28 13:17:29 GMT,,0,FATAL,XX000,\"terminating walreceiver due to timeout\",,,,,,,,,\"\"\n2014-03-19 15:36:39.177 GMT,,,7388,,53108c69.1cdc,6,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"record with zero length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:36:57.181 GMT,,,12100,,5329b996.2f44,1,,2014-03-19 15:36:54 GMT,,0,FATAL,XX000,\"could not connect to the primary server: could not connect to server: No route to host\n Is the server running on host \"\"10.162.2.50\"\" and accepting\n TCP/IP connections on port 5432?\n\",,,,,,,,,\"\"\n\nThen master finally came back, slave wrote\n2014-03-19 15:40:09.389 GMT,,,13121,,5329ba59.3341,1,,2014-03-19 15:40:09 GMT,,0,FATAL,XX000,\"could not connect to the primary server: FATAL: the database system is starting up\n\",,,,,,,,,\"\"\n2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,1,,2014-03-19 15:40:14 GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on timeline 1\",,,,,,,,,\"\"\n2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,2,,2014-03-19 15:40:14 GMT,,0,FATAL,XX000,\"could not receive data from WAL stream: ERROR: requested starting point 12/F1000000 is ahead of the WAL flush position of this server 12/F0FFFCE8\n\",,,,,,,,,\"\"\n\nlast message was repeated several times\nand then this happened\n\n2014-03-19 15:42:04.623 GMT,,,13722,,5329bacc.359a,1,,2014-03-19 15:42:04 GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on timeline 1\",,,,,,,,,\"\"\n2014-03-19 15:42:04.628 GMT,,,7388,,53108c69.1cdc,7,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:42:04.628 GMT,,,13722,,5329bacc.359a,2,,2014-03-19 15:42:04 GMT,,0,FATAL,57P01,\"terminating walreceiver process due to administrator command\",,,,,,,,,\"\"\n2014-03-19 15:42:09.628 GMT,,,7388,,53108c69.1cdc,8,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:42:14.628 GMT,,,7388,,53108c69.1cdc,9,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:42:19.628 GMT,,,7388,,53108c69.1cdc,10,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,”\" \nand it just repeats forever.\n\n\nMeanwhile on master\n2014-03-19 15:39:30.957 GMT,,,7115,,5329ba32.1bcb,2,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"database system was not properly shut down; automatic recovery in progress\",,,,,,,,,\"\"\n2014-03-19 15:39:30.989 GMT,,,7115,,5329ba32.1bcb,3,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n2014-03-19 15:39:47.114 GMT,,,7115,,5329ba32.1bcb,4,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"redo done at 
12/F0FFFC38\",,,,,,,,,\"\"\n2014-03-19 15:39:47.114 GMT,,,7115,,5329ba32.1bcb,5,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"last completed transaction was at log time 2014-03-19 05:02:29.273138+00\",,,,,,,,,\"\"\n2014-03-19 15:39:47.115 GMT,,,7115,,5329ba32.1bcb,6,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"checkpoint starting: end-of-recovery immediate\",,,,,,,,,\"\"\n2014-03-19 15:40:16.466 GMT,\"replicator\",\"\",7986,\"10.162.2.52:44336\",5329ba5e.1f32,1,\"idle\",2014-03-19 15:40:14 GMT,2/0,0,ERROR,XX000,\"requested starting point 12/F1000000 is ahead of the WAL flush position of this server 12/F0FFFCE8\",,,,,,,,,\"walreceiver\"\n\nSo, all two slaves are disconnected from master, which somehow is past his slaves.\n\nI decided to promote one of the slaves, so we can have some snapshot of the data.\nrelevant logs from this are \n2014-03-19 16:50:43.118 GMT,,,4444,,5329cae3.115c,3,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n2014-03-19 16:50:50.028 GMT,\"dboperator\",\"postgres\",4452,\"[local]\",5329caea.1164,1,\"\",2014-03-19 16:50:50 GMT,,0,FATAL,57P03,\"the database system is starting up\",,,,,,,,,\"\"\n2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,4,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"invalid contrecord length 5736 at 12/F0FFFC80\",,,,,,,,,\"\"\n2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,5,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"redo done at 12/F0FFFC38\",,,,,,,,,””\n\nIt is interesting that redo done at 12/F0FFFC38 both on master and promoted slave.\n\nThe main question is there is actual latest data, and how is it possible that master is behind his slave in synchronous replication.\n\nThanks for the help.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Mar 2014 20:10:22 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": true, "msg_subject": "slave wal is ahead of master" }, { "msg_contents": "fsync\n... taking more knowledge around this will shed some light to understand\nthis problem \"slave ahead of master\"\n\n\"there was silence, because master hang.\"\n... replication halted here, master holds the latest copy which is missing\nat both the slaves\n\n\"I decided to promote one of the slaves\"\n... only 2 slaves are left, and one among them is going to be the master\nnow, in master/slave nomenclature the data from master is considered as the\nvalid one from this point onward\n\n\"master is behind his slave\"\n... you mentioned that the original master comes up in the mean time one of\nthe slave was already a master\n\n\nOn Wed, Mar 19, 2014 at 10:40 PM, Evgeniy Shishkin <[email protected]>wrote:\n\n> Hello,\n>\n> we have 3 servers with postgresql 9.3.3. 
One is master and two slaves.\n> We run synchronous_replication and fsync, synchronous_commit and\n> full_page_writes are on.\n>\n> Suddenly master hang up with hardware failure, it is a strange bug in iLo\n> which we investigate with HP.\n>\n> Before master was rebooted, i ran ps aux on slave\n> postgres: wal receiver process streaming 12/F1031DF8\n>\n> Last messages in slaves logs was\n> 2014-03-19 02:41:29.005 GMT,,,7389,,53108c69.1cdd,16029,,2014-02-28\n> 13:17:29 GMT,,0,LOG,00000,\"recovery restart point at 12/DFFBB3E8\",\"last\n> completed transaction was at log time 2014-03-19\n> 02:41:28.886869+00\",,,,,,,,\"\"\n>\n> and then there was silence, because master hang.\n>\n> Then master was rebooted and slave wrote in log\n> 2014-03-19 15:36:39.176 GMT,,,7392,,53108c69.1ce0,2,,2014-02-28 13:17:29\n> GMT,,0,FATAL,XX000,\"terminating walreceiver due to timeout\",,,,,,,,,\"\"\n> 2014-03-19 15:36:39.177 GMT,,,7388,,53108c69.1cdc,6,,2014-02-28 13:17:29\n> GMT,1/0,0,LOG,00000,\"record with zero length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:36:57.181 GMT,,,12100,,5329b996.2f44,1,,2014-03-19 15:36:54\n> GMT,,0,FATAL,XX000,\"could not connect to the primary server: could not\n> connect to server: No route to host\n> Is the server running on host \"\"10.162.2.50\"\" and accepting\n> TCP/IP connections on port 5432?\n> \",,,,,,,,,\"\"\n>\n> Then master finally came back, slave wrote\n> 2014-03-19 15:40:09.389 GMT,,,13121,,5329ba59.3341,1,,2014-03-19 15:40:09\n> GMT,,0,FATAL,XX000,\"could not connect to the primary server: FATAL: the\n> database system is starting up\n> \",,,,,,,,,\"\"\n> 2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,1,,2014-03-19 15:40:14\n> GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on\n> timeline 1\",,,,,,,,,\"\"\n> 2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,2,,2014-03-19 15:40:14\n> GMT,,0,FATAL,XX000,\"could not receive data from WAL stream: ERROR:\n> requested starting point 12/F1000000 is ahead of the WAL flush position of\n> this server 12/F0FFFCE8\n> \",,,,,,,,,\"\"\n>\n> last message was repeated several times\n> and then this happened\n>\n> 2014-03-19 15:42:04.623 GMT,,,13722,,5329bacc.359a,1,,2014-03-19 15:42:04\n> GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on\n> timeline 1\",,,,,,,,,\"\"\n> 2014-03-19 15:42:04.628 GMT,,,7388,,53108c69.1cdc,7,,2014-02-28 13:17:29\n> GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:42:04.628 GMT,,,13722,,5329bacc.359a,2,,2014-03-19 15:42:04\n> GMT,,0,FATAL,57P01,\"terminating walreceiver process due to administrator\n> command\",,,,,,,,,\"\"\n> 2014-03-19 15:42:09.628 GMT,,,7388,,53108c69.1cdc,8,,2014-02-28 13:17:29\n> GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:42:14.628 GMT,,,7388,,53108c69.1cdc,9,,2014-02-28 13:17:29\n> GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:42:19.628 GMT,,,7388,,53108c69.1cdc,10,,2014-02-28 13:17:29\n> GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n> and it just repeats forever.\n>\n>\n> Meanwhile on master\n> 2014-03-19 15:39:30.957 GMT,,,7115,,5329ba32.1bcb,2,,2014-03-19 15:39:30\n> GMT,,0,LOG,00000,\"database system was not properly shut down; automatic\n> recovery in progress\",,,,,,,,,\"\"\n> 2014-03-19 15:39:30.989 GMT,,,7115,,5329ba32.1bcb,3,,2014-03-19 15:39:30\n> GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n> 2014-03-19 15:39:47.114 
GMT,,,7115,,5329ba32.1bcb,4,,2014-03-19 15:39:30\n> GMT,,0,LOG,00000,\"redo done at 12/F0FFFC38\",,,,,,,,,\"\"\n> 2014-03-19 15:39:47.114 GMT,,,7115,,5329ba32.1bcb,5,,2014-03-19 15:39:30\n> GMT,,0,LOG,00000,\"last completed transaction was at log time 2014-03-19\n> 05:02:29.273138+00\",,,,,,,,,\"\"\n> 2014-03-19 15:39:47.115 GMT,,,7115,,5329ba32.1bcb,6,,2014-03-19 15:39:30\n> GMT,,0,LOG,00000,\"checkpoint starting: end-of-recovery immediate\",,,,,,,,,\"\"\n> 2014-03-19 15:40:16.466 GMT,\"replicator\",\"\",7986,\"10.162.2.52:44336\",5329ba5e.1f32,1,\"idle\",2014-03-19\n> 15:40:14 GMT,2/0,0,ERROR,XX000,\"requested starting point 12/F1000000 is\n> ahead of the WAL flush position of this server\n> 12/F0FFFCE8\",,,,,,,,,\"walreceiver\"\n>\n> So, all two slaves are disconnected from master, which somehow is past his\n> slaves.\n>\n> I decided to promote one of the slaves, so we can have some snapshot of\n> the data.\n> relevant logs from this are\n> 2014-03-19 16:50:43.118 GMT,,,4444,,5329cae3.115c,3,,2014-03-19 16:50:43\n> GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n> 2014-03-19 16:50:50.028\n> GMT,\"dboperator\",\"postgres\",4452,\"[local]\",5329caea.1164,1,\"\",2014-03-19\n> 16:50:50 GMT,,0,FATAL,57P03,\"the database system is starting up\",,,,,,,,,\"\"\n> 2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,4,,2014-03-19 16:50:43\n> GMT,,0,LOG,00000,\"invalid contrecord length 5736 at 12/F0FFFC80\",,,,,,,,,\"\"\n> 2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,5,,2014-03-19 16:50:43\n> GMT,,0,LOG,00000,\"redo done at 12/F0FFFC38\",,,,,,,,,\"\"\n>\n> It is interesting that redo done at 12/F0FFFC38 both on master and\n> promoted slave.\n>\n> The main question is there is actual latest data, and how is it possible\n> that master is behind his slave in synchronous replication.\n>\n> Thanks for the help.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 19 Mar 2014 23:41:54 +0530", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slave wal is ahead of master" }, { "msg_contents": "On 19 Mar 2014, at 21:11, Sethu Prasad <[email protected]> wrote:\n\n> fsync\n> ... taking more knowledge around this will shed some light to understand this problem \"slave ahead of master\"\n> \n> \"there was silence, because master hang.\"\n> ... replication halted here, master holds the latest copy which is missing at both the slaves\n> \n\nThe master hung in a very strange way. I could ssh to it and see dmesg, but could not run any other command. Also, the tcp connections to the slaves were still alive.\nSo we can't say that the slaves did not receive data.\n\n\n> \"I decided to promote one of the slaves\"\n> ... only 2 slaves are left, and one among them is going to be the master now, in master/slave nomenclature the data from master is considered as the valid one from this point onward\n\nThere was no failover procedure. The slaves stayed slaves.\n\n\n> \n> \"master is behind his slave\"\n> ... you mentioned that the original master comes up in the mean time one of the slave was already a master\n> \n\nNo, after we rebooted the master, the slaves did not reconnect to it.\nLater I stopped replication on one of the slaves to preserve its data state.\n\n\nSo the main question is: under which circumstances can slaves fail to reconnect to the master, with an error saying the master is behind, when fsync is on and the synchronous* settings are on?\n\n> \n> On Wed, Mar 19, 2014 at 10:40 PM, Evgeniy Shishkin <[email protected]> wrote:\n> Hello,\n>\n> we have 3 servers with postgresql 9.3.3. 

One is master and two slaves.\n> We run synchronous_replication and fsync, synchronous_commit and full_page_writes are on.\n> \n> Suddenly master hang up with hardware failure, it is a strange bug in iLo which we investigate with HP.\n> \n> Before master was rebooted, i ran ps aux on slave\n> postgres: wal receiver process streaming 12/F1031DF8\n> \n> Last messages in slaves logs was\n> 2014-03-19 02:41:29.005 GMT,,,7389,,53108c69.1cdd,16029,,2014-02-28 13:17:29 GMT,,0,LOG,00000,\"recovery restart point at 12/DFFBB3E8\",\"last completed transaction was at log time 2014-03-19 02:41:28.886869+00\",,,,,,,,\"\"\n> \n> and then there was silence, because master hang.\n> \n> Then master was rebooted and slave wrote in log\n> 2014-03-19 15:36:39.176 GMT,,,7392,,53108c69.1ce0,2,,2014-02-28 13:17:29 GMT,,0,FATAL,XX000,\"terminating walreceiver due to timeout\",,,,,,,,,\"\"\n> 2014-03-19 15:36:39.177 GMT,,,7388,,53108c69.1cdc,6,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"record with zero length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:36:57.181 GMT,,,12100,,5329b996.2f44,1,,2014-03-19 15:36:54 GMT,,0,FATAL,XX000,\"could not connect to the primary server: could not connect to server: No route to host\n> Is the server running on host \"\"10.162.2.50\"\" and accepting\n> TCP/IP connections on port 5432?\n> \",,,,,,,,,\"\"\n> \n> Then master finally came back, slave wrote\n> 2014-03-19 15:40:09.389 GMT,,,13121,,5329ba59.3341,1,,2014-03-19 15:40:09 GMT,,0,FATAL,XX000,\"could not connect to the primary server: FATAL: the database system is starting up\n> \",,,,,,,,,\"\"\n> 2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,1,,2014-03-19 15:40:14 GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on timeline 1\",,,,,,,,,\"\"\n> 2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,2,,2014-03-19 15:40:14 GMT,,0,FATAL,XX000,\"could not receive data from WAL stream: ERROR: requested starting point 12/F1000000 is ahead of the WAL flush position of this server 12/F0FFFCE8\n> \",,,,,,,,,\"\"\n> \n> last message was repeated several times\n> and then this happened\n> \n> 2014-03-19 15:42:04.623 GMT,,,13722,,5329bacc.359a,1,,2014-03-19 15:42:04 GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on timeline 1\",,,,,,,,,\"\"\n> 2014-03-19 15:42:04.628 GMT,,,7388,,53108c69.1cdc,7,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:42:04.628 GMT,,,13722,,5329bacc.359a,2,,2014-03-19 15:42:04 GMT,,0,FATAL,57P01,\"terminating walreceiver process due to administrator command\",,,,,,,,,\"\"\n> 2014-03-19 15:42:09.628 GMT,,,7388,,53108c69.1cdc,8,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:42:14.628 GMT,,,7388,,53108c69.1cdc,9,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n> 2014-03-19 15:42:19.628 GMT,,,7388,,53108c69.1cdc,10,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,”\"\n> and it just repeats forever.\n> \n> \n> Meanwhile on master\n> 2014-03-19 15:39:30.957 GMT,,,7115,,5329ba32.1bcb,2,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"database system was not properly shut down; automatic recovery in progress\",,,,,,,,,\"\"\n> 2014-03-19 15:39:30.989 GMT,,,7115,,5329ba32.1bcb,3,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n> 2014-03-19 15:39:47.114 GMT,,,7115,,5329ba32.1bcb,4,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"redo done at 
12/F0FFFC38\",,,,,,,,,\"\"\n> 2014-03-19 15:39:47.114 GMT,,,7115,,5329ba32.1bcb,5,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"last completed transaction was at log time 2014-03-19 05:02:29.273138+00\",,,,,,,,,\"\"\n> 2014-03-19 15:39:47.115 GMT,,,7115,,5329ba32.1bcb,6,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"checkpoint starting: end-of-recovery immediate\",,,,,,,,,\"\"\n> 2014-03-19 15:40:16.466 GMT,\"replicator\",\"\",7986,\"10.162.2.52:44336\",5329ba5e.1f32,1,\"idle\",2014-03-19 15:40:14 GMT,2/0,0,ERROR,XX000,\"requested starting point 12/F1000000 is ahead of the WAL flush position of this server 12/F0FFFCE8\",,,,,,,,,\"walreceiver\"\n> \n> So, all two slaves are disconnected from master, which somehow is past his slaves.\n> \n> I decided to promote one of the slaves, so we can have some snapshot of the data.\n> relevant logs from this are\n> 2014-03-19 16:50:43.118 GMT,,,4444,,5329cae3.115c,3,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n> 2014-03-19 16:50:50.028 GMT,\"dboperator\",\"postgres\",4452,\"[local]\",5329caea.1164,1,\"\",2014-03-19 16:50:50 GMT,,0,FATAL,57P03,\"the database system is starting up\",,,,,,,,,\"\"\n> 2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,4,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"invalid contrecord length 5736 at 12/F0FFFC80\",,,,,,,,,\"\"\n> 2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,5,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"redo done at 12/F0FFFC38\",,,,,,,,,””\n> \n> It is interesting that redo done at 12/F0FFFC38 both on master and promoted slave.\n> \n> The main question is there is actual latest data, and how is it possible that master is behind his slave in synchronous replication.\n> \n> Thanks for the help.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\nOn 19 Mar 2014, at 21:11, Sethu Prasad <[email protected]> wrote:fsync... taking more knowledge around this will shed some light to understand this problem \"slave ahead of master\"\n\"there was silence, because master hang.\"... replication halted here, master holds the latest copy which is missing at both the slaves\nMaster hang in very strange way. I could ssh to it, and see dmesg, but not any other command. Also tcp connection was alive to slaves.So we can’t say that slaves did not receive data.\"I decided to promote one of the slaves\"... only 2 slaves are left, and one among them is going to be the master now, in master/slave nomenclature the data from master is considered as the valid one from this point onwardThere was no any failover procedure. Slaves was slaves. \n\"master is behind his slave\"... you mentioned that the original master comes up in the mean time one of the slave was already a master\nNo, after we rebooted master slaves did not reconnected to it.Later i stopped replication on one of the slaves to preserve data its state.So the main question is, under which circumstances slaves can not reconnect to master with error that master is behind.With fsync on, and synchronous* on.On Wed, Mar 19, 2014 at 10:40 PM, Evgeniy Shishkin <[email protected]> wrote:\nHello,\n\nwe have 3 servers with postgresql 9.3.3. 
One is master and two slaves.\nWe run synchronous_replication and fsync, synchronous_commit and full_page_writes are on.\n\nSuddenly master hang up with hardware failure, it is a strange bug in iLo which we investigate with HP.\n\nBefore master was rebooted, i ran ps aux on slave\npostgres: wal receiver process   streaming 12/F1031DF8\n\nLast messages in slaves logs was\n2014-03-19 02:41:29.005 GMT,,,7389,,53108c69.1cdd,16029,,2014-02-28 13:17:29 GMT,,0,LOG,00000,\"recovery restart point at 12/DFFBB3E8\",\"last completed transaction was at log time 2014-03-19 02:41:28.886869+00\",,,,,,,,\"\"\n\nand then there was silence, because master hang.\n\nThen master was rebooted and slave wrote in log\n2014-03-19 15:36:39.176 GMT,,,7392,,53108c69.1ce0,2,,2014-02-28 13:17:29 GMT,,0,FATAL,XX000,\"terminating walreceiver due to timeout\",,,,,,,,,\"\"\n2014-03-19 15:36:39.177 GMT,,,7388,,53108c69.1cdc,6,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"record with zero length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:36:57.181 GMT,,,12100,,5329b996.2f44,1,,2014-03-19 15:36:54 GMT,,0,FATAL,XX000,\"could not connect to the primary server: could not connect to server: No route to host\n        Is the server running on host \"\"10.162.2.50\"\" and accepting\n        TCP/IP connections on port 5432?\n\",,,,,,,,,\"\"\n\nThen master finally came back, slave wrote\n2014-03-19 15:40:09.389 GMT,,,13121,,5329ba59.3341,1,,2014-03-19 15:40:09 GMT,,0,FATAL,XX000,\"could not connect to the primary server: FATAL:  the database system is starting up\n\",,,,,,,,,\"\"\n2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,1,,2014-03-19 15:40:14 GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on timeline 1\",,,,,,,,,\"\"\n2014-03-19 15:40:16.468 GMT,,,13136,,5329ba5e.3350,2,,2014-03-19 15:40:14 GMT,,0,FATAL,XX000,\"could not receive data from WAL stream: ERROR:  requested starting point 12/F1000000 is ahead of the WAL flush position of this server 12/F0FFFCE8\n\n\",,,,,,,,,\"\"\n\nlast message was repeated several times\nand then this happened\n\n2014-03-19 15:42:04.623 GMT,,,13722,,5329bacc.359a,1,,2014-03-19 15:42:04 GMT,,0,LOG,00000,\"started streaming WAL from primary at 12/F1000000 on timeline 1\",,,,,,,,,\"\"\n2014-03-19 15:42:04.628 GMT,,,7388,,53108c69.1cdc,7,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:42:04.628 GMT,,,13722,,5329bacc.359a,2,,2014-03-19 15:42:04 GMT,,0,FATAL,57P01,\"terminating walreceiver process due to administrator command\",,,,,,,,,\"\"\n2014-03-19 15:42:09.628 GMT,,,7388,,53108c69.1cdc,8,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:42:14.628 GMT,,,7388,,53108c69.1cdc,9,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,\"\"\n2014-03-19 15:42:19.628 GMT,,,7388,,53108c69.1cdc,10,,2014-02-28 13:17:29 GMT,1/0,0,LOG,00000,\"invalid record length at 12/F1031DF8\",,,,,,,,,”\"\nand it just repeats forever.\n\n\nMeanwhile on master\n2014-03-19 15:39:30.957 GMT,,,7115,,5329ba32.1bcb,2,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"database system was not properly shut down; automatic recovery in progress\",,,,,,,,,\"\"\n2014-03-19 15:39:30.989 GMT,,,7115,,5329ba32.1bcb,3,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n2014-03-19 15:39:47.114 GMT,,,7115,,5329ba32.1bcb,4,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"redo done at 12/F0FFFC38\",,,,,,,,,\"\"\n2014-03-19 15:39:47.114 
GMT,,,7115,,5329ba32.1bcb,5,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"last completed transaction was at log time 2014-03-19 05:02:29.273138+00\",,,,,,,,,\"\"\n2014-03-19 15:39:47.115 GMT,,,7115,,5329ba32.1bcb,6,,2014-03-19 15:39:30 GMT,,0,LOG,00000,\"checkpoint starting: end-of-recovery immediate\",,,,,,,,,\"\"\n2014-03-19 15:40:16.466 GMT,\"replicator\",\"\",7986,\"10.162.2.52:44336\",5329ba5e.1f32,1,\"idle\",2014-03-19 15:40:14 GMT,2/0,0,ERROR,XX000,\"requested starting point 12/F1000000 is ahead of the WAL flush position of this server 12/F0FFFCE8\",,,,,,,,,\"walreceiver\"\n\nSo, all two slaves are disconnected from master, which somehow is past his slaves.\n\nI decided to promote one of the slaves, so we can have some snapshot of the data.\nrelevant logs from this are\n2014-03-19 16:50:43.118 GMT,,,4444,,5329cae3.115c,3,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"redo starts at 12/DFFBB3E8\",,,,,,,,,\"\"\n2014-03-19 16:50:50.028 GMT,\"dboperator\",\"postgres\",4452,\"[local]\",5329caea.1164,1,\"\",2014-03-19 16:50:50 GMT,,0,FATAL,57P03,\"the database system is starting up\",,,,,,,,,\"\"\n\n2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,4,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"invalid contrecord length 5736 at 12/F0FFFC80\",,,,,,,,,\"\"\n2014-03-19 16:50:51.128 GMT,,,4444,,5329cae3.115c,5,,2014-03-19 16:50:43 GMT,,0,LOG,00000,\"redo done at 12/F0FFFC38\",,,,,,,,,””\n\nIt is interesting that redo done at 12/F0FFFC38 both on master and promoted slave.\n\nThe main question is there is actual latest data, and how is it possible that master is behind his slave in synchronous replication.\n\nThanks for the help.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 19 Mar 2014 23:01:34 +0300", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slave wal is ahead of master" } ]
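For anyone debugging a similar divergence, the WAL positions involved can be read directly with the monitoring functions PostgreSQL 9.3 ships; a minimal sketch, assuming the streaming-replication setup described in the thread (run each statement on the node named in the comment):

-- On the master: the current WAL insert position
SELECT pg_current_xlog_location();

-- On a slave: the last WAL positions received and replayed
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();

-- On the master: what it believes about each connected standby
SELECT application_name, state, sent_location, flush_location, sync_state
FROM pg_stat_replication;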
[ { "msg_contents": "Hi,\n\nI have a table (sanact) with 23.125.525 rows (and a hundred columns).\nI am doing a select, that did not finish after some 15 hours... Select\nis as follows:\n\nselect * from sanact where sanact___rfovsnide = 'MYVERSION' order by\nsanactcsu;\n\nThere is an index on sanact___rfovsnide and doing EXPLAIN shows it is used.\nResulting dataset should be 1626000 rows.\n\niostat shows 99.5% idle disks, almost no activity.\ntop shows almost no cpu usage.\n\nWhere should I be looking for a problem ?\n\nThanks in advance,\n\nFranck", "msg_date": "Thu, 20 Mar 2014 14:51:45 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "long lasting select, no io nor cpu usage ?" }, { "msg_contents": "Franck Routier <[email protected]> writes:\n> I am doing a select, that did not finish after some 15 hours... Select\n> is as follows:\n\n> select * from sanact where sanact___rfovsnide = 'MYVERSION' order by\n> sanactcsu;\n\n> There is an index on sanact___rfovsnide and doing EXPLAIN shows it is used.\n> Resulting dataset should be 1626000 rows.\n\n> iostat shows 99.5% idle disks, almost no activity.\n> top shows almost no cpu usage.\n\n> Where should I be looking for a problem ?\n\npg_locks, probably.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Mar 2014 09:56:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long lasting select, no io nor cpu usage ?" }, { "msg_contents": "Le 20/03/2014 14:56, Tom Lane a écrit :\n> pg_locks, probably. regards, tom lane\n\nselect * from pg_stat_activity shows 'F'alse in the waiting column for\nthe query.\n\nCan I rely on that or should I be investigating further for subtile\ntypes of locks ?", "msg_date": "Thu, 20 Mar 2014 15:07:31 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: long lasting select, no io nor cpu usage ?" }, { "msg_contents": "Franck Routier <[email protected]> writes:\n> Le 20/03/2014 14:56, Tom Lane a �crit :\n>> pg_locks, probably. regards, tom lane\n\n> select * from pg_stat_activity shows 'F'alse in the waiting column for\n> the query.\n\nHm. The next most likely theory is that it's waiting on network I/O,\nbut it's hard to tell that from the outside. Can you attach to the\nstuck backend with gdb and get a stack trace?\nhttp://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Mar 2014 10:15:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long lasting select, no io nor cpu usage ?" }, { "msg_contents": "Le 20/03/2014 15:15, Tom Lane a écrit :\n> Hm. The next most likely theory is that it's waiting on network I/O,\n> but it's hard to tell that from the outside. Can you attach to the\n> stuck backend with gdb and get a stack trace?\n> http://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend\n>\n> \t\t\tregards, tom lane\n>\nI found the problem, not related to postgresql after all.\nThe client (CloverETL, using jdbc) was stuck and not \"consuming\" the\nrecords. 
I killed and restarted the ETL and all is fine now.\n\nThanks a lot for your time and help,\n\nFranck", "msg_date": "Thu, 20 Mar 2014 17:27:17 +0100", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: long lasting select, no io nor cpu usage ?" } ]
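A sketch of the two checks suggested in this thread, using the catalog views as they look in the 9.2+ releases; the pid is a placeholder to be taken from the first query's output:

-- Is the backend running the slow statement waiting on a lock?
SELECT pid, state, waiting, query
FROM pg_stat_activity
WHERE query ILIKE '%sanact%';

-- If waiting is true, inspect the locks held or awaited by that backend
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE pid = 12345;  -- placeholder pid from pg_stat_activity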
[ { "msg_contents": "Hi,\n\nI'd like to know from the query planner which query plan alternatives\nhave been generated and rejected. Is this possible?\n\n--Stefan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Mar 2014 17:53:58 +0100", "msg_from": "Stefan Keller <[email protected]>", "msg_from_op": true, "msg_subject": "Getting query plan alternatives from query planner?" }, { "msg_contents": "Stefan Keller <[email protected]> writes:\n> I'd like to know from the query planner which query plan alternatives\n> have been generated and rejected. Is this possible?\n\nNo, not really. People have occasionally hacked the planner to print\nrejected paths before they're discarded, but there's no convenient way\nto do anything except send the data to the postmaster log, which isn't\nall that convenient. A bigger problem is that people who are asking\nfor this typically imagine that the planner generates complete plans\nbefore rejecting them; which it does not. Path alternatives are rejected\nwhenever possible before moving up to the next join level, so that what\nwe have rejected is actually just a plan fragment in most cases.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Mar 2014 13:08:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting query plan alternatives from query planner?" }, { "msg_contents": "Hi Tom\n\nYou wrote:\n> Path alternatives are rejected\n> whenever possible before moving up to the next join level, so that what\n> we have rejected is actually just a plan fragment in most cases.\n\nThanks for the quick answer. This sounds like a fair implementation decision.\n\nBackground for asking this is of course, that one want's 1. to\nunderstand and 2. influence the optimizer in cases where one thinks\nthat the planner is wrong :-).\n\nSo, the bottom line is\n1. that PostgreSQL doesn't offer no means to understand the planner\nexcept EXPLAIN-ing the chosen plan?\n2. and there's no road map to introduce planner hinting (like in\nEnterpriseDB or Ora)?\n\nRegards, Stefan\n\n2014-03-20 18:08 GMT+01:00 Tom Lane <[email protected]>:\n> Stefan Keller <[email protected]> writes:\n>> I'd like to know from the query planner which query plan alternatives\n>> have been generated and rejected. Is this possible?\n>\n> No, not really. People have occasionally hacked the planner to print\n> rejected paths before they're discarded, but there's no convenient way\n> to do anything except send the data to the postmaster log, which isn't\n> all that convenient. A bigger problem is that people who are asking\n> for this typically imagine that the planner generates complete plans\n> before rejecting them; which it does not. 
Path alternatives are rejected\n> whenever possible before moving up to the next join level, so that what\n> we have rejected is actually just a plan fragment in most cases.\n>\n> regards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 08:37:55 +0100", "msg_from": "Stefan Keller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting query plan alternatives from query planner?" }, { "msg_contents": "On Fri, Mar 21, 2014 at 1:07 PM, Stefan Keller <[email protected]> wrote:\n\n> Hi Tom\n>\n> You wrote:\n> > Path alternatives are rejected\n> > whenever possible before moving up to the next join level, so that what\n> > we have rejected is actually just a plan fragment in most cases.\n>\n> Thanks for the quick answer. This sounds like a fair implementation\n> decision.\n>\n> Background for asking this is of course, that one want's 1. to\n> understand and 2. influence the optimizer in cases where one thinks\n> that the planner is wrong :-).\n>\n> So, the bottom line is\n> 1. that PostgreSQL doesn't offer no means to understand the planner\n> except EXPLAIN-ing the chosen plan?\n> 2. and there's no road map to introduce planner hinting (like in\n> EnterpriseDB or Ora)?\n>\n>\nWe recently had some discussion about planner hints. There is no plan for\nhaving planner hints ATM. However, we are looking at ways in which we can\nimprove the query planner for some cases where it makes bad statistical\nestimations and gives bad plans.\n\nRegards,\n\nAtri\n\n-- \nRegards,\n\nAtri\n*l'apprenant*\n", "msg_date": "Fri, 21 Mar 2014 13:21:31 +0530", "msg_from": "Atri Sharma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting query plan alternatives from query planner?" }, { "msg_contents": "There have been many discussions about adding hints to Postgres over the\nyears. All have been firmly rejected by the Postgres developers, with\nwell-argued reasons. 

\"offset 0\"\non a subquery).\n\nCraig\n\n\nOn Fri, Mar 21, 2014 at 12:51 AM, Atri Sharma <[email protected]> wrote:\n\n>\n>\n>\n> On Fri, Mar 21, 2014 at 1:07 PM, Stefan Keller <[email protected]> wrote:\n>\n>> Hi Tom\n>>\n>> You wrote:\n>> > Path alternatives are rejected\n>> > whenever possible before moving up to the next join level, so that what\n>> > we have rejected is actually just a plan fragment in most cases.\n>>\n>> Thanks for the quick answer. This sounds like a fair implementation\n>> decision.\n>>\n>> Background for asking this is of course, that one want's 1. to\n>> understand and 2. influence the optimizer in cases where one thinks\n>> that the planner is wrong :-).\n>>\n>> So, the bottom line is\n>> 1. that PostgreSQL doesn't offer no means to understand the planner\n>> except EXPLAIN-ing the chosen plan?\n>> 2. and there's no road map to introduce planner hinting (like in\n>> EnterpriseDB or Ora)?\n>>\n>>\n> We recently had some discussion for planner hints. There is no plan for\n> having planner hints ATM. However, we are looking at ways at which we can\n> improve the query planner for some cases where it makes statistical bad\n> estimations and gives bad plans.\n>\n> Regards,\n>\n> Atri\n>\n> --\n> Regards,\n>\n> Atri\n> *l'apprenant*\n>\n\nThere have been many discussions about adding hints to Postgres over the years. All have been firmly rejected by the Postgres developers, with well-argued reasons.  Search the archives to learn more about this topic.\nOn the other hand, Postgres does have hints.  They're just called settings. You can disable certain types of joins with SET commands. On top of that, there are \"fences\" that the optimizer can't cross that you can use to force the optimizer to consider certain sub-queries separately (e.g. \"offset 0\" on a subquery).\nCraigOn Fri, Mar 21, 2014 at 12:51 AM, Atri Sharma <[email protected]> wrote:\nOn Fri, Mar 21, 2014 at 1:07 PM, Stefan Keller <[email protected]> wrote:\nHi Tom\n\nYou wrote:\n> Path alternatives are rejected\n> whenever possible before moving up to the next join level, so that what\n> we have rejected is actually just a plan fragment in most cases.\n\nThanks for the quick answer. This sounds like a fair implementation decision.\n\nBackground for asking this is of course, that one want's 1. to\nunderstand and 2. influence the optimizer in cases where one thinks\nthat the planner is wrong :-).\n\nSo, the bottom line is\n1. that PostgreSQL doesn't offer no means to understand the planner\nexcept EXPLAIN-ing the chosen plan?\n2. and there's no road map to introduce planner hinting (like in\nEnterpriseDB or Ora)?\nWe recently had some discussion for planner hints. There is no plan for having planner hints ATM. However, we are looking at ways at which we can improve the query planner for some cases where it makes statistical bad estimations and gives bad plans.\nRegards,Atri -- Regards, Atril'apprenant", "msg_date": "Fri, 21 Mar 2014 06:34:55 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting query plan alternatives from query planner?" }, { "msg_contents": "Craig James <[email protected]> writes:\n> There have been many discussions about adding hints to Postgres over the\n> years. All have been firmly rejected by the Postgres developers, with\n> well-argued reasons. 
Search the archives to learn more about this topic.\n\nTo clarify: there are good reasons not to like what Oracle calls hints.\nOn the other hand, the concept of hints that tell the planner what\nselectivity or rowcount to expect (as opposed to trying to control the\nplan directly) has met with generally more positive reviews. There's\nno specific design yet, and certainly no implementation roadmap, but\nI'd not be surprised if we get something like that a few years down\nthe road.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 10:06:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting query plan alternatives from query planner?" }, { "msg_contents": "On 03/21/2014 08:34 AM, Craig James wrote:\n\n> There have been many discussions about adding hints to Postgres over the\n> years. All have been firmly rejected by the Postgres developers, with\n> well-argued reasons. Search the archives to learn more about this topic.\n\nWhile that's true, and I agree with the sentiment, it could also be \nargued that what we have now is actually worse than hints.\n\nI've been bitten several times by wrong query plans. The cause is \nusually bad correlation estimates or edge cases due to incomplete \nstats. Aside from cranking default_statistics_target up to 10,000, these \nissues tend to get solved through optimization fences. Reorganize a \nquery into a CTE, or use the (gross) OFFSET 0 trick. How are these \nanything other than unofficial hints?\n\nWell... they're worse, really. Hints can be deprecated, disabled in \nconfigs, or ignored in extreme cases. Optimization fences are truly \nforever. Unless of course they're removed. In which case, a bunch of \nqueries that exploited them will suddenly perform a whole lot worse, \ncausing organizations to delay upgrading PostgreSQL.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 14:43:41 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting query plan alternatives from query planner?" }, { "msg_contents": "Shaun Thomas <[email protected]> wrote:\n\n> these issues tend to get solved through optimization fences.\n> Reorganize a query into a CTE, or use the (gross) OFFSET 0 trick.\n> How are these anything other than unofficial hints?\n\nYeah, the cognitive dissonance levels get pretty high around this\nissue. Some of the same people who argue strenuously against\nadding hints about what plan should be chosen also argue against\nhaving clearly equivalent queries optimize to the same plan because\nthey find the fact that they don't useful for coercing a decent\nplan sometimes. That amounts to a hint, but obscure and\nundocumented. (The OP may be wondering what this \"OFFSET 0 trick\"\nis, and how he can use it.)\n\n> Well... they're worse, really. Hints can be deprecated, disabled\n> in configs, or ignored in extreme cases. 

Optimization fences are\n> truly forever.\n\n+1\n\nWith explicit, documented hints, one could search for hints of a\nparticular type should the optimizer improve to the point where\nthey are no longer needed. It is harder to do that with subtle\ndifferences in syntax choice. Figuring out which CTEs or LIMITs\nwere chosen because they caused optimization barriers rather than\nfor their semantic merit takes some effort.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Apr 2014 07:39:17 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting query plan alternatives from query planner?" }, { "msg_contents": "I don't know how anyone else feels about this, as I don't think I've \nseen this ever suggested, but my ideal would be a way to configure the \ndatabase to recognize specific queries and to have a way of influencing \nits plan choice for that query. I'm intentionally wording that last part \nvaguely, as I'm not sure what would be best or practical there. The ideal, \nperhaps, would be to be able to store a particular plan for that query \nand have it always use it.\n\nI don't want either hints OR fence distortions in my application code, \nwhich might have to work with different versions of PostgreSQL with \ndifferent optimization characteristics, different servers with different \nperformance characteristics, or even different database products \nentirely. A solution to a server-side problem should live on the server, \nnot on the client. That's why I've always preferred PostgreSQL's server \nsettings for tweaking the optimizer to the hints offered by other products.\n\nOn 4/14/2014 10:39 AM, Kevin Grittner wrote:\n> Shaun Thomas <[email protected]> wrote:\n>\n>> these issues tend to get solved through optimization fences.\n>> Reorganize a query into a CTE, or use the (gross) OFFSET 0 trick.\n>> How are these anything other than unofficial hints?\n> Yeah, the cognitive dissonance levels get pretty high around this\n> issue. Some of the same people who argue strenuously against\n> adding hints about what plan should be chosen also argue against\n> having clearly equivalent queries optimize to the same plan because\n> they find the fact that they don't useful for coercing a decent\n> plan sometimes. That amounts to a hint, but obscure and\n> undocumented. (The OP may be wondering what this \"OFFSET 0 trick\"\n> is, and how he can use it.)\n>\n>> Well... they're worse, really. Hints can be deprecated, disabled\n>> in configs, or ignored in extreme cases. Optimization fences are\n>> truly forever.\n> +1\n>\n> With explicit, documented hints, one could search for hints of a\n> particular type should the optimizer improve to the point where\n> they are no longer needed. It is harder to do that with subtle\n> differences in syntax choice. 

Figuring out which CTEs or LIMITs\n> were chosen because they caused optimization barriers rather than\n> for their semantic merit takes some effort.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Apr 2014 10:59:52 -0400", "msg_from": "Eric Schwarzenbach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting query plan alternatives from query planner?" } ]
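A short sketch of the two mechanisms debated in this thread, i.e. planner settings used as per-statement hints and the undocumented optimization fences; big_table and indexed_col are illustrative names, not from the thread:

-- A setting as a per-statement hint: SET LOCAL reverts at COMMIT/ROLLBACK
BEGIN;
SET LOCAL enable_seqscan = off;
EXPLAIN SELECT * FROM big_table WHERE indexed_col = 42;
COMMIT;

-- The OFFSET 0 fence: the subquery is planned separately
SELECT *
FROM (SELECT * FROM big_table WHERE indexed_col = 42 OFFSET 0) sub;

-- A CTE acts as the same kind of fence in the releases discussed here
WITH fenced AS (
    SELECT * FROM big_table WHERE indexed_col = 42
)
SELECT * FROM fenced;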
[ { "msg_contents": "I have a very complex view zinfoexp and running the view as:\nSELECT * FROM zinfoexp WHERE idmembre in (1,84)\ntake 2700 ms\n\nSo, I try another syntax:\nSELECT * FROM zinfoexp WHERE idmembre = 1\nunion\nSELECT * FROM zinfoexp WHERE idmembre = 84\n\nand for me, two calls to my view takes a lot of time (may be x2) and it \ntakes 134 ms !\n\nHow is it possible that the optimizer cannot optimize the IN as UNION ?\nI have a database postgresql 9.3 freshly vacuumed\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source http://www.ckr-solutions.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Mar 2014 17:57:41 +0100", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of UNION vs IN" }, { "msg_contents": "On 20/03/14 17:57, Jean-Max Reymond wrote:\n> I have a very complex view zinfoexp and running the view as:\n> SELECT * FROM zinfoexp WHERE idmembre in (1,84)\n> take 2700 ms\n> \n> So, I try another syntax:\n> SELECT * FROM zinfoexp WHERE idmembre = 1\n> union\n> SELECT * FROM zinfoexp WHERE idmembre = 84\n> \n> and for me, two calls to my view takes a lot of time (may be x2) and it\n> takes 134 ms !\n\ntry\n\n SELECT * FROM zinfoexp WHERE idmembre=1 OR idmembre=84\n\nThis will probably be even faster.\n\nAlso, the 2 statements of your's are not semantically equal. UNION\nimplies DISTINCT, see:\n\nselect * from (values (1), (1), (2)) t(i) UNION select 19;\n i\n----\n 19\n 1\n 2\n(3 rows)\n\nWhat you want is UNION ALL:\n\nselect * from (values (1), (1), (2)) t(i) UNION ALL select 19;\n i\n----\n 1\n 1\n 2\n 19\n(4 rows)\n\n\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Mar 2014 18:13:58 +0100", "msg_from": "=?ISO-8859-15?Q?Torsten_F=F6rtsch?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of UNION vs IN" }, { "msg_contents": "Le 20/03/2014 18:13, Torsten Fᅵrtsch a ᅵcrit :\n> On 20/03/14 17:57, Jean-Max Reymond wrote:\n>> I have a very complex view zinfoexp and running the view as:\n>> SELECT * FROM zinfoexp WHERE idmembre in (1,84)\n>> take 2700 ms\n>>\n>> So, I try another syntax:\n>> SELECT * FROM zinfoexp WHERE idmembre = 1\n>> union\n>> SELECT * FROM zinfoexp WHERE idmembre = 84\n>>\n>> and for me, two calls to my view takes a lot of time (may be x2) and it\n>> takes 134 ms !\n>\n> try\n>\n> SELECT * FROM zinfoexp WHERE idmembre=1 OR idmembre=84\n>\n> This will probably be even faster.\n>\n> Also, the 2 statements of your's are not semantically equal. 
UNION\n> implies DISTINCT, see:\n>\n> select * from (values (1), (1), (2)) t(i) UNION select 19;\n> i\n> ----\n> 19\n> 1\n> 2\n> (3 rows)\n>\n> What you want is UNION ALL:\n>\n> select * from (values (1), (1), (2)) t(i) UNION ALL select 19;\n> i\n> ----\n> 1\n> 1\n> 2\n> 19\n> (4 rows)\n>\n>\n> Torsten\n>\n\nSame numbers with DISTINCT and with UNION ALL (the construction of the VIEW\ndoes an implicit DISTINCT).\n\n-- \nJean-Max Reymond\nÉruption de l'Etna: http://jmreymond.free.fr/Etna2002\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 06:33:47 +0100", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of UNION vs IN" } ]
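To restate the semantics with the thread's own view, a brief sketch; note that only the plain UNION form deduplicates:

-- One pass, no implicit DISTINCT
SELECT * FROM zinfoexp WHERE idmembre IN (1, 84);

-- The equivalent OR form suggested above
SELECT * FROM zinfoexp WHERE idmembre = 1 OR idmembre = 84;

-- UNION implies DISTINCT; UNION ALL keeps duplicates and skips that work
SELECT * FROM zinfoexp WHERE idmembre = 1
UNION ALL
SELECT * FROM zinfoexp WHERE idmembre = 84;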
[ { "msg_contents": "We have a slow performing query that we are trying to improve, and it\nappears to be performing a sequential scan at a point where it should be\nutilizing an index. Can anyone tell me why postgres is opting to do it this\nway?\n\nThe original query is as follows:\n\nSELECT DISTINCT\n a1.context_key\nFROM\n virtual_ancestors a1, collection_data, virtual_ancestors a2\nWHERE\n a1.ancestor_key = collection_data.context_key\n AND collection_data.collection_context_key = a2.context_key\n AND a2.ancestor_key = ?\n\nThe key relationships should all using indexed columns, but the query plan\nthat postgres comes up with ends up performing a sequential scan on the\ncollection_data table (in this case about 602k rows) where we would have\nexpected it to utilize the index:\n\n HashAggregate (cost=60905.73..60935.73 rows=3000 width=4) (actual\ntime=3366.165..3367.354 rows=3492 loops=1)\n Buffers: shared hit=16291 read=1222\n -> Nested Loop (cost=17546.26..60898.23 rows=3000 width=4) (actual\ntime=438.332..3357.918 rows=13037 loops=1)\n Buffers: shared hit=16291 read=1222\n -> Hash Join (cost=17546.26..25100.94 rows=98 width=4) (actual\ntime=408.554..415.767 rows=2092 loops=1)\n Hash Cond: (a2.context_key =\ncollection_data.collection_context_key)\n Buffers: shared hit=4850 read=3\n -> Index Only Scan using virtual_ancestors_pkey on\nvirtual_ancestors a2 (cost=0.00..233.32 rows=270 width=4) (actual\ntime=8.532..10.703 rows=1960 loops=1)\n Index Cond: (ancestor_key = 1072173)\n Heap Fetches: 896\n Buffers: shared hit=859 read=3\n -> Hash (cost=10015.56..10015.56 rows=602456 width=8)\n(actual time=399.708..399.708 rows=602570 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 23538kB\n Buffers: shared hit=3991\n######## sequential scan occurs here ##########\n -> Seq Scan on collection_data (cost=0.00..10015.56\nrows=602456 width=8) (actual time=0.013..163.509 rows=602570 loops=1)\n Buffers: shared hit=3991\n -> Index Only Scan using virtual_ancestors_pkey on\nvirtual_ancestors a1 (cost=0.00..360.70 rows=458 width=8) (actual\ntime=1.339..1.403 rows=6 loops=2092)\n Index Cond: (ancestor_key = collection_data.context_key)\n Heap Fetches: 7067\n Buffers: shared hit=11441 read=1219\n Total runtime: 3373.058 ms\n\n\nThe table definitions are as follows:\n\n Table \"public.virtual_ancestors\"\n Column | Type | Modifiers\n--------------+----------+-----------\n ancestor_key | integer | not null\n context_key | integer | not null\n degree | smallint | not null\nIndexes:\n \"virtual_ancestors_pkey\" PRIMARY KEY, btree (ancestor_key, context_key)\n \"virtual_context_key_idx\" btree (context_key)\nForeign-key constraints:\n \"virtual_ancestors_ancestor_key_fkey\" FOREIGN KEY (ancestor_key)\nREFERENCES contexts(context_key)\n \"virtual_ancestors_context_key_fkey\" FOREIGN KEY (context_key)\nREFERENCES contexts(context_key)\n\n Table \"public.collection_data\"\n Column | Type | Modifiers\n------------------------+----------------------+-----------\n collection_context_key | integer | not null\n context_key | integer | not null\n type | character varying(1) | not null\n source | character varying(1) | not null\nIndexes:\n \"collection_data_context_key_idx\" btree (context_key)\n \"collection_data_context_key_index\" btree (collection_context_key)\nCLUSTER\nForeign-key constraints:\n \"collection_data_collection_context_key_fkey\" FOREIGN KEY\n(collection_context_key) REFERENCES contexts(context_key) ON DELETE CASCADE\n \"collection_data_context_key_fkey\" FOREIGN KEY (context_key) 
REFERENCES\ncontexts(context_key) ON DELETE CASCADE\n\nCan anyone suggest a way that we can get postgres to use the\ncollection_data_context_key_index properly? I thought that it might be\nrelated to the fact that collection_data_context_key_index is a CLUSTERED\nindex, but we did some basic experimentation that seems to indicate\notherwise, i.e. the bad plan persists despite re-clustering the index.\n\nWe are using PostgreSQL 9.2.5 on x86_64-unknown-linux-gnu, compiled by gcc\n(Debian 4.4.5-8) 4.4.5, 64-bit\n\nInterestingly, on an instance running PostgreSQL 9.2.4 on\nx86_64-unknown-linux-gnu, compiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit\nwhere I copied the 2 tables over to a temporary database, the plan comes\nout differently:\n\n HashAggregate (cost=39692.03..39739.98 rows=4795 width=4) (actual\ntime=73.285..75.141 rows=3486 loops=1)\n Buffers: shared hit=22458\n -> Nested Loop (cost=0.00..39680.05 rows=4795 width=4) (actual\ntime=0.077..63.116 rows=13007 loops=1)\n Buffers: shared hit=22458\n -> Nested Loop (cost=0.00..32823.38 rows=164 width=4) (actual\ntime=0.056..17.685 rows=2084 loops=1)\n Buffers: shared hit=7529\n -> Index Only Scan using virtual_ancestors_pkey on\nvirtual_ancestors a2 (cost=0.00..1220.85 rows=396 width=4) (actual\ntime=0.025..2.732 rows=1954 loops=1)\n Index Cond: (ancestor_key = 1072173)\n Heap Fetches: 1954\n Buffers: shared hit=1397\n######## Note the index scan here - this is what it SHOULD be doing\n##############\n -> Index Scan using collection_data_context_key_index on\ncollection_data (cost=0.00..79.24 rows=56 width=8) (actual\ntime=0.004..0.005 rows=1 loops=1954)\n Index Cond: (collection_context_key = a2.context_key)\n Buffers: shared hit=6132\n -> Index Only Scan using virtual_ancestors_pkey on\nvirtual_ancestors a1 (cost=0.00..35.40 rows=641 width=8) (actual\ntime=0.007..0.015 rows=6 loops=2084)\n Index Cond: (ancestor_key = collection_data.context_key)\n Heap Fetches: 13007\n Buffers: shared hit=14929\n Total runtime: 76.431 ms\n\nWhy can't I get the Postgres 9.2.5 instance to use the optimal plan?\n\nThanks in advance!\n /Stefan\n\n-- \n-\nStefan Amshey\n\nWe have a slow performing query that we are trying to improve, and it appears to be performing a sequential scan at a point where it should be utilizing an index. 
", "msg_date": "Thu, 20 Mar 2014 16:56:03 -0700", "msg_from": "Stefan Amshey <[email protected]>", "msg_from_op": true, "msg_subject": "slow join not using index properly" }, { "msg_contents": "Hi Stefan!\n\nProbably you need to rewrite your query like this (check it first):\n\nwith RECURSIVE qq(cont_key, anc_key) as\n(\nselect min(a1.context_key), ancestor_key from virtual_ancestors a1\n union select\n (SELECT\n a1.context_key, ancestor_key\n FROM\n virtual_ancestors a1 where context_key > cont_key order by\ncontext_key limit 1) from qq where cont_key is not null\n)\nselect a1.cont_key\n from qq a1, collection_data, virtual_ancestors a2\nWHERE\n a1.anc_key = collection_data.context_key\n AND collection_data.collection_context_key = a2.context_key\n AND a2.ancestor_key = ?\n\nbest regards,\nIlya\n\nOn Fri, Mar 21, 2014 at 12:56 AM, Stefan Amshey <[email protected]> wrote:\n> We have a slow performing query that we are trying to improve, and it\n> appears to be performing a sequential scan at a point where it should be\n> utilizing an index. Can anyone tell me why postgres is opting to do it this\n> way?\n>\n> The original query is as follows:\n>\n> SELECT DISTINCT\n> a1.context_key\n> FROM\n> virtual_ancestors a1, collection_data, virtual_ancestors a2\n> WHERE\n> a1.ancestor_key = collection_data.context_key\n> AND collection_data.collection_context_key = a2.context_key\n> AND a2.ancestor_key = ?\n>\n> The key relationships should all using indexed columns, but the query plan\n> that postgres comes up with ends up performing a sequential scan on the\n> collection_data table (in this case about 602k rows) where we would have\n> expected it to utilize the index:\n>\n> HashAggregate (cost=60905.73..60935.73 rows=3000 width=4) (actual\n> time=3366.165..3367.354 rows=3492 loops=1)\n> Buffers: shared hit=16291 read=1222\n> -> Nested Loop (cost=17546.26..60898.23 rows=3000 width=4) (actual\n> time=438.332..3357.918 rows=13037 loops=1)\n> Buffers: shared hit=16291 read=1222\n> -> Hash Join (cost=17546.26..25100.94 rows=98 width=4) (actual\n> time=408.554..415.767 rows=2092 loops=1)\n> Hash Cond: (a2.context_key =\n> collection_data.collection_context_key)\n> Buffers: shared hit=4850 read=3\n> -> Index Only Scan using virtual_ancestors_pkey on\n> virtual_ancestors a2 (cost=0.00..233.32 rows=270 width=4) (actual\n> time=8.532..10.703 rows=1960 loops=1)\n> Index Cond: (ancestor_key = 1072173)\n> Heap Fetches: 896\n> Buffers: shared hit=859 read=3\n> -> Hash (cost=10015.56..10015.56 rows=602456 width=8)\n> (actual time=399.708..399.708 rows=602570 loops=1)\n> Buckets: 65536 Batches: 1 Memory Usage: 23538kB\n> Buffers: shared hit=3991\n> ######## sequential scan occurs here ##########\n> -> Seq Scan on collection_data (cost=0.00..10015.56\n> rows=602456 width=8) (actual time=0.013..163.509 rows=602570 loops=1)\n> Buffers: shared hit=3991\n> -> Index Only Scan using virtual_ancestors_pkey on\n> virtual_ancestors a1 (cost=0.00..360.70 rows=458 width=8) (actual\n> time=1.339..1.403 rows=6 loops=2092)\n> Index Cond: (ancestor_key = collection_data.context_key)\n> Heap Fetches: 7067\n> Buffers: shared hit=11441 read=1219\n> Total runtime: 3373.058 ms\n>\n>\n> The table definitions are as follows:\n>\n> Table \"public.virtual_ancestors\"\n> Column | Type | Modifiers\n> --------------+----------+-----------\n> ancestor_key | integer | not null\n> context_key | integer | not null\n> degree | smallint | not null\n> Indexes:\n> \"virtual_ancestors_pkey\" PRIMARY KEY, btree (ancestor_key, 
context_key)\n> \"virtual_context_key_idx\" btree (context_key)\n> Foreign-key constraints:\n> \"virtual_ancestors_ancestor_key_fkey\" FOREIGN KEY (ancestor_key)\n> REFERENCES contexts(context_key)\n> \"virtual_ancestors_context_key_fkey\" FOREIGN KEY (context_key)\n> REFERENCES contexts(context_key)\n>\n> Table \"public.collection_data\"\n> Column | Type | Modifiers\n> ------------------------+----------------------+-----------\n> collection_context_key | integer | not null\n> context_key | integer | not null\n> type | character varying(1) | not null\n> source | character varying(1) | not null\n> Indexes:\n> \"collection_data_context_key_idx\" btree (context_key)\n> \"collection_data_context_key_index\" btree (collection_context_key)\n> CLUSTER\n> Foreign-key constraints:\n> \"collection_data_collection_context_key_fkey\" FOREIGN KEY\n> (collection_context_key) REFERENCES contexts(context_key) ON DELETE CASCADE\n> \"collection_data_context_key_fkey\" FOREIGN KEY (context_key) REFERENCES\n> contexts(context_key) ON DELETE CASCADE\n>\n> Can anyone suggest a way that we can get postgres to use the\n> collection_data_context_key_index properly? I thought that it might be\n> related to the fact that collection_data_context_key_index is a CLUSTERED\n> index, but we did some basic experimentation that seems to indicate\n> otherwise, i.e. the bad plan persists despite re-clustering the index.\n>\n> We are using PostgreSQL 9.2.5 on x86_64-unknown-linux-gnu, compiled by gcc\n> (Debian 4.4.5-8) 4.4.5, 64-bit\n>\n> Interestingly, on an instance running PostgreSQL 9.2.4 on\n> x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit\n> where I copied the 2 tables over to a temporary database, the plan comes out\n> differently:\n>\n> HashAggregate (cost=39692.03..39739.98 rows=4795 width=4) (actual\n> time=73.285..75.141 rows=3486 loops=1)\n> Buffers: shared hit=22458\n> -> Nested Loop (cost=0.00..39680.05 rows=4795 width=4) (actual\n> time=0.077..63.116 rows=13007 loops=1)\n> Buffers: shared hit=22458\n> -> Nested Loop (cost=0.00..32823.38 rows=164 width=4) (actual\n> time=0.056..17.685 rows=2084 loops=1)\n> Buffers: shared hit=7529\n> -> Index Only Scan using virtual_ancestors_pkey on\n> virtual_ancestors a2 (cost=0.00..1220.85 rows=396 width=4) (actual\n> time=0.025..2.732 rows=1954 loops=1)\n> Index Cond: (ancestor_key = 1072173)\n> Heap Fetches: 1954\n> Buffers: shared hit=1397\n> ######## Note the index scan here - this is what it SHOULD be doing\n> ##############\n> -> Index Scan using collection_data_context_key_index on\n> collection_data (cost=0.00..79.24 rows=56 width=8) (actual\n> time=0.004..0.005 rows=1 loops=1954)\n> Index Cond: (collection_context_key = a2.context_key)\n> Buffers: shared hit=6132\n> -> Index Only Scan using virtual_ancestors_pkey on\n> virtual_ancestors a1 (cost=0.00..35.40 rows=641 width=8) (actual\n> time=0.007..0.015 rows=6 loops=2084)\n> Index Cond: (ancestor_key = collection_data.context_key)\n> Heap Fetches: 13007\n> Buffers: shared hit=14929\n> Total runtime: 76.431 ms\n>\n> Why can't I get the Postgres 9.2.5 instance to use the optimal plan?\n>\n> Thanks in advance!\n> /Stefan\n>\n> --\n> -\n> Stefan Amshey\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. 
+4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 07:02:40 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow join not using index properly" }, { "msg_contents": "On 21-03-14 00:56, Stefan Amshey wrote:\n> We have a slow performing query that we are trying to improve, and it \n> appears to be performing a sequential scan at a point where it should \n> be utilizing an index. Can anyone tell me why postgres is opting to do \n> it this way?\n>\n> The original query is as follows:\n>\n> SELECT DISTINCT\n> a1.context_key\n> FROM\n> virtual_ancestors a1, collection_data, virtual_ancestors a2\n> WHERE\n> a1.ancestor_key = collection_data.context_key\n> AND collection_data.collection_context_key = a2.context_key\n> AND a2.ancestor_key = ?\n>\n> The key relationships should all using indexed columns, but the query \n> plan that postgres comes up with ends up performing a sequential scan \n> on the collection_data table (in this case about 602k rows) where we \n> would have expected it to utilize the index:\n>\n> HashAggregate (cost=60905.73..60935.73 rows=3000 width=4) (actual \n> time=3366.165..3367.354 rows=3492 loops=1)\n> Buffers: shared hit=16291 read=1222\n> -> Nested Loop (cost=17546.26..60898.23 rows=3000 width=4) (actual \n> time=438.332..3357.918 rows=13037 loops=1)\n> Buffers: shared hit=16291 read=1222\n> -> Hash Join (cost=17546.26..25100.94 rows=98 width=4) (actual \n> time=408.554..415.767 rows=2092 loops=1)\n> Hash Cond: (a2.context_key = \n> collection_data.collection_context_key)\n> Buffers: shared hit=4850 read=3\n> -> Index Only Scan using virtual_ancestors_pkey on \n> virtual_ancestors a2 (cost=0.00..233.32 rows=270 width=4) (actual \n> time=8.532..10.703 rows=1960 loops=1)\n> Index Cond: (ancestor_key = 1072173)\n> Heap Fetches: 896\n> Buffers: shared hit=859 read=3\n> -> Hash (cost=10015.56..10015.56 rows=602456 width=8) \n> (actual time=399.708..399.708 rows=602570 loops=1)\n> Buckets: 65536 Batches: 1 Memory Usage: 23538kB\n> Buffers: shared hit=3991\n> ######## sequential scan occurs here ##########\n> -> Seq Scan on collection_data (cost=0.00..10015.56 rows=602456 \n> width=8) (actual time=0.013..163.509 rows=602570 loops=1)\n> Buffers: shared hit=3991\n> -> Index Only Scan using virtual_ancestors_pkey on \n> virtual_ancestors a1 (cost=0.00..360.70 rows=458 width=8) (actual \n> time=1.339..1.403 rows=6 loops=2092)\n> Index Cond: (ancestor_key = collection_data.context_key)\n> Heap Fetches: 7067\n> Buffers: shared hit=11441 read=1219\n> Total runtime: 3373.058 ms\n>\n>\n> The table definitions are as follows:\n>\n> Table \"public.virtual_ancestors\"\n> Column | Type | Modifiers\n> --------------+----------+-----------\n> ancestor_key | integer | not null\n> context_key | integer | not null\n> degree | smallint | not null\n> Indexes:\n> \"virtual_ancestors_pkey\" PRIMARY KEY, btree (ancestor_key, \n> context_key)\n> \"virtual_context_key_idx\" btree (context_key)\n> Foreign-key constraints:\n> \"virtual_ancestors_ancestor_key_fkey\" FOREIGN KEY (ancestor_key) \n> REFERENCES contexts(context_key)\n> \"virtual_ancestors_context_key_fkey\" FOREIGN KEY (context_key) \n> REFERENCES contexts(context_key)\n>\n> Table \"public.collection_data\"\n> Column | Type | Modifiers\n> ------------------------+----------------------+-----------\n> 
collection_context_key | integer | not null\n> context_key | integer | not null\n> type | character varying(1) | not null\n> source | character varying(1) | not null\n> Indexes:\n> \"collection_data_context_key_idx\" btree (context_key)\n> \"collection_data_context_key_index\" btree (collection_context_key) \n> CLUSTER\n> Foreign-key constraints:\n> \"collection_data_collection_context_key_fkey\" FOREIGN KEY \n> (collection_context_key) REFERENCES contexts(context_key) ON DELETE \n> CASCADE\n> \"collection_data_context_key_fkey\" FOREIGN KEY (context_key) \n> REFERENCES contexts(context_key) ON DELETE CASCADE\n>\n> Can anyone suggest a way that we can get postgres to use the \n> collection_data_context_key_index properly? I thought that it might be \n> related to the fact that collection_data_context_key_index is a \n> CLUSTERED index, but we did some basic experimentation that seems to \n> indicate otherwise, i.e. the bad plan persists despite re-clustering \n> the index.\n>\n> We are using PostgreSQL 9.2.5 on x86_64-unknown-linux-gnu, compiled by \n> gcc (Debian 4.4.5-8) 4.4.5, 64-bit\n>\n> Interestingly, on an instance running PostgreSQL 9.2.4 on \n> x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.4.5-8) 4.4.5, \n> 64-bit where I copied the 2 tables over to a temporary database, the \n> plan comes out differently:\n>\n> HashAggregate (cost=39692.03..39739.98 rows=4795 width=4) (actual \n> time=73.285..75.141 rows=3486 loops=1)\n> Buffers: shared hit=22458\n> -> Nested Loop (cost=0.00..39680.05 rows=4795 width=4) (actual \n> time=0.077..63.116 rows=13007 loops=1)\n> Buffers: shared hit=22458\n> -> Nested Loop (cost=0.00..32823.38 rows=164 width=4) \n> (actual time=0.056..17.685 rows=2084 loops=1)\n> Buffers: shared hit=7529\n> -> Index Only Scan using virtual_ancestors_pkey on \n> virtual_ancestors a2 (cost=0.00..1220.85 rows=396 width=4) (actual \n> time=0.025..2.732 rows=1954 loops=1)\n> Index Cond: (ancestor_key = 1072173)\n> Heap Fetches: 1954\n> Buffers: shared hit=1397\n> ######## Note the index scan here - this is what it SHOULD be doing \n> ##############\n> -> Index Scan using collection_data_context_key_index on \n> collection_data (cost=0.00..79.24 rows=56 width=8) (actual \n> time=0.004..0.005 rows=1 loops=1954)\n> Index Cond: (collection_context_key = a2.context_key)\n> Buffers: shared hit=6132\n> -> Index Only Scan using virtual_ancestors_pkey on \n> virtual_ancestors a1 (cost=0.00..35.40 rows=641 width=8) (actual \n> time=0.007..0.015 rows=6 loops=2084)\n> Index Cond: (ancestor_key = collection_data.context_key)\n> Heap Fetches: 13007\n> Buffers: shared hit=14929\n> Total runtime: 76.431 ms\n>\n> Why can't I get the Postgres 9.2.5 instance to use the optimal plan?\n>\n> Thanks in advance!\n> /Stefan\n>\n> -- \n> -\n> Stefan Amshey\nThe first plan expects to process 600000 rows in the sequential scan,\nand the second plan expects to process only one per lookup,\nso it looks like the statistics in the first database are out of date.\nDid you run vacuum analyse?
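\n\nSomething along these lines should show whether the stats are stale and then refresh them (just a sketch, untested here, with the table names taken from your mail):\n\n-- when were the two tables last analyzed, and how big does the planner think they are?\nSELECT relname, n_live_tup, last_analyze, last_autoanalyze\nFROM pg_stat_user_tables\nWHERE relname IN ('collection_data', 'virtual_ancestors');\n\nVACUUM ANALYZE collection_data;\nVACUUM ANALYZE virtual_ancestors;\n\nAfter that it would be interesting to see EXPLAIN (ANALYZE, BUFFERS) for the query again on the 9.2.5 box.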
", "msg_date": "Fri, 21 Mar 2014 09:17:39 +0100", "msg_from": "Vincent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow join not using index properly" }, { "msg_contents": "Hi Ilya-\n\nThanks so much for taking a stab at optimizing that query. I had to fiddle\na bit with your proposed version in order to get it to function. 
Here's what I\ncame up with in the end:\n\nwith RECURSIVE qq(cont_key, anc_key) AS\n> (\n> SELECT\n> a1.context_key, ancestor_key\n> FROM\n> virtual_ancestors a1\n> UNION (\n> SELECT\n> a1.context_key, a1.ancestor_key\n> FROM\n> virtual_ancestors a1, qq\n> WHERE\n> context_key > qq.cont_key\n> ORDER BY\n> context_key LIMIT 1\n> )\n> )\n> SELECT\n> distinct a.cont_key\n> FROM\n> qq a, collection_data, virtual_ancestors a2\n> WHERE\n> a.cont_key IS NOT NULL\n> AND a.anc_key = collection_data.context_key\n> AND collection_data.collection_context_key = a2.context_key\n> AND a2.ancestor_key = 1072173;\n\n\nI had to drop the MIN( a1.context_key ) and the LIMIT 1 off of the first\nselect statement in your version in order to avoid syntax issues or other\nerrors. The version above does produce the same counts as the original, but\nin the end it wasn't really a win for us. Here's the plan it produced:\n\nHashAggregate (cost=707724.36..707726.36 rows=200 width=4) (actual\n> time=27638.844..27639.706 rows=3522 loops=1)\n> Buffers: shared hit=79323, temp read=49378 written=47557\n> CTE qq\n> -> Recursive Union (cost=0.00..398869.78 rows=10814203 width=8)\n> (actual time=0.018..20196.397 rows=10821685 loops=1)\n> Buffers: shared hit=74449, temp read=49378 written=23779\n> -> Seq Scan on virtual_ancestors a1 (cost=0.00..182584.93\n> rows=10814193 width=8) (actual time=0.010..2585.411 rows=10821685 loops=1)\n> Buffers: shared hit=74443\n> -> Limit (cost=0.00..0.08 rows=1 width=8) (actual\n> time=7973.297..7973.298 rows=1 loops=1)\n> Buffers: shared hit=6, temp read=49378 written=1\n> -> Nested Loop (cost=0.00..30881281719119.79\n> rows=389822567470830 width=8) (actual time=7973.296..7973.296 rows=1\n> loops=1)\n> Join Filter: (a1.context_key > qq.cont_key)\n> Rows Removed by Join Filter: 22470607\n> Buffers: shared hit=6, temp read=49378 written=1\n> -> Index Scan using virtual_context_key_idx on\n> virtual_ancestors a1 (cost=0.00..18206859.46 rows=10814193 width=8)\n> (actual time=0.018..0.036 rows=3 loops=1)\n> Buffers: shared hit=6\n> -> WorkTable Scan on qq (cost=0.00..2162838.60\n> rows=108141930 width=4) (actual time=0.008..1375.445 rows=7490203 loops=3)\n> Buffers: temp read=49378 written=1\n> -> Hash Join (cost=25283.37..308847.31 rows=2905 width=4) (actual\n> time=449.167..27629.759 rows=13152 loops=1)\n> Hash Cond: (a.anc_key = collection_data.context_key)\n> Buffers: shared hit=79323, temp read=49378 written=47557\n> -> CTE Scan on qq a (cost=0.00..216284.06 rows=10760132\n> width=8) (actual time=0.021..25265.179 rows=10821685 loops=1)\n> Filter: (cont_key IS NOT NULL)\n> Buffers: shared hit=74449, temp read=49378 written=47557\n> -> Hash (cost=25282.14..25282.14 rows=98 width=4) (actual\n> time=373.836..373.836 rows=2109 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 75kB\n> Buffers: shared hit=4874\n> -> Hash Join (cost=17557.15..25282.14 rows=98 width=4)\n> (actual time=368.374..373.013 rows=2109 loops=1)\n> Hash Cond: (a2.context_key =\n> collection_data.collection_context_key)\n> Buffers: shared hit=4874\n> -> Index Only Scan using virtual_ancestors_pkey on\n> virtual_ancestors a2 (cost=0.00..238.57 rows=272 width=4) (actual\n> time=0.029..1.989 rows=1976 loops=1)\n> Index Cond: (ancestor_key = 1072173)\n> Heap Fetches: 917\n> Buffers: shared hit=883\n> -> Hash (cost=10020.40..10020.40 rows=602940\n> width=8) (actual time=368.057..368.057 rows=603066 loops=1)\n> Buckets: 65536 Batches: 1 Memory Usage:\n> 23558kB\n> Buffers: shared hit=3991\n> -> Seq Scan on collection_data\n> 
(cost=0.00..10020.40 rows=602940 width=8) (actual time=0.006..146.447\n> rows=603066 loops=1)\n> Buffers: shared hit=3991\n> Total runtime: 27854.200 ms\n\n\nI also tried including the MIN( a1.context_key ) in the first select\nstatement as you had written it, but upon doing that it became necessary to\nadd a GROUP BY clause, and doing that changed the final number of rows\nselected:\n\n> ERROR: column \"a1.ancestor_key\" must appear in the GROUP BY clause or be\n> used in an aggregate function\n> LINE 4: min( a1.context_key ), ancestor_key\n\n ^\n\nIncluding the LIMIT 1 at the end of the first select statement gave a\nsyntax error that I couldn't seem to get past, so I think it might be\ninvalid:\n\n> ERROR: syntax error at or near \"UNION\"\n> LINE 8: UNION (\n> ^\n\n\nSo I landed on the version that I posted above, which seems to select the\nsame set in all of the cases that I tried.\n\nAnyway, thanks again for taking a stab at helping, I do appreciate it. If\nyou have any other ideas that might be of help I'd certainly be happy to\nhear them.\n\nTake care,\n /Stefan\n\n\n\n\nOn Thu, Mar 20, 2014 at 11:02 PM, Ilya Kosmodemiansky <\[email protected]> wrote:\n\n> Hi Stefan!\n>\n> Probably you need to rewrite your query like this (check it first):\n>\n> with RECURSIVE qq(cont_key, anc_key) as\n> (\n> select min(a1.context_key), ancestor_key from virtual_ancestors a1\n> union select\n> (SELECT\n> a1.context_key, ancestor_key\n> FROM\n> virtual_ancestors a1 where context_key > cont_key order by\n> context_key limit 1) from qq where cont_key is not null\n> )\n> select a1.cont_key\n> from qq a1, collection_data, virtual_ancestors a2\n> WHERE\n> a1.anc_key = collection_data.context_key\n> AND collection_data.collection_context_key = a2.context_key\n> AND a2.ancestor_key = ?\n>\n> best regards,\n> Ilya\n>\n> On Fri, Mar 21, 2014 at 12:56 AM, Stefan Amshey <[email protected]>\n> wrote:\n> > We have a slow performing query that we are trying to improve, and it\n> > appears to be performing a sequential scan at a point where it should be\n> > utilizing an index. 
Can anyone tell me why postgres is opting to do it\n> this\n> > way?\n> >\n> > The original query is as follows:\n> >\n> > SELECT DISTINCT\n> > a1.context_key\n> > FROM\n> > virtual_ancestors a1, collection_data, virtual_ancestors a2\n> > WHERE\n> > a1.ancestor_key = collection_data.context_key\n> > AND collection_data.collection_context_key = a2.context_key\n> > AND a2.ancestor_key = ?\n> >\n> > The key relationships should all using indexed columns, but the query\n> plan\n> > that postgres comes up with ends up performing a sequential scan on the\n> > collection_data table (in this case about 602k rows) where we would have\n> > expected it to utilize the index:\n> >\n> > HashAggregate (cost=60905.73..60935.73 rows=3000 width=4) (actual\n> > time=3366.165..3367.354 rows=3492 loops=1)\n> > Buffers: shared hit=16291 read=1222\n> > -> Nested Loop (cost=17546.26..60898.23 rows=3000 width=4) (actual\n> > time=438.332..3357.918 rows=13037 loops=1)\n> > Buffers: shared hit=16291 read=1222\n> > -> Hash Join (cost=17546.26..25100.94 rows=98 width=4) (actual\n> > time=408.554..415.767 rows=2092 loops=1)\n> > Hash Cond: (a2.context_key =\n> > collection_data.collection_context_key)\n> > Buffers: shared hit=4850 read=3\n> > -> Index Only Scan using virtual_ancestors_pkey on\n> > virtual_ancestors a2 (cost=0.00..233.32 rows=270 width=4) (actual\n> > time=8.532..10.703 rows=1960 loops=1)\n> > Index Cond: (ancestor_key = 1072173)\n> > Heap Fetches: 896\n> > Buffers: shared hit=859 read=3\n> > -> Hash (cost=10015.56..10015.56 rows=602456 width=8)\n> > (actual time=399.708..399.708 rows=602570 loops=1)\n> > Buckets: 65536 Batches: 1 Memory Usage: 23538kB\n> > Buffers: shared hit=3991\n> > ######## sequential scan occurs here ##########\n> > -> Seq Scan on collection_data\n> (cost=0.00..10015.56\n> > rows=602456 width=8) (actual time=0.013..163.509 rows=602570 loops=1)\n> > Buffers: shared hit=3991\n> > -> Index Only Scan using virtual_ancestors_pkey on\n> > virtual_ancestors a1 (cost=0.00..360.70 rows=458 width=8) (actual\n> > time=1.339..1.403 rows=6 loops=2092)\n> > Index Cond: (ancestor_key = collection_data.context_key)\n> > Heap Fetches: 7067\n> > Buffers: shared hit=11441 read=1219\n> > Total runtime: 3373.058 ms\n> >\n> >\n> > The table definitions are as follows:\n> >\n> > Table \"public.virtual_ancestors\"\n> > Column | Type | Modifiers\n> > --------------+----------+-----------\n> > ancestor_key | integer | not null\n> > context_key | integer | not null\n> > degree | smallint | not null\n> > Indexes:\n> > \"virtual_ancestors_pkey\" PRIMARY KEY, btree (ancestor_key,\n> context_key)\n> > \"virtual_context_key_idx\" btree (context_key)\n> > Foreign-key constraints:\n> > \"virtual_ancestors_ancestor_key_fkey\" FOREIGN KEY (ancestor_key)\n> > REFERENCES contexts(context_key)\n> > \"virtual_ancestors_context_key_fkey\" FOREIGN KEY (context_key)\n> > REFERENCES contexts(context_key)\n> >\n> > Table \"public.collection_data\"\n> > Column | Type | Modifiers\n> > ------------------------+----------------------+-----------\n> > collection_context_key | integer | not null\n> > context_key | integer | not null\n> > type | character varying(1) | not null\n> > source | character varying(1) | not null\n> > Indexes:\n> > \"collection_data_context_key_idx\" btree (context_key)\n> > \"collection_data_context_key_index\" btree (collection_context_key)\n> > CLUSTER\n> > Foreign-key constraints:\n> > \"collection_data_collection_context_key_fkey\" FOREIGN KEY\n> > (collection_context_key) REFERENCES 
contexts(context_key) ON DELETE\n> CASCADE\n> > \"collection_data_context_key_fkey\" FOREIGN KEY (context_key)\n> REFERENCES\n> contexts(context_key) ON DELETE CASCADE\n> >\n> > Can anyone suggest a way that we can get postgres to use the\n> > collection_data_context_key_index properly? I thought that it might be\n> > related to the fact that collection_data_context_key_index is a\n> > CLUSTERED\n> > index, but we did some basic experimentation that seems to indicate\n> > otherwise, i.e. the bad plan persists despite re-clustering the index.\n> >\n> > We are using PostgreSQL 9.2.5 on x86_64-unknown-linux-gnu, compiled by\n> gcc\n> > (Debian 4.4.5-8) 4.4.5, 64-bit\n> >\n> > Interestingly, on an instance running PostgreSQL 9.2.4 on\n> > x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit\n> > where I copied the 2 tables over to a temporary database, the plan comes\n> out\n> > differently:\n> >\n> > HashAggregate (cost=39692.03..39739.98 rows=4795 width=4) (actual\n> > time=73.285..75.141 rows=3486 loops=1)\n> > Buffers: shared hit=22458\n> > -> Nested Loop (cost=0.00..39680.05 rows=4795 width=4) (actual\n> > time=0.077..63.116 rows=13007 loops=1)\n> > Buffers: shared hit=22458\n> > -> Nested Loop (cost=0.00..32823.38 rows=164 width=4) (actual\n> > time=0.056..17.685 rows=2084 loops=1)\n> > Buffers: shared hit=7529\n> > -> Index Only Scan using virtual_ancestors_pkey on\n> > virtual_ancestors a2 (cost=0.00..1220.85 rows=396 width=4) (actual\n> > time=0.025..2.732 rows=1954 loops=1)\n> > Index Cond: (ancestor_key = 1072173)\n> > Heap Fetches: 1954\n> > Buffers: shared hit=1397\n> > ######## Note the index scan here - this is what it SHOULD be doing\n> > ##############\n> > -> Index Scan using collection_data_context_key_index on\n> > collection_data (cost=0.00..79.24 rows=56 width=8) (actual\n> > time=0.004..0.005 rows=1 loops=1954)\n> > Index Cond: (collection_context_key =\n> a2.context_key)\n> > Buffers: shared hit=6132\n> > -> Index Only Scan using virtual_ancestors_pkey on\n> > virtual_ancestors a1 (cost=0.00..35.40 rows=641 width=8) (actual\n> > time=0.007..0.015 rows=6 loops=2084)\n> > Index Cond: (ancestor_key = collection_data.context_key)\n> > Heap Fetches: 13007\n> > Buffers: shared hit=14929\n> > Total runtime: 76.431 ms\n> >\n> > Why can't I get the Postgres 9.2.5 instance to use the optimal plan?\n> >\n> > Thanks in advance!\n> > /Stefan\n> >\n> > --\n> > -\n> > Stefan Amshey\n>\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>
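\n\nOne more shape we may try on our side, in case anyone sees a problem with it: a plain semi-join form of the original query, which should be equivalent to the DISTINCT three-way join (a sketch only, not yet tested here, with 1072173 again standing in for the parameter):\n\nSELECT DISTINCT a1.context_key\nFROM virtual_ancestors a1\nWHERE EXISTS (\n    SELECT 1\n    FROM collection_data cd\n    JOIN virtual_ancestors a2\n      ON a2.context_key = cd.collection_context_key\n    WHERE cd.context_key = a1.ancestor_key\n      AND a2.ancestor_key = 1072173);\n\nThe hope is that the EXISTS gives the planner more freedom to start from the a2 side and walk the collection_data indexes instead of hashing the whole table.\n\n-- \n-\nStefan Amshey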
+4915144336040\[email protected]\n-- -Stefan Amshey", "msg_date": "Mon, 24 Mar 2014 11:40:03 -0700", "msg_from": "Stefan Amshey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow join not using index properly" }, { "msg_contents": "Hi Stefan,\n\nstupid me - Ive missed some\n\nwith RECURSIVE qq(cont_key, anc_key) AS\n (\n\n SELECT\n a1.context_key, ancestor_key\n FROM\n virtual_ancestors a1\n UNION select ( -- here, in the union\n SELECT\n a1.context_key, a1.ancestor_key\n FROM\n virtual_ancestors a1\n WHERE\n a1.context_key > cont_key\n ORDER BY\n a1.context_key LIMIT 1\n )\nfrom qq where cont_key is not null -- and here\n )\n SELECT\n distinct a.cont_key\n FROM\n qq a, collection_data, virtual_ancestors a2\n WHERE\n a.cont_key IS NOT NULL\n AND a.anc_key = collection_data.context_key\n AND collection_data.collection_\n>\n> context_key = a2.context_key\n> AND a2.ancestor_key = 1072173;\n\nsorry for disorientating\n\nOn Mon, Mar 24, 2014 at 7:40 PM, Stefan Amshey <[email protected]> wrote:\n> Hi Ilya-\n>\n> Thanks so much for taking a stab at optimizing that query. I had to fiddle\n> a bit with your proposed version in order to get it function. Here's what I\n> came up with in the end:\n>>\n>> with RECURSIVE qq(cont_key, anc_key) AS\n>> (\n>>\n>> SELECT\n>> a1.context_key, ancestor_key\n>> FROM\n>> virtual_ancestors a1\n>> UNION (\n>> SELECT\n>> a1.context_key, a1.ancestor_key\n>> FROM\n>> virtual_ancestors a1, qq\n>> WHERE\n>> context_key > qq.cont_key\n>> ORDER BY\n>> context_key LIMIT 1\n>> )\n>> )\n>> SELECT\n>> distinct a.cont_key\n>> FROM\n>> qq a, collection_data, virtual_ancestors a2\n>> WHERE\n>> a.cont_key IS NOT NULL\n>> AND a.anc_key = collection_data.context_key\n>> AND collection_data.collection_context_key = a2.context_key\n>> AND a2.ancestor_key = 1072173;\n>\n>\n> I had to drop the MIN( a1.context_key ) and LIMIT 1 from your version off of\n> the first select statement in order to avoid syntax issues or other errors.\n> The version above does produce the same counts as the original, but in the\n> end it wasn't really a win for us. 
Here's the plan it produced:\n>\n>> HashAggregate (cost=707724.36..707726.36 rows=200 width=4) (actual\n>> time=27638.844..27639.706 rows=3522 loops=1)\n>> Buffers: shared hit=79323, temp read=49378 written=47557\n>> CTE qq\n>> -> Recursive Union (cost=0.00..398869.78 rows=10814203 width=8)\n>> (actual time=0.018..20196.397 rows=10821685 loops=1)\n>> Buffers: shared hit=74449, temp read=49378 written=23779\n>> -> Seq Scan on virtual_ancestors a1 (cost=0.00..182584.93\n>> rows=10814193 width=8) (actual time=0.010..2585.411 rows=10821685 loops=1)\n>> Buffers: shared hit=74443\n>> -> Limit (cost=0.00..0.08 rows=1 width=8) (actual\n>> time=7973.297..7973.298 rows=1 loops=1)\n>> Buffers: shared hit=6, temp read=49378 written=1\n>> -> Nested Loop (cost=0.00..30881281719119.79\n>> rows=389822567470830 width=8) (actual time=7973.296..7973.296 rows=1\n>> loops=1)\n>> Join Filter: (a1.context_key > qq.cont_key)\n>> Rows Removed by Join Filter: 22470607\n>> Buffers: shared hit=6, temp read=49378 written=1\n>> -> Index Scan using virtual_context_key_idx on\n>> virtual_ancestors a1 (cost=0.00..18206859.46 rows=10814193 width=8) (actual\n>> time=0.018..0.036 rows=3 loops=1)\n>> Buffers: shared hit=6\n>> -> WorkTable Scan on qq (cost=0.00..2162838.60\n>> rows=108141930 width=4) (actual time=0.008..1375.445 rows=7490203 loops=3)\n>> Buffers: temp read=49378 written=1\n>> -> Hash Join (cost=25283.37..308847.31 rows=2905 width=4) (actual\n>> time=449.167..27629.759 rows=13152 loops=1)\n>> Hash Cond: (a.anc_key = collection_data.context_key)\n>> Buffers: shared hit=79323, temp read=49378 written=47557\n>> -> CTE Scan on qq a (cost=0.00..216284.06 rows=10760132\n>> width=8) (actual time=0.021..25265.179 rows=10821685 loops=1)\n>> Filter: (cont_key IS NOT NULL)\n>> Buffers: shared hit=74449, temp read=49378 written=47557\n>> -> Hash (cost=25282.14..25282.14 rows=98 width=4) (actual\n>> time=373.836..373.836 rows=2109 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 75kB\n>> Buffers: shared hit=4874\n>> -> Hash Join (cost=17557.15..25282.14 rows=98 width=4)\n>> (actual time=368.374..373.013 rows=2109 loops=1)\n>>\n>> Hash Cond: (a2.context_key =\n>> collection_data.collection_context_key)\n>> Buffers: shared hit=4874\n>> -> Index Only Scan using virtual_ancestors_pkey on\n>> virtual_ancestors a2 (cost=0.00..238.57 rows=272 width=4) (actual\n>> time=0.029..1.989 rows=1976 loops=1)\n>>\n>> Index Cond: (ancestor_key = 1072173)\n>> Heap Fetches: 917\n>> Buffers: shared hit=883\n>> -> Hash (cost=10020.40..10020.40 rows=602940\n>> width=8) (actual time=368.057..368.057 rows=603066 loops=1)\n>> Buckets: 65536 Batches: 1 Memory Usage:\n>> 23558kB\n>> Buffers: shared hit=3991\n>> -> Seq Scan on collection_data\n>> (cost=0.00..10020.40 rows=602940 width=8) (actual time=0.006..146.447\n>> rows=603066 loops=1)\n>> Buffers: shared hit=3991\n>> Total runtime: 27854.200 ms\n>\n>\n> I also tried including the MIN( a1.context_key ) in the first select\n> statement as you had written it, but upon doing that it became necessary to\n> add a GROUP BY clause, and doing that changed the final number of rows\n> selected:\n>>\n>> ERROR: column \"a1.ancestor_key\" must appear in the GROUP BY clause or be\n>> used in an aggregate function\n>> LINE 4: min( a1.context_key ), ancestor_key\n>\n> ^\n>\n> Including the LIMIT 1 at the end of the first select statement gave a syntax\n> error that I couldn't seem to get past, so I think it might be invalid:\n>>\n>> ERROR: syntax error at or near \"UNION\"\n>> LINE 8: UNION (\n>> ^\n>\n>\n> So 
I landed on the version that I posted above, which seems to select the\n> same set in all of the cases that I tried.\n>\n> Anyway, thanks again for taking a stab at helping, I do appreciate it. If\n> you have any other ideas that might be of help I'd certainly be happy to\n> hear them.\n>\n> Take care,\n> /Stefan\n>\n>\n>\n>\n> On Thu, Mar 20, 2014 at 11:02 PM, Ilya Kosmodemiansky\n> <[email protected]> wrote:\n>>\n>> Hi Stefan!\n>>\n>> Probably you need to rewrite your query like this (check it first):\n>>\n>> with RECURSIVE qq(cont_key, anc_key) as\n>> (\n>> select min(a1.context_key), ancestor_key from virtual_ancestors a1\n>> union select\n>> (SELECT\n>> a1.context_key, ancestor_key\n>> FROM\n>> virtual_ancestors a1 where context_key > cont_key order by\n>> context_key limit 1) from qq where cont_key is not null\n>> )\n>> select a1.cont_key\n>> from qq a1, collection_data, virtual_ancestors a2\n>> WHERE\n>> a1.anc_key = collection_data.context_key\n>> AND collection_data.collection_context_key = a2.context_key\n>> AND a2.ancestor_key = ?\n>>\n>> best regards,\n>> Ilya\n>>\n>> On Fri, Mar 21, 2014 at 12:56 AM, Stefan Amshey <[email protected]>\n>> wrote:\n>> > We have a slow performing query that we are trying to improve, and it\n>> > appears to be performing a sequential scan at a point where it should be\n>> > utilizing an index. Can anyone tell me why postgres is opting to do it\n>> > this\n>> > way?\n>> >\n>> > The original query is as follows:\n>> >\n>> > SELECT DISTINCT\n>> > a1.context_key\n>> > FROM\n>> > virtual_ancestors a1, collection_data, virtual_ancestors a2\n>> > WHERE\n>> > a1.ancestor_key = collection_data.context_key\n>> > AND collection_data.collection_context_key = a2.context_key\n>> > AND a2.ancestor_key = ?\n>> >\n>> > The key relationships should all using indexed columns, but the query\n>> > plan\n>> > that postgres comes up with ends up performing a sequential scan on the\n>> > collection_data table (in this case about 602k rows) where we would have\n>> > expected it to utilize the index:\n>> >\n>> > HashAggregate (cost=60905.73..60935.73 rows=3000 width=4) (actual\n>> > time=3366.165..3367.354 rows=3492 loops=1)\n>> > Buffers: shared hit=16291 read=1222\n>> > -> Nested Loop (cost=17546.26..60898.23 rows=3000 width=4) (actual\n>> > time=438.332..3357.918 rows=13037 loops=1)\n>> > Buffers: shared hit=16291 read=1222\n>> > -> Hash Join (cost=17546.26..25100.94 rows=98 width=4)\n>> > (actual\n>> > time=408.554..415.767 rows=2092 loops=1)\n>> > Hash Cond: (a2.context_key =\n>> > collection_data.collection_context_key)\n>> > Buffers: shared hit=4850 read=3\n>> > -> Index Only Scan using virtual_ancestors_pkey on\n>> > virtual_ancestors a2 (cost=0.00..233.32 rows=270 width=4) (actual\n>> > time=8.532..10.703 rows=1960 loops=1)\n>> > Index Cond: (ancestor_key = 1072173)\n>> > Heap Fetches: 896\n>> > Buffers: shared hit=859 read=3\n>> > -> Hash (cost=10015.56..10015.56 rows=602456 width=8)\n>> > (actual time=399.708..399.708 rows=602570 loops=1)\n>> > Buckets: 65536 Batches: 1 Memory Usage: 23538kB\n>> > Buffers: shared hit=3991\n>> > ######## sequential scan occurs here ##########\n>> > -> Seq Scan on collection_data\n>> > (cost=0.00..10015.56\n>> > rows=602456 width=8) (actual time=0.013..163.509 rows=602570 loops=1)\n>> > Buffers: shared hit=3991\n>> > -> Index Only Scan using virtual_ancestors_pkey on\n>> > virtual_ancestors a1 (cost=0.00..360.70 rows=458 width=8) (actual\n>> > time=1.339..1.403 rows=6 loops=2092)\n>> > Index Cond: (ancestor_key = 
collection_data.context_key)\n>> > Heap Fetches: 7067\n>> > Buffers: shared hit=11441 read=1219\n>> > Total runtime: 3373.058 ms\n>> >\n>> >\n>> > The table definitions are as follows:\n>> >\n>> > Table \"public.virtual_ancestors\"\n>> > Column | Type | Modifiers\n>> > --------------+----------+-----------\n>> > ancestor_key | integer | not null\n>> > context_key | integer | not null\n>> > degree | smallint | not null\n>> > Indexes:\n>> > \"virtual_ancestors_pkey\" PRIMARY KEY, btree (ancestor_key,\n>> > context_key)\n>> > \"virtual_context_key_idx\" btree (context_key)\n>> > Foreign-key constraints:\n>> > \"virtual_ancestors_ancestor_key_fkey\" FOREIGN KEY (ancestor_key)\n>> > REFERENCES contexts(context_key)\n>> > \"virtual_ancestors_context_key_fkey\" FOREIGN KEY (context_key)\n>> > REFERENCES contexts(context_key)\n>> >\n>> > Table \"public.collection_data\"\n>> > Column | Type | Modifiers\n>> > ------------------------+----------------------+-----------\n>> > collection_context_key | integer | not null\n>> > context_key | integer | not null\n>> > type | character varying(1) | not null\n>> > source | character varying(1) | not null\n>> > Indexes:\n>> > \"collection_data_context_key_idx\" btree (context_key)\n>> > \"collection_data_context_key_index\" btree (collection_context_key)\n>> > CLUSTER\n>> > Foreign-key constraints:\n>> > \"collection_data_collection_context_key_fkey\" FOREIGN KEY\n>> > (collection_context_key) REFERENCES contexts(context_key) ON DELETE\n>> > CASCADE\n>> > \"collection_data_context_key_fkey\" FOREIGN KEY (context_key)\n>> > REFERENCES\n>> > contexts(context_key) ON DELETE CASCADE\n>> >\n>> > Can anyone suggest a way that we can get postgres to use the\n>> > collection_data_context_key_index properly? I thought that it might be\n>> > related to the fact that collection_data_context_key_index is a\n>> > CLUSTERED\n>> > index, but we did some basic experimentation that seems to indicate\n>> > otherwise, i.e. 
the bad plan persists despite re-clustering the index.\n>> >\n>> > We are using PostgreSQL 9.2.5 on x86_64-unknown-linux-gnu, compiled by\n>> > gcc\n>> > (Debian 4.4.5-8) 4.4.5, 64-bit\n>> >\n>> > Interestingly, on an instance running PostgreSQL 9.2.4 on\n>> > x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit\n>> > where I copied the 2 tables over to a temporary database, the plan comes\n>> > out\n>> > differently:\n>> >\n>> > HashAggregate (cost=39692.03..39739.98 rows=4795 width=4) (actual\n>> > time=73.285..75.141 rows=3486 loops=1)\n>> > Buffers: shared hit=22458\n>> > -> Nested Loop (cost=0.00..39680.05 rows=4795 width=4) (actual\n>> > time=0.077..63.116 rows=13007 loops=1)\n>> > Buffers: shared hit=22458\n>> > -> Nested Loop (cost=0.00..32823.38 rows=164 width=4) (actual\n>> > time=0.056..17.685 rows=2084 loops=1)\n>> > Buffers: shared hit=7529\n>> > -> Index Only Scan using virtual_ancestors_pkey on\n>> > virtual_ancestors a2 (cost=0.00..1220.85 rows=396 width=4) (actual\n>> > time=0.025..2.732 rows=1954 loops=1)\n>> > Index Cond: (ancestor_key = 1072173)\n>> > Heap Fetches: 1954\n>> > Buffers: shared hit=1397\n>> > ######## Note the index scan here - this is what it SHOULD be doing\n>> > ##############\n>> > -> Index Scan using collection_data_context_key_index on\n>> > collection_data (cost=0.00..79.24 rows=56 width=8) (actual\n>> > time=0.004..0.005 rows=1 loops=1954)\n>> > Index Cond: (collection_context_key =\n>> > a2.context_key)\n>> > Buffers: shared hit=6132\n>> > -> Index Only Scan using virtual_ancestors_pkey on\n>> > virtual_ancestors a1 (cost=0.00..35.40 rows=641 width=8) (actual\n>> > time=0.007..0.015 rows=6 loops=2084)\n>> > Index Cond: (ancestor_key = collection_data.context_key)\n>> > Heap Fetches: 13007\n>> > Buffers: shared hit=14929\n>> > Total runtime: 76.431 ms\n>> >\n>> > Why can't I get the Postgres 9.2.5 instance to use the optimal plan?\n>> >\n>> > Thanks in advance!\n>> > /Stefan\n>> >\n>> > --\n>> > -\n>> > Stefan Amshey\n>>\n>>\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n>> [email protected]\n>\n>\n>\n>\n> --\n> -\n> Stefan Amshey\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 08:05:51 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow join not using index properly" } ]
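A footnote on the "syntax error at or near UNION" Stefan hit above: in PostgreSQL, a UNION arm that carries its own ORDER BY or LIMIT must be wrapped in parentheses. Below is a minimal, untested sketch of the classic loose-index-scan shape with that parenthesization, reusing the column names from this thread; it walks distinct context_key values only, and carrying ancestor_key along as well would need a rejoin on context_key:

    WITH RECURSIVE qq(cont_key) AS (
        (SELECT context_key               -- parenthesized arm: the LIMIT
         FROM virtual_ancestors           -- binds to this branch only
         ORDER BY context_key
         LIMIT 1)
        UNION ALL
        SELECT (SELECT a1.context_key     -- scalar subquery: next key up
                FROM virtual_ancestors a1
                WHERE a1.context_key > qq.cont_key
                ORDER BY a1.context_key
                LIMIT 1)
        FROM qq
        WHERE qq.cont_key IS NOT NULL     -- stops after the last key
    )
    SELECT cont_key FROM qq WHERE cont_key IS NOT NULL;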
[ { "msg_contents": "Hi all,\n\nBrett Wooldridge, the creator of HikariCP [1] - a high performance\nJava connection pool - is contemplating the idea to change the way\npooling is done in HikariCP and have a fixed-size pool of connections\nalways open.\n\nNo maxPoolSize, no minIdle, no minPoolSize, juste a poolSize parameter\nwhich sets the size of the pool. At application startup, all the\nconnections are opened and maintained by the pool throughout the life\nof the application.\n\nThe basic idea is that if you decide that your application might need\n100 connections at time, you set poolSize to 100 and HikariCP\nmaintains 100 connections open.\n\nI recall very old posts on this list where people were talking about\ncode paths sensitive to the number of connections open (or even\nmax_connections) and that it wasn't such a good idea to keep\nconnections open if they were not really needed.\n\nAs a lot of scalability work has been done since this (very old) time,\nI was wondering if it was still the rule of thumb or if the idea of\nBrett to completely simplify the connection management is the way to\ngo.\n\nIt seems that at least another pool implementation is going this way\nso I thought it might be a good idea to have the opinion of the\ndatabase side of things. This way, it will be easier to take a well\ninformed decision.\n\nThanks in advance for your comments/advices.\n\n-- \nGuillaume\n\n[1] https://github.com/brettwooldridge/HikariCP\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 15:49:16 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": true, "msg_subject": "Connection pooling - Number of connections" }, { "msg_contents": "Guillaume Smet wrote\n> Brett Wooldridge, the creator of HikariCP [1] - a high performance\n> Java connection pool - is contemplating the idea to change the way\n> pooling is done in HikariCP and have a fixed-size pool of connections\n> always open.\n> \n> No maxPoolSize, no minIdle, no minPoolSize, juste a poolSize parameter\n> which sets the size of the pool. At application startup, all the\n> connections are opened and maintained by the pool throughout the life\n> of the application.\n> \n> The basic idea is that if you decide that your application might need\n> 100 connections at time, you set poolSize to 100 and HikariCP\n> maintains 100 connections open.\n> \n> I recall very old posts on this list where people were talking about\n> code paths sensitive to the number of connections open (or even\n> max_connections) and that it wasn't such a good idea to keep\n> connections open if they were not really needed.\n> \n> As a lot of scalability work has been done since this (very old) time,\n> I was wondering if it was still the rule of thumb or if the idea of\n> Brett to completely simplify the connection management is the way to\n> go.\n> \n> It seems that at least another pool implementation is going this way\n> so I thought it might be a good idea to have the opinion of the\n> database side of things. This way, it will be easier to take a well\n> informed decision.\n\nThe developer, not the pool implementer, is going to ultimately decide which\ntrade-offs to incur. 
Having a connection open, even if idle, consumes resources and costs some\nperformance, however minimal.\n\nPool management costs cycles as well, so if one does not need pool\nmanagement, getting rid of it is probably worthwhile. The\nquestion is whether you want to only meet the need of this specific user or\nwhether you want to provide them with flexibility. \n\nIf existing pool management implementations are reasonably well implemented\nand efficient then focusing effort on a potentially under-served use-case\ndefinitely has merit.\n\nConsider this train-of-thought: no matter how large the pool size if you\nare constantly keeping, say, 90% of the connections actively working then\nhaving, on average, 10% of the connections sitting idle is probably not\ngoing to be noticeable on the server and the reduction in overhead of\nmanaging a pool is typically a net positive. Now, I had no clue what\npercentage is actually true, or under what conditions and pool sizes it may\nvary, but that is a calculation that someone deciding on between managed and\nun-managed pools would need to make.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Connection-pooling-Number-of-connections-tp5797025p5797030.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 08:49:43 -0700 (PDT)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "On Fri, Mar 21, 2014 at 4:49 PM, David Johnston <[email protected]> wrote:\n> Consider this train-of-thought: no matter how large the pool size if you\n> are constantly keeping, say, 90% of the connections actively working then\n> having, on average, 10% of the connections sitting idle is probably not\n> going to be noticeable on the server and the reduction in overhead of\n> managing a pool is typically a net positive. 
Now, I had no clue what\n>> percentage is actually true, or under what conditions and pool sizes it may\n>> vary, but that is a calculation that someone deciding on between managed and\n>> un-managed pools would need to make.\n\n> Sure.\n\n> The big question is if it is suited for general purpose or if having\n> 100 connections open when 10 only are necessary at the time is causing\n> any unnecessary contention/spinlock issues/performance\n> overhead/whatever...\n\nIt will cost you, in ProcArray scans for example. But lots-of-idle-\nconnections is exactly what a pooler is supposed to prevent. If you have\na server that can handle say 10 active queries, you should have a pool\nsize of 10, not 100. (If you have a server that can actually handle\n100 active queries, I'd like to have your IT budget.)\n\nThe proposed design sounds fairly reasonable to me, as long as users are\nclear on how to set the pool size --- and in particular that bigger is\nnot better. Clueless users could definitely shoot themselves in the\nfoot, though.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 12:17:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "Reaching the maxPoolSize from the minPoolSize means creating the\nconnections at the crucial moment where the client application is in the\ndesperate need of completing an important query/transaction which the\nprimary responsibility since it cannot hold the data collected.\n\nSo here the connection creation action is the costliest among all the other\nmanagement tasks. so keeping the connections ready is the best option.\n\npoolSize parameter is very good in the sense when the application owner\nknow what is the optimal number to put, after having application\nperformance analysed with the history of previous settings and the\nimprovements made on it. server sizing always shows up in this sort of\nanalysis.\n\n\n\n\nOn Fri, Mar 21, 2014 at 9:47 PM, Tom Lane <[email protected]> wrote:\n\n> Guillaume Smet <[email protected]> writes:\n> > On Fri, Mar 21, 2014 at 4:49 PM, David Johnston <[email protected]>\n> wrote:\n> >> Consider this train-of-thought: no matter how large the pool size if\n> you\n> >> are constantly keeping, say, 90% of the connections actively working\n> then\n> >> having, on average, 10% of the connections sitting idle is probably not\n> >> going to be noticeable on the server and the reduction in overhead of\n> >> managing a pool is typically a net positive. Now, I had no clue what\n> >> percentage is actually true, or under what conditions and pool sizes it\n> may\n> >> vary, but that is a calculation that someone deciding on between\n> managed and\n> >> un-managed pools would need to make.\n>\n> > Sure.\n>\n> > The big question is if it is suited for general purpose or if having\n> > 100 connections open when 10 only are necessary at the time is causing\n> > any unnecessary contention/spinlock issues/performance\n> > overhead/whatever...\n>\n> It will cost you, in ProcArray scans for example. But lots-of-idle-\n> connections is exactly what a pooler is supposed to prevent. If you have\n> a server that can handle say 10 active queries, you should have a pool\n> size of 10, not 100. 
(If you have a server that can actually handle\n> 100 active queries, I'd like to have your IT budget.)\n>\n> The proposed design sounds fairly reasonable to me, as long as users are\n> clear on how to set the pool size --- and in particular that bigger is\n> not better. Clueless users could definitely shoot themselves in the\n> foot, though.\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 21 Mar 2014 23:21:57 +0530", "msg_from": "Sethu Prasad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "Hi Tom,\n\nOn Fri, Mar 21, 2014 at 5:17 PM, Tom Lane <[email protected]> wrote:\n> It will cost you, in ProcArray scans for example. But lots-of-idle-\n> connections is exactly what a pooler is supposed to prevent. 
If you have\n> a server that can handle say 10 active queries, you should have a pool\n> size of 10, not 100. (If you have a server that can actually handle\n> 100 active queries, I'd like to have your IT budget.)\n>\n> The proposed design sounds fairly reasonable to me, as long as users are\n> clear on how to set the pool size --- and in particular that bigger is\n> not better. Clueless users could definitely shoot themselves in the\n> foot, though.\n\nYeah, well.\n\nMy understanding of what happens in the field is that people usually\nset the pool size limit quite high because they don't want to\nexperience connection starvation even if there is a temporary slowdown\nof their application/database.\n\nIs the overhead of having 100 connections open noticeable, or is it\nsmall enough that keeping them around does no real harm?\n\nThanks.\n\n-- \nGuillaume\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 19:36:10 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "Hi Sethu,\n\nOn Fri, Mar 21, 2014 at 6:51 PM, Sethu Prasad <[email protected]> wrote:\n> So here the connection creation action is the costliest among all the other\n> management tasks. so keeping the connections ready is the best option.\n\nThat's why you often have a minIdle parameter which allows you to create\nidle connections in advance.\n\n> poolSize parameter is very good in the sense when the application owner know\n> what is the optimal number to put, after having application performance\n> analysed with the history of previous settings and the improvements made on\n> it. server sizing always shows up in this sort of analysis.\n\nThat assumes you actually do this analysis. From my experience, most of the \"not\nso demanding\" apps are put into production without this sort of\ndetailed analysis.\n\nYou do it for your critical high throughput applications, not for the others.\n\nThat said, interesting discussion. Not exactly what I expected.\n\n-- \nGuillaume\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 19:38:56 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "Sethu Prasad wrote\n> Reaching the maxPoolSize from the minPoolSize means creating the\n> connections at the crucial moment where the client application is in the\n> desperate need of completing an important query/transaction which the\n> primary responsibility since it cannot hold the data collected.\n\nOne query is slowed down a little in the unusual situation where not enough\npooled connections are available. To fix that, you want to slow down the\nentire server all of the time? Really? And even if this is sometimes the\nbest option, your assertion is unqualified, so do you really think this is\nbest for everyone, always?\n\nI think it is good to give developers options but if your situation is 10 /\n100 then a fixed 100 connection pool is probably not the best configuration.\n\nThe question I'd ask is: if you are developing a new driver, what problem and\nuse-case are you trying to accommodate? For those in the general case a\nresizing pool is probably the best bet. 
It will usually stabilize at\ntypical volume so that an optimum number of connections is maintained while\nstill allowing for some expansion in times of excess demand. A fixed-size\npool would be something an experienced user would decide they need based\nupon their usage patterns and need to eke out every last bit of performance\nin extreme situations while only trading a little bit of performance\nwhen the connections are not maxed out.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Connection-pooling-Number-of-connections-tp5797025p5797061.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 11:41:05 -0700 (PDT)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "On Fri, Mar 21, 2014 at 3:36 PM, Guillaume Smet\n<[email protected]> wrote:\n> On Fri, Mar 21, 2014 at 5:17 PM, Tom Lane <[email protected]> wrote:\n>> It will cost you, in ProcArray scans for example. But lots-of-idle-\n>> connections is exactly what a pooler is supposed to prevent. If you have\n>> a server that can handle say 10 active queries, you should have a pool\n>> size of 10, not 100. (If you have a server that can actually handle\n>> 100 active queries, I'd like to have your IT budget.)\n>>\n>> The proposed design sounds fairly reasonable to me, as long as users are\n>> clear on how to set the pool size --- and in particular that bigger is\n>> not better. Clueless users could definitely shoot themselves in the\n>> foot, though.\n>\n> Yeah, well.\n>\n> My understanding of what happens in the field is that people usually\n> set the pool size limit quite high because they don't want to\n> experience connection starvation even if there is a temporary slowdown\n> of their application/database.\n\nMy experience is that small transaction-mode connection pools used to\nserve very quick queries can sometimes not fully use the hardware if\nthe connections aren't set in autocommit mode on the client side,\nbecause the network roundtrips hold onto the server slot for a sizable\nportion of the lifecycle.\n\nSo, my recommendation: use protocol-level autocommit for read-only\nqueries and cores+spindles workers - that will use your hardware\nfully.\n\nOn Fri, Mar 21, 2014 at 3:41 PM, David Johnston <[email protected]> wrote:\n>> Reaching the maxPoolSize from the minPoolSize means creating the\n>> connections at the crucial moment where the client application is in the\n>> desperate need of completing an important query/transaction which the\n>> primary responsibility since it cannot hold the data collected.\n>\n> One query is slowed down a little in the unusual situation where not enough\n> pooled connections are available. To fix that you want to slow down the\n> entire server all of the time? Really? And even if this is sometimes the\n> best option your assertion is unqualified so do you really think this is\n> best for everyone, always?\n\nI don't think a variable connection pool makes any sense if you cast\ndeadlocks aside for a moment.\n\nThe only reason for connection starvation with a properly sized pool\nis hardware overload. 
When you have hardware overload, you really\ndon't want to throw more load at it, you want to let it cool down.\n\nThe solution, if you have many heavy, OLAP-style queries that block\nthe rest, is to have two pools, and size both so that you don't\noverload (or at least you overload in a controlled way) the server. Send the\nOLAP queries to one pool, and the OLTP queries to the other, and you\nguarantee a smooth flow, even if you cannot guarantee 100% hardware\nutilization and maximum throughput both.\n\nNow, if we consider deadlocks, you might have connection starvation\nbecause all of the connections are waiting for an operation that is\ndeadlocked in application code (the deadlock cannot occur at the DB\nlevel if you use transaction mode, but you can have open transactions\nwaiting on a mutex that is waiting for a connection). I've experienced\napplication-level deadlocks like these, and the solution for me has\nalways been to have a reasonable timeout and connections for a reserve\npool - when connections are waiting for more than X seconds (that one\nconsiders abnormal given the knowledge you have about load\ncharacteristics), grow the pool to free up resources and try to\ndislodge the deadlocked application threads.\n\nSometimes, regular overload triggers the reserve pool, so you cannot\nbe too generous on the number of connections you'll have in reserve.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 16:04:06 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "Hi, Brett Wooldridge here, one of the principals of HikariCP. I thought I'd\nwade into the conversation pool a little myself if you guys don't mind.\n\nSpeaking to David's point...\n>> Reaching the maxPoolSize from the minPoolSize means creating the \n>> connections at the crucial moment where the client application is in the \n>> desperate need of completing an important query/transaction which the \n>> primary responsibility since it cannot hold the data collected. \n\nThis was one of the reasons I was proposing the fixed pool design. In my\nexperience, even in pools that maintain a minimum number of idle\nconnections, responding to spike demands is problematic. If you have a pool\nwith say 30 max. connections, and a 10 minimum idle connection goal, a\nsudden spike demand for 20 connections means the pool can satisfy 10\ninstantly but then is left to [try to] establish 10 connections before the\napplication's connectionTimeout (read acquisition timeout from the pool) is\nreached. This in turn generates a spike demand on the database slowing down\nnot only the connection establishments themselves but also slowing down the\ncompletion of transactions that might actually return connections to the\npool.\n\nAs I think Tom noted in a slide deck I read somewhere, there is a \"knee\" in\nthe performance curve beyond which additional connections cause a drop in\nTPS. While users think it is a good idea to have 10 idle connections but a\nmaxPoolSize of 100, the reality is, they can retire/reuse connections faster\nwith a much smaller maxPoolSize. And I didn't see a pool of a few dozen\nconnections actually impacting performance much when half of them are idle\nand half are executing transactions (ie. 
the idle ones don't impact the\noverall performance much).\n\nFinally, one of my contentions was, either your database server has\nresources or it doesn't. Either it has enough memory and processing power\nfor N connections or it doesn't. If the pool is set below, near, or at that\ncapacity what is the purpose of releasing connections in that case? Yes, it\nfrees up memory, but that memory is not really available for other use given\nthat at any instant the maximum capacity of the pool may be demanded.\nInstead releasing resources only to try to reallocate them during a demand\npeak seems counter-productive.\n\n-Brett\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Connection-pooling-Number-of-connections-tp5797025p5797135.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 22 Mar 2014 04:26:34 -0700 (PDT)", "msg_from": "Brett Wooldridge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" } ]
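A practical way to ground the pool-size numbers Tom and Brett are trading here is to measure, on the server, how many backends are actually busy at peak. Below is a small sketch (PostgreSQL 9.2 or later, where pg_stat_activity exposes a state column); run it under production load and size a fixed pool near the observed 'active' count rather than near the old maxPoolSize:

    -- Counts backends by state: 'active' backends are executing a statement,
    -- 'idle' backends are merely holding a pooled connection open.
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;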
[ { "msg_contents": "Hi there,\n\nI've got a relatively simple query that contains expensive BCRYPT\nfunctions that gets optimized in a way that causes postgres to compute\nmore bcrypt hashes than necessary, thereby dramatically slowing things\ndown.\n\nIn a certain part of our application we need to lookup users by their\nusername, email address and password. Now we don't store plaintext\npasswords and so the query needs to compute bcrypt hashes on the fly:\n\n SELECT DISTINCT u.*\n FROM auth_user u\n JOIN bb_userprofile p ON p.user_id = u.id\n JOIN bb_identity i ON i.profile_id = p.id\n WHERE\n (\n (\n u.username ILIKE 'detkin'\n OR\n i.email ILIKE '[email protected]'\n )\n AND\n (\n SUBSTRING(password FROM 8) = CRYPT(\n 'detkin', SUBSTRING(password FROM 8))\n )\n )\n\nThese queries are generated by a parser that translates from an\nexternal query language to SQL run on the database. This test db\ncontains 12 user records.\n\nWith a single bcrypt hash taking ~300ms to compute, this is a recipe\nfor disaster and so the app only allows queries that require only a\nvery small number of bcrypt computation.\n\nE.g. the user must always \"AND\" the password lookup with a clause like\n\" username = 'foo' AND password = 'bar'\" (username is unique).\n\nHowever, while the query above technically only needs to compute 1\nhash (there is a user 'detkin' and email '[email protected]' does not\nexist), it instead creates a query plan that computes hashes *before*\nfiltering on username and email, leading to 12 hash computations and a\nvery slow query.\n\nThe EXPLAIN (ANALYZE, BUFFERS) is here: http://explain.depesz.com/s/yhE\n\nThe schemas for the 3 tables involved are here:\nhttp://pgsql.privatepaste.com/f72020ad0a\n\nAs a quick experiment I tried moving the joins and email lookup into a\nnested IN query, but that still generates a plan that computes hashes\nfor all 12 users, before picking out the 1 whose username matches.\n\nIs there any way I can get postgres to perform the hash calculations\non the *result* of the other parts of the where clause, instead of the\nother way around? Or else rewrite the query?\n\nCheers,\nErik\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Mar 2014 17:59:19 -0700", "msg_from": "Erik van Zijst <[email protected]>", "msg_from_op": true, "msg_subject": "Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "On Fri, Mar 21, 2014 at 5:59 PM, Erik van Zijst <[email protected]>wrote:\n\n> Hi there,\n>\n> I've got a relatively simple query that contains expensive BCRYPT\n> functions that gets optimized in a way that causes postgres to compute\n> more bcrypt hashes than necessary, thereby dramatically slowing things\n> down.\n>\n> In a certain part of our application we need to lookup users by their\n> username, email address and password. Now we don't store plaintext\n> passwords and so the query needs to compute bcrypt hashes on the fly:\n>\n> SELECT DISTINCT u.*\n> FROM auth_user u\n> JOIN bb_userprofile p ON p.user_id = u.id\n> JOIN bb_identity i ON i.profile_id = p.id\n> WHERE\n> (\n> (\n> u.username ILIKE 'detkin'\n> OR\n> i.email ILIKE '[email protected]'\n> )\n> AND\n> (\n> SUBSTRING(password FROM 8) = CRYPT(\n> 'detkin', SUBSTRING(password FROM 8))\n> )\n> )\n>\n> These queries are generated by a parser that translates from an\n> external query language to SQL run on the database. 
This test db\n> contains 12 user records.\n>\n> With a single bcrypt hash taking ~300ms to compute, this is a recipe\n> for disaster and so the app only allows queries that require only a\n> very small number of bcrypt computation.\n>\n> E.g. the user must always \"AND\" the password lookup with a clause like\n> \" username = 'foo' AND password = 'bar'\" (username is unique).\n>\n> However, while the query above technically only needs to compute 1\n> hash (there is a user 'detkin' and email '[email protected]' does not\n> exist), it instead creates a query plan that computes hashes *before*\n> filtering on username and email, leading to 12 hash computations and a\n> very slow query.\n>\n> The EXPLAIN (ANALYZE, BUFFERS) is here: http://explain.depesz.com/s/yhE\n>\n> The schemas for the 3 tables involved are here:\n> http://pgsql.privatepaste.com/f72020ad0a\n>\n> As a quick experiment I tried moving the joins and email lookup into a\n> nested IN query, but that still generates a plan that computes hashes\n> for all 12 users, before picking out the 1 whose username matches.\n>\n> Is there any way I can get postgres to perform the hash calculations\n> on the *result* of the other parts of the where clause, instead of the\n> other way around? Or else rewrite the query?\n>\n> Cheers,\n> Erik\n>\n\n(untested), but how about something like the following:\n\nWITH au AS (\n    SELECT DISTINCT u.*\n    FROM auth_user u\n    JOIN bb_userprofile p ON p.user_id = u.id\n    JOIN bb_identity i ON i.profile_id = p.id\n    WHERE u.username ILIKE 'detkin'\n    OR i.email ILIKE '[email protected]')\n\nSELECT au.*\nFROM au\nWHERE SUBSTRING(au.password FROM 8) = CRYPT('detkin', SUBSTRING(au.password\nFROM 8));\n", "msg_date": "Sat, 22 Mar 2014 13:51:38 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "Yes, that works (it does at least on my small test database).\n\nHowever, these queries are generated by a parser that translates\ncomplex parse trees from a higher level DSL that doesn't lend itself\nwell to logically isolating the crypt checks from the remaining\nconditions, as password checks might be present at arbitrary depths.\n\nFor example:\n\n    (\n      active eq true\n      AND\n      (\n        password eq \"foo\"\n        OR\n        password eq \"bar\"\n      )\n    )\n    AND\n    (\n      username eq \"erik\"\n      OR\n      email contains \"bar\"\n    )\n\nCurrently the SQL generator translates each AST node into individual\npredicates that straightforwardly concatenate into a single SQL WHERE\nclause. For this to work, the individual nodes should compose well. I\ndon't immediately see how the above query could be automatically\ntranslated into SQL when taking the WITH-AS approach.\n\nI could nonetheless take a stab at it, but life would certainly be\neasier if I could translate each component independently and leave\noptimization to the query planner.\n\nCheers,\nErik\n\n\nOn Sat, Mar 22, 2014 at 1:51 PM, bricklen <[email protected]> wrote:\n>\n> On Fri, Mar 21, 2014 at 5:59 PM, Erik van Zijst <[email protected]>\n> wrote:\n>>\n>> Hi there,\n>>\n>> I've got a relatively simple query that contains expensive BCRYPT\n>> functions that gets optimized in a way that causes postgres to compute\n>> more bcrypt hashes than necessary, thereby dramatically slowing things\n>> down.\n>>\n>> In a certain part of our application we need to lookup users by their\n>> username, email address and password. 
Now we don't store plaintext\n>> passwords and so the query needs to compute bcrypt hashes on the fly:\n>>\n>> SELECT DISTINCT u.*\n>> FROM auth_user u\n>> JOIN bb_userprofile p ON p.user_id = u.id\n>> JOIN bb_identity i ON i.profile_id = p.id\n>> WHERE\n>> (\n>> (\n>> u.username ILIKE 'detkin'\n>> OR\n>> i.email ILIKE '[email protected]'\n>> )\n>> AND\n>> (\n>> SUBSTRING(password FROM 8) = CRYPT(\n>> 'detkin', SUBSTRING(password FROM 8))\n>> )\n>> )\n>>\n>> These queries are generated by a parser that translates from an\n>> external query language to SQL run on the database. This test db\n>> contains 12 user records.\n>>\n>> With a single bcrypt hash taking ~300ms to compute, this is a recipe\n>> for disaster and so the app only allows queries that require only a\n>> very small number of bcrypt computation.\n>>\n>> E.g. the user must always \"AND\" the password lookup with a clause like\n>> \" username = 'foo' AND password = 'bar'\" (username is unique).\n>>\n>> However, while the query above technically only needs to compute 1\n>> hash (there is a user 'detkin' and email '[email protected]' does not\n>> exist), it instead creates a query plan that computes hashes *before*\n>> filtering on username and email, leading to 12 hash computations and a\n>> very slow query.\n>>\n>> The EXPLAIN (ANALYZE, BUFFERS) is here: http://explain.depesz.com/s/yhE\n>>\n>> The schemas for the 3 tables involved are here:\n>> http://pgsql.privatepaste.com/f72020ad0a\n>>\n>> As a quick experiment I tried moving the joins and email lookup into a\n>> nested IN query, but that still generates a plan that computes hashes\n>> for all 12 users, before picking out the 1 whose username matches.\n>>\n>> Is there any way I can get postgres to perform the hash calculations\n>> on the *result* of the other parts of the where clause, instead of the\n>> other way around? Or else rewrite the query?\n>>\n>> Cheers,\n>> Erik\n>\n>\n> (untested), but how about something like the following:\n>\n> WITH au AS (\n>\n> SELECT DISTINCT u.*\n> FROM auth_user u\n> JOIN bb_userprofile p ON p.user_id = u.id\n> JOIN bb_identity i ON i.profile_id = p.id\n> WHERE u.username ILIKE 'detkin'\n>\n> OR i.email ILIKE '[email protected]')\n>\n> SELECT au.*\n> FROM au\n> WHERE SUBSTRING(au.password FROM 8) = CRYPT('detkin', SUBSTRING(au.password\n> FROM 8));\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 22 Mar 2014 15:27:41 -0700", "msg_from": "Erik van Zijst <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "On Sat, Mar 22, 2014 at 3:27 PM, Erik van Zijst <[email protected]>wrote:\n\n> Yes, that works (it does at least on my small test database).\n>\n> However, these queries are generated by a parser that translates\n> complex parse trees from a higher level DSL that doesn't lend itself\n> well to logically isolating the crypt checks from the remaining\n> conditions, as password checks might be present at arbitrary depths.\n>\n> For example:\n>\n> (\n> active eq true\n> AND\n> (\n> password eq \"foo\"\n> OR\n> password eq \"bar\"\n> )\n> )\n> AND\n> (\n> username eq \"erik\"\n> OR\n> email contains \"bar\"\n> )\n>\n> Currently the SQL generator translates each AST node into individual\n> predicates that straightforwardly concatenate into a single SQL WHERE\n> clause. 
For this to work, the individual nodes should compose well. I\n> don't immediately see how the above query could be automatically\n> translated into SQL when taking the WITH-AS approach.\n>\n> I could nonetheless take a stab at it, but life would certainly be\n> easier if I could translate each component independently and leave\n> optimization to the query planner.\n>\n\n\nHow about encapsulating the revised query inside a db function? That\nsimplifies the query for your query generator to something like \"select\nx,y,z from your_func(p_user,p_email,p_crypt)\"\n\nOn Sat, Mar 22, 2014 at 3:27 PM, Erik van Zijst <[email protected]> wrote:\nYes, that works (it does at least on my small test database).\n\nHowever, these queries are generated by a parser that translates\ncomplex parse trees from a higher level DSL that doesn't lend itself\nwell to logically isolating the crypt checks from the remaining\nconditions, as password checks might be present at arbitrary depths.\n\nFor example:\n\n    (\n      active eq true\n      AND\n      (\n        password eq \"foo\"\n        OR\n        password eq \"bar\"\n      )\n    )\n    AND\n    (\n      username eq \"erik\"\n      OR\n      email contains \"bar\"\n    )\n\nCurrently the SQL generator translates each AST node into individual\npredicates that straightforwardly concatenate into a single SQL WHERE\nclause. For this to work, the individual nodes should compose well. I\ndon't immediately see how the above query could be automatically\ntranslated into SQL when taking the WITH-AS approach.\n\nI could nonetheless take a stab at it, but life would certainly be\neasier if I could translate each component independently and leave\noptimization to the query planner.How about encapsulating the revised query inside a db function? That simplifies the query for your query generator to something like \"select x,y,z from your_func(p_user,p_email,p_crypt)\"", "msg_date": "Sat, 22 Mar 2014 15:56:31 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "On Sat, Mar 22, 2014 at 3:56 PM, bricklen <[email protected]> wrote:\n> On Sat, Mar 22, 2014 at 3:27 PM, Erik van Zijst <[email protected]>\n>> I could nonetheless take a stab at it, but life would certainly be\n>> easier if I could translate each component independently and leave\n>> optimization to the query planner.\n>\n> How about encapsulating the revised query inside a db function? That\n> simplifies the query for your query generator to something like \"select\n> x,y,z from your_func(p_user,p_email,p_crypt)\"\n\nI'm not really sure I understand how a db function would make things\neasier. 
What would the implementation for your_func() be and what\nwould the SQL look like for the DSL example which contains multiple\npassword checks?\n\nCheers,\nErik\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 22 Mar 2014 20:37:28 -0700", "msg_from": "Erik van Zijst <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "On Sat, Mar 22, 2014 at 8:37 PM, Erik van Zijst <[email protected]>wrote:\n\n> On Sat, Mar 22, 2014 at 3:56 PM, bricklen <[email protected]> wrote:\n> > On Sat, Mar 22, 2014 at 3:27 PM, Erik van Zijst <\n> [email protected]>\n> >> I could nonetheless take a stab at it, but life would certainly be\n> >> easier if I could translate each component independently and leave\n> >> optimization to the query planner.\n> >\n> > How about encapsulating the revised query inside a db function? That\n> > simplifies the query for your query generator to something like \"select\n> > x,y,z from your_func(p_user,p_email,p_crypt)\"\n>\n> I'm not really sure I understand how a db function would make things\n> easier. What would the implementation for your_func() be and what\n> would the SQL look like for the DSL example which contains multiple\n> password checks?\n>\n\nI just reread your previous post about the checks being at potentially\narbitrary depths. In that case, the function may or may not help. Without a\nrepresentative database to test with I can't say one way or the other.\nPerhaps someone else will have some other ideas of what could be useful\nhere.\n", "msg_date": "Sat, 22 Mar 2014 20:56:09 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "bricklen <[email protected]> writes:\n> Perhaps someone else will have some other ideas of what could be useful\n> here.\n\nMaybe I'm missing something ... 
but isn't the OP's query completely bogus?\n\n SELECT DISTINCT u.*\n FROM auth_user u\n JOIN bb_userprofile p ON p.user_id = u.id\n JOIN bb_identity i ON i.profile_id = p.id\n WHERE\n (\n (\n u.username ILIKE 'detkin'\n OR\n i.email ILIKE 'foo(at)example(dot)com'\n )\n AND\n (\n SUBSTRING(password FROM 8) = CRYPT(\n 'detkin', SUBSTRING(password FROM 8))\n )\n )\n\nGranting that there are not chance collisions of password hashes (which\nwould surely be a bad thing if there were), success of the second AND arm\nmeans that we are on user detkin's row of auth_user. Therefore the OR\nbusiness is entirely nonfunctional: if the password test passes, then\nthe u.username ILIKE 'detkin' clause succeeds a fortiori, while if the\npassword test fails, it hardly matters what i.email is, because the WHERE\nclause as a whole fails. Ergo, the whole WHERE clause might as well just\nbe written \"u.username = 'detkin'\". If it were a RIGHT JOIN rather than\njust a JOIN, this argument would fail ... but as written, the query\nmakes little sense.\n\nI'll pass gently over the question of whether the password test as shown\ncould ever succeed at all.\n\nI suppose we've been shown a lobotomized version of the real logic,\nbut it's hard to give advice in such situations.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 23 Mar 2014 02:40:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "On Sat, Mar 22, 2014 at 11:40 PM, Tom Lane <[email protected]> wrote:\n> Maybe I'm missing something ... but isn't the OP's query completely bogus?\n>\n> SELECT DISTINCT u.*\n> FROM auth_user u\n> JOIN bb_userprofile p ON p.user_id = u.id\n> JOIN bb_identity i ON i.profile_id = p.id\n> WHERE\n> (\n> (\n> u.username ILIKE 'detkin'\n> OR\n> i.email ILIKE 'foo(at)example(dot)com'\n> )\n> AND\n> (\n> SUBSTRING(password FROM 8) = CRYPT(\n> 'detkin', SUBSTRING(password FROM 8))\n> )\n> )\n>\n> Granting that there are not chance collisions of password hashes (which\n> would surely be a bad thing if there were),\n\nWould it?\n\nAny hashing system is inherently open to collision (although you're\nmore likely to find 2 identical snowflakes), but how does that affect\nour situation? It means you simply would have found another password\nfor that user that is just as valid. 
The system will accept it.\n\n> success of the second AND arm\n> means that we are on user detkin's row of auth_user.\n\nMy password could be 'detkin' too, but my username is 'erik'.\n\n> Therefore the OR\n> business is entirely nonfunctional: if the password test passes, then\n> the u.username ILIKE 'detkin' clause succeeds a fortiori, while if the\n> password test fails, it hardly matters what i.email is, because the WHERE\n> clause as a whole fails.\n\nMy email could be '[email protected]', my username 'erik' and my\npassword 'detkin'.\n\nUsers are identified through their unique username or email address.\nPasswords are not unique.\n\n> I suppose we've been shown a lobotomized version of the real logic,\n> but it's hard to give advice in such situations.\n\nThis is an actual query taken from the system.\n\nCheers,\nErik\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 23 Mar 2014 11:30:21 -0700", "msg_from": "Erik van Zijst <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "On 03/22/2014 02:59 AM, Erik van Zijst wrote:\n> Is there any way I can get postgres to perform the hash calculations\n> on the *result* of the other parts of the where clause, instead of the\n> other way around? Or else rewrite the query?\n\nThe planner doesn't know that the crypt function is expensive. That can \nbe fixed with \"ALTER FUNCTION crypt(text, text) COST <high value>\". Even \nwith that, I'm not sure if the planner is smart enough to optimize the \nquery the way you'd want, but it's worth a try.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Mar 2014 09:08:46 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" }, { "msg_contents": "On Mon, Mar 24, 2014 at 12:08 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 03/22/2014 02:59 AM, Erik van Zijst wrote:\n>>\n>> Is there any way I can get postgres to perform the hash calculations\n>> on the *result* of the other parts of the where clause, instead of the\n>> other way around? Or else rewrite the query?\n>\n>\n> The planner doesn't know that the crypt function is expensive. That can be\n> fixed with \"ALTER FUNCTION crypt(text, text) COST <high value>\". Even with\n> that, I'm not sure if the planner is smart enough to optimize the query the\n> way you'd want, but it's worth a try.\n\nThanks. That's the kind of hint I was looking for.\n\nI just gave it a go, but unfortunately it's not enough to get the\noptimizer to change the plan, regardless of the cost value.\n\nCheers,\nErik\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Mar 2014 10:37:53 -0700", "msg_from": "Erik van Zijst <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suboptimal query plan when using expensive BCRYPT functions" } ]
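Since the higher function cost alone did not change the plan for Erik, one more lever that existed in the 9.x planner: put the cheap username/email filters in a subquery behind an OFFSET 0 fence. A subquery with an OFFSET clause is neither flattened nor has outer quals pushed down into it, so crypt() should only run on the rows that survive the cheap filters. An untested sketch against the schema from this thread; unlike the CTE form, this shape nests, so a query generator can wrap any subtree the same way:

    SELECT *
    FROM (
        SELECT DISTINCT u.*
        FROM auth_user u
        JOIN bb_userprofile p ON p.user_id = u.id
        JOIN bb_identity i ON i.profile_id = p.id
        WHERE u.username ILIKE 'detkin'
           OR i.email ILIKE '[email protected]'
        OFFSET 0  -- planner fence: keeps this subquery from being pulled up
    ) AS candidates
    WHERE SUBSTRING(password FROM 8) =
          CRYPT('detkin', SUBSTRING(password FROM 8));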
[ { "msg_contents": "Hi, Brett Wooldridge here, one of the principals of HikariCP. I thought\nI'd wade into the conversation pool a little myself if you guys don't mind.\n\nSpeaking to David's point...\n>> Reaching the maxPoolSize from the minPoolSize means creating the\n>> connections at the crucial moment where the client application is in the\n>> desperate need of completing an important query/transaction which the\n>> primary responsibility since it cannot hold the data collected.\n\nThis was one of the reasons I was proposing the fixed pool design. In my\nexperience, even in pools that maintain a minimum number of idle\nconnections, responding to spike demands is problematic. If you have a\npool with say 30 max. connections, and a 10 minimum idle connection goal, a\nsudden spike demand for 20 connections means the pool can satisfy 10\ninstantly but then is left to [try to] establish 10 connections before the\napplication's connectionTimeout (read acquisition timeout from the pool) is\nreached. This in turn generates a spike demand on the database slowing\ndown not only the connection establishments themselves but also slowing\ndown the completion of transactions that might actually return connections\nto the pool.\n\nAs I think Tom noted is a slidestack I read somewhere, there is a \"knee\" in\nthe performance curve beyond which additional connections cause a drop in\nTPS. While users often think it is a good idea to have maxPoolSize of 100,\nthe reality is they can retire/reuse connections faster with a much smaller\npool. I didn't see a pool of a 2 or 3 dozen connections actually impacting\nperformance much when half of them are idle and half are executing\ntransactions (ie. the idle ones don't impact the overall performance much).\n\nFinally, one of my contentions was, either your database server has\nresources or it doesn't. Either it has enough memory and processing power\nfor N connections or it doesn't. If the pool is set below, near, or at\nthat capacity what is the purpose of releasing connections in that case?\n Yes, it frees up memory, but that memory is not really available for other\nuse given that at any instant the maximum capacity of the pool may be\ndemanded.\nInstead releasing resources only to try to reallocate them during a demand\npeak seems counter-productive.\n\nI'd appreciate any shared thoughts on my presuppositions.\n\n-Brett\n\nHi, Brett Wooldridge here, one of the principals of HikariCP.  I thought I'd wade into the conversation pool a little myself if you guys don't mind. \nSpeaking to David's point... \n>> Reaching the maxPoolSize from the minPoolSize means creating the \n>> connections at the crucial moment where the client application is in the \n>> desperate need of completing an important query/transaction which the \n>> primary responsibility since it cannot hold the data collected. \nThis was one of the reasons I was proposing the fixed pool design.  In my experience, even in pools that maintain a minimum number of idle connections, responding to spike demands is problematic.  If you have a pool with say 30 max. connections, and a 10 minimum idle connection goal, a sudden spike demand for 20 connections means the pool can satisfy 10 instantly but then is left to [try to] establish 10 connections before the application's connectionTimeout (read acquisition timeout from the pool) is reached.  
This in turn generates a spike demand on the database slowing down not only the connection establishments themselves but also slowing down the completion of transactions that might actually return connections to the pool. \nAs I think Tom noted is a slidestack I read somewhere, there is a \"knee\" in the performance curve beyond which additional connections cause a drop in TPS.  While users often think it is a good idea to have maxPoolSize of 100, the reality is they can retire/reuse connections faster with a much smaller pool.  I didn't see a pool of a 2 or 3 dozen connections actually impacting performance much when half of them are idle and half are executing transactions (ie. the idle ones don't impact the overall performance much). \nFinally, one of my contentions was, either your database server has resources or it doesn't.  Either it has enough memory and processing power for N connections or it doesn't.  If the pool is set below, near, or at that capacity what is the purpose of releasing connections in that case?  Yes, it frees up memory, but that memory is not really available for other use given that at any instant the maximum capacity of the pool may be demanded. \nInstead releasing resources only to try to reallocate them during a demand peak seems counter-productive. \nI'd appreciate any shared thoughts on my presuppositions.-Brett", "msg_date": "Mon, 24 Mar 2014 22:27:10 +0900", "msg_from": "Brett Wooldridge <[email protected]>", "msg_from_op": true, "msg_subject": "Connection pooling - Number of connections" }, { "msg_contents": "On 25/03/14 02:27, Brett Wooldridge wrote:\n> Hi, Brett Wooldridge here, one of the principals of HikariCP. I \n> thought I'd wade into the conversation pool a little myself if you \n> guys don't mind.\n>\n> Speaking to David's point...\n> >> Reaching the maxPoolSize from the minPoolSize means creating the\n> >> connections at the crucial moment where the client application is in the\n> >> desperate need of completing an important query/transaction which the\n> >> primary responsibility since it cannot hold the data collected.\n>\n> This was one of the reasons I was proposing the fixed pool design. In \n> my experience, even in pools that maintain a minimum number of idle \n> connections, responding to spike demands is problematic. If you have \n> a pool with say 30 max. connections, and a 10 minimum idle connection \n> goal, a sudden spike demand for 20 connections means the pool can \n> satisfy 10 instantly but then is left to [try to] establish 10 \n> connections before the application's connectionTimeout (read \n> acquisition timeout from the pool) is reached. This in turn generates \n> a spike demand on the database slowing down not only the connection \n> establishments themselves but also slowing down the completion of \n> transactions that might actually return connections to the pool.\n>\n> As I think Tom noted is a slidestack I read somewhere, there is a \n> \"knee\" in the performance curve beyond which additional connections \n> cause a drop in TPS. While users often think it is a good idea to \n> have maxPoolSize of 100, the reality is they can retire/reuse \n> connections faster with a much smaller pool. I didn't see a pool of a \n> 2 or 3 dozen connections actually impacting performance much when half \n> of them are idle and half are executing transactions (ie. the idle \n> ones don't impact the overall performance much).\n>\n> Finally, one of my contentions was, either your database server has \n> resources or it doesn't. 
Either it has enough memory and processing \n> power for N connections or it doesn't. If the pool is set below, \n> near, or at that capacity what is the purpose of releasing connections \n> in that case? Yes, it frees up memory, but that memory is not really \n> available for other use given that at any instant the maximum capacity \n> of the pool may be demanded.\n> Instead releasing resources only to try to reallocate them during a \n> demand peak seems counter-productive.\n>\n> I'd appreciate any shared thoughts on my presuppositions.\n>\n> -Brett\n>\nSurely no code changes are required, as one can simply set the min and \nmax pool sizes to be the same?\n\n\nCheers,\nGavin\n\n\n\n\n\n\n\nOn 25/03/14 02:27, Brett Wooldridge\n wrote:\n\n\nHi,\n Brett Wooldridge here, one of the principals of HikariCP.  I\n thought I'd wade into the conversation pool a little myself if\n you guys don't mind. \n\nSpeaking\n to David's point... \n>>\n Reaching the maxPoolSize from the minPoolSize means creating\n the \n>>\n connections at the crucial moment where the client application\n is in the \n>>\n desperate need of completing an important query/transaction\n which the \n>>\n primary responsibility since it cannot hold the data\n collected. \n\nThis\n was one of the reasons I was proposing the fixed pool design.\n  In my experience, even in pools that maintain a minimum\n number of idle connections, responding to spike demands is\n problematic.  If you have a pool with say 30 max. connections,\n and a 10 minimum idle connection goal, a sudden spike demand\n for 20 connections means the pool can satisfy 10 instantly but\n then is left to [try to] establish 10 connections before the\n application's connectionTimeout (read acquisition timeout from\n the pool) is reached.  This in turn generates a spike demand\n on the database slowing down not only the connection\n establishments themselves but also slowing down the completion\n of transactions that might actually return connections to the\n pool. \n\nAs\n I think Tom noted is a slidestack I read somewhere, there is a\n \"knee\" in the performance curve beyond which additional\n connections cause a drop in TPS.  While users often think it\n is a good idea to have maxPoolSize of 100, the reality is they\n can retire/reuse connections faster with a much smaller pool.\n  I didn't see a pool of a 2 or 3 dozen connections actually\n impacting performance much when half of them are idle and half\n are executing transactions (ie. the idle ones don't impact the\n overall performance much). \n\nFinally,\n one of my contentions was, either your database server has\n resources or it doesn't.  Either it has enough memory and\n processing power for N connections or it doesn't.  If the pool\n is set below, near, or at that capacity what is the purpose of\n releasing connections in that case?  Yes, it frees up memory,\n but that memory is not really available for other use given\n that at any instant the maximum capacity of the pool may be\n demanded. \nInstead\n releasing resources only to try to reallocate them during a\n demand peak seems counter-productive. 
{ "msg_contents": "On Tue, Mar 25, 2014 at 5:24 AM, Gavin Flower <[email protected]> wrote:\n\n> Surely no code changes are required, as one can simply set the min and\n> max pool sizes to be the same?\n>\n> Cheers,\n> Gavin\n>\n\nTo be sure it can be implemented that way, but it's a question of design\ntargets. For example, if a pool is allowed to grow and shrink, the design\nmight encompass a pool of threads that try to maintain the\nconfigured-minimum idle connections to respond to spike demands. And there\nis the additional setting in the pool for users to [typically] misconfigure.\n\nHowever, if the pool is fixed size, and attrition from the pool is only by\nidle timeout (typically measured in minutes), the design does not need to\naccount for spike demand. Connections that have dropped out can likely be\nrestored on-demand, rather than by something running constantly in the\nbackground trying to maintain an idle level.\n\nOne of the attributes of HikariCP is a minimalistic set of configuration\noptions with sane defaults, and minimalistic code. There are many\ncompetitor pools, offering dozens of settings ranging from banal to unsafe.\nHikariCP initially even offered some of these options, but one-by-one\nthey're getting the ax. More and more we're trying to look at what is the\ntrue core functionality that users need -- eliminate what is unnecessary\nand easily misconfigured.\n\nThus a debate started over in our group, with some similar views as\nexpressed here (on both sides). Guillaume Smet was concerned about the\nimpact of idle connections on active ones in PostgreSQL (in a fixed pool\nscenario) and wanted to ask some of the experts over here.", "msg_date": "Tue, 25 Mar 2014 09:23:33 +0900", "msg_from": "Brett Wooldridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "On 25/03/14 13:23, Brett Wooldridge wrote:\n> On Tue, Mar 25, 2014 at 5:24 AM, Gavin Flower\n> <[email protected]> wrote:\n>\n>> Surely no code changes are required, as one can simply set the\n>> min and max pool sizes to be the same?\n>> Cheers,\n>> Gavin\n>\n> To be sure it can be implemented that way, but it's a question of\n> design targets. For example, if a pool is allowed to grow and shrink,\n> the design might encompass a pool of threads that try to maintain the\n> configured-minimum idle connections to respond to spike demands. And\n> there is the additional setting in the pool for users to [typically]\n> misconfigure.\n>\n> However, if the pool is fixed size, and attrition from the pool is\n> only by idle timeout (typically measured in minutes), the design does\n> not need to account for spike demand. Connections that have dropped\n> out can likely be restored on-demand, rather than by something running\n> constantly in the background trying to maintain an idle level.\n>\n> One of the attributes of HikariCP is a minimalistic set of\n> configuration options with sane defaults, and minimalistic code.\n> There are many competitor pools, offering dozens of settings ranging\n> from banal to unsafe. HikariCP initially even offered some of these\n> options, but one-by-one they're getting the ax. More and more we're\n> trying to look at what is the true core functionality that users need\n> -- eliminate what is unnecessary and easily misconfigured.\n>\n> Thus a debate started over in our group, with some similar views as\n> expressed here (on both sides). Guillaume Smet was concerned about\n> the impact of idle connections on active ones in PostgreSQL (in a\n> fixed pool scenario) and wanted to ask some of the experts over here.\n>\nWould it be a valid option to switch in simpler code when min = max and\nboth could be set to the same default? This would allow more efficient\ncode to be run for a fixed pool size and allow a sane default, while\npreserving the option to have a range, though obviously not as simple as\nonly allowing a fixed pool size in terms of code complexity.", "msg_date": "Tue, 25 Mar 2014 14:42:32 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" },
{ "msg_contents": "Sure. It's all just code. It's not particularly a question of efficiency,\nI'm sure it could be made equally efficient. But \"simpler\" code-wise would\nbe not having two implementations, or not having one that is designed to\ntry to keep up with spike demands. The question for this group was really\naround PostgreSQL performance regarding the impact of idle connections. If\nthere is a pool of 20 connections, and half of them are idle, what is the\nimpact performance-wise of the idle connections on the active connections?\n\n\nOn Tue, Mar 25, 2014 at 10:42 AM, Gavin Flower <[email protected]> wrote:\n\n> Would it be a valid option to switch in simpler code when min = max and\n> both could be set to the same default? This would allow more efficient\n> code to be run for a fixed pool size and allow a sane default, while\n> preserving the option to have a range, though obviously not as simple as\n> only allowing a fixed pool size in terms of code complexity.\n>", "msg_date": "Tue, 25 Mar 2014 11:45:52 +0900", "msg_from": "Brett Wooldridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "On 03/24/2014 06:27 AM, Brett Wooldridge wrote:\n> This was one of the reasons I was proposing the fixed pool design. In my\n> experience, even in pools that maintain a minimum number of idle\n> connections, responding to spike demands is problematic. If you have a\n> pool with say 30 max. connections, and a 10 minimum idle connection goal, a\n> sudden spike demand for 20 connections means the pool can satisfy 10\n> instantly but then is left to [try to] establish 10 connections before the\n> application's connectionTimeout (read acquisition timeout from the pool) is\n> reached. This in turn generates a spike demand on the database, slowing\n> down not only the connection establishments themselves but also slowing\n> down the completion of transactions that might actually return connections\n> to the pool.\n\nSo what makes sense really depends on what your actual connection\npattern is. Idle connections aren't free; aside from PostgreSQL\nlock-checking overhead, they hold on to any virtual memory allocated to\nthem when they were working. In the aggregate, this can add up to quite\na bit of memory, which can then cause the OS to decide not to cache some\ndata you could really use.\n\nNow, if your peak is 100 connections and your median is 50, this doesn't\nsignify. But I know more than a few workloads where the peak is 1000\nand the median is 25, and in that case you want to drop the idle\nconnections gradually. The key is to keep enough of a buffer of ready\nconnections to deal with the next peak when it comes in.\n\nThat also means that even if the pool is a fixed size, you want to\nrotate in and out the actual sessions, so that they don't hang onto\nmaximum virtual memory indefinitely.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 26 Mar 2014 17:35:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" },
That would be more problematic if the pool\nsize is static.\n\nCheers,\n\nJeff\n\nOn Mon, Mar 24, 2014 at 6:27 AM, Brett Wooldridge <[email protected]> wrote:\nHi, Brett Wooldridge here, one of the principals of HikariCP.  I thought I'd wade into the conversation pool a little myself if you guys don't mind. \nSpeaking to David's point... \n>> Reaching the maxPoolSize from the minPoolSize means creating the \n>> connections at the crucial moment where the client application is in the \n>> desperate need of completing an important query/transaction which the \n>> primary responsibility since it cannot hold the data collected. \nThis was one of the reasons I was proposing the fixed pool design.  In my experience, even in pools that maintain a minimum number of idle connections, responding to spike demands is problematic.  If you have a pool with say 30 max. connections, and a 10 minimum idle connection goal, a sudden spike demand for 20 connections means the pool can satisfy 10 instantly but then is left to [try to] establish 10 connections before the application's connectionTimeout (read acquisition timeout from the pool) is reached.  This in turn generates a spike demand on the database slowing down not only the connection establishments themselves but also slowing down the completion of transactions that might actually return connections to the pool. \nDo you have publishable test scripts to demonstrate this?  It would be nice for people to be able to try it out for themselves, on their own hardware, to see what it does.  I have seen some database products for which I would not doubt this effect is real, having seen prodigiously long connection set up times.  But I'm skeptical that that is a meaningful problem for PostgreSQL, at least not with the md5 authentication method.\n As I think Tom noted is a slidestack I read somewhere, there is a \"knee\" in the performance curve beyond which additional connections cause a drop in TPS.  While users often think it is a good idea to have maxPoolSize of 100, the reality is they can retire/reuse connections faster with a much smaller pool.  I didn't see a pool of a 2 or 3 dozen connections actually impacting performance much when half of them are idle and half are executing transactions (ie. the idle ones don't impact the overall performance much). \nI think the knee applies mostly to active connections.  I've seen no indication of completely idle connections causing observable problems in recent releases, until the number of them gets absurd.\nAnd the location of the knee for the number of active connections is going to depend greatly on the hardware and the work load. \nFinally, one of my contentions was, either your database server has resources or it doesn't.  Either it has enough memory and processing power for N connections or it doesn't.  If the pool is set below, near, or at that capacity what is the purpose of releasing connections in that case?  Yes, it frees up memory, but that memory is not really available for other use given that at any instant the maximum capacity of the pool may be demanded. \nInstead releasing resources only to try to reallocate them during a demand peak seems counter-productive. \nI pretty much agree with you on that.  Most of the arguments I see for getting rid of idle connections really seem to be arguments for lowering the maximum number.That said, I don't like it when people take away my options because they think I might not be able to set them correctly. 
Just say the default value for the min is the same as the max, and most people don't need to change that. If you don't want to implement a feature, that is one thing, but to take out a feature that already exists just because you think I'm not quite clever enough to use it is quite another thing.\nAlso, often people don't have a realistic work-load generator with which to test the maximum, or a budget to run such tests.  They will have no idea where to set it.  A plausible thing to do in that case is to just set it rather high and hope the actual usage never get high enough to test whether the chosen value was correct.  That would be more problematic if the pool size is static.\nCheers,Jeff", "msg_date": "Sat, 29 Mar 2014 14:20:37 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connection pooling - Number of connections" }, { "msg_contents": "Sent from my iPhone\n> On Mar 27, 2014, at 9:35 AM, Josh Berkus <[email protected]> wrote:\n> \n>> On 03/24/2014 06:27 AM, Brett Wooldridge wrote:\n>> This was one of the reasons I was proposing the fixed pool design. In my\n>> experience, even in pools that maintain a minimum number of idle\n>> connections, responding to spike demands is problematic. If you have a\n>> pool with say 30 max. connections, and a 10 minimum idle connection goal, a\n>> sudden spike demand for 20 connections means the pool can satisfy 10\n>> instantly but then is left to [try to] establish 10 connections before the\n>> application's connectionTimeout (read acquisition timeout from the pool) is\n>> reached. This in turn generates a spike demand on the database slowing\n>> down not only the connection establishments themselves but also slowing\n>> down the completion of transactions that might actually return connections\n>> to the pool.\n> \n> Now, if your peak is 100 connections and your median is 50, this doesn't\n> signify. But I know more than a few workloads where the peak is 1000\n> and the median is 25, and in that case you want to drop the idle\n> connections gradually.\n\nIn the end we've gone with a maxPoolSize + minIdle model where the default is that they are equal (fixed pool).\n\nThough I won't dispute that such workloads (1000 active connections) exist, in that someone created it, I would love to hear their justification. Unless they have >128 CPU cores and solid state storage they are basically spinning their wheels.\n\n> That also means that even if the pool is a fixed size, you want to\n> rotate in and out the actual sessions, so that they don't hang onto\n> maximum virtual memory indefinitely.\n\nWe do this, there is a maxLifeTime setting to rotate out connections.\n\n-Brett\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 30 Mar 2014 12:41:30 +0900", "msg_from": "Brett Wooldridge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Connection pooling - Number of connections" } ]
[ { "msg_contents": "Hi everyone!\n\n\nI've been working on a puzzling issue for a few days am am hoping that someone has seen something similar or can help. There have been some odd behaviors on one of my production facing postgres servers.\n\n\nversion info from postgres: PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n\n\nThe symptom: The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect. Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior. The event will occur just long enough for monitoring to alarm. The team will respond to alerts to take a look, but within a minute or three at most, load returns back to normal levels and all running queries complete in expected times.\n\n\nAt the time of the event, we see a spike in system CPU and load average, but we do not see a corresponding spike in disk reads or writes which would indicate IO load. Initial troubleshooting to monitor active processes led us to see a flurry of activity in ps waiting on semtimedop. Our efforts internally to diagnose this problem are to sample pg_locks and pg_stat_activity every 5s plus running a script to look for at least one postgres process waiting on a semaphore, and if it finds one, it gets a stack trace of every running postgres processes with GDB. It also uses strace on 5 processes to find out which semaphore they're waiting on.\n\n\nWhat we were catching in the following stack trace seems to be representative of where things are waiting when we see an event - here are two examples that are representative:\n\n\n----- 47245 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0 0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1 0x00000000005e0c87 in PGSemaphoreLock ()\n#2 0x000000000061e3af in LWLockAcquire ()\n#3 0x000000000060aa0f in ReadBuffer_common ()\n#4 0x000000000060b2e4 in ReadBufferExtended ()\n#5 0x000000000047708d in _bt_relandgetbuf ()\n#6 0x000000000047aac4 in _bt_search ()\n#7 0x000000000047af8d in _bt_first ()\n#8 0x0000000000479704 in btgetbitmap ()\n#9 0x00000000006e7e00 in FunctionCall2Coll ()\n#10 0x0000000000473120 in index_getbitmap ()\n#11 0x00000000005726b8 in MultiExecBitmapIndexScan ()\n#12 0x000000000057214d in BitmapHeapNext ()\n#13 0x000000000056b18e in ExecScan ()\n#14 0x0000000000563ed8 in ExecProcNode ()\n#15 0x0000000000562d72 in standard_ExecutorRun ()\n#16 0x000000000062ce67 in PortalRunSelect ()\n#17 0x000000000062e128 in PortalRun ()\n#18 0x000000000062bb66 in PostgresMain ()\n#19 0x00000000005ecd01 in ServerLoop ()\n#20 0x00000000005ef401 in PostmasterMain ()\n#21 0x0000000000590ff8 in main ()\n\n----- 47257 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0 0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1 0x00000000005e0c87 in PGSemaphoreLock ()\n#2 0x000000000061e3af in LWLockAcquire ()\n#3 0x000000000060aa0f in ReadBuffer_common ()\n#4 0x000000000060b2e4 in ReadBufferExtended ()\n#5 0x000000000047708d in _bt_relandgetbuf ()\n#6 0x000000000047aac4 in _bt_search ()\n#7 0x000000000047af8d in _bt_first ()\n#8 0x00000000004797d1 in btgettuple ()\n#9 0x00000000006e7e00 in FunctionCall2Coll ()\n#10 0x000000000047339d in index_getnext ()\n#11 
0x0000000000575ed6 in IndexNext ()\n#12 0x000000000056b18e in ExecScan ()\n#13 0x0000000000563ee8 in ExecProcNode ()\n#14 0x0000000000562d72 in standard_ExecutorRun ()\n#15 0x000000000062ce67 in PortalRunSelect ()\n#16 0x000000000062e128 in PortalRun ()\n#17 0x000000000062bb66 in PostgresMain ()\n#18 0x00000000005ecd01 in ServerLoop ()\n#19 0x00000000005ef401 in PostmasterMain ()\n#20 0x0000000000590ff8 in main ()\n\n\nHas any on the forum seen something similar? Any suggestions on what to look at next? If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid 10 local storage (15K 300 GB drives). The workload is predominantly read and the queries are mostly fairly simple selects from a single large table generally specifying the primary key as part of the where clause along with a few other filters.\n\n\nThanks,\n\nMatt\n\n\n\n\n\n\n\n\nHi everyone! \n\n\nI've been working on a puzzling issue for a few days am am hoping that someone has seen something similar or can help.  There have been some odd behaviors on one of my production facing postgres servers.  \n\n\nversion info from postgres: PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n\n\nThe symptom:   The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect.\n  Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior.  The event will occur just long enough for monitoring\n to alarm.   The team will respond to alerts to take a look, but within a minute or three at most, load returns back to normal levels and all running queries complete in expected times.   \n\n\nAt the time of the event, we see a spike in system CPU and load average, but we do not see a corresponding spike in disk reads or writes which would indicate IO load.   Initial troubleshooting to monitor active processes\n led us to see a flurry of activity in ps waiting on semtimedop.   Our efforts internally to diagnose this problem are to sample\n pg_locks and pg_stat_activity every 5s plus running a script to look for at least one postgres process waiting on a semaphore, and if it finds one, it gets a stack trace of every running postgres processes with\n GDB.  It also uses strace on 5 processes to find out which semaphore they're waiting on.  
\n\n\nWhat we were catching in the following stack trace seems to be representative of where things are waiting when we see an event - here are two examples that are representative:\n\n\n----- 47245 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0  0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1  0x00000000005e0c87 in PGSemaphoreLock ()\n#2  0x000000000061e3af in LWLockAcquire ()\n#3  0x000000000060aa0f in ReadBuffer_common ()\n#4  0x000000000060b2e4 in ReadBufferExtended ()\n#5  0x000000000047708d in _bt_relandgetbuf ()\n#6  0x000000000047aac4 in _bt_search ()\n#7  0x000000000047af8d in _bt_first ()\n#8  0x0000000000479704 in btgetbitmap ()\n#9  0x00000000006e7e00 in FunctionCall2Coll ()\n#10 0x0000000000473120 in index_getbitmap ()\n#11 0x00000000005726b8 in MultiExecBitmapIndexScan ()\n#12 0x000000000057214d in BitmapHeapNext ()\n#13 0x000000000056b18e in ExecScan ()\n#14 0x0000000000563ed8 in ExecProcNode ()\n#15 0x0000000000562d72 in standard_ExecutorRun ()\n#16 0x000000000062ce67 in PortalRunSelect ()\n#17 0x000000000062e128 in PortalRun ()\n#18 0x000000000062bb66 in PostgresMain ()\n#19 0x00000000005ecd01 in ServerLoop ()\n#20 0x00000000005ef401 in PostmasterMain ()\n#21 0x0000000000590ff8 in main ()\n\n----- 47257 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0  0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1  0x00000000005e0c87 in PGSemaphoreLock ()\n#2  0x000000000061e3af in LWLockAcquire ()\n#3  0x000000000060aa0f in ReadBuffer_common ()\n#4  0x000000000060b2e4 in ReadBufferExtended ()\n#5  0x000000000047708d in _bt_relandgetbuf ()\n#6  0x000000000047aac4 in _bt_search ()\n#7  0x000000000047af8d in _bt_first ()\n#8  0x00000000004797d1 in btgettuple ()\n#9  0x00000000006e7e00 in FunctionCall2Coll ()\n#10 0x000000000047339d in index_getnext ()\n#11 0x0000000000575ed6 in IndexNext ()\n#12 0x000000000056b18e in ExecScan ()\n#13 0x0000000000563ee8 in ExecProcNode ()\n#14 0x0000000000562d72 in standard_ExecutorRun ()\n#15 0x000000000062ce67 in PortalRunSelect ()\n#16 0x000000000062e128 in PortalRun ()\n#17 0x000000000062bb66 in PostgresMain ()\n#18 0x00000000005ecd01 in ServerLoop ()\n#19 0x00000000005ef401 in PostmasterMain ()\n#20 0x0000000000590ff8 in main ()\n\n\n\nHas any on the forum seen something similar?   Any suggestions on what to look at next?    If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid\n 10 local storage (15K 300 GB drives).   The workload is predominantly read and the queries are mostly fairly simple selects from a single large table generally specifying the primary key as part of the where clause along\n with a few other filters.  \n\n\nThanks,\nMatt", "msg_date": "Mon, 24 Mar 2014 14:37:57 +0000", "msg_from": "Matthew Spilich <[email protected]>", "msg_from_op": true, "msg_subject": "semaphore waits and performance stall" } ]
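For anyone instrumenting a similar stall, a sketch of the catalog sampling Matthew describes, written for the 9.1 catalogs (procpid and current_query were renamed in 9.2). One caveat that matters here: pg_locks only exposes heavyweight locks, so backends stuck on the LWLock visible in these stack traces will not appear there as ungranted lock waiters - which is exactly why the GDB/strace sampling was needed:

    -- Backends currently blocked, with whatever heavyweight-lock detail exists
    SELECT a.procpid,
           now() - a.query_start AS running_for,
           l.locktype, l.mode,
           a.current_query
    FROM pg_stat_activity a
    LEFT JOIN pg_locks l ON l.pid = a.procpid AND NOT l.granted
    WHERE a.waiting;

Run from cron every few seconds and appended to a file, this gives a timeline to correlate with the gdb samples.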
[ { "msg_contents": "Hai,\n\nCan anyone tell me the difference and performance between pgdump and pg_basebackup if I want to backup a large database.\n\nThanks\n\nHai,Can anyone tell me the difference and performance between pgdump and pg_basebackup if I want to backup a large database.Thanks", "msg_date": "Mon, 24 Mar 2014 21:45:09 -0700 (PDT)", "msg_from": "gianfranco caca <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump vs pg_basebackup" }, { "msg_contents": "Hi gianfranco,\n\n\nHow exactly large is your database and how heavy is a workload on it?\nUsually if you have more than ~200Gb, better to use pg_basebackup\nbecause pg_dump will take too long time. And please take in mind, that\npg_dump makes dump, which is actually not the same thing as a backup.\n\nBest regards,\nIlya\n\nOn Tue, Mar 25, 2014 at 5:45 AM, gianfranco caca <[email protected]> wrote:\n> Hai,\n>\n> Can anyone tell me the difference and performance between pgdump and\n> pg_basebackup if I want to backup a large database.\n>\n> Thanks\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 08:11:32 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "Hai ilya,\n\nThanks for the respond. The database is estimated over 100gb and the workload will be high. Can we use a pg_basebackup with pitr to restore based on transaction time?\n\nThanks\n\n\n\n\nOn Tuesday, 25 March 2014, 15:13, Ilya Kosmodemiansky <[email protected]> wrote:\n \nHi gianfranco,\n\n\nHow exactly large is your database and how heavy is a workload on it?\nUsually if you have more than ~200Gb, better to use pg_basebackup\nbecause pg_dump will take too long time. And please take in mind, that\npg_dump makes dump, which is  actually not the same thing as a backup.\n\nBest regards,\nIlya\n\n\nOn Tue, Mar 25, 2014 at 5:45 AM, gianfranco caca <[email protected]> wrote:\n> Hai,\n>\n> Can anyone tell me the difference and performance between pgdump and\n> pg_basebackup if I want to backup a large database.\n>\n> Thanks\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nHai ilya,Thanks for the respond. The database is estimated over 100gb and the workload will be high. Can we use a pg_basebackup with pitr to restore based on transaction time?Thanks On Tuesday, 25 March 2014, 15:13, Ilya Kosmodemiansky <[email protected]> wrote: Hi gianfranco,How exactly large is your database and how heavy is a workload on it?Usually if you have more than ~200Gb, better to\n use pg_basebackupbecause pg_dump will take too long time. And please take in mind, thatpg_dump makes dump, which is  actually not the same thing as a backup.Best regards,IlyaOn Tue, Mar 25, 2014 at 5:45 AM, gianfranco caca <[email protected]> wrote:> Hai,>> Can anyone tell me the difference and performance between pgdump and> pg_basebackup if I want to backup a large database.>> Thanks-- Ilya Kosmodemiansky,PostgreSQL-Consulting.comtel.\n +14084142500cell. 
[email protected] Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 25 Mar 2014 00:19:43 -0700 (PDT)", "msg_from": "gianfranco caca <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "Yes, you need to set recovery_target_time in your recovery.conf while\nperforming recovery\n(http://www.postgresql.org/docs/9.3/static/recovery-target-settings.html).\nThat could be a tricky thing - depends on that exactly you need. All\nthose transactions, which were not committed at given timestamp, will\nbe rollbacked, so read url above carefully.\n\nOn Tue, Mar 25, 2014 at 8:19 AM, gianfranco caca <[email protected]> wrote:\n> Hai ilya,\n>\n> Thanks for the respond. The database is estimated over 100gb and the\n> workload will be high. Can we use a pg_basebackup with pitr to restore based\n> on transaction time?\n>\n> Thanks\n>\n>\n> On Tuesday, 25 March 2014, 15:13, Ilya Kosmodemiansky\n> <[email protected]> wrote:\n> Hi gianfranco,\n>\n>\n> How exactly large is your database and how heavy is a workload on it?\n> Usually if you have more than ~200Gb, better to use pg_basebackup\n> because pg_dump will take too long time. And please take in mind, that\n> pg_dump makes dump, which is actually not the same thing as a backup.\n>\n> Best regards,\n> Ilya\n>\n> On Tue, Mar 25, 2014 at 5:45 AM, gianfranco caca <[email protected]> wrote:\n>> Hai,\n>>\n>> Can anyone tell me the difference and performance between pgdump and\n>> pg_basebackup if I want to backup a large database.\n>>\n>> Thanks\n>\n>\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 08:33:20 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "gianfranco caca wrote\n> Hai,\n> \n> Can anyone tell me the difference and performance between pgdump and\n> pg_basebackup if I want to backup a large database.\n> \n> Thanks\n\nYes. And many of their words have been written down in the documentation in\na chapter named \"Backup and Restore\". 
{ "msg_contents": "gianfranco caca wrote\n> Hi,\n>\n> Can anyone tell me the difference, and the performance difference, between\n> pg_dump and pg_basebackup if I want to back up a large database?\n>\n> Thanks\n\nYes. And many of their words have been written down in the documentation in\na chapter named \"Backup and Restore\". Do you have a specific question about\nwhat is written there?\n\nI'll add that comparing the performance of both is relatively meaningless.\nYou need to understand how each works, then choose the correct tool for your\nsituation.\n\nLastly, you should actually do both, on a development database, and measure\nthe time and effort while practicing both routines (backup and restoring)\nyourself.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-vs-pg-basebackup-tp5797351p5797364.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 25 Mar 2014 00:39:19 -0700 (PDT)", "msg_from": "David Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]> wrote:\n> Yes. And many of their words have been written down in the documentation in\n> a chapter named \"Backup and Restore\". Do you have a specific question about\n> what is written there?\n>\n> I'll add that comparing the performance of both is relatively meaningless.\n> You need to understand how each works, then choose the correct tool for your\n> situation.\n\n\nI don't know if meaningless is the right word here. I have a ~450G\ndatabase, and the difference is quite meaningful to me, as it is\nmeasured in days.\n\nThe difference being, pg_basebackup is dumber and using it is harder,\nbut its performance is only limited by sequential I/O capacity (which\nis usually quite high). It is also used in conjunction with PITR to\nget not only that, but also incremental backups, which is something\nyou really want for big databases. pg_dump, on the other hand, will\nonly do full dumps and it will be limited both by I/O and CPU power,\nbecause the reformatting involved in making a dump is considerable. In\nmy experience, a base backup takes hours, while a dump takes days.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 25 Mar 2014 09:05:15 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "\nOn 03/25/2014 05:05 AM, Claudio Freire wrote:\n>\n> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]> wrote:\n>>> Hi,\n>>>\n>>> Can anyone tell me the difference, and the performance difference, between\n>>> pg_dump and pg_basebackup if I want to back up a large database?\n>>>\n\nHonestly,\n\nNeither is particularly good at backing up large databases. I would look\ninto PITR with rsync.\n\nJD\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nPolitical Correctness is for cowards.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 25 Mar 2014 07:56:45 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "2014-03-25 15:56 GMT+01:00 Joshua D. Drake <[email protected]>:\n\n> Honestly,\n>\n> Neither is particularly good at backing up large databases. I would look\n> into PITR with rsync.\n>\n> JD\n\nFor a large database it is also possible to put the database into backup\nmode, take a snapshot, return it to normal mode, and save all the WAL\narchives produced until the backup is finished.\n\nWith that snapshot you can easily mount it and restore on another machine,\nor open it in read-only mode (hot standby, and then do a logical dump).\nA lot of storage systems have these capabilities, as do some filesystems\nand volume managers.\n\nI think this is the fastest option you have.\n\nMat Dba", "msg_date": "Tue, 25 Mar 2014 16:10:53 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" },
Drake <[email protected]> wrote:\n>\n> On 03/25/2014 05:05 AM, Claudio Freire wrote:\n>>\n>>\n>> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]> wrote:\n>>>>\n>>>> Hai,\n>>>>\n>>>> Can anyone tell me the difference and performance between pgdump and\n>>>> pg_basebackup if I want to backup a large database.\n>>>>\n>\n> Honestly,\n>\n> Neither is particularly good at backing up large databases. I would look\n> into PITR with rsync.\n>\n> JD\n>\n>\n> --\n> Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\n> PostgreSQL Support, Training, Professional Services and Development\n> High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n> Political Correctness is for cowards.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 16:18:32 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "I would say that's the one thing that rsync is *not*. pg_basebackup takes\ncare of a lot of things under the hood. rsync is a lot more complicated, in\nparticular in failure scenarios, since you have to manually deal with\npg_start/stop_backup().\n\nThere are definitely reasons you'd prefer rsync over pg_basebackup, but I\ndon't believe simplicity is one of them.\n\n//Magnus\n\n\nOn Tue, Mar 25, 2014 at 4:18 PM, Ilya Kosmodemiansky <\[email protected]> wrote:\n\n> Joshua,\n>\n> that is really good point: an alternative is to use pg_basebackup\n> through ssh tunnel with compression, but rsync is much simpler.\n>\n> On Tue, Mar 25, 2014 at 3:56 PM, Joshua D. Drake <[email protected]>\n> wrote:\n> >\n> > On 03/25/2014 05:05 AM, Claudio Freire wrote:\n> >>\n> >>\n> >> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]>\n> wrote:\n> >>>>\n> >>>> Hai,\n> >>>>\n> >>>> Can anyone tell me the difference and performance between pgdump and\n> >>>> pg_basebackup if I want to backup a large database.\n> >>>>\n> >\n> > Honestly,\n> >\n> > Neither is particularly good at backing up large databases. I would look\n> > into PITR with rsync.\n> >\n> > JD\n> >\n> >\n> > --\n> > Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\n> > PostgreSQL Support, Training, Professional Services and Development\n> > High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n> > Political Correctness is for cowards.\n> >\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\nI would say that's the one thing that rsync is *not*. pg_basebackup takes care of a lot of things under the hood. 
rsync is a lot more complicated, in particular in failure scenarios, since you have to manually deal with pg_start/stop_backup().\nThere are definitely reasons you'd prefer rsync over pg_basebackup, but I don't believe simplicity is one of them.//Magnus\nOn Tue, Mar 25, 2014 at 4:18 PM, Ilya Kosmodemiansky <[email protected]> wrote:\nJoshua,\n\nthat is really good point: an alternative is to use pg_basebackup\nthrough ssh tunnel with compression, but rsync is much simpler.\n\nOn Tue, Mar 25, 2014 at 3:56 PM, Joshua D. Drake <[email protected]> wrote:\n>\n> On 03/25/2014 05:05 AM, Claudio Freire wrote:\n>>\n>>\n>> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]> wrote:\n>>>>\n>>>> Hai,\n>>>>\n>>>> Can anyone tell me the difference and performance between pgdump and\n>>>> pg_basebackup if I want to backup a large database.\n>>>>\n>\n> Honestly,\n>\n> Neither is particularly good at backing up large databases. I would look\n> into PITR with rsync.\n>\n> JD\n>\n>\n> --\n> Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579\n> PostgreSQL Support, Training, Professional Services and Development\n> High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n> Political Correctness is for cowards.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n--\nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n--  Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/", "msg_date": "Tue, 25 Mar 2014 16:21:43 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "\nOn 03/25/2014 08:18 AM, Ilya Kosmodemiansky wrote:\n>\n> Joshua,\n>\n> that is really good point: an alternative is to use pg_basebackup\n> through ssh tunnel with compression, but rsync is much simpler.\n\nOr rsync over ssh. The advantage is that you can create backups that \ndon't have to be restored, just started. You can also use the \ndifferential portions of rsync to do it multiple times a day without \nmuch issue.\n\nJD\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nPolitical Correctness is for cowards.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 08:22:55 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "Joshua,\n\nOn Tue, Mar 25, 2014 at 4:22 PM, Joshua D. Drake <[email protected]> wrote:\nThe advantage is that you can create backups that don't\n> have to be restored, just started. 
You can also use the differential\n> portions of rsync to do it multiple times a day without much issue.\n\nAre you sure, that it is a nice idea on a database with heavy write workload?\n\nAnd also Im not sure, that differential backups using rsync will be\nrecoverable, if you have actually meant that.\n\n>\n>\n> JD\n>\n> --\n> Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\n> PostgreSQL Support, Training, Professional Services and Development\n> High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n> Political Correctness is for cowards.\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 16:29:11 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "Magnus,\n\nThat is correct, but I'am afraid that such all-in-one functionality\nalso hides from one how backup really works. Probably such sort of\nknowledge is so essential for a DBA, that it is better to learn both\nmethods, at least to be able to choose correctly? But maybe it is a\nrhetorical question.\n\nOn Tue, Mar 25, 2014 at 4:21 PM, Magnus Hagander <[email protected]> wrote:\n> I would say that's the one thing that rsync is *not*. pg_basebackup takes\n> care of a lot of things under the hood. rsync is a lot more complicated, in\n> particular in failure scenarios, since you have to manually deal with\n> pg_start/stop_backup().\n>\n> There are definitely reasons you'd prefer rsync over pg_basebackup, but I\n> don't believe simplicity is one of them.\n>\n> //Magnus\n>\n>\n> On Tue, Mar 25, 2014 at 4:18 PM, Ilya Kosmodemiansky\n> <[email protected]> wrote:\n>>\n>> Joshua,\n>>\n>> that is really good point: an alternative is to use pg_basebackup\n>> through ssh tunnel with compression, but rsync is much simpler.\n>>\n>> On Tue, Mar 25, 2014 at 3:56 PM, Joshua D. Drake <[email protected]>\n>> wrote:\n>> >\n>> > On 03/25/2014 05:05 AM, Claudio Freire wrote:\n>> >>\n>> >>\n>> >> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]>\n>> >> wrote:\n>> >>>>\n>> >>>> Hai,\n>> >>>>\n>> >>>> Can anyone tell me the difference and performance between pgdump and\n>> >>>> pg_basebackup if I want to backup a large database.\n>> >>>>\n>> >\n>> > Honestly,\n>> >\n>> > Neither is particularly good at backing up large databases. I would look\n>> > into PITR with rsync.\n>> >\n>> > JD\n>> >\n>> >\n>> > --\n>> > Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\n>> > PostgreSQL Support, Training, Professional Services and Development\n>> > High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n>> > Political Correctness is for cowards.\n>> >\n>> >\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list\n>> > ([email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. 
+4915144336040\n>> [email protected]\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n>\n> --\n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 16:37:41 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "Oh, I agree it's good that you should know both methods. I only disagree\nwith that the choice of rsync be made with the argument of simplicity.\nSimplicity is one of the main reasons to choose the *other* method\n(pg_basebackup), and the rsync method is for more advanced usecases. But\nit's definitely good to know both!\n\n//Magnus\n\n\nOn Tue, Mar 25, 2014 at 4:37 PM, Ilya Kosmodemiansky <\[email protected]> wrote:\n\n> Magnus,\n>\n> That is correct, but I'am afraid that such all-in-one functionality\n> also hides from one how backup really works. Probably such sort of\n> knowledge is so essential for a DBA, that it is better to learn both\n> methods, at least to be able to choose correctly? But maybe it is a\n> rhetorical question.\n>\n> On Tue, Mar 25, 2014 at 4:21 PM, Magnus Hagander <[email protected]>\n> wrote:\n> > I would say that's the one thing that rsync is *not*. pg_basebackup takes\n> > care of a lot of things under the hood. rsync is a lot more complicated,\n> in\n> > particular in failure scenarios, since you have to manually deal with\n> > pg_start/stop_backup().\n> >\n> > There are definitely reasons you'd prefer rsync over pg_basebackup, but I\n> > don't believe simplicity is one of them.\n> >\n> > //Magnus\n> >\n> >\n> > On Tue, Mar 25, 2014 at 4:18 PM, Ilya Kosmodemiansky\n> > <[email protected]> wrote:\n> >>\n> >> Joshua,\n> >>\n> >> that is really good point: an alternative is to use pg_basebackup\n> >> through ssh tunnel with compression, but rsync is much simpler.\n> >>\n> >> On Tue, Mar 25, 2014 at 3:56 PM, Joshua D. Drake <[email protected]>\n> >> wrote:\n> >> >\n> >> > On 03/25/2014 05:05 AM, Claudio Freire wrote:\n> >> >>\n> >> >>\n> >> >> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]>\n> >> >> wrote:\n> >> >>>>\n> >> >>>> Hai,\n> >> >>>>\n> >> >>>> Can anyone tell me the difference and performance between pgdump\n> and\n> >> >>>> pg_basebackup if I want to backup a large database.\n> >> >>>>\n> >> >\n> >> > Honestly,\n> >> >\n> >> > Neither is particularly good at backing up large databases. I would\n> look\n> >> > into PITR with rsync.\n> >> >\n> >> > JD\n> >> >\n> >> >\n> >> > --\n> >> > Command Prompt, Inc. 
- http://www.commandprompt.com/ 509-416-6579\n> >> > PostgreSQL Support, Training, Professional Services and Development\n> >> > High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n> >> > Political Correctness is for cowards.\n> >> >\n> >> >\n> >> >\n> >> > --\n> >> > Sent via pgsql-performance mailing list\n> >> > ([email protected])\n> >> > To make changes to your subscription:\n> >> > http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >>\n> >>\n> >> --\n> >> Ilya Kosmodemiansky,\n> >>\n> >> PostgreSQL-Consulting.com\n> >> tel. +14084142500\n> >> cell. +4915144336040\n> >> [email protected]\n> >>\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> >\n> >\n> >\n> > --\n> > Magnus Hagander\n> > Me: http://www.hagander.net/\n> > Work: http://www.redpill-linpro.com/\n>\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\nOh, I agree it's good that you should know both methods. I only disagree with that the choice of rsync be made with the argument of simplicity. Simplicity is one of the main reasons to choose the *other* method (pg_basebackup), and the rsync method is for more advanced usecases. But it's definitely good to know both!\n//MagnusOn Tue, Mar 25, 2014 at 4:37 PM, Ilya Kosmodemiansky <[email protected]> wrote:\nMagnus,\n\nThat is correct, but I'am afraid that such all-in-one functionality\nalso hides from one how backup really works. Probably such sort of\nknowledge is so essential for a DBA, that it is better to learn both\nmethods, at least to be able to choose correctly? But maybe it is a\nrhetorical question.\n\nOn Tue, Mar 25, 2014 at 4:21 PM, Magnus Hagander <[email protected]> wrote:\n> I would say that's the one thing that rsync is *not*. pg_basebackup takes\n> care of a lot of things under the hood. rsync is a lot more complicated, in\n> particular in failure scenarios, since you have to manually deal with\n> pg_start/stop_backup().\n>\n> There are definitely reasons you'd prefer rsync over pg_basebackup, but I\n> don't believe simplicity is one of them.\n>\n> //Magnus\n>\n>\n> On Tue, Mar 25, 2014 at 4:18 PM, Ilya Kosmodemiansky\n> <[email protected]> wrote:\n>>\n>> Joshua,\n>>\n>> that is really good point: an alternative is to use pg_basebackup\n>> through ssh tunnel with compression, but rsync is much simpler.\n>>\n>> On Tue, Mar 25, 2014 at 3:56 PM, Joshua D. Drake <[email protected]>\n>> wrote:\n>> >\n>> > On 03/25/2014 05:05 AM, Claudio Freire wrote:\n>> >>\n>> >>\n>> >> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]>\n>> >> wrote:\n>> >>>>\n>> >>>> Hai,\n>> >>>>\n>> >>>> Can anyone tell me the difference and performance between pgdump and\n>> >>>> pg_basebackup if I want to backup a large database.\n>> >>>>\n>> >\n>> > Honestly,\n>> >\n>> > Neither is particularly good at backing up large databases. I would look\n>> > into PITR with rsync.\n>> >\n>> > JD\n>> >\n>> >\n>> > --\n>> > Command Prompt, Inc. 
- http://www.commandprompt.com/  509-416-6579\n>> > PostgreSQL Support, Training, Professional Services and Development\n>> > High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n>> > Political Correctness is for cowards.\n>> >\n>> >\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list\n>> > ([email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n>> [email protected]\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n>\n> --\n>  Magnus Hagander\n>  Me: http://www.hagander.net/\n>  Work: http://www.redpill-linpro.com/\n\n\n\n--\nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n--  Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/", "msg_date": "Tue, 25 Mar 2014 16:40:02 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "OK, agreed. Ive got your point;-)\n\nOn Tue, Mar 25, 2014 at 4:40 PM, Magnus Hagander <[email protected]> wrote:\n> Oh, I agree it's good that you should know both methods. I only disagree\n> with that the choice of rsync be made with the argument of simplicity.\n> Simplicity is one of the main reasons to choose the *other* method\n> (pg_basebackup), and the rsync method is for more advanced usecases. But\n> it's definitely good to know both!\n>\n> //Magnus\n>\n>\n>\n> On Tue, Mar 25, 2014 at 4:37 PM, Ilya Kosmodemiansky\n> <[email protected]> wrote:\n>>\n>> Magnus,\n>>\n>> That is correct, but I'am afraid that such all-in-one functionality\n>> also hides from one how backup really works. Probably such sort of\n>> knowledge is so essential for a DBA, that it is better to learn both\n>> methods, at least to be able to choose correctly? But maybe it is a\n>> rhetorical question.\n>>\n>> On Tue, Mar 25, 2014 at 4:21 PM, Magnus Hagander <[email protected]>\n>> wrote:\n>> > I would say that's the one thing that rsync is *not*. pg_basebackup\n>> > takes\n>> > care of a lot of things under the hood. rsync is a lot more complicated,\n>> > in\n>> > particular in failure scenarios, since you have to manually deal with\n>> > pg_start/stop_backup().\n>> >\n>> > There are definitely reasons you'd prefer rsync over pg_basebackup, but\n>> > I\n>> > don't believe simplicity is one of them.\n>> >\n>> > //Magnus\n>> >\n>> >\n>> > On Tue, Mar 25, 2014 at 4:18 PM, Ilya Kosmodemiansky\n>> > <[email protected]> wrote:\n>> >>\n>> >> Joshua,\n>> >>\n>> >> that is really good point: an alternative is to use pg_basebackup\n>> >> through ssh tunnel with compression, but rsync is much simpler.\n>> >>\n>> >> On Tue, Mar 25, 2014 at 3:56 PM, Joshua D. 
Drake <[email protected]>\n>> >> wrote:\n>> >> >\n>> >> > On 03/25/2014 05:05 AM, Claudio Freire wrote:\n>> >> >>\n>> >> >>\n>> >> >> On Tue, Mar 25, 2014 at 4:39 AM, David Johnston <[email protected]>\n>> >> >> wrote:\n>> >> >>>>\n>> >> >>>> Hai,\n>> >> >>>>\n>> >> >>>> Can anyone tell me the difference and performance between pgdump\n>> >> >>>> and\n>> >> >>>> pg_basebackup if I want to backup a large database.\n>> >> >>>>\n>> >> >\n>> >> > Honestly,\n>> >> >\n>> >> > Neither is particularly good at backing up large databases. I would\n>> >> > look\n>> >> > into PITR with rsync.\n>> >> >\n>> >> > JD\n>> >> >\n>> >> >\n>> >> > --\n>> >> > Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\n>> >> > PostgreSQL Support, Training, Professional Services and Development\n>> >> > High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n>> >> > Political Correctness is for cowards.\n>> >> >\n>> >> >\n>> >> >\n>> >> > --\n>> >> > Sent via pgsql-performance mailing list\n>> >> > ([email protected])\n>> >> > To make changes to your subscription:\n>> >> > http://www.postgresql.org/mailpref/pgsql-performance\n>> >>\n>> >>\n>> >>\n>> >> --\n>> >> Ilya Kosmodemiansky,\n>> >>\n>> >> PostgreSQL-Consulting.com\n>> >> tel. +14084142500\n>> >> cell. +4915144336040\n>> >> [email protected]\n>> >>\n>> >>\n>> >> --\n>> >> Sent via pgsql-performance mailing list\n>> >> ([email protected])\n>> >> To make changes to your subscription:\n>> >> http://www.postgresql.org/mailpref/pgsql-performance\n>> >\n>> >\n>> >\n>> >\n>> > --\n>> > Magnus Hagander\n>> > Me: http://www.hagander.net/\n>> > Work: http://www.redpill-linpro.com/\n>>\n>>\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n>> [email protected]\n>\n>\n>\n>\n> --\n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 16:41:46 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "\nPostgresql rsync backups require the DB to be shutdown during the 'second' rsync.\n\n1. rsync the DB onto the backup filesystem (produces e.g. 95-99.99% consistent DB on the backup filesystem)\n2. shut down the DB\n3. rsync the shut down DB onto the backup filesystem (synchronises the last few files to make the DB consistent, and is usually very fast)\n4. start the DB up again\n\nIs there any way to notify postgres to pause transactions (and note that they should be restarted), and flush out write buffers etc, instead of doing a full shutdown? \ne.g. so that the second rsync call would bring the backup filesystem's representation of the DB into a recoverable state without needing to shutdown the production DB completely. \n\nG\n\nOn 25 Mar 2014, at 16:29, Ilya Kosmodemiansky <[email protected]> wrote:\n\n> Joshua,\n> \n> On Tue, Mar 25, 2014 at 4:22 PM, Joshua D. Drake <[email protected]> wrote:\n> The advantage is that you can create backups that don't\n>> have to be restored, just started. 
You can also use the differential\n>> portions of rsync to do it multiple times a day without much issue.\n> \n> Are you sure, that it is a nice idea on a database with heavy write workload?\n> \n> And also Im not sure, that differential backups using rsync will be\n> recoverable, if you have actually meant that.\n> \n>> \n>> \n>> JD\n>> \n>> --\n>> Command Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\n>> PostgreSQL Support, Training, Professional Services and Development\n>> High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\n>> Political Correctness is for cowards.\n> \n> \n> \n> -- \n> Ilya Kosmodemiansky,\n> \n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 15:48:07 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "\nOn 03/25/2014 08:21 AM, Magnus Hagander wrote:\n> I would say that's the one thing that rsync is *not*. pg_basebackup\n> takes care of a lot of things under the hood. rsync is a lot more\n> complicated, in particular in failure scenarios, since you have to\n> manually deal with pg_start/stop_backup().\n>\n> There are definitely reasons you'd prefer rsync over pg_basebackup, but\n> I don't believe simplicity is one of them.\n>\n> //Magnus\n\nGood God man... since when do you top post!\n\nWell there are tools that use rsync to solve those issues :P. We even \nhave one that does multi-threaded rsync so you can pull many Terabytes \nin very little time (relatively).\n\nJD\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nPolitical Correctness is for cowards.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 08:55:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "On Tuesday, March 25, 2014 03:48:07 PM Graeme B. Bell wrote:\n> Postgresql rsync backups require the DB to be shutdown during the 'second'\n> rsync.\n> \n> 1. rsync the DB onto the backup filesystem (produces e.g. 95-99.99%\n> consistent DB on the backup filesystem) 2. shut down the DB\n> 3. rsync the shut down DB onto the backup filesystem (synchronises the\n> last few files to make the DB consistent, and is usually very fast) 4.\n> start the DB up again\n> \n> Is there any way to notify postgres to pause transactions (and note that\n> they should be restarted), and flush out write buffers etc, instead of\n> doing a full shutdown? e.g. so that the second rsync call would bring the\n> backup filesystem's representation of the DB into a recoverable state\n> without needing to shutdown the production DB completely.\n> \n\nYou use pg_start_backup() before rsync, and pg_stop_backup() after. And keep \nall your WAL log files. 
No need to pause transactions; whatever happens during \nthe rsync just gets replayed during recovery (as I understand it). You do need \nto do a PITR restore to make use of this rsync copy.\n\nThat's basically what pg_basebackup does, I believe (I haven't used it, I only \ndo rsyncs).\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 09:12:46 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" }, { "msg_contents": "On Tue, Mar 25, 2014 at 12:22 PM, Joshua D. Drake <[email protected]> wrote:\n> On 03/25/2014 08:18 AM, Ilya Kosmodemiansky wrote:\n>>\n>>\n>> Joshua,\n>>\n>> that is really good point: an alternative is to use pg_basebackup\n>> through ssh tunnel with compression, but rsync is much simpler.\n>\n>\n> Or rsync over ssh. The advantage is that you can create backups that don't\n> have to be restored, just started. You can also use the differential\n> portions of rsync to do it multiple times a day without much issue.\n\n\nrsync's delta transfer isn't really very effective with postgres. You\ndon't save any I/O, just network traffic, and in general the\nbottleneck is I/O (unless you have a monster I/O subsys or a snail of\na network one).\n\nThere were some musings about making delta transfer more efficient in\npg in hackers, but I don't think anything tangible came out of that,\nso it's basically equivalent to a full transfer. The only reason to\nleverage rsync's delta transfer would be to decrease the time between\npg_start_backup and pg_stop_backup, which could only matter if you're\nlow on WAL space, but the reduction, in my experience, isn't stellar.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 13:46:03 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump vs pg_basebackup" } ]
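For concreteness, here is a minimal sketch of the two invocations compared in the thread above. The host, user, and paths are illustrative assumptions, not taken from the thread, and the flags are 9.x-era options worth checking against the documentation for your version:

    # Logical backup: one database, custom-format compressed dump;
    # CPU-heavy on the server side, restored with pg_restore.
    pg_dump -h dbhost -U postgres -Fc -f /backups/mydb.dump mydb

    # Physical backup: copies the whole cluster over a replication
    # connection; mostly sequential I/O, and restore is "start the server".
    # Needs a replication entry in pg_hba.conf and max_wal_senders > 0;
    # -x bundles the WAL required to make the copy consistent on its own.
    pg_basebackup -h dbhost -U postgres -D /backups/base -Ft -z -P -x

Note the scope difference as well: pg_dump backs up a single database logically, while pg_basebackup always copies the entire cluster.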
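And a minimal sketch of the pg_start_backup()/rsync/pg_stop_backup() routine described by Alan and Graeme above. Everything here — paths, destination host, backup label — is an assumption for illustration, and it presumes WAL archiving (archive_command) is already configured so every segment written during the copy is kept:

    #!/bin/bash
    # Minimal low-level base backup via rsync, per the routine above.
    PGDATA=/var/lib/pgsql/9.1/data        # illustrative path
    DEST=backuphost:/backups/pg/base      # illustrative destination

    psql -U postgres -c "SELECT pg_start_backup('rsync_base', true);" || exit 1

    # Data files may keep changing while rsync runs; WAL replay repairs
    # that at restore time. pg_xlog is excluded because recovery reads
    # WAL from the archive instead.
    rsync -a --delete --exclude=pg_xlog --exclude=postmaster.pid \
        "$PGDATA/" "$DEST/"
    rc=$?

    # Always leave backup mode, even if the copy failed.
    psql -U postgres -c "SELECT pg_stop_backup();"

    # rsync exit code 24 (source files vanished mid-copy) is expected here.
    [ "$rc" -eq 0 ] || [ "$rc" -eq 24 ] || exit 1

A copy taken this way is only usable via PITR recovery — a recovery.conf whose restore_command fetches from the WAL archive — which is the point Alan makes above.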
[ { "msg_contents": "Hi everyone!\n\n\nI've been working on a puzzling issue for a few days am am hoping that someone has seen something similar or can help. There have been some odd behaviors on one of my production facing postgres servers.\n\n\nversion info from postgres: PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n\n\nThe symptom: The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect. Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior. The event will occur just long enough for monitoring to alarm. We will respond to alerts to take a look, but within a minute or three at most, load returns back to normal levels and all running queries complete in expected times.\n\n\nAt the time of the event, we see a spike in system CPU and load average, but we do not see a corresponding spike in disk reads or writes which would indicate IO load. Initial troubleshooting to monitor active processes led us to see a flurry of activity in ps waiting on semtimedop. Our efforts internally to diagnose this problem are to sample pg_locks and pg_stat_activity every 5s plus running a script to look for at least one postgres process waiting on a semaphore, and if it finds one, it gets a stack trace of every running postgres processes with GDB. It also uses strace on 5 processes to find out which semaphore they're waiting on.\n\n\nWhat we were catching in the following stack trace seems to be representative of where things are waiting when we see an event - here are two examples that are representative - lots of threads will appear to be in this state:\n\n\n----- 47245 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0 0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1 0x00000000005e0c87 in PGSemaphoreLock ()\n#2 0x000000000061e3af in LWLockAcquire ()\n#3 0x000000000060aa0f in ReadBuffer_common ()\n#4 0x000000000060b2e4 in ReadBufferExtended ()\n#5 0x000000000047708d in _bt_relandgetbuf ()\n#6 0x000000000047aac4 in _bt_search ()\n#7 0x000000000047af8d in _bt_first ()\n#8 0x0000000000479704 in btgetbitmap ()\n#9 0x00000000006e7e00 in FunctionCall2Coll ()\n#10 0x0000000000473120 in index_getbitmap ()\n#11 0x00000000005726b8 in MultiExecBitmapIndexScan ()\n#12 0x000000000057214d in BitmapHeapNext ()\n#13 0x000000000056b18e in ExecScan ()\n#14 0x0000000000563ed8 in ExecProcNode ()\n#15 0x0000000000562d72 in standard_ExecutorRun ()\n#16 0x000000000062ce67 in PortalRunSelect ()\n#17 0x000000000062e128 in PortalRun ()\n#18 0x000000000062bb66 in PostgresMain ()\n#19 0x00000000005ecd01 in ServerLoop ()\n#20 0x00000000005ef401 in PostmasterMain ()\n#21 0x0000000000590ff8 in main ()\n\n----- 47257 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0 0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1 0x00000000005e0c87 in PGSemaphoreLock ()\n#2 0x000000000061e3af in LWLockAcquire ()\n#3 0x000000000060aa0f in ReadBuffer_common ()\n#4 0x000000000060b2e4 in ReadBufferExtended ()\n#5 0x000000000047708d in _bt_relandgetbuf ()\n#6 0x000000000047aac4 in _bt_search ()\n#7 0x000000000047af8d in _bt_first ()\n#8 0x00000000004797d1 in btgettuple ()\n#9 0x00000000006e7e00 in FunctionCall2Coll ()\n#10 
0x000000000047339d in index_getnext ()\n#11 0x0000000000575ed6 in IndexNext ()\n#12 0x000000000056b18e in ExecScan ()\n#13 0x0000000000563ee8 in ExecProcNode ()\n#14 0x0000000000562d72 in standard_ExecutorRun ()\n#15 0x000000000062ce67 in PortalRunSelect ()\n#16 0x000000000062e128 in PortalRun ()\n#17 0x000000000062bb66 in PostgresMain ()\n#18 0x00000000005ecd01 in ServerLoop ()\n#19 0x00000000005ef401 in PostmasterMain ()\n#20 0x0000000000590ff8 in main ()\n\n\nHas any on the forum seen something similar? Any suggestions on what to look at next? If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid 10 local storage (15K 300 GB drives). The workload is predominantly read and the queries are mostly fairly simple selects from a single large table generally specifying the primary key as part of the where clause along with a few other filters.\n\n\nThanks,\n\nMatt\n\n\n\n\n\n\n\n\nHi everyone! \n\n\nI've been working on a puzzling issue for a few days am am hoping that someone has seen something similar or can help.  There have been some odd behaviors on one of my production facing postgres servers.  \n\n\nversion info from postgres: PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n\n\nThe symptom:   The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect.\n  Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior.  The event will occur just long enough for monitoring\n to alarm.   We will respond to alerts to take a look, but within a minute or three at most, load returns back to normal levels and all running queries complete in expected times.   \n\n\nAt the time of the event, we see a spike in system CPU and load average, but we do not see a corresponding spike in disk reads or writes which would indicate IO load.   Initial troubleshooting to monitor active processes\n led us to see a flurry of activity in ps waiting on semtimedop.   Our efforts internally to diagnose this problem are to sample\n pg_locks and pg_stat_activity every 5s plus running a script to look for at least one postgres process waiting on a semaphore, and if it finds one, it gets a stack trace of every running postgres processes with\n GDB.  It also uses strace on 5 processes to find out which semaphore they're waiting on.  
\n\n\nWhat we were catching in the following stack trace seems to be representative of where things are waiting when we see an event - here are two examples that are representative - lots of threads will appear to be in this state:\n\n\n----- 47245 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0  0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1  0x00000000005e0c87 in PGSemaphoreLock ()\n#2  0x000000000061e3af in LWLockAcquire ()\n#3  0x000000000060aa0f in ReadBuffer_common ()\n#4  0x000000000060b2e4 in ReadBufferExtended ()\n#5  0x000000000047708d in _bt_relandgetbuf ()\n#6  0x000000000047aac4 in _bt_search ()\n#7  0x000000000047af8d in _bt_first ()\n#8  0x0000000000479704 in btgetbitmap ()\n#9  0x00000000006e7e00 in FunctionCall2Coll ()\n#10 0x0000000000473120 in index_getbitmap ()\n#11 0x00000000005726b8 in MultiExecBitmapIndexScan ()\n#12 0x000000000057214d in BitmapHeapNext ()\n#13 0x000000000056b18e in ExecScan ()\n#14 0x0000000000563ed8 in ExecProcNode ()\n#15 0x0000000000562d72 in standard_ExecutorRun ()\n#16 0x000000000062ce67 in PortalRunSelect ()\n#17 0x000000000062e128 in PortalRun ()\n#18 0x000000000062bb66 in PostgresMain ()\n#19 0x00000000005ecd01 in ServerLoop ()\n#20 0x00000000005ef401 in PostmasterMain ()\n#21 0x0000000000590ff8 in main ()\n\n----- 47257 -----\n0x00000037392eb197 in semop () from /lib64/libc.so.6\n#0  0x00000037392eb197 in semop () from /lib64/libc.so.6\n#1  0x00000000005e0c87 in PGSemaphoreLock ()\n#2  0x000000000061e3af in LWLockAcquire ()\n#3  0x000000000060aa0f in ReadBuffer_common ()\n#4  0x000000000060b2e4 in ReadBufferExtended ()\n#5  0x000000000047708d in _bt_relandgetbuf ()\n#6  0x000000000047aac4 in _bt_search ()\n#7  0x000000000047af8d in _bt_first ()\n#8  0x00000000004797d1 in btgettuple ()\n#9  0x00000000006e7e00 in FunctionCall2Coll ()\n#10 0x000000000047339d in index_getnext ()\n#11 0x0000000000575ed6 in IndexNext ()\n#12 0x000000000056b18e in ExecScan ()\n#13 0x0000000000563ee8 in ExecProcNode ()\n#14 0x0000000000562d72 in standard_ExecutorRun ()\n#15 0x000000000062ce67 in PortalRunSelect ()\n#16 0x000000000062e128 in PortalRun ()\n#17 0x000000000062bb66 in PostgresMain ()\n#18 0x00000000005ecd01 in ServerLoop ()\n#19 0x00000000005ef401 in PostmasterMain ()\n#20 0x0000000000590ff8 in main ()\n\n\n\nHas any on the forum seen something similar?   Any suggestions on what to look at next?    If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid\n 10 local storage (15K 300 GB drives).   The workload is predominantly read and the queries are mostly fairly simple selects from a single large table generally specifying the primary key as part of the where clause along\n with a few other filters.  \n\n\nThanks,\nMatt", "msg_date": "Tue, 25 Mar 2014 12:46:43 +0000", "msg_from": "Matthew Spilich <[email protected]>", "msg_from_op": true, "msg_subject": "Stalls on PGSemaphoreLock" }, { "msg_contents": "On Mar 25, 2014, at 8:46 AM, Matthew Spilich wrote:\n\n> The symptom: The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect. Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior.\n...\n> Has any on the forum seen something similar? 
Any suggestions on what to look at next? If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid 10 local storage (15K 300 GB drives). \n\n\n\nI could be way off here, but years ago I experienced something like this (in oracle land) and after some stressful chasing, the marginal failure of the raid controller revealed itself. Same kind of event, steady traffic and then some i/o would not complete and normal ops would stack up. Anyway, what you report reminded me of that event. The E5 is a few years old, I wonder if the raid controller firmware needs a patch? I suppose a marginal power supply might cause a similar \"hang.\" Anyway, marginal failures are very painful. Have you checked sar or OS logging at event time?", "msg_date": "Tue, 25 Mar 2014 13:17:11 -0400", "msg_from": "Ray Stell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stalls on PGSemaphoreLock" }, { "msg_contents": "Hello\n\nRecently I had a similar problem. The first symptom was a freeze of connections and 100% system CPU lasting between 2 and 10 minutes, 1 or 2 times per day.\nConnections were impossible and queries slow. An strace on one backend showed a very long system call in semop().\nWe have a node with 48 cores and 128 GB of memory.\n\nWe have disabled the hugepage and upgraded the semaphore configuration, and since that time we no longer have any freezes on our instance.\n\nCan you check the hugepage and semaphore configuration on your node?\n\nI am interested in this case, so do not hesitate to report back on the outcome. 
Thanks.\n\nexcuse me for my bad english !!!\n\n________________________________________\nDe : [email protected] [[email protected]] de la part de Ray Stell [[email protected]]\nDate d'envoi : mardi 25 mars 2014 18:17\nÀ : Matthew Spilich\nCc : [email protected]\nObjet : Re: [PERFORM] Stalls on PGSemaphoreLock\n\nOn Mar 25, 2014, at 8:46 AM, Matthew Spilich wrote:\n\nThe symptom: The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect. Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior.\n...\nHas any on the forum seen something similar? Any suggestions on what to look at next? If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid 10 local storage (15K 300 GB drives).\n\n\nI could be way off here, but years ago I experienced something like this (in oracle land) and after some stressful chasing, the marginal failure of the raid controller revealed itself. Same kind of event, steady traffic and then some i/o would not complete and normal ops would stack up. Anyway, what you report reminded me of that event. The E5 is a few years old, I wonder if the raid controller firmware needs a patch? I suppose a marginal power supply might cause a similar \"hang.\" Anyway, marginal failures are very painful. Have you checked sar or OS logging at event time?\n\n\nCe message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 18:45:17 +0100", "msg_from": "Pavy Philippe <[email protected]>", "msg_from_op": false, "msg_subject": "RE : Stalls on PGSemaphoreLock" }, { "msg_contents": "Thanks all: \n\nRay: Thanks, we started to look at the hardware/firmware, but didn't get to the the level of detail or running sar. I will probably collect more detail in this area if I continue to see issues.\n\nPavy - I hope that you are right that the hugepage setting is the issue. 
I was under the impression that I had it disabled already because this has been an known issue for us in the past, but it turns out this was not the case for this server in question. I have disabled it at this time, but it will take a few days of running without issue before I am comfortable declaring that this is the solution. Can you elaborate on the change you mention to \"upgrade the semaphore configuration\"? I think this is not something I have looked at before.\n\nAshutosh - Thanks for the reply, I started to do that at first. I turned on log_statement=all for a few hours and I generated a few GB of log file, and I didn't want to leave it running in that state for too long because the issue happens every few days, and not on any regular schedule, so I reverted that after collecting a few GB of detail in the pg log. What I'm doing now to sample every few seconds is I think giving me a decent picture of what is going on with the incident occurs and is a level of data collection that I am more comfortable will not impact operations. I am also logging at the level of 'mod' and all duration > 500ms. I don't see that large write operations are a contributing factor leading up to these incidents.\n\nI'm hoping that disabling the hugepage setting will be the solution to this. I'll check back in a day or two with feedback.\n\nThanks,\nMatt\n\n\n________________________________________\nFrom: Pavy Philippe [[email protected]]\nSent: Tuesday, March 25, 2014 1:45 PM\nTo: Ray Stell; Matthew Spilich\nCc: [email protected]\nSubject: RE : [PERFORM] Stalls on PGSemaphoreLock\n\nHello\n\nRecently I have a similar problem. The first symptom was a freeze of the connection and 100% of CPU SYS during 2 et 10 minutes, 1 or 2 times per day.\nConnection impossible, slow query. The strace on one backend show a very long system call on semop().\nWe have a node with 48 cores dans 128 Go of memory.\n\nWe have disable the hugepage and upgrade the semaphore configuration, and since that time, we no longer have any problem of freeze on our instance.\n\nCan you check the hugepage and semaphore configuration on our node ?\n\nI am interested in this case, so do not hesitate to let me make a comeback. Thanks.\n\nexcuse me for my bad english !!!\n\n________________________________________\nDe : [email protected] [[email protected]] de la part de Ray Stell [[email protected]]\nDate d'envoi : mardi 25 mars 2014 18:17\nÀ : Matthew Spilich\nCc : [email protected]\nObjet : Re: [PERFORM] Stalls on PGSemaphoreLock\n\nOn Mar 25, 2014, at 8:46 AM, Matthew Spilich wrote:\n\nThe symptom: The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect. Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior.\n...\nHas any on the forum seen something similar? Any suggestions on what to look at next? If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid 10 local storage (15K 300 GB drives).\n\n\nI could be way off here, but years ago I experienced something like this (in oracle land) and after some stressful chasing, the marginal failure of the raid controller revealed itself. 
Same kind of event, steady traffic and then some i/o would not complete and normal ops would stack up. Anyway, what you report reminded me of that event. The E5 is a few years old, I wonder if the raid controller firmware needs a patch? I suppose a marginal power supply might cause a similar \"hang.\" Anyway, marginal failures are very painful. Have you checked sar or OS logging at event time?\n\n\nCe message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 18:38:27 +0000", "msg_from": "Matthew Spilich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stalls on PGSemaphoreLock" }, { "msg_contents": "Here, we were the transparent hugepage always actif:\n cat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n [always] never\n\nWe changed to:\ncat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n always [never]\n\n\n\nFor the semaphore, our initial configuration was:\n cat /proc/sys/kernel/sem\n 250 32000 32 128\n\nAnd we changed to:\n cat /proc/sys/kernel/sem\n 5010 641280 5010 128\n\n\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] De la part de Matthew Spilich\nEnvoyé : mardi 25 mars 2014 19:38\nÀ : [email protected]\nObjet : Re: [PERFORM] Stalls on PGSemaphoreLock\n\nThanks all:\n\nRay: Thanks, we started to look at the hardware/firmware, but didn't get to the the level of detail or running sar. I will probably collect more detail in this area if I continue to see issues.\n\nPavy - I hope that you are right that the hugepage setting is the issue. I was under the impression that I had it disabled already because this has been an known issue for us in the past, but it turns out this was not the case for this server in question. I have disabled it at this time, but it will take a few days of running without issue before I am comfortable declaring that this is the solution. Can you elaborate on the change you mention to \"upgrade the semaphore configuration\"? I think this is not something I have looked at before.\n\nAshutosh - Thanks for the reply, I started to do that at first. 
I turned on log_statement=all for a few hours and I generated a few GB of log file, and I didn't want to leave it running in that state for too long because the issue happens every few days, and not on any regular schedule, so I reverted that after collecting a few GB of detail in the pg log. What I'm doing now to sample every few seconds is I think giving me a decent picture of what is going on with the incident occurs and is a level of data collection that I am more comfortable will not impact operations. I am also logging at the level of 'mod' and all duration > 500ms. I don't see that large write operations are a contributing factor leading up to these incidents.\n\nI'm hoping that disabling the hugepage setting will be the solution to this. I'll check back in a day or two with feedback.\n\nThanks,\nMatt\n\n\n________________________________________\nFrom: Pavy Philippe [[email protected]]\nSent: Tuesday, March 25, 2014 1:45 PM\nTo: Ray Stell; Matthew Spilich\nCc: [email protected]\nSubject: RE : [PERFORM] Stalls on PGSemaphoreLock\n\nHello\n\nRecently I have a similar problem. The first symptom was a freeze of the connection and 100% of CPU SYS during 2 et 10 minutes, 1 or 2 times per day.\nConnection impossible, slow query. The strace on one backend show a very long system call on semop().\nWe have a node with 48 cores dans 128 Go of memory.\n\nWe have disable the hugepage and upgrade the semaphore configuration, and since that time, we no longer have any problem of freeze on our instance.\n\nCan you check the hugepage and semaphore configuration on our node ?\n\nI am interested in this case, so do not hesitate to let me make a comeback. Thanks.\n\nexcuse me for my bad english !!!\n\n________________________________________\nDe : [email protected] [[email protected]] de la part de Ray Stell [[email protected]] Date d'envoi : mardi 25 mars 2014 18:17 À : Matthew Spilich Cc : [email protected] Objet : Re: [PERFORM] Stalls on PGSemaphoreLock\n\nOn Mar 25, 2014, at 8:46 AM, Matthew Spilich wrote:\n\nThe symptom: The database machine (running postgres 9.1.9 on CentOS 6.4) is running a low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect. Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior.\n...\nHas any on the forum seen something similar? Any suggestions on what to look at next? If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid 10 local storage (15K 300 GB drives).\n\n\nI could be way off here, but years ago I experienced something like this (in oracle land) and after some stressful chasing, the marginal failure of the raid controller revealed itself. Same kind of event, steady traffic and then some i/o would not complete and normal ops would stack up. Anyway, what you report reminded me of that event. The E5 is a few years old, I wonder if the raid controller firmware needs a patch? I suppose a marginal power supply might cause a similar \"hang.\" Anyway, marginal failures are very painful. Have you checked sar or OS logging at event time?\n\n\nCe message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. 
Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\nCe message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Mar 2014 21:10:21 +0100", "msg_from": "Pavy Philippe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stalls on PGSemaphoreLock" }, { "msg_contents": "2014-03-25, Matthew Spilich <[email protected]>:\n\n> Has any on the forum seen something similar? Any suggestions on what\n> to look at next? If it is helpful to describe the server hardware, it's\n> got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid\n> 10 local storage (15K 300 GB drives). The workload is predominantly\n> read and the queries are mostly fairly simple selects from a single large\n> table generally specifying the primary key as part of the where clause\n> along with a few other filters.\n>\nI have seen something similar. It was because of\nlarge shared_buffers.\n\n2014-03-25, Matthew Spilich <[email protected]>:\n\n\nHas any on the forum seen something similar?   Any suggestions on what to look at next?    
", "msg_date": "Tue, 25 Mar 2014 22:35:31 +0200", "msg_from": "Emre Hasegeli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stalls on PGSemaphoreLock" }, { "msg_contents": "On Mar 25, 2014, at 8:46 AM, Matthew Spilich wrote:\n> Has any on the forum seen something similar? \n\nI think I reported a similar phenomenon in my SIGMOD 2013 paper (Latch-free\ndata structures for DBMS: design, implementation, and evaluation,\n<http://dl.acm.org/citation.cfm?id=2463720>). \n\n> ----- 47245 -----\n> 0x00000037392eb197 in semop () from /lib64/libc.so.6\n> #0 0x00000037392eb197 in semop () from /lib64/libc.so.6\n> #1 0x00000000005e0c87 in PGSemaphoreLock ()\n> #2 0x000000000061e3af in LWLockAcquire ()\n> #3 0x000000000060aa0f in ReadBuffer_common ()\n> #4 0x000000000060b2e4 in ReadBufferExtended ()\n...\n\n> ----- 47257 -----\n> 0x00000037392eb197 in semop () from /lib64/libc.so.6\n> #0 0x00000037392eb197 in semop () from /lib64/libc.so.6\n> #1 0x00000000005e0c87 in PGSemaphoreLock ()\n> #2 0x000000000061e3af in LWLockAcquire ()\n> #3 0x000000000060aa0f in ReadBuffer_common ()\n> #4 0x000000000060b2e4 in ReadBufferExtended ()\n...\n\nThese stack traces indicate that there was heavy contention on the\nLWLocks that protect buffers. What I observed is that, in a similar situation, there\nwas also heavy contention on the spin locks that ensure mutual exclusion of\nthe LWLock status data. 
Those contentions resulted in a sudden increase in CPU\nutilization, which is consistent with the following description.\n> At the time of the event, we see a spike in system CPU and load average,\nbut we do not see a corresponding spike in disk reads or writes which would\nindicate IO load.\n\nIf the cause of the problem is the same as what I observed, a possible\nimmediate countermeasure is increasing the value of 'NUM_BUFFER_PARTITIONS'\ndefined in src/include/storage/lwlock.h from 16 to, for example, 128 or 256,\nand rebuilding the binary.\n# Using the latch-free buffer manager proposed in my paper would take a long\ntime, since it has not been incorporated upstream.\n\n--\nTakashi Horikawa, Ph.D.,\nKnowledge Discovery Research Laboratories,\nNEC Corporation.", "msg_date": "Wed, 26 Mar 2014 06:26:37 +0000", "msg_from": "Takashi Horikawa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stalls on PGSemaphoreLock" }, { "msg_contents": "Hi Pavy!\nWhat kernel version/RHEL release are you running on the servers where you are experiencing these issues?\nI'm interested in knowing, since I suspect similar issues on some of our database servers.\n\nBest regards, Martin \n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Pavy Philippe\n> Sent: den 25 mars 2014 9:10\n> To: Matthew Spilich; [email protected]\n> Subject: Re: [PERFORM] Stalls on PGSemaphoreLock\n> \n> Here, we had the transparent hugepage setting active (always):\n> cat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n> [always] never\n> \n> We changed it to:\n> cat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n> always [never]\n> \n> \n> \n> For the semaphores, our initial configuration was:\n> cat /proc/sys/kernel/sem\n> 250 32000 32 128\n> \n> And we changed it to:\n> cat /proc/sys/kernel/sem\n> 5010 641280 5010 128\n> ", "msg_date": "Thu, 27 Mar 2014 07:38:51 +0000", "msg_from": "\"Gudmundsson Martin (mg)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stalls on PGSemaphoreLock" }, { "msg_contents": "Hi all - I am a little delayed in reporting back on this issue, but it was indeed the hugepage defrag setting that was the cause of my issue. One item that we noticed as we were testing this issue that I wanted to report back to the forum is that these settings \n\ncat /sys/kernel/mm/transparent_hugepage/defrag \nalways [never]\n\ncat /sys/kernel/mm/transparent_hugepage/enabled \nalways [never]\n\nwere not sticky on reboot for my version of CentOS, which probably explains why I thought this was disabled already only to have it crop back up. 
Anyway, I wanted to report back these findings to close the loop on this and to thank the community again for their support.\n\nBest,\nMatt\n\n________________________________________\nFrom: Pavy Philippe [[email protected]]\nSent: Tuesday, March 25, 2014 4:10 PM\nTo: Matthew Spilich; [email protected]\nSubject: RE: [PERFORM] Stalls on PGSemaphoreLock\n\nHere, we had the transparent hugepage setting active (always):\n cat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n [always] never\n\nWe changed it to:\ncat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n always [never]\n\n\n\nFor the semaphores, our initial configuration was:\n cat /proc/sys/kernel/sem\n 250 32000 32 128\n\nAnd we changed it to:\n cat /proc/sys/kernel/sem\n 5010 641280 5010 128\n\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On behalf of Matthew Spilich\nSent: Tuesday, March 25, 2014 19:38\nTo: [email protected]\nSubject: Re: [PERFORM] Stalls on PGSemaphoreLock\n\nThanks all:\n\nRay: Thanks, we started to look at the hardware/firmware, but didn't get to that level of detail or running sar. I will probably collect more detail in this area if I continue to see issues.\n\nPavy - I hope that you are right that the hugepage setting is the issue. I was under the impression that I had it disabled already because this has been a known issue for us in the past, but it turns out this was not the case for this server in question. I have disabled it at this time, but it will take a few days of running without issue before I am comfortable declaring that this is the solution. Can you elaborate on the change you mention to \"upgrade the semaphore configuration\"? I think this is not something I have looked at before.\n\nAshutosh - Thanks for the reply, I started to do that at first. I turned on log_statement=all for a few hours and I generated a few GB of log file, and I didn't want to leave it running in that state for too long because the issue happens every few days, and not on any regular schedule, so I reverted that after collecting a few GB of detail in the pg log. What I'm doing now to sample every few seconds is I think giving me a decent picture of what is going on when the incident occurs, and is a level of data collection that I am more comfortable will not impact operations. I am also logging at the level of 'mod' and all duration > 500ms. I don't see that large write operations are a contributing factor leading up to these incidents.\n\nI'm hoping that disabling the hugepage setting will be the solution to this. I'll check back in a day or two with feedback.\n\nThanks,\nMatt\n\n\n________________________________________\nFrom: Pavy Philippe [[email protected]]\nSent: Tuesday, March 25, 2014 1:45 PM\nTo: Ray Stell; Matthew Spilich\nCc: [email protected]\nSubject: RE: [PERFORM] Stalls on PGSemaphoreLock\n\nHello\n\nRecently I had a similar problem. The first symptom was a freeze of connections and 100% system CPU for 2 to 10 minutes, 1 or 2 times per day.\nConnections were impossible and queries were slow. An strace on one backend showed a very long system call on semop().\nWe have a node with 48 cores and 128 GB of memory.\n\nWe disabled hugepages and upgraded the semaphore configuration, and since then we no longer have any freezes on our instance.\n\nCan you check the hugepage and semaphore configuration on your node?\n\nI am interested in this case, so do not hesitate to get back to me. 
Thanks.\n\nExcuse me for my bad English!\n\n________________________________________\nFrom: [email protected] [[email protected]] on behalf of Ray Stell [[email protected]] Sent: Tuesday, March 25, 2014 18:17 To: Matthew Spilich Cc: [email protected] Subject: Re: [PERFORM] Stalls on PGSemaphoreLock\n\nOn Mar 25, 2014, at 8:46 AM, Matthew Spilich wrote:\n\nThe symptom: The database machine (running postgres 9.1.9 on CentOS 6.4) is running at low utilization most of the time, but once every day or two, it will appear to slow down to the point where queries back up and clients are unable to connect. Once this event occurs, there are lots of concurrent queries, I see slow queries appear in the logs, but there doesn't appear to be anything abnormal that I have been able to see that causes this behavior.\n...\nHas any on the forum seen something similar? Any suggestions on what to look at next? If it is helpful to describe the server hardware, it's got 2 E5-2670 cpu and 256 GB of ram, and the database is hosted on 1.6TB raid 10 local storage (15K 300 GB drives).\n\n\nI could be way off here, but years ago I experienced something like this (in oracle land) and after some stressful chasing, the marginal failure of the raid controller revealed itself. Same kind of event, steady traffic and then some i/o would not complete and normal ops would stack up. Anyway, what you report reminded me of that event. The E5 is a few years old, I wonder if the raid controller firmware needs a patch? I suppose a marginal power supply might cause a similar \"hang.\" Anyway, marginal failures are very painful. Have you checked sar or OS logging at event time?
", "msg_date": "Tue, 22 Apr 2014 15:12:32 +0000", "msg_from": "Matthew Spilich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stalls on PGSemaphoreLock" }, { "msg_contents": "On Tue, Apr 22, 2014 at 12:12 PM, Matthew Spilich <[email protected]> wrote:\n\n> Hi all - I am a little delayed in reporting back on this issue, but it\n> was indeed the hugepage defrag setting that was the cause of my issue.\n>\n\nThe transparent huge pages feature seems so bogus for database workloads\nthat it is one of the first things I disable on new servers (I have tried\nleaving it enabled sometimes, but every time the system was better with it\ndisabled).\n\n\n> One item that we noticed as we were testing this issue that I wanted to\n> report back to the forum is that these settings\n> ...\n> were not sticky on reboot for my version of CentOS, which probably explains\n> why I thought this was disabled already only to have it crop back up.\n>\n\nJust changing files at /sys/ is not permanent, so I recommend adding these\ncommands to your /etc/rc.local file:\n\n test -f /sys/kernel/mm/transparent_hugepage/enabled && echo never >\n/sys/kernel/mm/transparent_hugepage/enabled\n test -f /sys/kernel/mm/transparent_hugepage/defrag && echo never >\n/sys/kernel/mm/transparent_hugepage/defrag\n\nThe tests are just to make sure the file exists, as its location\nchanges depending on the distro you are using and may also change on kernel\nupgrades.\n\nIt is also possible to add transparent_hugepage=never to the kernel command\nline in grub.conf, but I personally dislike this option.\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres
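\n\nFor reference, a minimal sketch of what the grub.conf alternative looks like. The kernel line below is hypothetical -- adapt the file location, kernel version and root device to your own system:\n\n    # /boot/grub/grub.conf (RHEL/CentOS 6-style grub legacy; an assumption,\n    # not taken from this thread) -- append the parameter to the kernel line:\n    kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root transparent_hugepage=never\n\nAfter a reboot, cat /sys/kernel/mm/transparent_hugepage/enabled should show never selected.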
", "msg_date": "Tue, 22 Apr 2014 12:48:25 -0300", "msg_from": "Matheus de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stalls on PGSemaphoreLock" } ]
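As an aside for anyone landing on this thread: the per-backend stack traces quoted earlier (the '----- 47245 -----' samples) can be gathered with a small loop during a stall. This is only a sketch; it assumes gdb and PostgreSQL debug symbols are installed and that the backends run as the postgres user:

    #!/bin/sh
    # Sample a backtrace from every postgres backend (sketch). Requires gdb,
    # debuginfo packages, and permission to ptrace the processes.
    for pid in $(pgrep -u postgres postgres); do
        echo "----- $pid -----"
        gdb -q -batch -ex bt -p "$pid" 2>/dev/null
    done

On older installs the parent process is named postmaster rather than postgres, so adjust the pgrep pattern accordingly.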
[ { "msg_contents": "I read from several sources, what maximum shared_buffers is 8GB. \n\nDoes this true? If yes, why exactly this number is maximum number of shared_buffers for good performance (on Linux 64-bits)?\n\nThanks!\n\n\nI read from several sources, what maximum shared_buffers is 8GB. Does this true? If yes, why exactly this number is maximum number of shared_buffers for good performance (on Linux 64-bits)?Thanks!", "msg_date": "Wed, 26 Mar 2014 16:21:51 +0400", "msg_from": "=?UTF-8?B?QWxleGV5IFZhc2lsaWV2?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?V2h5IHNoYXJlZF9idWZmZXJzIG1heCBpcyA4R0I/?=" }, { "msg_contents": "Hi Alexey,\n\nOn Wed, Mar 26, 2014 at 1:21 PM, Alexey Vasiliev <[email protected]> wrote:\n> I read from several sources, what maximum shared_buffers is 8GB.\n\nI believe that was an issue on some older versions, and thats why was\nmentioned in several talks. Today it is a sort of apocrypha.\n\n> Does this true? If yes, why exactly this number is maximum number of\n> shared_buffers for good performance (on Linux 64-bits)?\n\n25% of available RAM is a good idea to start. Sometimes, if you have\nheavy workload _and_ it is possible to reside whole database in\nmemory, better to use something larger, about ~75% of RAM.\n\nBest regards,\nIlya\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 26 Mar 2014 13:35:15 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "Il 26/mar/2014 13:36 \"Ilya Kosmodemiansky\" <\[email protected]> ha scritto:\n>\n> Hi Alexey,\n>\n> On Wed, Mar 26, 2014 at 1:21 PM, Alexey Vasiliev <[email protected]>\nwrote:\n> > I read from several sources, what maximum shared_buffers is 8GB.\n>\n> I believe that was an issue on some older versions, and thats why was\n> mentioned in several talks. Today it is a sort of apocrypha.\n>\n> > Does this true? If yes, why exactly this number is maximum number of\n> > shared_buffers for good performance (on Linux 64-bits)?\n>\n> 25% of available RAM is a good idea to start. Sometimes, if you have\n> heavy workload _and_ it is possible to reside whole database in\n> memory, better to use something larger, about ~75% of RAM.\n>\n> Best regards,\n> Ilya\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nmax is 1024mb.\n\nyou have to test your workload if it's too low you will get too much i/o (\nthe filesystem cache could help.. not always /*nfs*/), if too high your\ncpu will be eated by lru/ latch/ and so on.\n\nMat Dba\n\n\nIl 26/mar/2014 13:36 \"Ilya Kosmodemiansky\" <[email protected]> ha scritto:\n>\n> Hi Alexey,\n>\n> On Wed, Mar 26, 2014 at 1:21 PM, Alexey Vasiliev <[email protected]> wrote:\n> > I read from several sources, what maximum shared_buffers is 8GB.\n>\n> I believe that was an issue on some older versions, and thats why was\n> mentioned in several talks. Today it is a sort of apocrypha.\n>\n> > Does this true? 
\nmax is 1024mb.\n\nYou have to test your workload: if it's too low you will get too much i/o\n( the filesystem cache could help.. not always /*nfs*/), if too high your\ncpu will be eaten by lru/ latch/ and so on.\n\nMat Dba", "msg_date": "Wed, 26 Mar 2014 13:45:15 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "desmodemone wrote:\r\n\r\n> max is 1024mb.\r\n\r\nThat must be a typo.\r\nIt can surely be much higher.\r\n\r\nYours,\r\nLaurenz Albe", "msg_date": "Wed, 26 Mar 2014 13:23:32 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "I wanted to follow up on this question. I'm running 9.3.4.\nMy DB server has 32GB of RAM, so I have assigned 8GB to shared_buffers. It is\nquite a big db but with not much traffic. When there is traffic, it's\nusually big. \n\nLately, the kernel has been killing the postmaster for having assigned too\nmuch shared memory. The latest crash was when loading a 500MB file. \n\nShould I reduce the shared buffers in order for this to be more robust?\n\nThanks\nMarkella\n\nFrom: desmodemone \nSent: Wednesday, March 26, 2014 12:45 PM\nSubject: Re: [PERFORM] Why shared_buffers max is 8GB?\n\n> max is 1024mb.\n> You have to test your workload: if it's too low you will get too much\n> i/o ( the filesystem cache could help.. not always /*nfs*/), if too\n> high your cpu will be eaten by lru/ latch/ and so on.\n> Mat Dba
", "msg_date": "Wed, 26 Mar 2014 13:24:19 +0000", "msg_from": "Markella Skempri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "> max is 1024mb.\n> You have to test your workload: if it's too low you will get too much\n> i/o ( the filesystem cache could help.. not always /*nfs*/), if too\n> high your cpu will be eaten by lru/ latch/ and so on.\n> Mat Dba\n\n\nThe max is most certainly NOT 1024MB.\n\nhttp://www.postgresql.org/docs/9.3/static/runtime-config-resource.html\n\nShared buffers is dependent on a number of other settings, as well as\nworkload and database size. \n\nHave a look at pgtune for help with configuration.
", "msg_date": "Wed, 26 Mar 2014 13:30:02 +0000", "msg_from": "Martin French <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "> I wanted to follow up on this question. I'm running 9.3.4.\r\n> My DB server has 32GB of RAM, so I have assigned 8GB to shared_buffers.\r\n> It is quite a big db but with not much traffic. When there is traffic,\r\n> it's usually big. \r\n> \r\n> Lately, the kernel has been killing the postmaster for having\r\n> assigned too much shared memory. The latest crash was when loading a 500MB file. \r\n> \r\n> Should I reduce the shared buffers in order for this to be more robust?\r\n> \r\n> Thanks\r\n> Markella\r\n\r\nIt may be that other memory settings are contributing towards this\r\n(work_mem, maintenance_work_mem, max_connections etc). \r\n\r\nI would suggest that the OOM killer is working as intended and something\r\nis not quite right within the config. \r\n\r\nYou may want to review the memory consumption at peak times, taking into\r\nconsideration anything else running on the machine. 
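\r\n\r\nAs a starting point for that review, a quick (hedged) way to list the memory-related settings mentioned above from within psql:\r\n\r\n    -- Sketch: show the settings most relevant to shared and per-backend\r\n    -- memory consumption; these are all standard GUC names.\r\n    SELECT name, setting, unit, source\r\n    FROM pg_settings\r\n    WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',\r\n                   'temp_buffers', 'max_connections', 'effective_cache_size');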
", "msg_date": "Wed, 26 Mar 2014 13:32:51 +0000", "msg_from": "Martin French <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "Thanks Martin, \nHowever this is a dedicated database server and nothing much else is running on it. Also, I never saw this happening with 9.2 – but I can’t vouch for the size of the files that I was uploading. 
\n\nFrom: Martin French \nSent: Wednesday, March 26, 2014 1:32 PM\nTo: Markella Skempri \nSubject: Re: [PERFORM] Why shared_buffers max is 8GB?\n\n> It may be that other memory settings are contributing towards this\n> (work_mem, maintenance_work_mem, max_connections etc).\n> I would suggest that the OOM killer is working as intended and something\n> is not quite right within the config.
", "msg_date": "Wed, 26 Mar 2014 13:47:47 +0000", "msg_from": "Markella Skempri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "Markella Skempri <[email protected]> wrote on 26/03/2014 13:47:47:\r\n\r\n> Thanks Martin, \r\n> However this is a dedicated database server and nothing much else is\r\n> running on it. Also, I never saw this happening with 9.2 – but I \r\n> can’t vouch for the size of the files that I was uploading. \r\n\r\nMy course of action here would be to halve all the memory settings and\r\nbenchmark performance, then increment in small amounts periodically,\r\nrepeating the file-load situation until the crash happens again, obviously\r\nwith a keen eye on the postgres logs as well as on the syslog output.\r\n\r\nThe other *potentially dangerous* thing you could do is to alter the OOM\r\nkiller's behaviour in a couple of ways (assuming PostgreSQL wasn't compiled\r\nwith -DLINUX_OOM_ADJ=0).\r\n\r\nThis can be done like so:\r\n1. Alter the kernel behaviour with:\r\nsysctl -w vm.overcommit_memory=2\r\n2. 
Tell the kernel not to kill the postmaster process:\r\necho -17 >> /proc/$(ps -ef | grep postmaster | grep -v grep | awk '{print $2}')/oom_adj\r\n\r\nI cannot state enough that this could cause unpredictable behaviour of the\r\nOOM killer, and thus the box itself, and is not 100% guaranteed to stop\r\nthe OOM killer taking Postgres out.
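\r\n\r\nA hedged aside: on kernels 2.6.36 and later, oom_adj is deprecated in favour of oom_score_adj, where -1000 exempts a process from the OOM killer entirely. The same caveats as above apply, and the path assumes your kernel exposes that file:\r\n\r\n    # Sketch: oom_score_adj equivalent of the oom_adj command above\r\n    # (-1000 fully exempts the process on kernels that provide this file).\r\n    echo -1000 > /proc/$(ps -ef | grep postmaster | grep -v grep | awk '{print $2}')/oom_score_adj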
\r\n> =============================================\r\n> \n\r\nMy course of action here would be to halve all the memory settings and\r\nbenchmark performance, then increment in small amounts periodically repeating\r\nthe file load situation until the crash happens again, obviously with a\r\nkeen eye on the postgres logs as well as on the syslog output.\n\nThe other *potentially dangerous* thing you could\r\ndo is to alter the OOM killers behaviour in a couple of ways (assuming\r\nPostgreSQL wasn't compiled with -DLINUX_OOM_ADJ=0).\n\nThis can be done like so:\n1.        Alter the Kernel\r\nbehaviour with:\nsysctl -w vm.overcommit_memory=2\n2.        Tell the Kernel\r\nnot to kill Postmaster Process.\necho -17 >> /proc/$(ps -ef | grep postmaster\r\n| grep -v grep | awk '{print $2}')/oom_adj\n\nI cannot state enough that this could cause unpredictable\r\nbehaviour of the OOM killer, and thus; the box itself, and is not 100%\r\nguaranteed to stop the OOM killer taking Postgres out.\n\r\n=============================================\n\r\nRomax Technology Limited \r\nA limited company registered in England and Wales.\r\nRegistered office:\r\nRutherford House \r\nNottingham Science and Technology Park \r\nNottingham \r\nNG7 2PZ \r\nEngland\r\nRegistration Number: 2345696\r\nVAT Number: 526 246 746\n\r\nTelephone numbers:\r\n+44 (0)115 951 88 00 (main)\n\r\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\r\n=================================\r\n===============\r\nE-mail: [email protected]\r\nWebsite: www.romaxtech.com\r\n=================================\n\r\n================\r\nConfidentiality Statement\r\nThis transmission is for the addressee only and contains information that\r\nis confidential and privileged.\r\nUnless you are the named addressee, or authorised to receive it on behalf\r\nof the addressee \r\nyou may not copy or use it, or disclose it to anyone else. \r\nIf you have received this transmission in error please delete from your\r\nsystem and contact the sender. Thank you for your cooperation.\r\n=================================================", "msg_date": "Wed, 26 Mar 2014 14:04:45 +0000", "msg_from": "Martin French <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "On Wed, Mar 26, 2014 at 6:21 AM, Alexey Vasiliev <[email protected]> wrote:\n> I read from several sources, what maximum shared_buffers is 8GB.\n>\n> Does this true? If yes, why exactly this number is maximum number of\n> shared_buffers for good performance (on Linux 64-bits)?\n\nOn most machines the limit is higher than you'd ever want to set it. I\nhave a set of servers with 1TB RAM and shared buffers on them is set\nto 10G and even that is probably higher than it needs to be. The old\n1/4 of memory advice comes from the days when db server memory was in\nthe 1 to 16GB range and even then it was more of a starting place. It\nhas been found through experience and experiment that few setups can\nuse more shared buffers than a few gigabytes and get better\nperformance.\n\n\nOn Wed, Mar 26, 2014 at 7:24 AM, Markella Skempri\n<[email protected]> wrote:\n> I wanted to follow up from this question. I'm running on 9.3.4\n> My DB server has 32GB ram so I have assigned 8GB shared_buffer_memory. It is\n> quite a big db but with not much traffic. When there is traffic, it's\n> usually big.\n>\n> Lately, the kernel has been killing the postmaster for having assigned too\n> much shared memory. 
The latest crash was when loading a 500MB file.\n>\n> Should I reduce the shared buffers in order for this to be more robust?\n\nIt's not JUST your shared_buffers here. What are your changed settings\nin postgresql.conf? Specifically work_mem, max_connections,\ntemp_buffers and, to a lesser extent, maintenance_work_mem.\n\nHere's the thing. If you set shared_buffers, work_mem, and\nmax_connections too low you get a minor problem. Some apps can't\nconnect, pg is a little slow. If you set them too high you start\nkilling your DB with the OOM killer, which is a major problem.", "msg_date": "Wed, 26 Mar 2014 08:23:49 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "Yes, I remember it was 1024*G*b, sorry,\n\n\n2014-03-26 14:23 GMT+01:00 Albe Laurenz <[email protected]>:\n\n> desmodemone wrote:\n>\n> > max is 1024mb.\n>\n> That must be a typo.\n> It can surely be much higher.\n>\n> Yours,\n> Laurenz Albe\n>", "msg_date": "Wed, 26 Mar 2014 15:23:59 +0100", "msg_from": "desmodemone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "\n> On most machines the limit is higher than you'd ever want to set it. I\n> have a set of servers with 1TB RAM and shared buffers on them is set\n> to 10G, and even that is probably higher than it needs to be. The old\n> 1/4 of memory advice comes from the days when db server memory\n> was in the 1 to 16GB range, and even then it was more of a starting place. It\n> has been found through experience and experiment that few setups\n> can use more shared buffers than a few gigabytes and get better\n> performance.\n\nThis is really the core of the issue. You can set shared_buffers to almost any level, into multiple TBs if you really wanted to. Whether or not this is prudent, however, is entirely different. There are many considerations at play with shared buffers:\n\n* Shared buffers must (currently) compete with OS inode caches. If shared buffers are too high, much of the cached data is already cached by the operating system, and you end up with wasted RAM.\n* Checkpoints must commit dirty shared buffers to disk. The larger this is, the more risk you have when checkpoints come, up to and including an unresponsive database. Writing to disks isn't free, and sadly this is still on the slower side unless all of your storage is SSD-based. You don't want to set this too much higher than your disk write cache.\n* Performance gains taper off quickly. Most DBAs don't see gains after 4GB, and fewer still see any gains above 8GB. We have ours set at 4GB after a lot of TPS and risk analysis.\n* Since shared_buffers is the amount of memory that could potentially remain uncommitted to data files, the larger this is, the longer crash recovery can take. Having this too high could mean the difference between a five-minute outage and a five-second outage. The checkpoint_* settings control how this is distributed and maintained, but the risk starts here.\n\nWith that said, we really need to update the WIKI page to reflect all of this. 
It's still claiming the 25% memory rule:\n\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server", "msg_date": "Wed, 26 Mar 2014 16:14:33 +0000", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "On Wed, Mar 26, 2014 at 5:14 PM, Shaun Thomas <[email protected]> wrote:\n> * Checkpoints must commit dirty shared buffers to disk. The larger this is, the more risk you have when checkpoints come, up to and including an unresponsive database. Writing to disks isn't free, and sadly this is still on the slower side unless all of your storage is SSD-based. You don't want to set this too much higher than your disk write cache.\n\nWe use 48GB of shared buffers on some heavily worked machines (and\nsometimes more - it depends on the amount of RAM). Of course that works only\nwith good enough hardware RAID with a large BBU cache, a well-tuned Linux (dirty\nbytes appropriate to the RAID cache size etc) and aggressively tuned\ncheckpoints and background writer:\n\n bgwriter_delay | 10\n bgwriter_lru_maxpages | 1000\n bgwriter_lru_multiplier | 10\ncheckpoint_completion_target | 0.9\n checkpoint_segments | 300\n checkpoint_timeout | 3600\n\nand it really makes sense.\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]", "msg_date": "Wed, 26 Mar 2014 17:36:01 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "On Wed, Mar 26, 2014 at 1:21 PM, Alexey Vasiliev <[email protected]> wrote:\n\n> I have read from several sources that the maximum useful shared_buffers is 8GB.\n>\n> Is this true? If yes, why exactly is this number the maximum\n> for good shared_buffers performance (on 64-bit Linux)?\n>\n> Thanks!\n>\n\nI've seen cases where going higher than 8GB led to improved\nperformance. Some of the servers we are running have 128GB of RAM and 32GB of\nshared_buffers, with better performance than they had with 8GB.\n\nOne should be aware of several drawbacks:\n- OOM killer (Linux). If you allocate more memory than you have on the\nsystem (+swap) and your vm.overcommit_memory setting is left at the default\n(0), the postmaster will be killed by the Linux OOM killer. Set it to 2 and\nkeep in mind other settings (work_mem, maintenance_work_mem, temp and wal\nbuffers) when determining the shared buffer size.\n- Checkpoints. In the worst case most of your shared buffers will be\nflushed to disk during a checkpoint, affecting overall system\nperformance. Make sure the bgwriter is actively and aggressively evicting dirty\nbuffers and the checkpoint is spread over the checkpoint interval with\ncheckpoint_completion_target.\n- Monitoring. 
One can use pg_buffercache to peek inside the shared buffers\nand see which relations are there and how big the usage counts are.\n\nIn most cases 8GB should be enough even for servers with hundreds of GB\nof data, since the FS uses the rest of the memory as a cache (make sure you\ngive a hint to the planner on how much memory is left for this with\neffective_cache_size), but the exact answer is a matter of performance\ntesting.\n\nNow, the last question would be what the initial justification for the\n8GB barrier was. I've heard that there was lock congestion when dealing with\na huge pool of buffers, but I think that was fixed even in the pre-9.0 era.\n\n-- \nRegards,\nAlexey Klyukin", "msg_date": "Wed, 2 Apr 2014 11:38:57 +0200", "msg_from": "Alexey Klyukin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" }, { "msg_contents": "On Wed, Apr 2, 2014 at 11:38:57AM +0200, Alexey Klyukin wrote:\n> In most cases 8GB should be enough even for servers with hundreds of GB of\n> data, since the FS uses the rest of the memory as a cache (make sure you give a\n> hint to the planner on how much memory is left for this with the\n> effective_cache_size), but the exact answer is a matter of performance testing.\n> \n> Now, the last question would be what the initial justification for the 8GB\n> barrier was. I've heard that there was lock congestion when dealing with a huge\n> pool of buffers, but I think that was fixed even in the pre-9.0 era.\n\nThe issue in earlier releases was the overhead of managing more than 1\nmillion 8k buffers. 
I have not seen any recent tests to confirm that\nthe overhead is still significant.\n\nA larger issue is that going over 8GB doesn't help unless you are\naccessing more than 8GB of data in a short period of time. Add to that\nthe problem of potentially dirtying all the buffers and flushing them to a\nnow-smaller kernel buffer cache, and you can see why the 8GB limit is\nrecommended.\n\nI do think this merits more testing against the current Postgres source\ncode.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +", "msg_date": "Wed, 9 Apr 2014 07:46:27 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why shared_buffers max is 8GB?" } ]
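As a concrete illustration of the pg_buffercache suggestion above, a hedged sketch of a query that shows which relations occupy shared buffers and their average usage counts. It assumes the pg_buffercache extension (shipped in contrib) is installed, and should be run in the database of interest:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Top consumers of shared_buffers in the current database
    -- (8192 assumes the default 8kB block size).
    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS buffered,
           round(avg(b.usagecount), 2) AS avg_usagecount
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
    JOIN pg_database d
      ON b.reldatabase = d.oid AND d.datname = current_database()
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;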
[ { "msg_contents": "I'm working with a large database using 8.4 that's partitioned on 4 week\nboundaries and when I use a prepared statement that limits by time as one\nof the bind parameters the planner seems to not select just the partitions\nof interest but wants to scan all of them. From reading the documentation\nof 9.3 ( http://www.postgresql.org/docs/9.3/static/sql-prepare.html ), it\nseems that this sort of limitation has been fixed. Is that a correct\ninterpretation of the documentation?\nThanks,\nDave\n\nI'm working with a large database using 8.4 that's partitioned on 4 week boundaries and when I use a prepared statement that limits by time as one of the bind parameters the planner seems to not select just the partitions of interest but wants to scan all of them. From reading the documentation of 9.3 ( http://www.postgresql.org/docs/9.3/static/sql-prepare.html ), it seems that this sort of limitation has been fixed. Is that a correct interpretation of the documentation?\nThanks,Dave", "msg_date": "Thu, 27 Mar 2014 13:42:39 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Partitions and prepared statements?" } ]
[ { "msg_contents": "Hi all,\n\n tl;dr - How can I speed up my count-distinct query?\n\n I apologize in advance if this question has been asked already. I'm\nfinding the mailing list hard to navigate. I'm trying to speed up a query\nthat will find a count of distinct emails with in a table using Postgres\n9.3.3. The name of the table is participants. Our domain is set up such\nthat duplicate emails are allowed so long as a particular corresponding\ncolumn value is unique.\n\n\n*TABLE participants*\n\n* id serial NOT NULL (primary key)*\n\n* email character varying(255)*\n\n* (other columns omitted)*\n\n\n\nI have the following index defined:\n\n*index_participants_on_email ON participants USING btree (email COLLATE\npg_catalog.\"default\");*\n\nThe query I'm trying to run is select count(distinct email) from\nparticipants. I've also tried the *group by* equivalent. *On a table size\nof 2 million rows, the query takes about 1 minute to return.* This is way\ntoo long. After running analyze, I see that the index is being ignored and\na full table scan is performed.\n\nSo, I tried running the following after dropping the index:\ncreate index email_idx on participants(email) where email=email;\nset enable_bitmapscan = false;\nset seq_page_cost = 0.1;\nset random_page_cost = 0.2;\ncreate index email_idx_2 on participants(email);\ncluster participants using email_idx_2;\n\nWith these settings in place, if I run *select count(distinct email) from\nparticipants* I get\n\n\"Aggregate (cost=29586.20..29586.21 rows=1 width=18) (actual\ntime=54243.643..54243.644 rows=1 loops=1)\"\n\" -> Seq Scan on participants (cost=0.00..24586.18 rows=2000008\nwidth=18) (actual time=0.030..550.296 rows=2000008 loops=1)\"\n*\"Total runtime: 54243.669 ms\"*\n\nWhen I run the following, I get MUCH better results\n*select count(1) from (select email from participants where email=email\ngroup by email) x;*\n\n\"Aggregate (cost=1856.36..1856.37 rows=1 width=0) (actual\ntime=1393.573..1393.573 rows=1 loops=1)\"\n\" Output: count(1)\"\n\" -> Group (cost=0.43..1731.36 rows=10000 width=18) (actual\ntime=0.052..1205.977 rows=2000008 loops=1)\"\n\" Output: participants.email\"\n\" -> Index Only Scan using email_idx on public.participants\n (cost=0.43..1706.36 rows=10000 width=18) (actual time=0.050..625.248\nrows=2000008 loops=1)\"\n\" Output: participants.email\"\n\" Heap Fetches: 2000008\"\n*\"Total runtime: 1393.599 ms\"*\n\nThis query has a weird where clause (email=email) because I'm trying to\nforce the analyzer's hand to use the index.\n\n*I'm concerned about setting the enable_bitmapscan and seq_page_cost values\nbecause I'm not yet sure what the consequences are. Can anyone enlighten\nme on the recommended way to speed up this query?*\n\n Thanks\n\n  Hi all,  tl;dr - How can I speed up my count-distinct query?  \n  I apologize in advance if this question has been asked already.  I'm finding the mailing list hard to navigate.  I'm trying to speed up a query that will find a count of distinct emails with in a table using Postgres 9.3.3.  The name of the table is participants.  Our domain is set up such that duplicate emails are allowed so long as a particular corresponding column value is unique.\nTABLE participants\n  id serial NOT NULL (primary key)\n  email character varying(255)  (other columns omitted)\n I have the following index defined:\nindex_participants_on_email ON participants USING btree (email COLLATE pg_catalog.\"default\");\nThe query I'm trying to run is select count(distinct email) from participants.  
I've also tried the group by equivalent.  On a table size of 2 million rows, the query takes about 1 minute to return.  This is way too long.  After running analyze, I see that the index is being ignored and a full table scan is performed.\nSo, I tried running the following after dropping the index:\ncreate index email_idx on participants(email) where email=email;set enable_bitmapscan = false;\nset seq_page_cost = 0.1;set random_page_cost = 0.2;\ncreate index email_idx_2 on participants(email);cluster participants using email_idx_2;\nWith these settings in place, if I run select count(distinct email) from participants I get\n\"Aggregate  (cost=29586.20..29586.21 rows=1 width=18) (actual time=54243.643..54243.644 rows=1 loops=1)\"\n\"  ->  Seq Scan on participants  (cost=0.00..24586.18 rows=2000008 width=18) (actual time=0.030..550.296 rows=2000008 loops=1)\"\n\"Total runtime: 54243.669 ms\"\nWhen I run the following, I get MUCH better resultsselect count(1) from (select email from participants where email=email group by email) x;\n\"Aggregate  (cost=1856.36..1856.37 rows=1 width=0) (actual time=1393.573..1393.573 rows=1 loops=1)\"\n\"  Output: count(1)\"\"  ->  Group  (cost=0.43..1731.36 rows=10000 width=18) (actual time=0.052..1205.977 rows=2000008 loops=1)\"\n\"        Output: participants.email\"\"        ->  Index Only Scan using email_idx on public.participants  (cost=0.43..1706.36 rows=10000 width=18) (actual time=0.050..625.248 rows=2000008 loops=1)\"\n\"              Output: participants.email\"\"              Heap Fetches: 2000008\"\"Total runtime: 1393.599 ms\"\nThis query has a weird where clause (email=email) because I'm trying to force the analyzer's hand to use the index.\nI'm concerned about setting the enable_bitmapscan and seq_page_cost values because I'm not yet sure what the consequences are.  Can anyone enlighten me on the recommended way to speed up this query?\n Thanks", "msg_date": "Sun, 30 Mar 2014 14:45:51 -0500", "msg_from": "Christopher Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Count-Distinct Query" }, { "msg_contents": ">  tl;dr - How can I speed up my count-distinct query?  \n\nYou can't.\n\nDoing a count(distinct x) is much different than a count(1), which can simply scan available indexes. To build a distinct, it has to construct an in-memory hash of every valid email, and count the distinct values therein. This will pretty much never be fast, especially with 2M rows involved.\n\nI could be wrong about this, and the back-end folks might have a different answer, but I wouldn't hold my breath.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd | Suite 400 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 13:17:55 +0000", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Count-Distinct Query" }, { "msg_contents": "Christopher Jackson <[email protected]> writes:\n> tl;dr - How can I speed up my count-distinct query?\n\nEXPLAIN doesn't provide a lot of visibility into what the Aggregate plan\nnode is doing, but in this case what it's doing is an internal sort/uniq\noperation to implement the DISTINCT. 
You didn't say what value of\nwork_mem you're using, but it'd need to be probably 50-100MB to prevent\nthat sort from spilling to disk (and therefore being slow).\n\nNote that the indexscan is actually *slower* than the seqscan so far as\nthe table access is concerned; if the table were big enough to not fit\nin RAM, this would get very much worse. So I'm not impressed with trying\nto force the optimizer's hand as you've done here --- it might be a nice\nplan now, but it's brittle. See if a bigger work_mem improves matters\nenough with the regular plan.\n\n> *I'm concerned about setting the enable_bitmapscan and seq_page_cost values\n> because I'm not yet sure what the consequences are. Can anyone enlighten\n> me on the recommended way to speed up this query?*\n\nTurning off enable_bitmapscan globally would be a seriously bad idea.\nChanging the cost settings to these values globally might be all right;\nit would amount to optimizing for all-in-memory cases, which might or\nmight not be a good idea for your situation. For that matter, greatly\nincreasing work_mem globally is usually not thought to be smart either;\nremember that it's a per-sort-operation setting so you may need to\nprovision a considerable multiple of the setting as physical RAM,\ndepending on how many queries you expect to run concurrently. So all in\nall you might be well advised to just set special values for this one\nquery, whichever solution approach you use.\n\nI doubt you need the \"where email=email\" hack, in any case. That isn't\nforcing the optimizer's decision in any meaningful fashion.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 10:15:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Count-Distinct Query" }, { "msg_contents": "Tom and Shawn,\n\n Thanks for the feedback. This has been helpful. It's worth noting that\nI was spiking this out on my local box using default memory utilization\nsettings. I'll revisit this once we get our production box set up. It's\ngood to know what the best practices are around the enable_bitmapscan and\nseq_page_cost settings are. Also, it's good to know that my hack wasn't\nactually yielding anything. I'll check back in once our production\nenvironment is up and running. For what it's worth, we're using Heroku and\nwe're thinking of going with the Standard Tengu tier as a start. This will\ngive us 1.7GB of RAM, so hopefully bumping up the work_mem setting\nshouldn't be a problem. Does that make sense?\n\n Thanks for the help,\n Chris\n\n\nOn Mon, Mar 31, 2014 at 9:15 AM, Tom Lane <[email protected]> wrote:\n\n> Christopher Jackson <[email protected]> writes:\n> > tl;dr - How can I speed up my count-distinct query?\n>\n> EXPLAIN doesn't provide a lot of visibility into what the Aggregate plan\n> node is doing, but in this case what it's doing is an internal sort/uniq\n> operation to implement the DISTINCT. You didn't say what value of\n> work_mem you're using, but it'd need to be probably 50-100MB to prevent\n> that sort from spilling to disk (and therefore being slow).\n>\n> Note that the indexscan is actually *slower* than the seqscan so far as\n> the table access is concerned; if the table were big enough to not fit\n> in RAM, this would get very much worse. 
So I'm not impressed with trying\n> to force the optimizer's hand as you've done here --- it might be a nice\n> plan now, but it's brittle. See if a bigger work_mem improves matters\n> enough with the regular plan.\n>\n> > *I'm concerned about setting the enable_bitmapscan and seq_page_cost\n> values\n> > because I'm not yet sure what the consequences are. Can anyone enlighten\n> > me on the recommended way to speed up this query?*\n>\n> Turning off enable_bitmapscan globally would be a seriously bad idea.\n> Changing the cost settings to these values globally might be all right;\n> it would amount to optimizing for all-in-memory cases, which might or\n> might not be a good idea for your situation. For that matter, greatly\n> increasing work_mem globally is usually not thought to be smart either;\n> remember that it's a per-sort-operation setting so you may need to\n> provision a considerable multiple of the setting as physical RAM,\n> depending on how many queries you expect to run concurrently. So all in\n> all you might be well advised to just set special values for this one\n> query, whichever solution approach you use.\n>\n> I doubt you need the \"where email=email\" hack, in any case. That isn't\n> forcing the optimizer's decision in any meaningful fashion.\n>\n> regards, tom lane\n>\n\n   Tom and Shawn,   Thanks for the feedback.  This has been helpful.  It's worth noting that I was spiking this out on my local box using default memory utilization settings.  I'll revisit this once we get our production box set up.  It's good to know what the best practices are around the enable_bitmapscan and seq_page_cost settings are.  Also, it's good to know that my hack wasn't actually yielding anything.  I'll check back in once our production environment is up and running.  For what it's worth, we're using Heroku and we're thinking of going with the Standard Tengu tier as a start.  This will give us 1.7GB of RAM, so hopefully bumping up the work_mem setting shouldn't be a problem.  Does that make sense?\n  Thanks for the help,     ChrisOn Mon, Mar 31, 2014 at 9:15 AM, Tom Lane <[email protected]> wrote:\nChristopher Jackson <[email protected]> writes:\n>   tl;dr - How can I speed up my count-distinct query?\n\nEXPLAIN doesn't provide a lot of visibility into what the Aggregate plan\nnode is doing, but in this case what it's doing is an internal sort/uniq\noperation to implement the DISTINCT.  You didn't say what value of\nwork_mem you're using, but it'd need to be probably 50-100MB to prevent\nthat sort from spilling to disk (and therefore being slow).\n\nNote that the indexscan is actually *slower* than the seqscan so far as\nthe table access is concerned; if the table were big enough to not fit\nin RAM, this would get very much worse.  So I'm not impressed with trying\nto force the optimizer's hand as you've done here --- it might be a nice\nplan now, but it's brittle.  See if a bigger work_mem improves matters\nenough with the regular plan.\n\n> *I'm concerned about setting the enable_bitmapscan and seq_page_cost values\n> because I'm not yet sure what the consequences are.  Can anyone enlighten\n> me on the recommended way to speed up this query?*\n\nTurning off enable_bitmapscan globally would be a seriously bad idea.\nChanging the cost settings to these values globally might be all right;\nit would amount to optimizing for all-in-memory cases, which might or\nmight not be a good idea for your situation.  
For that matter, greatly\nincreasing work_mem globally is usually not thought to be smart either;\nremember that it's a per-sort-operation setting so you may need to\nprovision a considerable multiple of the setting as physical RAM,\ndepending on how many queries you expect to run concurrently.  So all in\nall you might be well advised to just set special values for this one\nquery, whichever solution approach you use.\n\nI doubt you need the \"where email=email\" hack, in any case.  That isn't\nforcing the optimizer's decision in any meaningful fashion.\n\n                        regards, tom lane", "msg_date": "Mon, 31 Mar 2014 10:53:33 -0500", "msg_from": "Christopher Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Count-Distinct Query" }, { "msg_contents": "On Sun, Mar 30, 2014 at 12:45 PM, Christopher Jackson <[email protected]>wrote:\n\n> Hi all,\n>\n> tl;dr - How can I speed up my count-distinct query?\n>\n\nDepending on how often you need to run that query and how important it is\nto you, if you are willing to accept a performance hit on\nINSERT/UPDATE/DELETE of the \"participants\" table, you could create a\nsummary table containing just the count of unique email addresses or the\nlist of unique email addresses populated via trigger on\nINSERT/UPDATE/DELETE of the participants table. Another option is try out\nthe new Materialized views (\nhttp://www.postgresql.org/docs/current/static/sql-creatematerializedview.html)\navailable in 9.3.\n\nOn Sun, Mar 30, 2014 at 12:45 PM, Christopher Jackson <[email protected]> wrote:\n  Hi all,\n  tl;dr - How can I speed up my count-distinct query?  Depending on how often you need to run that query and how important it is to you, if you are willing to accept a performance hit on INSERT/UPDATE/DELETE of the \"participants\" table, you could create a summary table containing just the count of unique email addresses or the list of unique email addresses populated via trigger on INSERT/UPDATE/DELETE of the  participants table. Another option is try out the new Materialized views (http://www.postgresql.org/docs/current/static/sql-creatematerializedview.html) available in 9.3.", "msg_date": "Tue, 1 Apr 2014 17:34:09 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Count-Distinct Query" }, { "msg_contents": "Hi Bricklen,\n\n Thanks for the feedback. I'll play around with materialized views. My\nunderstanding is they have to be manually triggered for refresh and there's\nan exclusive lock on the view while the refresh is taking place. Is this\nyour understanding as well? I'm using PG 9.3.3. If this is true, I'm\ncurious what clever ways people have come up with to mitigate any issues\nwith the lock.\n\n Thanks again,\n Chris\n\n\nOn Tue, Apr 1, 2014 at 7:34 PM, bricklen <[email protected]> wrote:\n\n>\n> On Sun, Mar 30, 2014 at 12:45 PM, Christopher Jackson <[email protected]>wrote:\n>\n>> Hi all,\n>>\n>> tl;dr - How can I speed up my count-distinct query?\n>>\n>\n> Depending on how often you need to run that query and how important it is\n> to you, if you are willing to accept a performance hit on\n> INSERT/UPDATE/DELETE of the \"participants\" table, you could create a\n> summary table containing just the count of unique email addresses or the\n> list of unique email addresses populated via trigger on\n> INSERT/UPDATE/DELETE of the participants table. 
Another option is try out\n> the new Materialized views (\n> http://www.postgresql.org/docs/current/static/sql-creatematerializedview.html)\n> available in 9.3.\n>\n>\n\n    Hi Bricklen,    Thanks for the feedback.  I'll play around with materialized views.  My understanding is they have to be manually triggered for refresh and there's an exclusive lock on the view while the refresh is taking place.  Is this your understanding as well?  I'm using PG 9.3.3.  If this is true, I'm curious what clever ways people have come up with to mitigate any issues with the lock.\n   Thanks again,      ChrisOn Tue, Apr 1, 2014 at 7:34 PM, bricklen <[email protected]> wrote:\nOn Sun, Mar 30, 2014 at 12:45 PM, Christopher Jackson <[email protected]> wrote:\n  Hi all,\n  tl;dr - How can I speed up my count-distinct query?  Depending on how often you need to run that query and how important it is to you, if you are willing to accept a performance hit on INSERT/UPDATE/DELETE of the \"participants\" table, you could create a summary table containing just the count of unique email addresses or the list of unique email addresses populated via trigger on INSERT/UPDATE/DELETE of the  participants table. Another option is try out the new Materialized views (http://www.postgresql.org/docs/current/static/sql-creatematerializedview.html) available in 9.3.", "msg_date": "Tue, 1 Apr 2014 23:22:14 -0500", "msg_from": "Christopher Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Count-Distinct Query" }, { "msg_contents": "On Wed, Apr 2, 2014 at 1:22 PM, Christopher Jackson <[email protected]> wrote:\n>\n> Hi Bricklen,\n>\n> Thanks for the feedback. I'll play around with materialized views. My\n> understanding is they have to be manually triggered for refresh\nYep.\n\n> and there's an exclusive lock on the view while the refresh is taking place. Is this\n> your understanding as well?\nRe-yep.\n\n> I'm using PG 9.3.3. If this is true, I'm\n> curious what clever ways people have come up with to mitigate any issues\n> with the lock.\nKevin Grittner has implemented REFRESH MATERIALIZED VIEW CONCURRENTLY\nin 9.4. A unique index is needed on the materialized view as well to\nauthorize this concurrent operation. It has the merit to allow SELECT\noperations on the matview during the refresh.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Apr 2014 13:48:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Count-Distinct Query" }, { "msg_contents": "Hi Bricklen,\n\n Thanks again for the feedback. The concurrent refresh sounds cool. I\njust saw the 9.4 release is tentatively scheduled for later this year. Do\nyou know what people have been doing for view refreshes in the meantime?\n\n Thanks\n\n\nOn Tue, Apr 1, 2014 at 11:48 PM, Michael Paquier\n<[email protected]>wrote:\n\n> On Wed, Apr 2, 2014 at 1:22 PM, Christopher Jackson <[email protected]>\n> wrote:\n> >\n> > Hi Bricklen,\n> >\n> > Thanks for the feedback. I'll play around with materialized views.\n> My\n> > understanding is they have to be manually triggered for refresh\n> Yep.\n>\n> > and there's an exclusive lock on the view while the refresh is taking\n> place. Is this\n> > your understanding as well?\n> Re-yep.\n>\n> > I'm using PG 9.3.3. 
If this is true, I'm\n> > curious what clever ways people have come up with to mitigate any issues\n> > with the lock.\n> Kevin Grittner has implemented REFRESH MATERIALIZED VIEW CONCURRENTLY\n> in 9.4. A unique index is needed on the materialized view as well to\n> authorize this concurrent operation. It has the merit to allow SELECT\n> operations on the matview during the refresh.\n> --\n> Michael\n>\n\n    Hi Bricklen,   Thanks again for the feedback.  The concurrent refresh sounds cool.  I just saw the 9.4 release is tentatively scheduled for later this year.  Do you know what people have been doing for view refreshes in the meantime?\n   ThanksOn Tue, Apr 1, 2014 at 11:48 PM, Michael Paquier <[email protected]> wrote:\nOn Wed, Apr 2, 2014 at 1:22 PM, Christopher Jackson <[email protected]> wrote:\n\n>\n>     Hi Bricklen,\n>\n>     Thanks for the feedback.  I'll play around with materialized views.  My\n> understanding is they have to be manually triggered for refresh\nYep.\n\n> and there's an exclusive lock on the view while the refresh is taking place.  Is this\n> your understanding as well?\nRe-yep.\n\n> I'm using PG 9.3.3.  If this is true, I'm\n> curious what clever ways people have come up with to mitigate any issues\n> with the lock.\nKevin Grittner has implemented REFRESH MATERIALIZED VIEW CONCURRENTLY\nin 9.4. A unique index is needed on the materialized view as well to\nauthorize this concurrent operation. It has the merit to allow SELECT\noperations on the matview during the refresh.\n--\nMichael", "msg_date": "Tue, 1 Apr 2014 23:54:02 -0500", "msg_from": "Christopher Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Count-Distinct Query" } ]
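Two sketches of the approaches suggested in this thread, using the participants table from the original post; the 100MB figure follows Tom's 50-100MB estimate, and the view and index names are made up for illustration:

-- Raise work_mem for just this one query, per Tom's advice.
BEGIN;
SET LOCAL work_mem = '100MB';
SELECT count(DISTINCT email) FROM participants;
COMMIT;

-- The 9.3 materialized-view route; the unique index is also what the 9.4
-- REFRESH MATERIALIZED VIEW CONCURRENTLY form requires.
CREATE MATERIALIZED VIEW participant_emails AS
    SELECT email FROM participants GROUP BY email;
CREATE UNIQUE INDEX participant_emails_email_idx ON participant_emails (email);

REFRESH MATERIALIZED VIEW participant_emails;  -- takes an exclusive lock in 9.3
SELECT count(*) FROM participant_emails;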
[ { "msg_contents": "Hi,\n\nI am a freshman to postgresql, also pgpool-II. I have some performance\nissues once I bring in the pgpool-II to build the pg cluster. Here I post\nsome system info and the configurations of postgresql and pgpool, hopping\nyou can help me to solve this problem.\n\nBTW, I am using the postgres 9.2.4 installed on the Amazon AWS with debian\nOS(64bits), and the version of pgpool is 3.2.7. There are two nodes in the\ncluster, working in master/slave mode and replicating data using the\nstreaming replication feature.\n\nMem info:\ncora@apollo:~$ free\n total used free shared buffers cached\nMem: 31446876 7625428 23821448 0 9468 6312080\n-/+ buffers/cache: 1303880 30142996\nSwap: 0 0 0\n\nLinux info:\ncora@apollo:~$ cat /proc/version\nLinux version 2.6.32-5-xen-amd64 (Debian 2.6.32-48squeeze1)\n([email protected]) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Mon Feb 25\n02:51:39 UTC 2013\n\nThe settings of postgresql(only showing the non-default parameters)\nlisten_addresses = '*'\nport = 9797 \nmax_connections = 750\nssl_renegotiation_limit = 0\nshared_buffers = 15GB \ntemp_buffers = 32MB\nwork_mem = 64MB \nmaintenance_work_mem = 128MB \neffective_io_concurrency = 1000\nwal_level = hot_standby\ncheckpoint_segments = 32 \narchive_mode = on\narchive_command = 'rsync -a %p apollo:/var/lib/postgresql/9.2/archive/%f\n</dev/null' \nmax_wal_senders = 1\nwal_keep_segments = 32\nhot_standby = on\nenable_indexscan = on\nenable_seqscan = on\nrandom_page_cost = 2.0 \neffective_cache_size = 5GB\ndefault_statistics_target = 10000\nconstraint_exclusion = on\nautovacuum = on\nlog_autovacuum_min_duration = 100\nautovacuum_max_workers = 6 \nautovacuum_naptime = 30min \nautovacuum_vacuum_threshold = 1000 \nautovacuum_analyze_threshold = 5000 \nautovacuum_vacuum_scale_factor = 0.2 \nautovacuum_analyze_scale_factor = 0.1 \nautovacuum_freeze_max_age = 200000000 \nautovacuum_vacuum_cost_delay = 20ms \nautovacuum_vacuum_cost_limit = -1 \nhot_standby_feedback = off\n\n\nThe configuration of pgpool-II\nlisten_addresses = '*'\nport = 5432\nsocket_dir = '/var/run/pgpool2'\npcp_port = 9898\npcp_socket_dir = '/var/run/pgpool2'\nbackend_hostname0 = 'apollo'\nbackend_port0 = 9797\nbackend_weight0 = 1\nbackend_data_directory0 = '/var/lib/postgresql/9.2/main'\nbackend_flag0 = 'ALLOW_TO_FAILOVER'\n\nbackend_hostname1 = 'apollo2'\nbackend_port1 = 9797\nbackend_weight1 = 1\nbackend_data_directory1 = '/var/lib/postgresql/9.2/main'\nbackend_flag1 = 'ALLOW_TO_FAILOVER'\nenable_pool_hba = on\npool_passwd = 'pool_passwd'\nssl = off\nnum_init_children = 32\nmax_pool = 5\nchild_life_time = 0\nchild_max_connections = 0\nconnection_life_time = 0\nclient_idle_limit = 0\ndebug_level = 0\npid_file_name = '/var/run/pgpool2/pgpool.pid'\nlogdir = '/var/log/pgpool2'\nconnection_cache = off\nreplication_mode = off\ninsert_lock = off\nreplicate_select = off\nload_balance_mode = on\nignore_leading_white_space = on\nwhite_function_list = 'foo'\nblack_function_list = ''\nmaster_slave_mode = on\nmaster_slave_sub_mode = 'stream'\nsr_check_user = 'postgres'\nsr_check_password = '×××'\ndelay_threshold = 0\nhealth_check_period = 10\nhealth_check_timeout = 20\nfailover_command = '/var/lib/postgresql/9.2/main/failover.sh %d \"%h\" %p %D\n%m %M \"%H\" %P'\nfailback_command = '/bin/rm -f /tmp/trigger_file0'\nrecovery_user = 'postgres'\nrecovery_password = 'postgres'\nrecovery_1st_stage_command = 'basebackup.sh'\nuse_watchdog = off\n\nAt first, I turn on the hot_standby_feedback, but after getting customers'\ncomplaint about the poor 
performance, I decide to turn off, but it seems\nthat it didn't help. \n\nBefore starting the cluster, one bulk updates through java code would cost\nabout 1 hour to finish, but then it would take twice amount of time.\n\nSo many thanks for giving your advises...\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/performance-degradation-after-launching-postgres-cluster-using-pgpool-II-tp5797953.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 01:31:39 -0700 (PDT)", "msg_from": "Cora Ma <[email protected]>", "msg_from_op": true, "msg_subject": "performance degradation after launching postgres cluster using\n pgpool-II" }, { "msg_contents": "> Before starting the cluster, one bulk updates through java code would cost\n> about 1 hour to finish, but then it would take twice amount of time.\n\npgpool-II is not very good at handling extended protocol (mostly used\nin Java). If you need to execute large updates, you'd better to\nconnect to PostgreSQL directly. Note that there's no problem that some\nsessions connect via pgpool-II, while others directly connect\nPostgreSQL.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 07 Apr 2014 22:46:10 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance degradation after launching postgres\n cluster using pgpool-II" }, { "msg_contents": "Feeling a little sorry about replying you later.\nThen, I want to say so many thanks for this advice, cause we pointed our jobserver to the master directly and then the performance issues resolved. My client is so curious that why pgpool doesn't support so welcome platform like java. Hoping it can get improved in the following versions.\nSo, besides pgpool, do you have any other solutions for postgresql cluster?\nThanks a lot,Cora\n\nDate: Mon, 7 Apr 2014 06:47:13 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: performance degradation after launching postgres cluster using pgpool-II\n\n\n\n\t> Before starting the cluster, one bulk updates through java code would cost\n\n> about 1 hour to finish, but then it would take twice amount of time.\n\n\npgpool-II is not very good at handling extended protocol (mostly used\n\nin Java). If you need to execute large updates, you'd better to\n\nconnect to PostgreSQL directly. Note that there's no problem that some\n\nsessions connect via pgpool-II, while others directly connect\n\nPostgreSQL.\n\n\nBest regards,\n\n--\n\nTatsuo Ishii\n\nSRA OSS, Inc. 
Japan\n\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\n\n-- \n\nSent via pgsql-performance mailing list ([hidden email])\n\nTo make changes to your subscription:\n\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\t\n\t\n\t\n\t\n\n\t\n\n\t\n\t\n\t\tIf you reply to this email, your message will be added to the discussion below:\n\t\thttp://postgresql.1045698.n5.nabble.com/performance-degradation-after-launching-postgres-cluster-using-pgpool-II-tp5797953p5798972.html\n\t\n\t\n\t\t\n\t\tTo unsubscribe from performance degradation after launching postgres cluster using pgpool-II, click here.\n\n\t\tNAML\n\t \t\t \t \t\t \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/performance-degradation-after-launching-postgres-cluster-using-pgpool-II-tp5797953p5799639.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nFeeling a little sorry about replying you later.Then, I want to say so many thanks for this advice, cause we pointed our jobserver to the master directly and then the performance issues resolved. My client is so curious that why pgpool doesn't support so welcome platform like java. Hoping it can get improved in the following versions.So, besides pgpool, do you have any other solutions for postgresql cluster?Thanks a lot,CoraDate: Mon, 7 Apr 2014 06:47:13 -0700From: [hidden email]To: [hidden email]Subject: Re: performance degradation after launching postgres cluster using pgpool-II\n\n\t> Before starting the cluster, one bulk updates through java code would cost\n> about 1 hour to finish, but then it would take twice amount of time.\npgpool-II is not very good at handling extended protocol (mostly used\nin Java). If you need to execute large updates, you'd better to\nconnect to PostgreSQL directly. Note that there's no problem that some\nsessions connect via pgpool-II, while others directly connect\nPostgreSQL.\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.phpJapanese: http://www.sraoss.co.jp-- \nSent via pgsql-performance mailing list ([hidden email])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nIf you reply to this email, your message will be added to the discussion below:\nhttp://postgresql.1045698.n5.nabble.com/performance-degradation-after-launching-postgres-cluster-using-pgpool-II-tp5797953p5798972.html\n\n\n\t\t\n\t\tTo unsubscribe from performance degradation after launching postgres cluster using pgpool-II, click here.\nNAML\n \n\nView this message in context: RE: performance degradation after launching postgres cluster using pgpool-II\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Fri, 11 Apr 2014 00:29:42 -0700 (PDT)", "msg_from": "Cora Ma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance degradation after launching postgres cluster using\n pgpool-II" } ]
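Not from the thread, but one simple way to confirm from SQL which path a given session took in a setup like Cora's: a backend whose client address is the pgpool host is proxied, while a direct connection shows the application server's address. The port number in the comment refers to her configuration above:

SELECT inet_client_addr()  AS connected_from,  -- pgpool host if proxied
       inet_server_port()  AS backend_port,    -- 9797 in the config above
       pg_is_in_recovery() AS on_standby;      -- true if load-balanced to the slave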
[ { "msg_contents": "I'm running postgresql 9.3 on a production server. An hour ago, out of the \"blue\", I ran into an issue I have never encountered before: my server started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE installation running only Postgres and Redis.\n\nThe incident can be seen on the in numbers below: \n\nhttps://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n\nI imidiatly took a look at pg_stat_activity but nothing in there seemed suspicious. I also had a look at the postgres log, but nothing was in there too. I have pg_stat_statements running, so I reseted that one, and nothing really suspicious occurred in there, expect for the fact, that all queries were taking 100x times longer than usual.\n\nI have tried the following with no luck:\n\n\t• Restart clients connecting to the db\n\t• Restart postgres\n\t• Restart the whole server\n\nI have run memory tests on the server as well, and nothing seems to be wrong.\n\nNo changes in any software running on the servers has been made within the last 24 hours.\n\nThe question is: I have a streaming replication server running, which I have now done a failover to, and it runs fine. However I still have no clue why my master suddenly has become so CPU consuming, and how I can debug / trace it further down?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 12:25:52 +0200", "msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Sudden crazy high CPU usage" }, { "msg_contents": "On Mon, Mar 31, 2014 at 5:25 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> I'm running postgresql 9.3 on a production server. An hour ago, out of the \"blue\", I ran into an issue I have never encountered before: my server started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE installation running only Postgres and Redis.\n>\n> The incident can be seen on the in numbers below:\n>\n> https://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n>\n> I imidiatly took a look at pg_stat_activity but nothing in there seemed suspicious. I also had a look at the postgres log, but nothing was in there too. I have pg_stat_statements running, so I reseted that one, and nothing really suspicious occurred in there, expect for the fact, that all queries were taking 100x times longer than usual.\n>\n> I have tried the following with no luck:\n>\n> * Restart clients connecting to the db\n> * Restart postgres\n> * Restart the whole server\n>\n> I have run memory tests on the server as well, and nothing seems to be wrong.\n>\n> No changes in any software running on the servers has been made within the last 24 hours.\n>\n> The question is: I have a streaming replication server running, which I have now done a failover to, and it runs fine. However I still have no clue why my master suddenly has become so CPU consuming, and how I can debug / trace it further down?\n\nUsing linux 6? One possible culprit is \"Transparent Huge Page\nCompaction\". It tends to hit severs with a lot of memory, especially\nif they've configured a lot of shared buffers. Google it a for a lot\nof info.\n\nThere may be other issues masquerading as this one but it's the first\nthing to rule out. 
Symptoms are very high cpu utilization and poor\nperformance that strikes without warning and then resolves also\nwithout warning (typically seconds or minutes after the event).\n\nFor starters, take a look at the value of:\n\n/sys/kernel/mm/redhat_transparent_hugepage/enabled\n\nAnd do some due diligence research.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 08:47:21 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "Thanks, this seems to persist after a reboot of the server though, and I have never in my server’s 3 months life time experienced anything like it.\n\nNiels Kristian Schjødt\nCo-founder & Developer\n\nE-Mail: [email protected]\nMobile: 0045 28 73 04 93\n\n\n\nwww.autouncle.com\nFollow us: Facebook | Google+ | LinkedIn | Twitter \nGet app for: iPhone & iPad | Android\n\n\n\nDen 31/03/2014 kl. 15.47 skrev Merlin Moncure <[email protected]>:\n\n> On Mon, Mar 31, 2014 at 5:25 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> I'm running postgresql 9.3 on a production server. An hour ago, out of the \"blue\", I ran into an issue I have never encountered before: my server started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE installation running only Postgres and Redis.\n>> \n>> The incident can be seen on the in numbers below:\n>> \n>> https://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n>> \n>> I imidiatly took a look at pg_stat_activity but nothing in there seemed suspicious. I also had a look at the postgres log, but nothing was in there too. I have pg_stat_statements running, so I reseted that one, and nothing really suspicious occurred in there, expect for the fact, that all queries were taking 100x times longer than usual.\n>> \n>> I have tried the following with no luck:\n>> \n>> * Restart clients connecting to the db\n>> * Restart postgres\n>> * Restart the whole server\n>> \n>> I have run memory tests on the server as well, and nothing seems to be wrong.\n>> \n>> No changes in any software running on the servers has been made within the last 24 hours.\n>> \n>> The question is: I have a streaming replication server running, which I have now done a failover to, and it runs fine. However I still have no clue why my master suddenly has become so CPU consuming, and how I can debug / trace it further down?\n> \n> Using linux 6? One possible culprit is \"Transparent Huge Page\n> Compaction\". It tends to hit severs with a lot of memory, especially\n> if they've configured a lot of shared buffers. Google it a for a lot\n> of info.\n> \n> There may be other issues masquerading as this one but it's the first\n> thing to rule out. 
Symptoms are very high cpu utilization and poor\n> performance that strikes without warning and then resolves also\n> without warning (typically seconds or minutes after the event).\n> \n> For starters, take a look at the value of:\n> \n> /sys/kernel/mm/redhat_transparent_hugepage/enabled\n> \n> And do some due diligence research.\n> \n> merlin", "msg_date": "Mon, 31 Mar 2014 16:24:30 +0200", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "On Mon, Mar 31, 2014 at 9:24 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> Thanks, this seems to persist after a reboot of the server though, and I\n> have never in my server's 3 months life time experienced anything like it.\n>\n\n\nhuh. Any chance of getting 'perf' installed and running a perf top?\n\nmerlin\n\nOn Mon, Mar 31, 2014 at 9:24 AM, Niels Kristian Schjødt <[email protected]> wrote:\nThanks, this seems to persist after a reboot of the server though, and I have never in my server’s 3 months life time experienced anything like it.\nhuh.  Any chance of getting 'perf' installed and running a perf top?merlin", "msg_date": "Mon, 31 Mar 2014 09:36:55 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "On Mon, Mar 31, 2014 at 8:24 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n>\n> Thanks, this seems to persist after a reboot of the server though, and I have never in my server's 3 months life time experienced anything like it.\n\nCould it be overheating and therefore throttling the cores?\n\nAlso another thing to look at on large memory machines with > 1 CPU\nsocket is zone_reclaim_mode being set to 1. Always set it to 0 on a\nlinux machine running postgres.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 08:50:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "Yes, I could install “perf”, though I’m not familiar with it. What would i do? :-)\n\nNiels Kristian Schjødt\nCo-founder & Developer\n\nE-Mail: [email protected]\nMobile: 0045 28 73 04 93\n\n\n\nwww.autouncle.com\nFollow us: Facebook | Google+ | LinkedIn | Twitter \nGet app for: iPhone & iPad | Android\n\n\n\nDen 31/03/2014 kl. 16.36 skrev Merlin Moncure <[email protected]>:\n\n> perf", "msg_date": "Mon, 31 Mar 2014 19:16:58 +0200", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "Thanks, I don’t think overheating is an issue, it’s a large dell server, and I have checked the historic CPU temperature in the servers control panel, and no overheating has shown.\n\nZone_reclaim_mode is already set to 0 \n\nDen 31/03/2014 kl. 
16.50 skrev Scott Marlowe <[email protected]>:\n\n> On Mon, Mar 31, 2014 at 8:24 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> \n>> Thanks, this seems to persist after a reboot of the server though, and I have never in my server's 3 months life time experienced anything like it.\n> \n> Could it be overheating and therefore throttling the cores?\n> \n> Also another thing to look at on large memory machines with > 1 CPU\n> socket is zone_reclaim_mode being set to 1. Always set it to 0 on a\n> linux machine running postgres.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 19:21:05 +0200", "msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "On Mon, Mar 31, 2014 at 3:25 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> I'm running postgresql 9.3 on a production server. An hour ago, out of the \"blue\", I ran into an issue I have never encountered before: my server started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE installation running only Postgres and Redis.\n>\n> The incident can be seen on the in numbers below:\n>\n> https://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n\nThe increase doesn't look so sudden. My guess is that the server got\nsome new activity. The advice is to setup the statistics collecting\nscript by the link [1] and review the results for a period of hour or\nso. It shows charts of statements by CPU/IO/calls with aggregated\nstats, so you could probably find out more than with pure\npg_stat_statements.\n\n[1] https://github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Mar 2014 10:36:13 -0700", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "In New Relic, go back a half hour before the problem started so you can't\nsee that this spike happened and send the same screenshot in. My guess is\nyou have increased activity hitting the DB. Do you have pgbouncer or some\nkind of connection pooling sitting in front? 198 open server connections\ncould account for an increase in load like you're seeing. Do you have\npostgresql addon in New Relic to show you how many queries are hitting the\nsystem to correlate data to?\n\nOn Mon, Mar 31, 2014 at 1:36 PM, Sergey Konoplev <[email protected]> wrote:\n\n> On Mon, Mar 31, 2014 at 3:25 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n> > I'm running postgresql 9.3 on a production server. An hour ago, out of\n> the \"blue\", I ran into an issue I have never encountered before: my server\n> started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE\n> installation running only Postgres and Redis.\n> >\n> > The incident can be seen on the in numbers below:\n> >\n> > https://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n>\n> The increase doesn't look so sudden. 
My guess is that the server got\n> some new activity. The advice is to setup the statistics collecting\n> script by the link [1] and review the results for a period of hour or\n> so. It shows charts of statements by CPU/IO/calls with aggregated\n> stats, so you could probably find out more than with pure\n> pg_stat_statements.\n>\n> [1]\n> https://github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md\n>\n> --\n> Kind regards,\n> Sergey Konoplev\n> PostgreSQL Consultant and DBA\n>\n> http://www.linkedin.com/in/grayhemp\n> +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\n> [email protected]\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIn New Relic, go back a half hour before the problem started so you can't see that this spike happened and send the same screenshot in. My guess is you have increased activity hitting the DB. Do you have pgbouncer or some kind of connection pooling sitting in front? 198 open server connections could account for an increase in load like you're seeing. Do you have postgresql addon in New Relic to show you how many queries are hitting the system to correlate data to?\nOn Mon, Mar 31, 2014 at 1:36 PM, Sergey Konoplev <[email protected]> wrote:\nOn Mon, Mar 31, 2014 at 3:25 AM, Niels Kristian Schjødt\n\n\n<[email protected]> wrote:\n> I'm running postgresql 9.3 on a production server. An hour ago, out of the \"blue\", I ran into an issue I have never encountered before: my server started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE installation running only Postgres and Redis.\n\n\n\n\n>\n> The incident can be seen on the in numbers below:\n>\n> https://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n\nThe increase doesn't look so sudden. My guess is that the server got\nsome new activity. The advice is to setup the statistics collecting\nscript by the link [1] and review the results for a period of hour or\nso. It shows charts of statements by CPU/IO/calls with aggregated\nstats, so you could probably find out more than with pure\npg_stat_statements.\n\n[1] https://github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md\n\n--\nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 31 Mar 2014 16:01:38 -0400", "msg_from": "Will Platnick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "Sorry, but nothing unusual here either, I have compared the time just before with the same time the days before and the throughput pattern is exactly the same. No differences.\n\n\nDen 31/03/2014 kl. 22.01 skrev Will Platnick <[email protected]>:\n\n> \n> In New Relic, go back a half hour before the problem started so you can't see that this spike happened and send the same screenshot in. My guess is you have increased activity hitting the DB. Do you have pgbouncer or some kind of connection pooling sitting in front? 198 open server connections could account for an increase in load like you're seeing. 
Do you have postgresql addon in New Relic to show you how many queries are hitting the system to correlate data to?\n> \n> On Mon, Mar 31, 2014 at 1:36 PM, Sergey Konoplev <[email protected]> wrote:\n> On Mon, Mar 31, 2014 at 3:25 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n> > I'm running postgresql 9.3 on a production server. An hour ago, out of the \"blue\", I ran into an issue I have never encountered before: my server started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE installation running only Postgres and Redis.\n> >\n> > The incident can be seen on the in numbers below:\n> >\n> > https://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n> \n> The increase doesn't look so sudden. My guess is that the server got\n> some new activity. The advice is to setup the statistics collecting\n> script by the link [1] and review the results for a period of hour or\n> so. It shows charts of statements by CPU/IO/calls with aggregated\n> stats, so you could probably find out more than with pure\n> pg_stat_statements.\n> \n> [1] https://github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md\n> \n> --\n> Kind regards,\n> Sergey Konoplev\n> PostgreSQL Consultant and DBA\n> \n> http://www.linkedin.com/in/grayhemp\n> +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\n> [email protected]\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\nSorry, but nothing unusual here either, I have compared the time just before with the same time the days before and the throughput pattern is exactly the same. No differences.\nDen 31/03/2014 kl. 22.01 skrev Will Platnick <[email protected]>:In New Relic, go back a half hour before the problem started so you can't see that this spike happened and send the same screenshot in. My guess is you have increased activity hitting the DB. Do you have pgbouncer or some kind of connection pooling sitting in front? 198 open server connections could account for an increase in load like you're seeing. Do you have postgresql addon in New Relic to show you how many queries are hitting the system to correlate data to?\nOn Mon, Mar 31, 2014 at 1:36 PM, Sergey Konoplev <[email protected]> wrote:\nOn Mon, Mar 31, 2014 at 3:25 AM, Niels Kristian Schjødt\n\n\n<[email protected]> wrote:\n> I'm running postgresql 9.3 on a production server. An hour ago, out of the \"blue\", I ran into an issue I have never encountered before: my server started to use CPU as crazy. The server is a standard ubuntu 12.04 LTE installation running only Postgres and Redis.\n\n\n\n\n>\n> The incident can be seen on the in numbers below:\n>\n> https://s3-eu-west-1.amazonaws.com/autouncle-public/other/cpu.png\n\nThe increase doesn't look so sudden. My guess is that the server got\nsome new activity. The advice is to setup the statistics collecting\nscript by the link [1] and review the results for a period of hour or\nso. 
It shows charts of statements by CPU/IO/calls with aggregated\nstats, so you could probably find out more than with pure\npg_stat_statements.\n\n[1] https://github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md\n\n--\nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 1 Apr 2014 23:50:59 +0200", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "On 2014-03-31 19:16:58 +0200, Niels Kristian Schjødt wrote:\n> Yes, I could install “perf”, though I’m not familiar with it. What would i do? :-)\n\nAs root:\nperf record -a sleep 5\nperf report > my-nice-perf-report.txt\n\nAnd then send the my-nice-perf-report.txt file.\n\nLocally it's much nicer to see the output using \"perf report\" without\nredirect into a file, you'll get an interactive UI.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Apr 2014 00:16:53 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" }, { "msg_contents": "On Wed, Apr 2, 2014 at 7:16 AM, Andres Freund <[email protected]> wrote:\n> On 2014-03-31 19:16:58 +0200, Niels Kristian Schjødt wrote:\n>> Yes, I could install \"perf\", though I'm not familiar with it. What would i do? :-)\n>\n> As root:\n> perf record -a sleep 5\n> perf report > my-nice-perf-report.txt\n>\n> And then send the my-nice-perf-report.txt file.\n>\n> Locally it's much nicer to see the output using \"perf report\" without\n> redirect into a file, you'll get an interactive UI.\nThe Postgres wiki has a page dedicated to perf as well here:\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\nRegards,\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Apr 2014 09:00:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden crazy high CPU usage" } ]
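Since pg_stat_statements was already installed in this case, a sketch of the sort of before/after sampling Sergey describes; the column names are as of 9.2/9.3, where total_time is reported in milliseconds, and the 10-row cutoff is arbitrary:

SELECT pg_stat_statements_reset();

-- ...let the workload run through a spike, then rank by total time:
SELECT calls,
       round(total_time::numeric, 1)           AS total_ms,
       round((total_time / calls)::numeric, 2) AS avg_ms,
       substring(query for 60)                 AS query_head
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;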
[ { "msg_contents": "Hello PGSQL performance community,\nYou might remember that I pinged you in July 2012 to introduce the TPC-V benchmark, and to ask for a feature similar to clustered indexes. I am now back with more data, and a question about checkpoints. As far as the plans for the benchmark, we are hoping to release a benchmarking kit for multi-VM servers this year (and of course one can always simply configure it to run on one database)\n\nAnyways, I am facing a situation that is affecting performance when checkpoints end. This becomes a big problem when you have many VMs sharing the same underlying storage, but I have reduced the problem to a single VM/single database here to make it easier to discuss.\n\nComplete config info is in the attached files. Briefly, it is a 6-vCPU VM with 91G of memory, and 70GB in PGSQL shared buffers. The host has 512GB of memory and 4 sockets of Westmere (E7-4870) processors with HT enabled.\n\nThe data tablespace is on an ext4 file system on a (virtual) disk which is striped on 16 SSD drives in RAID 0. This is obviously overkill for the load we are putting on this VM, but in the usual config, the 16 SSDs are shared by 24 VMs. Log is on an ext3 file system on 4 spinning drives in RAID 1.\n\nWe are running PGSQL version 9.2 on RHEL 6.4, and here are some parameters of interest (postgresql.conf in the attachment):\ncheckpoint_segments = 1200\ncheckpoint_timeout = 360s\ncheckpoint_completion_target = 0.8\nwal_sync_method = open_datasync\nwal_buffers = 16MB\nwal_writer_delay = 10ms\neffective_io_concurrency = 10\neffective_cache_size = 1024MB\n\nWhen running tests, I noticed that when a checkpoint completes, we have a big burst of writes to the data disk. The log disk has a very steady write rate that is not affected by checkpoints except for the known phenomenon of more bytes in each log write when a new checkpoint period starts. In a multi-VM config with all VMs sharing the same data disks, when these write bursts happen, all VMs take a hit.\n\nSo I set out to see what causes this write burst. After playing around with PGSQL parameters and observing its behavior, it appears that the bursts aren't produced by the database engine; they are produced by the file system. I suspect PGSQL has to issue a sync(2)/fsync(2)/sync_file_range(2) system call at the completion of the checkpoint to ensure that all blocks are flushed to disk before creating a checkpoint marker. To test this, I ran a loop to call sync(8) once a second.\n\nThe pdf labeled \"Chart 280\" has the throughput, data disk activity, and checkpoint start/completion timestamps for the baseline case. You can see that the checkpoint completion, the write burst, and the throughput dip all occur at the same time, so much so that it is hard to see the checkpoint completion line under the graph of writes. It looks like the file system does a mini flush every 30 seconds. The pdf labeled \"Chart 274\" is the case with sync commands running in the background. You can see that everything is more smooth.\n\nIs there something I can set in the PGSQL parameters or in the file system parameters to force a steady flow of writes to disk rather than waiting for a sync system call? 
Mounting with \"commit=1\" did not make a difference.\n\nThanks,\nReza\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 2 Apr 2014 17:38:14 -0700", "msg_from": "Reza Taheri <[email protected]>", "msg_from_op": true, "msg_subject": "PGSQL, checkpoints, and file system syncs" }, { "msg_contents": "> Is there something I can set in the PGSQL parameters or in the file system\n> parameters to force a steady flow of writes to disk rather than waiting for\n> a sync system call? Mounting with \"commit=1\" did not make a difference.\n\nThe PostgreSQL devs actually had a long talk with the Linux kernel devs over this exact issue, actually. While we wait for the results of that to bear some fruit, I'd recommend using the dirty_background_bytes and dirty_bytes settings both on the VM side, and on the host server. To avoid excessive flushes, you want to avoid having more dirty memory than the system can handle in one gulp.\n\nThe dirty_bytes setting will begin flushing disks synchronously when the amount of dirty memory reaches this amount. While dirty_background_bytes will flush in the background when the amount of dirty memory hits the specified limit. It's the background flushing that will prevent your current problems, and it should be set at the same level as the amount of write cache your system has available.\n\nSo if you are on a 1GB RAID card, set it to 1GB. Once you have 1GB of dirty memory (from a checkpoint or whatever), Linux will begin flushing.\n\nThis is a pretty well-known issue on Linux systems with large amounts of RAM. Most VM servers fit that profile, so I'm not surprised it's hurting you.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd | Suite 400 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Apr 2014 14:00:05 +0000", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL, checkpoints, and file system syncs" }, { "msg_contents": "On Tue, Apr 8, 2014 at 02:00:05PM +0000, Shaun Thomas wrote:\n> So if you are on a 1GB RAID card, set it to 1GB. Once you have 1GB\n> of dirty memory (from a checkpoint or whatever), Linux will begin\n> flushing.\n>\n> This is a pretty well-known issue on Linux systems with large amounts\n> of RAM. Most VM servers fit that profile, so I'm not surprised it's\n> hurting you.\n\nAgreed. The dirty kernel defaults Linux defaults are too high for\nsystems with large amounts of memory. See sysclt -a | grep dirty for a\nlist.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 9 Apr 2014 12:42:23 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL, checkpoints, and file system syncs" } ]
[ { "msg_contents": "Hello PGSQL performance community,\nYou might remember that I pinged you in July 2012 to introduce the TPC-V benchmark. I am now back with more data, and a question about checkpoints. As far as the plans for the benchmark, we are hoping to release a benchmarking kit for multi-VM servers this year (and of course one can always simply configure it to run on one database)\n\nI am now dealing with a situation of performance dips when checkpoints complete. To simplify the discussion, I have reproduced the problem on a single VM/single database.\n\nComplete config info is in the attached files. Briefly, it is a 6-vCPU VM with 91G of memory, and 70GB in PGSQL shared buffers. The host has 512GB of memory and 4 sockets of Westmere (E7-4870) processors with HT enabled.\n\nThe data tablespace is on an ext4 file system on a (virtual) disk which is striped on 16 SSD drives in RAID 0. This is obviously overkill for the load we are putting on this one VM, but in the usual benchmarking config, the 16 SSDs are shared by 24 VMs. Log is on an ext3 file system on 4 spinning drives in RAID 1.\n\nWe are running PGSQL version 9.2 on RHEL 6.4; here are some parameters of interest (postgresql.conf is in the attachment):\ncheckpoint_segments = 1200\ncheckpoint_timeout = 360s\ncheckpoint_completion_target = 0.8\nwal_sync_method = open_datasync\nwal_buffers = 16MB\nwal_writer_delay = 10ms\neffective_io_concurrency = 10\neffective_cache_size = 1024MB\n\nWhen running tests, I noticed that when a checkpoint completes, we have a big burst of writes to the data disk. The log disk has a very steady write rate that is not affected by checkpoints except for the known phenomenon of more bytes in each log write when a new checkpoint period starts. In a multi-VM config with all VMs sharing the same data disks, when these write bursts happen, all VMs take a hit.\n\nSo I set out to see what causes this write burst. After playing around with PGSQL parameters and observing its behavior, it appears that the bursts aren't produced by the database engine; they are produced by the file system. I suspect PGSQL has to issue a sync(2)/fsync(2)/sync_file_range(2) system call at the completion of the checkpoint to ensure that all blocks are flushed to disk before creating a checkpoint marker. To test this, I ran a loop to call sync(8) once a second.\n\nThe graphs in file \"run280.mht\" have the throughput, data disk activity, and checkpoint start/completion timestamps for the baseline case. You can see that the checkpoint completion, the write burst, and the throughput dip all occur at the same time, so much so that it is hard to see the checkpoint completion line under the graph of writes. It looks like the file system does a mini flush every 30 seconds. The file \"run274.mht\" is the case with sync commands running in the background. You can see that everything is more smooth.\n\nIs there something I can set in the PGSQL parameters or in the file system parameters to force a steady flow of writes to disk rather than waiting for a sync system call? 
Mounting with \"commit=1\" did not make a difference.\n\nThanks,\nReza\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 3 Apr 2014 10:39:34 -0700", "msg_from": "Reza Taheri <[email protected]>", "msg_from_op": true, "msg_subject": "PGSQL, checkpoints, and file system syncs" }, { "msg_contents": "On 04/03/2014 08:39 PM, Reza Taheri wrote:\n> Hello PGSQL performance community,\n> You might remember that I pinged you in July 2012 to introduce the TPC-V benchmark. I am now back with more data, and a question about checkpoints. As far as the plans for the benchmark, we are hoping to release a benchmarking kit for multi-VM servers this year (and of course one can always simply configure it to run on one database)\n>\n> I am now dealing with a situation of performance dips when checkpoints complete. To simplify the discussion, I have reproduced the problem on a single VM/single database.\n>\n> Complete config info is in the attached files. Briefly, it is a 6-vCPU VM with 91G of memory, and 70GB in PGSQL shared buffers. The host has 512GB of memory and 4 sockets of Westmere (E7-4870) processors with HT enabled.\n>\n> The data tablespace is on an ext4 file system on a (virtual) disk which is striped on 16 SSD drives in RAID 0. This is obviously overkill for the load we are putting on this one VM, but in the usual benchmarking config, the 16 SSDs are shared by 24 VMs. Log is on an ext3 file system on 4 spinning drives in RAID 1.\n>\n> We are running PGSQL version 9.2 on RHEL 6.4; here are some parameters of interest (postgresql.conf is in the attachment):\n> checkpoint_segments = 1200\n> checkpoint_timeout = 360s\n> checkpoint_completion_target = 0.8\n> wal_sync_method = open_datasync\n> wal_buffers = 16MB\n> wal_writer_delay = 10ms\n> effective_io_concurrency = 10\n> effective_cache_size = 1024MB\n>\n> When running tests, I noticed that when a checkpoint completes, we have a big burst of writes to the data disk. The log disk has a very steady write rate that is not affected by checkpoints except for the known phenomenon of more bytes in each log write when a new checkpoint period starts. In a multi-VM config with all VMs sharing the same data disks, when these write bursts happen, all VMs take a hit.\n>\n> So I set out to see what causes this write burst. After playing around with PGSQL parameters and observing its behavior, it appears that the bursts aren't produced by the database engine; they are produced by the file system. I suspect PGSQL has to issue a sync(2)/fsync(2)/sync_file_range(2) system call at the completion of the checkpoint to ensure that all blocks are flushed to disk before creating a checkpoint marker. To test this, I ran a loop to call sync(8) once a second.\n>\n> The graphs in file \"run280.mht\" have the throughput, data disk activity, and checkpoint start/completion timestamps for the baseline case. You can see that the checkpoint completion, the write burst, and the throughput dip all occur at the same time, so much so that it is hard to see the checkpoint completion line under the graph of writes. It looks like the file system does a mini flush every 30 seconds. The file \"run274.mht\" is the case with sync commands running in the background. 
You can see that everything is more smooth.\n>\n> Is there something I can set in the PGSQL parameters or in the file system parameters to force a steady flow of writes to disk rather than waiting for a sync system call? Mounting with \"commit=1\" did not make a difference.\n\nTry setting the vm.dirty_bytes sysctl. Something like 256MB might be a \ngood starting point.\n\nThis comes up fairly often, see e.g.: \nhttp://www.postgresql.org/message-id/flat/[email protected]\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 03 Apr 2014 21:01:08 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL, checkpoints, and file system syncs" }, { "msg_contents": "> Try setting the vm.dirty_bytes sysctl. Something like 256MB might be a good\n> starting point.\n> \n> This comes up fairly often, see e.g.:\n> http://www.postgresql.org/message-id/flat/27C32FD4-0142-44FE-8488-\n> [email protected]\n> \n> - Heikki\n\nThanks, Heikki. That sounds like my problem alright. I will play with these parameters right away, and will report back.\n\nCheers,\nReza\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Apr 2014 11:11:46 -0700", "msg_from": "Reza Taheri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGSQL, checkpoints, and file system syncs" }, { "msg_contents": "Hi Reza,\n\nvm.dirty_bytes indeed makes sense, but just in case: how exactly is\nyour ext4 mount? Particularly, have you disabled barrier?\n\nIlya\n\nOn Thu, Apr 3, 2014 at 8:11 PM, Reza Taheri <[email protected]> wrote:\n>> Try setting the vm.dirty_bytes sysctl. Something like 256MB might be a good\n>> starting point.\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 4 Apr 2014 08:11:49 +0200", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL, checkpoints, and file system syncs" }, { "msg_contents": "Hi Ilya,\nI mount the ext4 file system with: nofail,noatime,nodiratime,nobarrier,commit=1\nThe ext3 file system for the log is mounted with: nofail,noatime,nodiratime,data=writeback\n\nAttached is a tar file with charts from last night's runs. File 0_dirty_background_bytes.htm has the default with dirty_background_bytes=0 and dirty_background_ratio=10 (it's a warm-up run, so it starts with low throughput and a high read rate). I still see a little bit of burstiness with dirty_background_bytes=50MB, but no burstiness with dirty_background_bytes from 10MB down to even 1MB.\n\nThanks everyone for the advice,\nReza\n\n> -----Original Message-----\n> From: Ilya Kosmodemiansky [mailto:ilya.kosmodemiansky@postgresql-\n> consulting.com]\n> Sent: Thursday, April 03, 2014 11:12 PM\n> To: Reza Taheri\n> Cc: Heikki Linnakangas; [email protected]\n> Subject: Re: [PERFORM] PGSQL, checkpoints, and file system syncs\n> \n> Hi Reza,\n> \n> vm.dirty_bytes indeed makes sense, but just in case: how exactly is your\n> ext4 mount? 
Particularly, have you disabled barrier?\n> \n> Ilya\n> \n> On Thu, Apr 3, 2014 at 8:11 PM, Reza Taheri <[email protected]> wrote:\n> >> Try setting the vm.dirty_bytes sysctl. Something like 256MB might be\n> >> a good starting point.\n> \n> \n> \n> --\n> Ilya Kosmodemiansky,\n> \n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 4 Apr 2014 13:27:34 -0700", "msg_from": "Reza Taheri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGSQL, checkpoints, and file system syncs" }, { "msg_contents": "On Thu, Apr 3, 2014 at 09:01:08PM +0300, Heikki Linnakangas wrote:\n> >Is there something I can set in the PGSQL parameters or in the file\n> >system parameters to force a steady flow of writes to disk rather\n> >than waiting for a sync system call? Mounting with \"commit=1\" did not\n> >make a difference.\n>\n> Try setting the vm.dirty_bytes sysctl. Something like 256MB might be a\n> good starting point.\n>\n> This comes up fairly often, see e.g.:\n> http://www.postgresql.org/message-id/flat/27C32FD4-0142-44FE-8488-9F36\n> [email protected]\n\nUh, should he set vm.dirty_bytes or vm.dirty_background_bytes? It is my\nunderstanding that vm.dirty_background_bytes starts the I/O while still\naccepting writes, while vm.dirty_bytes stops accepting writes during the\nI/O, which isn't optimal. See:\n\n\thttps://www.kernel.org/doc/Documentation/sysctl/vm.txt\n\t\n\tdirty_bytes\n\t\n\tContains the amount of dirty memory at which a process generating disk\n\twrites will itself start writeback.\n\n\tdirty_background_bytes\n\t\n\tContains the amount of dirty memory at which the background kernel\n\tflusher threads will start writeback.\n\t\nI think we want the flusher to be active, not necessarily the writing\nprocess.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 9 Apr 2014 12:51:27 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL, checkpoints, and file system syncs" } ]
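One generic way to watch the behavior this thread discusses while a checkpoint completes (an observation sketch, not something the posters prescribe) is to track the kernel's dirty-page counters directly:

    # show the current dirty-memory knobs
    sysctl -a 2>/dev/null | grep dirty
    # refresh the Dirty/Writeback counters from /proc/meminfo every second
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

If Dirty climbs toward the dirty_bytes ceiling before Writeback ramps up, writers will stall in exactly the way Bruce describes.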
[ { "msg_contents": "Sorry that i just joined the list and have to break the thread to reply to\nTom Lane's response on this @\nhttp://www.postgresql.org/message-id/[email protected]\n\n\nNote that the indexscan is actually *slower* than the seqscan so far as\n> the table access is concerned; if the table were big enough to not fit\n> in RAM, this would get very much worse. So I'm not impressed with trying\n> to force the optimizer's hand as you've done here --- it might be a nice\n> plan now, but it's brittle. See if a bigger work_mem improves matters\n> enough with the regular plan.\n\n\nI agree to the point that hand tuning optimiser is brittle and something\nthat should not be done. But the reason to that was to force the index-only\nscan (Not the index scan). I feel Index-only scan would speed up given\npostgres is row oriented and we are running count-distinct on a column in a\ntable with lot of columns (Say 6-7 in number). I think that is what have\ncontributed to the gain in performance.\n\nI did a similar test with around 2 million tuples with work_mem = 250 MB\nand got the query to respond with 2x speed up. But the speed-up got with\nindex-only scan was huge and response was in sub-seconds whereas with\nwork_mem the response was couple of seconds.\n\n\n> I doubt you need the \"where email=email\" hack, in any case. That isn't\n> forcing the optimizer's decision in any meaningful fashion.\n\n\nThis is the where clause which forced the index-only scan on partial index.\nAFAIK, whenever the the tuples estimated to be processed by query is going\nto be high, Postgres does a seq scan. This was happening even in the\nfollowing query even though an index was present in the email column and\nindex-only could have had much faster processing. I think the planner is\ntaking a hit on estimating the cost of the query with index-only scan. So\nthe little trick we had put in was to create a partial index on email field\nof the participants table and use the same condition of the partial index\nin the query to trick the optimiser to use index-only scans.\n\nQuery : select count(distinct email) from participants;\n\nPlease revert me back if i'm missing something.\n\n\n-- \nThanks,\nM. Varadharajan\n\n------------------------------------------------\n\n\"Experience is what you get when you didn't get what you wanted\"\n -By Prof. Randy Pausch in \"The Last Lecture\"\n\nMy Journal :- www.thinkasgeek.wordpress.com\n\nSorry that i just joined the list and have to break the thread to reply to Tom Lane's response on this @ http://www.postgresql.org/message-id/[email protected]\nNote that the indexscan is actually *slower* than the seqscan so far as\n\nthe table access is concerned; if the table were big enough to not fitin RAM, this would get very much worse.  So I'm not impressed with tryingto force the optimizer's hand as you've done here --- it might be a nice\n\nplan now, but it's brittle.  See if a bigger work_mem improves mattersenough with the regular plan.I agree to the point that hand tuning optimiser is brittle and something that should not be done. But the reason to that was to force the index-only scan (Not the index scan). I feel Index-only scan would speed up given postgres is row oriented and we are running count-distinct on a column in a table with lot of columns (Say 6-7 in number). I think that is what have contributed to the gain in performance. \nI did a similar test with around 2 million tuples with work_mem = 250 MB and got the query to respond with 2x speed up. 
But the speed-up got with index-only scan was huge and response was in sub-seconds whereas with work_mem the response was couple of seconds.\nI doubt you need the \"where email=email\" hack, in any case.  That isn't\n\nforcing the optimizer's decision in any meaningful fashion.This is the where clause which forced the index-only scan on partial index. AFAIK, whenever the the tuples estimated to be processed by query is going to be high, Postgres does a seq scan. This was happening even in the following query even though an index was present in the email column and index-only could have had much faster processing. I think the planner is taking a hit on estimating the cost of the query with index-only scan. So the little trick we had put in was to create a partial index on email field of the participants table and use the same condition of the partial index in the query to trick the optimiser to use index-only scans.\nQuery : select count(distinct email) from participants;Please revert me back if i'm missing something.\n-- Thanks,M. Varadharajan------------------------------------------------\"Experience is what you get when you didn't get what you wanted\"               -By Prof. Randy Pausch in \"The Last Lecture\"\nMy Journal :- www.thinkasgeek.wordpress.com", "msg_date": "Fri, 4 Apr 2014 15:01:58 +0530", "msg_from": "Varadharajan Mukundan <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Slow Count-Distinct Query" }, { "msg_contents": "On Fri, Apr 4, 2014 at 2:31 AM, Varadharajan Mukundan\n<[email protected]>wrote:\n\n> Sorry that i just joined the list and have to break the thread to reply to\n> Tom Lane's response on this @\n> http://www.postgresql.org/message-id/[email protected]\n>\n>\n> Note that the indexscan is actually *slower* than the seqscan so far as\n>> the table access is concerned; if the table were big enough to not fit\n>> in RAM, this would get very much worse. So I'm not impressed with trying\n>> to force the optimizer's hand as you've done here --- it might be a nice\n>> plan now, but it's brittle. See if a bigger work_mem improves matters\n>> enough with the regular plan.\n>\n>\n> I agree to the point that hand tuning optimiser is brittle and something\n> that should not be done. But the reason to that was to force the index-only\n> scan (Not the index scan). I feel Index-only scan would speed up given\n> postgres is row oriented and we are running count-distinct on a column in a\n> table with lot of columns (Say 6-7 in number). I think that is what have\n> contributed to the gain in performance.\n>\n\n\nIt looks like the original emailer wrote a query that the planner is not\nsmart enough to plan properly (A known limitation of that kind of query).\n He then made a bunch of changes, none of which worked. He then re-wrote\nthe query into a form for which the planner does a better job on. What we\ndo not know is, what would happen if he undoes all of those other changes\nand *just* uses the new form of the query?\n\n\n\n>\n> I did a similar test with around 2 million tuples with work_mem = 250 MB\n> and got the query to respond with 2x speed up. 
But the speed-up we got with the\n> index-only scan was huge and response was in sub-seconds whereas with\n> work_mem the response was a couple of seconds.\n>\n\nThis change is almost certainly due to the change from a sort to a hash\naggregate, and nothing to do with the index-only scan at all.", "msg_date": "Fri, 4 Apr 2014 10:59:03 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Slow Count-Distinct Query" }, { "msg_contents": "Hi Jeff,\n\nIt looks like the original emailer wrote a query that the planner is not\n> smart enough to plan properly (A known limitation of that kind of query).\n> He then made a bunch of changes, none of which worked. He then re-wrote\n> the query into a form for which the planner does a better job on. 
What we\n> do not know is, what would happen if he undoes all of those other changes\n> and *just* uses the new form of the query?\n>\n\n\nI was also pairing up with Chris (the original emailer) on this issue and, in\norder to reproduce it, I've created a two-column table with the following\nschema and 1.9 million dummy rows:\n\n=================================\n\na=# \\d participants\n Table \"public.participants\"\n Column | Type | Modifiers\n--------+------------------------+-----------------------------------------------------------\n id | integer | not null default\nnextval('participants_id_seq'::regclass)\n email | character varying(255) |\n\nI've tried out various scenarios on this table and recorded them as a\ntranscript below (please read it from top to bottom, like a shell\nscript, to get the whole idea):\n\n*; Create table and Insert 1.9 Million rows*\n\na=# create table participants(id serial, email varchar(255));\nNOTICE: CREATE TABLE will create implicit sequence \"participants_id_seq\"\nfor serial column \"participants.id\"\nCREATE TABLE\na=# \\d participants\n Table \"public.participants\"\n Column | Type | Modifiers\n--------+------------------------+-----------------------------------------------------------\n id | integer | not null default\nnextval('participants_id_seq'::regclass)\n email | character varying(255) |\n\na=# copy participants from '/tmp/a.csv' with csv;\nCOPY 1999935\n\n\n*; Queries without any index*\n\na=# EXPLAIN (ANALYZE) select count(1) from (select email from participants\ngroup by email) x;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=263449.51..263449.52 rows=1 width=0) (actual time=3898.329..3898.329 rows=1 loops=1)\n -> Group (cost=241033.44..251033.12 rows=993311 width=16) (actual time=2766.805..3807.693 rows=1000000 loops=1)\n -> Sort (cost=241033.44..246033.28 rows=1999935 width=16)\n(actual time=2766.803..3453.922 rows=1999935 loops=1)\n Sort Key: participants.email\n Sort Method: external merge Disk: 52552kB\n -> Seq Scan on participants (cost=0.00..21910.20\nrows=1999935 width=16) (actual time=0.013..362.511 rows=1999935 loops=1)\n Total runtime: 3902.460 ms\n(7 rows)\n\n\na=# EXPLAIN (ANALYZE) select count(distinct email) from participants;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=37738.19..37738.20 rows=1 width=16) (actual\ntime=3272.854..3272.855 rows=1 loops=1)\n -> Seq Scan on participants (cost=0.00..32738.35 rows=1999935\nwidth=16) (actual time=0.049..236.518 rows=1999935 loops=1)\n Total runtime: 3272.905 ms\n(3 rows)\n\na=# EXPLAIN (ANALYZE) select count(1) from (select email from participants\nwhere email=email group by email) x;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=37874.94..37874.96 rows=1 width=0) (actual\ntime=1549.258..1549.258 rows=1 loops=1)\n -> HashAggregate (cost=37763.19..37812.86 rows=4967 width=16) (actual\ntime=1168.114..1461.672 rows=1000000 loops=1)\n -> Seq Scan on participants (cost=0.00..37738.19 rows=10000\nwidth=16) (actual time=0.045..411.267 rows=1999935 loops=1)\n Filter: ((email)::text = (email)::text)\n Total runtime: 1567.586 ms\n(5 rows)\n\n\n*; Creation of idx on email field*\n\na=# create index email_idx on 
participants(email);\nCREATE INDEX\n\na=# EXPLAIN (ANALYZE) select count(distinct email) from participants;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=37738.19..37738.20 rows=1 width=16) (actual\ntime=3305.203..3305.203 rows=1 loops=1)\n -> Seq Scan on participants (cost=0.00..32738.35 rows=1999935\nwidth=16) (actual time=0.052..237.409 rows=1999935 loops=1)\n Total runtime: 3305.253 ms\n(3 rows)\n\na=# EXPLAIN (ANALYZE) select count(1) from (select email from\nparticipants group by email) x;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=48622.59..48622.60 rows=1 width=0) (actual\ntime=1242.718..1242.718 rows=1 loops=1)\n -> HashAggregate (cost=26273.09..36206.20 rows=993311 width=16)\n(actual time=855.215..1150.781 rows=1000000 loops=1)\n -> Seq Scan on participants (cost=0.00..21273.25 rows=1999935\nwidth=16) (actual time=0.058..217.105 rows=1999935 loops=1)\n Total runtime: 1264.234 ms\n(4 rows)\n\na=# drop index email_idx;\nDROP INDEX\n\n*; Creation of partial index on email *\n\na=# create index email_idx on participants(email) where email=email;\nCREATE INDEX\n\na=# EXPLAIN (ANALYZE) select count(distinct email) from participants\nwhere email=email;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=12949.26..12949.27 rows=1 width=16) (actual\ntime=3465.472..3465.473 rows=1 loops=1)\n -> Bitmap Heap Scan on participants (cost=249.14..12924.26 rows=10000\nwidth=16) (actual time=161.776..421.223 rows=1999935 loops=1)\n Recheck Cond: ((email)::text = (email)::text)\n -> Bitmap Index Scan on email_idx (cost=0.00..246.64 rows=10000\nwidth=0) (actual time=159.446..159.446 rows=1999935 loops=1)\n Total runtime: 3466.867 ms\n(5 rows)\n\na=# set enable_bitmapscan = false;\na=# set seq_page_cost = 0.1;\nSET\na=# set random_page_cost = 0.2;\nSET\n\na=# explain analyze select count(distinct email) from participants where\nemail=email;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1517.16..1517.17 rows=1 width=16) (actual\ntime=3504.310..3504.310 rows=1 loops=1)\n -> Index Only Scan using email_idx on participants (cost=0.00..1492.16\nrows=10000 width=16) (actual time=0.101..795.595 rows=1999935 loops=1)\n Heap Fetches: 1999935\n Total runtime: 3504.358 ms\n(4 rows)\n\na=# explain analyze select count(1) from (select email from participants\nwhere email=email group by email) x;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1579.25..1579.26 rows=1 width=0) (actual\ntime=1193.912..1193.912 rows=1 loops=1)\n -> Group (cost=0.00..1517.16 rows=4967 width=16) (actual\ntime=0.096..1101.828 rows=1000000 loops=1)\n -> Index Only Scan using email_idx on participants\n (cost=0.00..1492.16 rows=10000 width=16) (actual time=0.091..719.281\nrows=1999935 loops=1)\n Heap Fetches: 1999935\n Total runtime: 1193.963 ms\n(5 rows)\n\n*; Oh yes, cluster the rows by email*\n\na=# create index email_idx_2 on participants(email)\na-# ;\nCREATE INDEX\n\na=# cluster participants using 
email_idx_2;\nCLUSTER\n\n\na=# EXPLAIN (ANALYZE) select count(1) from (select email from participants\nwhere email=email group by email) x;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3376.63..3376.64 rows=1 width=0) (actual\ntime=942.118..942.119 rows=1 loops=1)\n -> Group (cost=0.00..3314.54 rows=4967 width=16) (actual\ntime=0.080..851.315 rows=1000000 loops=1)\n -> Index Only Scan using email_idx on participants\n (cost=0.00..3289.54 rows=10000 width=16) (actual time=0.076..472.101\nrows=1999935 loops=1)\n Heap Fetches: 1999935\n Total runtime: 942.159 ms\n(5 rows)\n\n*; Description of the table*\n\na=# \\d participants\n Table \"public.participants\"\n Column | Type | Modifiers\n--------+------------------------+-----------------------------------------------------------\n id | integer | not null default\nnextval('participants_id_seq'::regclass)\n email | character varying(255) |\nIndexes:\n \"email_idx\" btree (email) WHERE email::text = email::text\n \"email_idx_2\" btree (email) CLUSTER\n\n\n*Summary : Query execution time dropped from 3.9 secs to 900 ms*\n\n\n====================================\n\n\nThe gain from 3.9 secs to 900 ms is huge in this case and it would be more\nevident in a bigger table (with more rows and more columns).\n\n\n\nI did a similar test with around 2 million tuples with work_mem = 250 MB\n> and got the query to respond with a 2x speed-up. But the speed-up we got with the\n> index-only scan was huge and response was in sub-seconds whereas with\n> work_mem the response was a couple of seconds.\n\n\n\n> This change is almost certainly due to the change from a sort to a hash\n> aggregate, and nothing to do with the index-only scan at all.\n\n\n\nI think I did not represent things clearly enough. The experiment consisted\nof two options:\n\n1. Increase work_mem and use the query without any fancy tuning [select\ncount(1) from (select email from participants group by email) x]\n2. 
No increase in work_mem, just force index_only scan.\n\nHere is how they performed:\n\na=# explain analyze select count(1) from (select email from participants\ngroup by email) x;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=60087.68..60087.69 rows=1 width=0) (actual\ntime=1256.626..1256.626 rows=1 loops=1)\n -> HashAggregate (cost=37738.19..47671.30 rows=993311 width=16)\n(actual time=871.800..1168.298 rows=1000000 loops=1)\n -> Seq Scan on participants (cost=0.00..32738.35 rows=1999935\nwidth=16) (actual time=0.060..216.678 rows=1999935 loops=1)\n Total runtime: 1277.964 ms\n(4 rows)\n\na=# EXPLAIN (ANALYZE) select count(1) from (select email from participants\nwhere email=email group by email) x;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3376.63..3376.64 rows=1 width=0) (actual\ntime=942.118..942.119 rows=1 loops=1)\n -> Group (cost=0.00..3314.54 rows=4967 width=16) (actual\ntime=0.080..851.315 rows=1000000 loops=1)\n -> Index Only Scan using email_idx on participants\n (cost=0.00..3289.54 rows=10000 width=16) (actual time=0.076..472.101\nrows=1999935 loops=1)\n Heap Fetches: 1999935\n Total runtime: 942.159 ms\n(5 rows)\n\nThe comment I made was that with the index-only scan, I was able to get the query\nto respond in 940 ms with a work_mem of about 1 MB (default in my system)\nwhereas in the other case (scenario 1) it took a work_mem of 250 MB (I agree\nthat 250MB might not be optimal, but it's just a ballpark number) to respond\nin 1.2 seconds.\n\n-- \nThanks,\nM. Varadharajan\n\n------------------------------------------------\n\n\"Experience is what you get when you didn't get what you wanted\"\n -By Prof. Randy Pausch in \"The Last Lecture\"\n\nMy Journal :- www.thinkasgeek.wordpress.com
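\n\nFor completeness, a sketch of option 1 applied per-session, so nothing global has to change (the 250 MB figure is just the ballpark number from above, not a recommendation):\n\npsql <<'SQL'\nset work_mem = '250MB';\nexplain analyze select count(1) from (select email from participants group by email) x;\nSQL\n\nWith enough work_mem the planner can choose the hash aggregate directly, without the partial-index trick.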
", "msg_date": "Sat, 5 Apr 2014 06:43:12 +0530", "msg_from": "Varadharajan Mukundan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fwd: Slow Count-Distinct Query" }, { "msg_contents": "On Friday, April 4, 2014, Varadharajan Mukundan <[email protected]>\nwrote:\n\n> Hi Jeff,\n>\n> It looks like the original emailer wrote a query that the planner is not\n>> smart enough to plan properly (A known limitation of that kind of query).\n>> He then made a bunch of changes, none of which worked. He then re-wrote\n>> the query into a form for which the planner does a better job on. 
What we\n>> do not know is, what would happen if he undoes all of those other changes\n>> and *just* uses the new form of the query?\n>>\n>\n>\n> I was also pairing up with Chris (the original emailer) on this issue and, in\n> order to reproduce it, I've created a two-column table with the following\n> schema and 1.9 million dummy rows:\n>\n> =================================\n>\n> a=# \\d participants\n> Table \"public.participants\"\n> Column | Type | Modifiers\n>\n> --------+------------------------+-----------------------------------------------------------\n> id | integer | not null default\n> nextval('participants_id_seq'::regclass)\n> email | character varying(255) |\n>\n> I've tried out various scenarios on this table and recorded them as a\n> transcript below (please read it from top to bottom, like a shell\n> script, to get the whole idea):\n>\n> *; Create table and Insert 1.9 Million rows*\n>\n> a=# create table participants(id serial, email varchar(255));\n> NOTICE: CREATE TABLE will create implicit sequence \"participants_id_seq\"\n> for serial column \"participants.id\"\n> CREATE TABLE\n> a=# \\d participants\n> Table \"public.participants\"\n> Column | Type | Modifiers\n>\n> --------+------------------------+-----------------------------------------------------------\n> id | integer | not null default\n> nextval('participants_id_seq'::regclass)\n> email | character varying(255) |\n>\n> a=# copy participants from '/tmp/a.csv' with csv;\n> COPY 1999935\n>\n\nThanks for the detailed response. I don't have access to your /tmp/a.csv\nof course, so I just used this:\n\ninsert into participants (email) select md5(floor(random()*1000000)::text)\nfrom generate_series(1,2000000);\n\nThis makes each email show up about twice.\n\n\n\n>\n> a=# EXPLAIN (ANALYZE) select count(1) from (select email from participants\n> where email=email group by email) x;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=37874.94..37874.96 rows=1 width=0) (actual\n> time=1549.258..1549.258 rows=1 loops=1)\n> -> HashAggregate (cost=37763.19..37812.86 rows=4967 width=16) (actual\n> time=1168.114..1461.672 rows=1000000 loops=1)\n> -> Seq Scan on participants (cost=0.00..37738.19 rows=10000\n> width=16) (actual time=0.045..411.267 rows=1999935 loops=1)\n> Filter: ((email)::text = (email)::text)\n> Total runtime: 1567.586 ms\n> (5 rows)\n>\n\nWhat you have done here is trick the planner into thinking your query will\nbe 200 times smaller than it actually is, and thus the hash table will be\n200 times smaller than it actually is and therefore will fit in allowed\nmemory. This is effective at getting the more efficient hash agg. 
But it is\nno more safe than just giving it explicit permission to use that much\nmemory for this query by increasing work_mem by 200 fold.\n\nI am kind of surprised that the planner is so easily fooled by that.\n\n\n>\n>\n> *; Creation of idx on email field*\n>\n> a=# create index email_idx on participants(email);\n> CREATE INDEX\n>\n>\n\n> a=# EXPLAIN (ANALYZE) select count(1) from (select email from\n> participants group by email) x;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=48622.59..48622.60 rows=1 width=0) (actual\n> time=1242.718..1242.718 rows=1 loops=1)\n> -> HashAggregate (cost=26273.09..36206.20 rows=993311 width=16)\n> (actual time=855.215..1150.781 rows=1000000 loops=1)\n> -> Seq Scan on participants (cost=0.00..21273.25 rows=1999935\n> width=16) (actual time=0.058..217.105 rows=1999935 loops=1)\n> Total runtime: 1264.234 ms\n> (4 rows)\n>\n\n\nI can't reproduce this at all, except by increasing work_mem. The hash\ntable needed for this is no smaller than the hash table needed before the\nindex was built. Did you increase work_mem before the above plan?\n\nInstead what I get is the index only scan (to provide order) feeding into a\nGroup.\n\n\n>\n> a=# drop index email_idx;\n> DROP INDEX\n>\n> *; Creation of partial index on email *\n>\n> a=# create index email_idx on participants(email) where email=email;\n> CREATE INDEX\n>\n> a=# EXPLAIN (ANALYZE) select count(distinct email) from participants\n> where email=email;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=12949.26..12949.27 rows=1 width=16) (actual\n> time=3465.472..3465.473 rows=1 loops=1)\n> -> Bitmap Heap Scan on participants (cost=249.14..12924.26 rows=10000\n> width=16) (actual time=161.776..421.223 rows=1999935 loops=1)\n> Recheck Cond: ((email)::text = (email)::text)\n> -> Bitmap Index Scan on email_idx (cost=0.00..246.64 rows=10000\n> width=0) (actual time=159.446..159.446 rows=1999935 loops=1)\n> Total runtime: 3466.867 ms\n>\n\nI also cannot get this.\n\n\n\n> (5 rows)\n>\n> a=# set enable_bitmapscan = false;\n> a=# set seq_page_cost = 0.1;\n> SET\n> a=# set random_page_cost = 0.2;\n> SET\n>\n\n\nThese don't seem to accomplish anything for you. 
They switch the slow form\nof the query between two plans with about the same speed.\n\n\n>\n> a=# explain analyze select count(distinct email) from participants where\n> email=email;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1517.16..1517.17 rows=1 width=16) (actual\n> time=3504.310..3504.310 rows=1 loops=1)\n> -> Index Only Scan using email_idx on participants\n> (cost=0.00..1492.16 rows=10000 width=16) (actual time=0.101..795.595 rows=1999935\n> loops=1)\n> Heap Fetches: 1999935\n> Total runtime: 3504.358 ms\n> (4 rows)\n>\n> a=# explain analyze select count(1) from (select email from participants\n> where email=email group by email) x;\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1579.25..1579.26 rows=1 width=0) (actual\n> time=1193.912..1193.912 rows=1 loops=1)\n> -> Group (cost=0.00..1517.16 rows=4967 width=16) (actual\n> time=0.096..1101.828 rows=1000000 loops=1)\n> -> Index Only Scan using email_idx on participants\n> (cost=0.00..1492.16 rows=10000 width=16) (actual time=0.091..719.281\n> rows=1999935 loops=1)\n> Heap Fetches: 1999935\n> Total runtime: 1193.963 ms\n> (5 rows)\n>\n> *; Oh yes, cluster the rows by email*\n>\n> a=# create index email_idx_2 on participants(email)\n> a-# ;\n> CREATE INDEX\n>\n> a=# cluster participants using email_idx_2;\n> CLUSTER\n>\n>\n> a=# EXPLAIN (ANALYZE) select count(1) from (select email from participants\n> where email=email group by email) x;\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=3376.63..3376.64 rows=1 width=0) (actual\n> time=942.118..942.119 rows=1 loops=1)\n> -> Group (cost=0.00..3314.54 rows=4967 width=16) (actual\n> time=0.080..851.315 rows=1000000 loops=1)\n> -> Index Only Scan using email_idx on participants\n> (cost=0.00..3289.54 rows=10000 width=16) (actual time=0.076..472.101\n> rows=1999935 loops=1)\n> Heap Fetches: 1999935\n> Total runtime: 942.159 ms\n> (5 rows)\n>\n\nOnce I cluster and vacuum and analyze the table, I get this plan without\nhaving the partial index (just the regular index), without the \"where\nemail=email\" and without disabling the bitmap scan or changing the\npage_cost parameters.\n\nI usually get this plan without the cluster, too. I think it depends on the\nluck of the sampling in the autoanalyze.\n\nCheers,\n\nJeff\n\n>\n\nOn Friday, April 4, 2014, Varadharajan Mukundan <[email protected]> wrote:\nHi Jeff,\nIt looks like the original emailer wrote a query that the planner is not smart enough to plan properly (A known limitation of that kind of query).  He then made a bunch of changes, none of which worked.  He then re-wrote the query into a form for which the planner does a better job on.  
What we do not know is, what would happen if he undoes all of those other changes and *just* uses the new form of the query?\nI was also pairing up with Chris (original emailer) on this issue and in order to reproduce it, i've a created a two column table with following schema with 1.9 Million dummy rows:\n=================================a=# \\d participants                                 Table \"public.participants\" Column |          Type          |                         Modifiers\n--------+------------------------+----------------------------------------------------------- id     | integer                | not null default nextval('participants_id_seq'::regclass)\n\n\n email  | character varying(255) |\nI've tried out various scenarios on this table and recorded it as a transcript below: (Please read it as we read a shell script from top to bottom continuously to get the whole idea):\n; Create table and Insert 1.9 Million rows\na=# create table participants(id serial, email varchar(255));NOTICE:  CREATE TABLE will create implicit sequence \"participants_id_seq\" for serial column \"participants.id\"\nCREATE TABLEa=# \\d participants                                 Table \"public.participants\" Column |          Type          |                         Modifiers--------+------------------------+-----------------------------------------------------------\n id     | integer                | not null default nextval('participants_id_seq'::regclass) email  | character varying(255) |a=# copy participants from '/tmp/a.csv' with csv;\nCOPY 1999935Thanks for the detailed response.  I don't have access to your /tmp/a.csv of course, I so I just used this:\ninsert into participants (email) select md5(floor(random()*1000000)::text) from generate_series(1,2000000);This gives each email showing up about twice.\n \na=# EXPLAIN (ANALYZE) select count(1) from (select email from participants where email=email group by email) x;                                                            QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=37874.94..37874.96 rows=1 width=0) (actual time=1549.258..1549.258 rows=1 loops=1)\n   ->  HashAggregate  (cost=37763.19..37812.86 rows=4967 width=16) (actual time=1168.114..1461.672 rows=1000000 loops=1)         ->  Seq Scan on participants  (cost=0.00..37738.19 rows=10000 width=16) (actual time=0.045..411.267 rows=1999935 loops=1)\n               Filter: ((email)::text = (email)::text) Total runtime: 1567.586 ms(5 rows)What you have done here is trick the planner into thinking your query will be 200 times smaller than it actually is, and thus the hash table will be 200 times smaller than it actually is and therefore will fit in allowed memory.  This is effective at getting the more efficient hash agg.  But it no more safe than just giving it explicit permission to use that much memory for this query by increasing work_mem by 200 fold.\nI am kind of surprised that the planner is so easily fooled by that. 
\n; Creation of idx on email field\na=# create index email_idx on participants(email);CREATE INDEX \n\n\na=#  EXPLAIN (ANALYZE)  select count(1) from (select email from participants group by email) x;                                                             QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=48622.59..48622.60 rows=1 width=0) (actual time=1242.718..1242.718 rows=1 loops=1)   ->  HashAggregate  (cost=26273.09..36206.20 rows=993311 width=16) (actual time=855.215..1150.781 rows=1000000 loops=1)\n         ->  Seq Scan on participants  (cost=0.00..21273.25 rows=1999935 width=16) (actual time=0.058..217.105 rows=1999935 loops=1) Total runtime: 1264.234 ms(4 rows)\nI can't reproduce this at all, except by increasing work_mem.   The hash table needed for this is no smaller than the hash table needed before the index was built.  Did you increase work_mem before the above plan?\nInstead what I get is the index only scan (to provide order) feeding into a Group. \n\n\na=# drop index email_idx;DROP INDEX; Creation of partial index on email a=# create index email_idx on participants(email) where email=email;\nCREATE INDEXa=#  EXPLAIN (ANALYZE)  select count(distinct email) from participants where email=email;                                                               QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=12949.26..12949.27 rows=1 width=16) (actual time=3465.472..3465.473 rows=1 loops=1)\n   ->  Bitmap Heap Scan on participants  (cost=249.14..12924.26 rows=10000 width=16) (actual time=161.776..421.223 rows=1999935 loops=1)         Recheck Cond: ((email)::text = (email)::text)\n\n\n         ->  Bitmap Index Scan on email_idx  (cost=0.00..246.64 rows=10000 width=0) (actual time=159.446..159.446 rows=1999935 loops=1)\n Total runtime: 3466.867 msI also cannot get this. \n(5 rows)a=# set enable_bitmapscan = false;a=# set seq_page_cost = 0.1;\nSETa=# set random_page_cost = 0.2;SETThese don't seem to accomplish anything for you.  
They switch the slow form of the query between two plans with about the same speed.\n  \na=# explain analyze select count(distinct email) from participants where email=email;                                                                    QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=1517.16..1517.17 rows=1 width=16) (actual time=3504.310..3504.310 rows=1 loops=1)   ->  Index Only Scan using email_idx on participants  (cost=0.00..1492.16 rows=10000 width=16) (actual time=0.101..795.595 rows=1999935 loops=1)\n         Heap Fetches: 1999935 Total runtime: 3504.358 ms(4 rows)a=# explain analyze select count(1) from (select email from participants where email=email group by email) x;\n                                                                       QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=1579.25..1579.26 rows=1 width=0) (actual time=1193.912..1193.912 rows=1 loops=1)   ->  Group  (cost=0.00..1517.16 rows=4967 width=16) (actual time=0.096..1101.828 rows=1000000 loops=1)\n         ->  Index Only Scan using email_idx on participants  (cost=0.00..1492.16 rows=10000 width=16) (actual time=0.091..719.281 rows=1999935 loops=1)               Heap Fetches: 1999935 Total runtime: 1193.963 ms\n(5 rows); Oh yes, cluster the rows by emaila=# create index email_idx_2 on participants(email)a-# ;CREATE INDEX\n\n\n\na=# cluster participants using email_idx_2;CLUSTERa=# EXPLAIN (ANALYZE) select count(1) from (select email from participants where email=email group by email) x;\n\n\n\n                                                                       QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=3376.63..3376.64 rows=1 width=0) (actual time=942.118..942.119 rows=1 loops=1)   ->  Group  (cost=0.00..3314.54 rows=4967 width=16) (actual time=0.080..851.315 rows=1000000 loops=1)\n         ->  Index Only Scan using email_idx on participants  (cost=0.00..3289.54 rows=10000 width=16) (actual time=0.076..472.101 rows=1999935 loops=1)               Heap Fetches: 1999935 Total runtime: 942.159 ms\n(5 rows)Once I cluster and vacuum and analyze the table, I get this plan without having the partial index (just the regular index), without the \"where email=email\" and without disabling the bitmap scan or changing the page_cost parameters.\nI usually get this plan without the cluster, to.  I think it depends on the luck of the sampling in the autoanalyze.Cheers,Jeff", "msg_date": "Sun, 6 Apr 2014 10:26:18 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Slow Count-Distinct Query" }, { "msg_contents": "Hi Jeff,\n\nInstead what I get is the index only scan (to provide order) feeding into a\n> Group.\n>\n\nThat's interesting. We tested out in two versions of Postgres (9.2 and 9.3)\nin different Mac machines and ended up with index-only scan only after the\npartial index. I remember doing a vacuum full analyse after each and every\nstep.\n\n\n> I usually get this plan without the cluster, to. I think it depends on\n> the luck of the sampling in the autoanalyze.\n>\n>\nThat's interesting as well. I think something like increasing the sample\nsize would make it much better? 
Because, we had to perform so many steps to
get the index-only scan working, whereas it's really obvious for anyone to
guess that it should be the right approach. Also, in a far corner of my
mind, I'm wondering whether any OS-specific parameter would be
considered (and is different in your system compared to my system) when
coming up with plans and choosing one of them.

-- 
Thanks,
M. Varadharajan

------------------------------------------------

"Experience is what you get when you didn't get what you wanted"
 -By Prof. Randy Pausch in "The Last Lecture"

My Journal :- www.thinkasgeek.wordpress.com", "msg_date": "Mon, 7 Apr 2014 08:04:20 +0530", "msg_from": "Varadharajan Mukundan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Count-Distinct Query" } ]
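For reference, a minimal sketch of the two knobs this thread ends on — both numbers are illustrative assumptions, not tested recommendations:

    -- Jeff's point: give the hash aggregate its memory explicitly instead of
    -- tricking the row estimate with "where email=email" (session-local):
    SET work_mem = '256MB';
    SELECT count(*) FROM (SELECT email FROM participants GROUP BY email) x;
    RESET work_mem;

    -- Varadharajan's point about sampling luck: take a larger analyze sample
    -- for the column (the default statistics target is 100):
    ALTER TABLE participants ALTER COLUMN email SET STATISTICS 1000;
    ANALYZE participants;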
[ { "msg_contents": "Summary
We are porting an application to PostgreSQL. The application already
runs with DB2 (LUW version) and Oracle. One query in particular executes
slower on Postgres than it does on other database platforms, notably DB2
LUW and Oracle. (Please understand, we are not comparing databases here,
we are porting an application. We do not mean to start a flame war
here).

The table contains 9.9 million records. At the start of the query, all
values of the is_grc_002 column are NULL. We perform a manual Vacuum
Analyze on the table (from PgAdmin III) immediately before running the
query.

The query is slow compared to other database products. For example, DB2
LUW completes this query in about 10 minutes. Postgres needs close to 50
minutes to process the same query on the same data. Sometimes, Postgres
needs more than 2 hours.

The application performs an update query on every row
of the table. The exact SQL of this query is:

update t67cdi_nl_cmp_descr set is_grc_002='Y'

This post contains the data of two runs of the query. The first is with
explain analyze. The second run is with explain (analyze, buffers). Between the
runs, an explicit Vacuum Analyze was done on the table.

Observations
We tried removing the index on the field is_grc_002. That did not have a
big impact.

We tried removing all indexes. That reduces the runtime to ~3 minutes.
When we start to put indexes back, the run time of the query increases
again with each index added. 

All Postgres data is on a single disk. We know things can be optimized
further by dividing the disk load. But the other DB systems DB2, Oracle
worked in similar conditions.

Hypothesis
We have tried many things to solve this problem ourselves, but to no
avail so far. Our hypothesis is that
Postgres creates new records for all rows and then needs to update
all 15 indexes to make them point to the new rows. There does not seem
to be a way to avoid that.

Question:
- Is our hypothesis correct?
- Can the forum please advise us on possible ways to make the query
faster?

Any insight is much appreciated.


regards,

Hans Drexler




Full Table and Index Schema
The table referenced by the troublesome query is fully specified at the
end of this post, including all indexes.


Table Metadata
- No large objects
- Many columns do have a significant amount of null values.
- The table is filled, and then used for doing queries. No updates or
deletes after the table is filled.
- All 9.9M rows are inserted in one go before the querying starts.
- There are many indexes (see schema)
- There are no triggers or functions on this table.

History
This query has always been slow. We are trying to speed up this query so
as to make the product performance on PostgreSQL on par with other
databases.

Hardware
The speed difference between Postgres and other databases is experienced
on all hardware. 
In this case, the query runs on a laptop with 12GB ram\nand SSD disk.\n\n\nOutput of \"explain analyze\" is below:\n\"Update on t67cdi_nl_cmp_descr (cost=0.00..434456.10 rows=9863510\nwidth=237) (actual time=2821074.381..2821074.381 rows=0 loops=1)\"\n\" -> Seq Scan on t67cdi_nl_cmp_descr (cost=0.00..434456.10\nrows=9863510 width=237) (actual time=0.048..17104.003 rows=9863467\nloops=1)\"\n\"Total runtime: 2821078.939 ms\"\n\n\nOutput of \"explain analyze buffers\" is below:\n\"Update on t67cdi_nl_cmp_descr (cost=0.00..769880.31 rows=9864731\nwidth=236) (actual time=3216741.867..3216741.867 rows=0 loops=1)\"\n\" Buffers: shared hit=547205289 read=23073881 written=20825900\"\n\" -> Seq Scan on t67cdi_nl_cmp_descr (cost=0.00..769880.31\nrows=9864731 width=236) (actual time=2.074..29984.076 rows=9863467\nloops=1)\"\n\" Buffers: shared hit=1013 read=670220 written=320246\"\n\"Total runtime: 3216743.861 ms\"\n\n\nPostgreSQL version number you are running: \n\n\"PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.8.1-10ubuntu9) 4.8.1, 64-bit\"\n\nHow you installed PostgreSQL: apt-get (on Ubuntu)\n\nChanges made to the settings in the postgresql.conf file: \n\nshared_buffers = 512MB\nwork_mem = 50MB\nmaintenance_work_mem = 64MB\nsynchronous_commit = off\nwal_sync_method = fdatasync\nwal_buffers = -1\ncheckpoint_segments = 256\ncheckpoint_completion_target = 0.9\neffective_cache_size = 4GB\ndefault_statistics_target = 1000\ntrack_counts = on\nautovacuum = on\nlog_autovacuum_min_duration = 1000\nautovacuum_naptime = 1min\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\n\n\nOperating system and version:\nLinux hans-think 3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:22:01\nUTC 2014 x86_64 x86_64 x86_64 GNU/Linux\n\nWhat program you're using to connect to PostgreSQL: PgAdmin III\n \nIs there anything relevant or unusual in the PostgreSQL server logs?:\n \nFor questions about any kind of error: There is no error, just slowness.\n\n\nFull Table and Index Schema\n\n-- Table: t67cdi_nl_cmp_descr\n\n-- DROP TABLE t67cdi_nl_cmp_descr;\n\nCREATE TABLE t67cdi_nl_cmp_descr\n(\n id_001 character varying(285),\n is_grc_002 character varying(3),\n grc_id_003 character varying(108),\n src_id_004 character varying(108),\n src_name_005 character varying(30),\n adr_type_006 character varying(30),\n name_cw1_007 character varying(45),\n name_rest_008 character varying(150),\n name_compr_009 bytea,\n name_fo1_010 character varying(30),\n name_lf_011 character varying(30),\n street_cw_012 character varying(45),\n street_hnr_013 character varying(15),\n street_hna_014 character varying(30),\n street_fo_015 character varying(30),\n place_cw_016 character varying(45),\n place_pc_017 character varying(21),\n place_pc4_018 character varying(12),\n place_fo_019 character varying(30),\n email_020 character varying(240),\n phone_021 character varying(150),\n fax_022 character varying(150),\n establ_date_023 character varying(24),\n establ_year_024 character varying(12),\n coc_no_025 character varying(90),\n vat_no_026 character varying(90),\n country_027 character varying(30),\n emaildomain_028 character varying(120),\n internal_id_029 character varying(60),\n email5_030 character varying(15),\n duns_no_031 character varying(90),\n extref_no_032 character varying(90),\n coc_dosno_033 character varying(12),\n coc_brno_034 character varying(36),\n global_id_035 character varying(75),\n filter_036 character varying(3)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE 
t67cdi_nl_cmp_descr\n OWNER TO postgres;\n\n-- Index: t67cdi_nl_cmp_descrs_prkey\n\n-- DROP INDEX t67cdi_nl_cmp_descrs_prkey;\n\nCREATE INDEX t67cdi_nl_cmp_descrs_prkey\n ON t67cdi_nl_cmp_descr\n USING btree\n (id_001 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_coc\n\n-- DROP INDEX t67nl_cmp_coc;\n\nCREATE INDEX t67nl_cmp_coc\n ON t67cdi_nl_cmp_descr\n USING btree\n (coc_no_025 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_duns\n\n-- DROP INDEX t67nl_cmp_duns;\n\nCREATE INDEX t67nl_cmp_duns\n ON t67cdi_nl_cmp_descr\n USING btree\n (duns_no_031 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_email\n\n-- DROP INDEX t67nl_cmp_email;\n\nCREATE INDEX t67nl_cmp_email\n ON t67cdi_nl_cmp_descr\n USING btree\n (email5_030 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_glb_id\n\n-- DROP INDEX t67nl_cmp_glb_id;\n\nCREATE INDEX t67nl_cmp_glb_id\n ON t67cdi_nl_cmp_descr\n USING btree\n (global_id_035 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_is_grc\n\n-- DROP INDEX t67nl_cmp_is_grc;\n\nCREATE INDEX t67nl_cmp_is_grc\n ON t67cdi_nl_cmp_descr\n USING btree\n (is_grc_002 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_n_s_h\n\n-- DROP INDEX t67nl_cmp_n_s_h;\n\nCREATE INDEX t67nl_cmp_n_s_h\n ON t67cdi_nl_cmp_descr\n USING btree\n (name_fo1_010 COLLATE pg_catalog.\"default\", street_fo_015 COLLATE\npg_catalog.\"default\", street_hnr_013 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_nm_est\n\n-- DROP INDEX t67nl_cmp_nm_est;\n\nCREATE INDEX t67nl_cmp_nm_est\n ON t67cdi_nl_cmp_descr\n USING btree\n (name_fo1_010 COLLATE pg_catalog.\"default\", establ_year_024 COLLATE\npg_catalog.\"default\");\n\n-- Index: t67nl_cmp_nm_p_s\n\n-- DROP INDEX t67nl_cmp_nm_p_s;\n\nCREATE INDEX t67nl_cmp_nm_p_s\n ON t67cdi_nl_cmp_descr\n USING btree\n (name_fo1_010 COLLATE pg_catalog.\"default\", place_pc_017 COLLATE\npg_catalog.\"default\", street_fo_015 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_nm_pl\n\n-- DROP INDEX t67nl_cmp_nm_pl;\n\nCREATE INDEX t67nl_cmp_nm_pl\n ON t67cdi_nl_cmp_descr\n USING btree\n (name_fo1_010 COLLATE pg_catalog.\"default\", place_fo_019 COLLATE\npg_catalog.\"default\");\n\n-- Index: t67nl_cmp_p_s_h\n\n-- DROP INDEX t67nl_cmp_p_s_h;\n\nCREATE INDEX t67nl_cmp_p_s_h\n ON t67cdi_nl_cmp_descr\n USING btree\n (place_pc_017 COLLATE pg_catalog.\"default\", street_fo_015 COLLATE\npg_catalog.\"default\", street_hnr_013 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_pc6\n\n-- DROP INDEX t67nl_cmp_pc6;\n\nCREATE INDEX t67nl_cmp_pc6\n ON t67cdi_nl_cmp_descr\n USING btree\n (place_pc_017 COLLATE pg_catalog.\"default\");\n\n-- Index: t67nl_cmp_pc_hnr\n\n-- DROP INDEX t67nl_cmp_pc_hnr;\n\nCREATE INDEX t67nl_cmp_pc_hnr\n ON t67cdi_nl_cmp_descr\n USING btree\n (place_pc_017 COLLATE pg_catalog.\"default\", street_hnr_013 COLLATE\npg_catalog.\"default\");\n\n-- Index: t67nl_cmp_pl_st\n\n-- DROP INDEX t67nl_cmp_pl_st;\n\nCREATE INDEX t67nl_cmp_pl_st\n ON t67cdi_nl_cmp_descr\n USING btree\n (place_fo_019 COLLATE pg_catalog.\"default\", street_fo_015 COLLATE\npg_catalog.\"default\");\n\n-- Index: t67nl_cmp_src_id\n\n-- DROP INDEX t67nl_cmp_src_id;\n\nCREATE INDEX t67nl_cmp_src_id\n ON t67cdi_nl_cmp_descr\n USING btree\n (src_id_004 COLLATE pg_catalog.\"default\");\n\n\n\nOutput of manual vacuum just before running the query\n=====================================================\nINFO: vacuuming \"public.t67cdi_nl_cmp_descr\"\nINFO: index \"t67cdi_nl_cmp_descrs_prkey\" now contains 9863467 row\nversions in 48895 pages\nDETAIL: 0 index 
row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.08s/0.00u sec elapsed 0.09 sec.\nINFO: index \"t67nl_cmp_coc\" now contains 9863467 row versions in 27047\npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.00u sec elapsed 0.05 sec.\nINFO: index \"t67nl_cmp_duns\" now contains 9863467 row versions in 27047\npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.00u sec elapsed 0.05 sec.\nINFO: index \"t67nl_cmp_email\" now contains 9863467 row versions in\n27047 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.06s/0.00u sec elapsed 0.05 sec.\nINFO: index \"t67nl_cmp_glb_id\" now contains 9863467 row versions in\n27047 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/0.00u sec elapsed 0.05 sec.\nINFO: index \"t67nl_cmp_is_grc\" now contains 9863467 row versions in\n27047 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.06s/0.00u sec elapsed 0.05 sec.\nINFO: index \"t67nl_cmp_n_s_h\" now contains 9863467 row versions in\n43444 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/0.01u sec elapsed 0.08 sec.\nINFO: index \"t67nl_cmp_nm_est\" now contains 9863467 row versions in\n32855 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.02u sec elapsed 0.06 sec.\nINFO: index \"t67nl_cmp_nm_p_s\" now contains 9863467 row versions in\n49318 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.09s/0.00u sec elapsed 0.09 sec.\nINFO: index \"t67nl_cmp_nm_pl\" now contains 9863467 row versions in\n40094 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.03u sec elapsed 0.08 sec.\nINFO: index \"t67nl_cmp_p_s_h\" now contains 9863467 row versions in\n43500 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.06s/0.01u sec elapsed 0.08 sec.\nINFO: index \"t67nl_cmp_pc6\" now contains 9863467 row versions in 27047\npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.00u sec elapsed 0.05 sec.\nINFO: index \"t67nl_cmp_pc_hnr\" now contains 9863467 row versions in\n37978 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.05u sec elapsed 0.07 sec.\nINFO: index \"t67nl_cmp_pl_st\" now contains 9863467 row versions in\n39741 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/0.02u sec elapsed 0.07 sec.\nINFO: index \"t67nl_cmp_src_id\" now contains 9863467 row versions in\n37981 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.05s/0.02u sec elapsed 0.07 sec.\nINFO: \"t67cdi_nl_cmp_descr\": found 0 removable, 9863467 nonremovable\nrow versions in 335821 out of 335821 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 2.31s/0.87u sec elapsed 
8.13 sec.\nINFO: vacuuming \"pg_toast.pg_toast_18470\"\nINFO: index \"pg_toast_18470_index\" now contains 0 row versions in 1\npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_18470\": found 0 removable, 0 nonremovable row versions\nin 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.t67cdi_nl_cmp_descr\"\nINFO: \"t67cdi_nl_cmp_descr\": scanned 300000 of 335821 pages, containing\n8811404 live rows and 0 dead rows; 300000 rows in sample, 9863510\nestimated total rows\nTotal query runtime: 41066 ms.\n\nOutput of manual vacuum after running the query\n===============================================\nINFO: vacuuming \"public.t67cdi_nl_cmp_descr\"\nINFO: scanned index \"t67cdi_nl_cmp_descrs_prkey\" to remove 9863467 row\nversions\nDETAIL: CPU 1.57s/5.19u sec elapsed 46.73 sec.\nINFO: scanned index \"t67nl_cmp_coc\" to remove 9863467 row versions\nDETAIL: CPU 0.31s/4.31u sec elapsed 6.38 sec.\nINFO: scanned index \"t67nl_cmp_duns\" to remove 9863467 row versions\nDETAIL: CPU 0.28s/4.44u sec elapsed 6.52 sec.\nINFO: scanned index \"t67nl_cmp_email\" to remove 9863467 row versions\nDETAIL: CPU 0.58s/4.69u sec elapsed 16.53 sec.\nINFO: scanned index \"t67nl_cmp_glb_id\" to remove 9863467 row versions\nDETAIL: CPU 0.38s/4.31u sec elapsed 7.79 sec.\nINFO: scanned index \"t67nl_cmp_is_grc\" to remove 9863467 row versions\nDETAIL: CPU 0.42s/2.44u sec elapsed 5.46 sec.\nINFO: scanned index \"t67nl_cmp_n_s_h\" to remove 9863467 row versions\nDETAIL: CPU 1.27s/9.19u sec elapsed 43.46 sec.\nINFO: scanned index \"t67nl_cmp_nm_est\" to remove 9863467 row versions\nDETAIL: CPU 1.05s/7.71u sec elapsed 34.34 sec.\nINFO: scanned index \"t67nl_cmp_nm_p_s\" to remove 9863467 row versions\nDETAIL: CPU 1.37s/9.27u sec elapsed 47.96 sec.\nINFO: scanned index \"t67nl_cmp_nm_pl\" to remove 9863467 row versions\nDETAIL: CPU 1.17s/8.97u sec elapsed 40.85 sec.\nINFO: scanned index \"t67nl_cmp_p_s_h\" to remove 9863467 row versions\nDETAIL: CPU 1.21s/6.30u sec elapsed 41.02 sec.\nINFO: scanned index \"t67nl_cmp_pc6\" to remove 9863467 row versions\nDETAIL: CPU 0.75s/5.46u sec elapsed 29.20 sec.\nINFO: scanned index \"t67nl_cmp_pc_hnr\" to remove 9863467 row versions\nDETAIL: CPU 1.20s/5.88u sec elapsed 37.26 sec.\nINFO: scanned index \"t67nl_cmp_pl_st\" to remove 9863467 row versions\nDETAIL: CPU 1.12s/6.54u sec elapsed 39.78 sec.\nINFO: scanned index \"t67nl_cmp_src_id\" to remove 9863467 row versions\nDETAIL: CPU 1.09s/4.32u sec elapsed 33.92 sec.\nINFO: \"t67cdi_nl_cmp_descr\": removed 9863467 row versions in 335821\npages\nDETAIL: CPU 4.26s/1.85u sec elapsed 95.42 sec.\nINFO: index \"t67cdi_nl_cmp_descrs_prkey\" now contains 9863467 row\nversions in 106595 pages\nDETAIL: 9863467 index row versions were removed.\n7 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_coc\" now contains 9863467 row versions in 61281\npages\nDETAIL: 9863467 index row versions were removed.\n26947 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_duns\" now contains 9863467 row versions in 61334\npages\nDETAIL: 9863467 index row versions were removed.\n26948 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec 
elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_email\" now contains 9863467 row versions in\n62651 pages\nDETAIL: 9863467 index row versions were removed.\n20427 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_glb_id\" now contains 9863467 row versions in\n61337 pages\nDETAIL: 9863467 index row versions were removed.\n26947 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_is_grc\" now contains 9863467 row versions in\n61324 pages\nDETAIL: 9863467 index row versions were removed.\n26949 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_n_s_h\" now contains 9863467 row versions in\n91727 pages\nDETAIL: 9863467 index row versions were removed.\n45 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_nm_est\" now contains 9863467 row versions in\n81825 pages\nDETAIL: 9863467 index row versions were removed.\n8666 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_nm_p_s\" now contains 9863467 row versions in\n107385 pages\nDETAIL: 9863467 index row versions were removed.\n54 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_nm_pl\" now contains 9863467 row versions in\n89118 pages\nDETAIL: 9863467 index row versions were removed.\n989 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_p_s_h\" now contains 9863467 row versions in\n97583 pages\nDETAIL: 9863467 index row versions were removed.\n1801 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_pc6\" now contains 9863467 row versions in 63184\npages\nDETAIL: 9863467 index row versions were removed.\n1446 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_pc_hnr\" now contains 9863467 row versions in\n85071 pages\nDETAIL: 9863467 index row versions were removed.\n1609 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_pl_st\" now contains 9863467 row versions in\n97723 pages\nDETAIL: 9863467 index row versions were removed.\n4957 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"t67nl_cmp_src_id\" now contains 9863467 row versions in\n80756 pages\nDETAIL: 9863467 index row versions were removed.\n4 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"t67cdi_nl_cmp_descr\": found 471832 removable, 9863467\nnonremovable row versions in 671233 out of 671233 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 20.83s/92.41u sec elapsed 555.37 sec.\nINFO: vacuuming \"pg_toast.pg_toast_18470\"\nINFO: index \"pg_toast_18470_index\" now contains 0 row versions in 1\npages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_18470\": found 0 removable, 0 nonremovable row versions\nin 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused 
item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing "public.t67cdi_nl_cmp_descr"
INFO: "t67cdi_nl_cmp_descr": scanned 300000 of 671233 pages, containing
4409629 live rows and 0 dead rows; 300000 rows in sample, 9864731
estimated total rows
Total query runtime: 598517 ms.

-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 4 Apr 2014 12:00:31 +0000", "msg_from": "Hans Drexler <[email protected]>", "msg_from_op": true, "msg_subject": "Batch update query performance" }, { "msg_contents": "Hans Drexler wrote:
> We are porting an application to PostgreSQL. The application already
> runs with DB2 (LUW version) and Oracle. One query in particular executes
> slower on Postgres than it does on other database platforms, notably DB2
> LUW and Oracle. (Please understand, we are not comparing databases here,
> we are porting an application. We do not mean to start a flame war
> here).

[...]

> Postgres needs close to 50
> minutes to process the same query on the same data. Sometimes, Postgres
> needs more than 2 hours.
> 
> The application performs an update query on every row
> of the table. The exact SQL of this query is:
> 
> update t67cdi_nl_cmp_descr set is_grc_002='Y'

[...]

> We tried removing all indexes. That reduces the runtime to ~3 minutes.
> When we start to put indexes back, the run time of the query increases
> again with each index added.

Do I read that right that the duration of the update is reduced from
50 or 120 minutes to 3 when you drop all the indexes?

[...]

> Hypothesis
> We have tried many things to solve this problem ourselves, but to no
> avail so far. Our hypothesis is that
> Postgres creates new records for all rows and then needs to update
> all 15 indexes to make them point to the new rows. There does not seem
> to be a way to avoid that.
> 
> Question:
> - Is our hypothesis correct?
> - Can the forum please advise us on possible ways to make the query
> faster?

Your hypothesis may be correct.
What you could do is to create the table with a fillfactor of 50 or less
before populating it. Then 50% of the space will be left empty and
could be used to put the updated data on the same page as the original
data. That way you could take advantage of HOT (heap only tuples),
which will avoid the need to update indexes that do not reference the
updated column.

If I count right, you have got 15 indexes on this table.
Maybe you could check if you need them all.

Yours,
Laurenz Albe

-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Mon, 7 Apr 2014 12:06:30 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch update query performance" }, { "msg_contents": "On 04/07/2014 03:06 PM, Albe Laurenz wrote:
> Hans Drexler wrote:
>> Postgres needs close to 50
>> minutes to process the same query on the same data. Sometimes, Postgres
>> needs more than 2 hours.
>>
>> The application performs an update query on every row
>> of the table. 
The exact SQL of this query is:\n>>\n>> update t67cdi_nl_cmp_descr set is_grc_002='Y'\n>\n> [...]\n>\n>> We tried removing all indexes. That reduces the runtime to ~3 minutes.\n>> When we start to put indexes back, the run time of the query increases\n>> again with each index added.\n>\n> Do I read that right that the duration of the update is reduced from\n> 50 or 120 minutes to 3 when you drop all the indexes?\n\nIf that's true, you might be able to drop and re-create the indexes as \npart of the same transaction, and come out ahead. DROP/CREATE INDEX is \ntransactional in PostgreSQL, so you can do:\n\nBEGIN;\nDROP INDEX index1;\n...\nDROP INDEX index15;\nUPDATE t67cdi_nl_cmp_descr SET is_grc_002='Y'\nCREATE INDEX index1 ...;\n...\nCREATE INDEX index15 ...;\nCOMMIT;\n\nThis will take an AccessExclusiveLock on the table, though, so the table \nwill be inaccessible to concurrent queries while it's running.\n\nActually, since you are effectively rewriting the table anyway, you \ncould create a new table with same structure, insert all rows from the \nold table, with is_grc_002 set to 'Y', drop the old table, and rename \nthe new table into its place.\n\nDo all the rows really need to be updated? If some of the rows already \nhave is_grc_002='Y', you can avoid rewriting those rows by adding a \nWHERE-clause: WHERE NOT is_grc_002='Y' OR is_grc_002 IS NULL.\n\nYou could also play tricks with partitioning. Don't store the is_grc_002 \nrow in the table at all. Instead, create two tables, one for the rows \nthat implicitly have is_grc_002='Y' and another for all the other rows. \nThen create a view on the union of the two tables, which adds the \nis_grc_002 column. Instead of doing a full-table update, you can just \nalter the view to display is_grc_002='Y' for both tables (and add a new \ntable to hold new rows with is_grc_002<>'Y').\n\n>> Hypothesis\n>> we have tried many things to solve this problem ourselves, but to no\n>> avail so far. Our hypothesis is that\n>> the Postgres creates new records for all rows and then needs to update\n>> all 15 indexes to make them point to the new rows. There does not seem\n>> to be a way to avoid that.\n>>\n>> Question:\n>> - Is our hypothesis correct?\n>> - Can the forum please advise us on possible ways to make the query\n>> faster?\n>\n> Your hypothesis may be correct.\n\nYeah, sounds about right. A full-table UPDATE like that is pretty much \nthe worst-case scenario for PostgreSQL's MVCC system, unfortunately.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 07 Apr 2014 15:53:33 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch update query performance" }, { "msg_contents": "On Fri, Apr 4, 2014 at 5:00 AM, Hans Drexler <\[email protected]> wrote:\n\n>\n> update t67cdi_nl_cmp_descr set is_grc_002='Y'\n>\n> This post contains the data of two runs of the query. the first with\n> explain analyze. The second run is with explain buffers. Between the\n> runs, an explicit Vacuum Analyze was done on the table.\n>\n> Observations\n> We tried removing the index on the field is_grc_002. That did not have a\n> big impact.\n>\n\nTo benefit from HOT update, you need both spare room in the table, and to\nnot have an index on the updated column.\nSo just dropping the index is probably not enough for a full-table update\nas you don't have the spare room. 
You also have to populate the table with
a lower fillfactor, as has already been noted, as well as dropping the
index.

Is this update a one-time thing, or does the application do it on a regular
basis?

Cheers,

Jeff", "msg_date": "Mon, 7 Apr 2014 14:26:48 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch update query performance" } ]
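A minimal sketch of the fillfactor route Albe and Jeff describe above — the 50 is their illustrative figure, the new table name is an assumption, and reloading the table is assumed to be possible:

    -- Rebuild the table half-empty so an updated row version can stay on the
    -- same page as the original (a precondition for HOT updates):
    CREATE TABLE t67cdi_nl_cmp_descr_new (LIKE t67cdi_nl_cmp_descr)
        WITH (fillfactor = 50);
    INSERT INTO t67cdi_nl_cmp_descr_new SELECT * FROM t67cdi_nl_cmp_descr;
    -- Recreate every index except t67nl_cmp_is_grc: HOT only applies when no
    -- index references the updated column, so
    --     UPDATE t67cdi_nl_cmp_descr_new SET is_grc_002 = 'Y';
    -- can then skip maintaining the other 14 indexes.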
[ { "msg_contents": "Hello,

My question is about multiprocess and materialized View.
http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
I (will) have something like 3600 materialised views, and I would like to
know the way to refresh them in a multithreaded way
(understand 8 CPU cores -> 8 refresh processes at the same time)


Thanks a lot,", "msg_date": "Fri, 4 Apr 2014 18:29:42 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "On 4 April 2014 17:29, Nicolas Paris <[email protected]> wrote:
> Hello,
>
> My question is about multiprocess and materialized View.
> http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
> I (will) have something like 3600 materialised views, and I would like to
> know the way to refresh them in a multithreaded way
> (understand 8 CPU cores -> 8 refresh processes at the same time)

The only thing that immediately comes to mind would be running a
rather hacky DO function in 4 separate sessions:

DO $$
DECLARE
 session CONSTANT BIGINT := 0;
 rec RECORD;
BEGIN
 FOR rec IN SELECT quote_ident(nspname) || '.' ||
quote_ident(relname) AS mv FROM pg_class c INNER JOIN pg_namespace n
ON c.relnamespace = n.oid WHERE relkind = 'm' AND c.oid::bigint % 8 =
session LOOP
 RAISE NOTICE 'Refreshing materialized view: %', rec.mv;
 EXECUTE 'REFRESH MATERIALIZED VIEW ' || rec.mv || ';';
 END LOOP;
END$$ language plpgsql;

Where you would set session to 0 for the first session, 1 for the
next, 2 for the next and 3 for the next, and so on until you reach 7
for the last. These would each be run in a separate parallel session,
although someone may come up with a better solution.

-- 
Thom


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 4 Apr 2014 17:54:07 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" },
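In case it helps a later reader, a hedged sketch of one way to drive the DO-function approach above from the shell. Since DO blocks cannot take parameters, the loop is first wrapped in a plain function; the function name refresh_slice and the database name mydb are assumptions, not something posted in the thread:

    -- Hypothetical wrapper: Thom's loop as a function taking the slice number.
    CREATE FUNCTION refresh_slice(p_session bigint) RETURNS void AS $$
    DECLARE
        rec RECORD;
    BEGIN
        FOR rec IN SELECT quote_ident(nspname) || '.' || quote_ident(relname) AS mv
                   FROM pg_class c JOIN pg_namespace n ON c.relnamespace = n.oid
                   WHERE relkind = 'm' AND c.oid::bigint % 8 = p_session LOOP
            EXECUTE 'REFRESH MATERIALIZED VIEW ' || rec.mv;
        END LOOP;
    END $$ LANGUAGE plpgsql;

    # Eight parallel psql sessions, one slice each (bash):
    for s in 0 1 2 3 4 5 6 7; do
        psql mydb -c "SELECT refresh_slice($s);" &
    done
    wait    # block until all eight sessions finish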
{ "msg_contents": "Thanks,

"The only thing that immediately comes to mind would be running a
 rather hacky DO function in 4 separate sessions:"
You mean 8 sessions I guess.

8 separate sessions ?
Have you any idea how to manage sessions ? Is it possible to create
separate sessions internally ?
Do I have to make 8 external connections to the database, to get 8 processes ?
It would be great if I could manage sessions internally, in a PL/SQL function
for example.


Le 04/04/2014 18:54, Thom Brown a écrit :
> On 4 April 2014 17:29, Nicolas Paris <[email protected]> wrote:
>> Hello,
>>
>> My question is about multiprocess and materialized View.
>> http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
>> I (will) have something like 3600 materialised views, and I would like to
>> know the way to refresh them in a multithreaded way
>> (understand 8 CPU cores -> 8 refresh processes at the same time)
> 
> The only thing that immediately comes to mind would be running a
> rather hacky DO function in 4 separate sessions:
> 
> DO $$
> DECLARE
> session CONSTANT BIGINT := 0;
> rec RECORD;
> BEGIN
> FOR rec IN SELECT quote_ident(nspname) || '.' ||
> quote_ident(relname) AS mv FROM pg_class c INNER JOIN pg_namespace n
> ON c.relnamespace = n.oid WHERE relkind = 'm' AND c.oid::bigint % 8 =
> session LOOP
> RAISE NOTICE 'Refreshing materialized view: %', rec.mv;
> EXECUTE 'REFRESH MATERIALIZED VIEW ' || rec.mv || ';';
> END LOOP;
> END$$ language plpgsql;
> 
> Where you would set session to 0 for the first session, 1 for the
> next, 2 for the next and 3 for the next, and so on until you reach 7
> for the last. These would each be run in a separate parallel session,
> although someone may come up with a better solution.
> 



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 04 Apr 2014 21:49:12 +0200", "msg_from": "PARIS Nicolas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "On 4 April 2014 20:49, PARIS Nicolas <[email protected]> wrote:
> Thanks,
>
> "The only thing that immediately comes to mind would be running a
> rather hacky DO function in 4 separate sessions:"
> You mean 8 sessions I guess.

Yes, typo.

> 8 separate sessions ?
> Have you any idea how to manage sessions ? Is it possible to create
> separate sessions internally ?
> Do I have to make 8 external connections to the database, to get 8 processes ?
> It would be great if I could manage sessions internally, in a PL/SQL function
> for example.

Well you can't have multiple sessions per connection, so yes, you'd
need to issue each of them in separate connections.

I can't think of a more convenient way of doing it, but the solution
I've proposed isn't particularly elegant anyway.

-- 
Thom


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 4 Apr 2014 20:57:19 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "Ok thanks,

And what about triggers. 
8 triggers based on the same event won't be
multithreaded ?




Le 04/04/2014 21:57, Thom Brown a écrit :
> On 4 April 2014 20:49, PARIS Nicolas <[email protected]> wrote:
>> Thanks,
>>
>> "The only thing that immediately comes to mind would be running a
>> rather hacky DO function in 4 separate sessions:"
>> You mean 8 sessions I guess.
> 
> Yes, typo.
> 
>> 8 separate sessions ?
>> Have you any idea how to manage sessions ? Is it possible to create
>> separate sessions internally ?
>> Do I have to make 8 external connections to the database, to get 8 processes ?
>> It would be great if I could manage sessions internally, in a PL/SQL function
>> for example.
> 
> Well you can't have multiple sessions per connection, so yes, you'd
> need to issue each of them in separate connections.
> 
> I can't think of a more convenient way of doing it, but the solution
> I've proposed isn't particularly elegant anyway.
> 



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 04 Apr 2014 22:07:02 +0200", "msg_from": "PARIS Nicolas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "On 4 April 2014 21:07, PARIS Nicolas <[email protected]> wrote:
> Ok thanks,
>
> And what about triggers. 8 triggers based on the same event won't be
> multithreaded ?

I'm not clear on how triggers come into this. You can't have triggers
on materialized views, and they don't fire triggers on tables or views
that they are based on.
-- 
Thom


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 4 Apr 2014 21:14:26 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "this postgres documentation :
http://www.postgresql.org/docs/9.3/static/ecpg-connect.html
says it is actually possible to manage connections in a C stored procedure.

I may be wrong...


Le 04/04/2014 22:14, Thom Brown a écrit :
> I'm not clear on how triggers come into this. You can't have triggers
> on materialized views, and they don't fire triggers on tables or views
> that they are based on.



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 04 Apr 2014 22:26:22 +0200", "msg_from": "PARIS Nicolas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "On Fri, Apr 04, 2014 at 10:26:22PM +0200, PARIS Nicolas wrote:
> this postgres documentation :
> http://www.postgresql.org/docs/9.3/static/ecpg-connect.html
> says it is actually possible to manage connections in a C stored procedure.
> 
> I may be wrong...
> 
> 
> Le 04/04/2014 22:14, Thom Brown a écrit :
> > I'm not clear on how triggers come into this. You can't have triggers
> > on materialized views, and they don't fire triggers on tables or views
> > that they are based on.
> 

Hi,

I do not know if it can be used in this fashion, but could pl/proxy be
used by defining a cluster to be the same server and use a partitioned
remote call? Someone with pl/proxy experience may have more information.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 4 Apr 2014 15:34:46 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" },
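For what it's worth, a rough sketch of how ken's pl/proxy idea might look — every name below is an assumption, and the cluster configuration (eight partitions that all connect back to the same server) is omitted:

    -- Hypothetical plproxy dispatcher: hash each view name onto one of the
    -- cluster's partitions, which here would all point at this same database.
    CREATE FUNCTION refresh_one_mv(mv_name text) RETURNS void AS $$
        CLUSTER 'selfcluster';    -- assumed 8-partition cluster definition
        RUN ON hashtext(mv_name); -- pick a partition by hashing the name
        TARGET do_refresh_mv;     -- assumed plain plpgsql function that runs
                                  -- EXECUTE 'REFRESH MATERIALIZED VIEW ' || mv_name
    $$ LANGUAGE plproxy;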
{ "msg_contents": "On 4 April 2014 21:26, PARIS Nicolas <[email protected]> wrote:
> this postgres documentation :
> http://www.postgresql.org/docs/9.3/static/ecpg-connect.html
> says it is actually possible to manage connections in a C stored procedure.
>
> I may be wrong...

That page doesn't refer to triggers at all, so I'm still not sure what you mean.

-- 
Thom


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Sun, 6 Apr 2014 20:07:41 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "Right, not referring to triggers; it seems to be a kind of mixed C/SQL compiled (=
external) procedure.
To conclude :
- pl/proxy, it appears difficult, and not designed for this.
- pgAgent (supposed to apply jobs in a multithreaded way)
- bash (xargs does the job)
- external scripts (R, python, perl...)

So I will test pgAgent and report back on it.


Thanks

Le 06/04/2014 21:07, Thom Brown a écrit :
> On 4 April 2014 21:26, PARIS Nicolas <[email protected]> wrote:
>> this postgres documentation :
>> http://www.postgresql.org/docs/9.3/static/ecpg-connect.html
>> says it is actually possible to manage connections in a C stored procedure.
>>
>> I may be wrong...
> 
> That page doesn't refer to triggers at all, so I'm still not sure what you mean.
> 



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Mon, 07 Apr 2014 00:21:43 +0200", "msg_from": "PARIS Nicolas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "On 04 Apr 2014, at 18:29, Nicolas Paris <[email protected]> wrote:

> Hello,
> 
> My question is about multiprocess and materialized View.
> http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
> I (will) have something like 3600 materialised views, and I would like to know the way to refresh them in a multithreaded way
> (understand 8 CPU cores -> 8 refresh processes at the same time)

Hi Nick,

out of DB solution: 

1. Produce a text file which contains the 3600 refresh commands you want to run in parallel. You can do that with select and format() if you don't have a list already. 

2. I'm going to simulate your 3600 'refresh' commands here with some select and sleep statements that finish at unknown times.

(In BASH): 
 for i in {1..3600} ; do echo \"echo \\\"select pg_sleep(1+random()::int*10); select $i\\\" | psql mydb\" ; done > 3600commands

3. Install Gnu Parallel and type: 

parallel < 3600commands

4. Parallel will automatically work out the appropriate number of cores/threads for your CPUs, or you can control it manually with -j. 
It will also give you a live progress report if you use --progress.
e.g.
this command balances 8 jobs at a time, prints a dynamic progress report and dumps stdout to /dev/null\n\nparallel -j 8 --progress < 3600commands > /dev/null\n\n5. If you want to make debugging easier use the parameter --tag to tag output for each command. \n\nOf course it would be much more elegant if someone implemented something like Gnu Parallel inside postgres or psql ... :-)\n\nHope this helps & have a nice day, \n\nGraeme.\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 7 Apr 2014 10:29:34 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "Hello,\nThanks for this clear explanation !\n\nThen I have a sub-question :\nSupposed I have 3600 materialised views say 600 mat views from 6 main\ntable. (A,B,C,D,E,F are repetead 600 times with some differences)\nIs it faster to :\n1) parallel refresh 600 time A, then 600 time B etc,\nOR\n2) parallel refresh 600 time A,B,C,D,E,F\n\nI guess 1) is faster because they are 600 access to same table loaded in\nmemory ? But do parallel access to the same table implies concurency\n and bad performance ?\n\nThanks\n\nNicolas PARIS\n\n\n2014-04-07 12:29 GMT+02:00 Graeme B. Bell <[email protected]>:\n\n> On 04 Apr 2014, at 18:29, Nicolas Paris <[email protected]> wrote:\n>\n> > Hello,\n> >\n> > My question is about multiprocess and materialized View.\n> >\n> http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html\n> > I (will) have something like 3600 materialised views, and I would like\n> to know the way to refresh them in a multithread way\n> > (anderstand 8 cpu cores -> 8 refresh process in the same time)\n>\n> Hi Nick,\n>\n> out of DB solution:\n>\n> 1. Produce a text file which contains the 3600 refresh commands you want\n> to run in parallel. You can do that with select and format() if you don't\n> have a list already.\n>\n> 2. I'm going to simulate your 3600 'refresh' commands here with some\n> select and sleep statements that finish at unknown times.\n>\n> (In BASH):\n> for i in {1..3600} ; do echo \"echo \\\"select\n> pg_sleep(1+random()::int*10); select $i\\\" | psql mydb\" ; done > 3600commands\n>\n> 3. Install Gnu Parallel and type:\n>\n> parallel < 3600commands\n>\n> 4. Parallel will automatically work out the appropriate number of\n> cores/threads for your CPUs, or you can control it manually with -j.\n> It will also give you a live progress report if you use --progress.\n> e.g. this command balances 8 jobs at a time, prints a dynamic progress\n> report and dumps stdout to /dev/null\n>\n> parallel -j 8 --progress < 3600commands > /dev/null\n>\n> 5. If you want to make debugging easier use the parameter --tag to tag\n> output for each command.\n>\n> Of course it would be much more elegant if someone implemented something\n> like Gnu Parallel inside postgres or psql ... :-)\n>\n> Hope this helps & have a nice day,\n>\n> Graeme.\n>\n>\n>\n>\n>\n>\n\nHello,Thanks for this clear explanation !\nThen I have a sub-question :Supposed I have 3600 materialised views say 600 mat views from 6 main table. (A,B,C,D,E,F are repetead 600 times with some differences) \nIs it faster to :1) parallel refresh  600 time A, then 600 time B etc,\nOR2) parallel refresh  600 time A,B,C,D,E,F\nI guess 1) is faster because they are 600 access to same table loaded in memory ? 
But do parallel access to the same table implies concurency\n and bad performance ?ThanksNicolas PARIS\n2014-04-07 12:29 GMT+02:00 Graeme B. Bell <[email protected]>:\nOn 04 Apr 2014, at 18:29, Nicolas Paris <[email protected]> wrote:\n\n> Hello,\n>\n> My question is about multiprocess and materialized View.\n> http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html\n> I (will) have something like 3600 materialised views, and I would like to know the way to refresh them in a multithread way\n> (anderstand 8 cpu cores -> 8 refresh process  in the same time)\n\nHi Nick,\n\nout of DB solution:\n\n1. Produce a text file which contains the 3600 refresh commands you want to run in parallel. You can do that with select and format() if you don't have a list already.\n\n2. I'm going to simulate your 3600 'refresh' commands here with some select and sleep statements that finish at unknown times.\n\n(In BASH):\n  for i in {1..3600} ; do echo \"echo \\\"select pg_sleep(1+random()::int*10); select $i\\\" | psql mydb\" ; done > 3600commands\n\n3. Install Gnu Parallel     and type:\n\nparallel < 3600commands\n\n4. Parallel will automatically work out the appropriate number of cores/threads for your CPUs, or you can control it manually with -j.\nIt will also give you a live progress report if you use --progress.\ne.g. this command balances 8 jobs at a time, prints a dynamic progress report and dumps stdout to /dev/null\n\nparallel -j 8 --progress  < 3600commands > /dev/null\n\n5. If you want to make debugging easier use the parameter --tag to tag output for each command.\n\nOf course it would be much more elegant if someone implemented something like Gnu Parallel inside postgres or psql ... :-)\n\nHope this helps & have a nice day,\n\nGraeme.", "msg_date": "Mon, 7 Apr 2014 14:49:12 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "\nHi again Nick.\n\nGlad it helped. \n\nGenerally, I would expect that doing all the A's first, then all the B's, and so on, would be fastest since you can re-use the data from cache.\n\nConcurrency when reading isn't generally a problem. Lots of things can read at the same time and it will be nice and fast.\nIt's concurrent writes or concurrent read/write of the same data item that causes problems with locking. That shouldn't be happening here, judging by your description.\n\nIf possible, try to make sure nothing is modifying those source tables A/B/C/D/E/F when you are doing your view refresh.\n\nGraeme.\n\nOn 07 Apr 2014, at 14:49, Nicolas Paris <[email protected]> wrote:\n\n> Hello,\n> Thanks for this clear explanation !\n> \n> Then I have a sub-question :\n> Supposed I have 3600 materialised views say 600 mat views from 6 main table. (A,B,C,D,E,F are repetead 600 times with some differences) \n> Is it faster to :\n> 1) parallel refresh 600 time A, then 600 time B etc,\n> OR\n> 2) parallel refresh 600 time A,B,C,D,E,F\n> \n> I guess 1) is faster because they are 600 access to same table loaded in memory ? But do parallel access to the same table implies concurency\n> and bad performance ?\n> \n> Thanks\n> \n> Nicolas PARIS\n> \n> \n> 2014-04-07 12:29 GMT+02:00 Graeme B. 
Bell <[email protected]>:\n> On 04 Apr 2014, at 18:29, Nicolas Paris <[email protected]> wrote:\n> \n> > Hello,\n> >\n> > My question is about multiprocess and materialized View.\n> > http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html\n> > I (will) have something like 3600 materialised views, and I would like to know the way to refresh them in a multithread way\n> > (anderstand 8 cpu cores -> 8 refresh process  in the same time)\n> \n> Hi Nick,\n> \n> out of DB solution:\n> \n> 1. Produce a text file which contains the 3600 refresh commands you want to run in parallel. You can do that with select and format() if you don't have a list already.\n> \n> 2. I'm going to simulate your 3600 'refresh' commands here with some select and sleep statements that finish at unknown times.\n> \n> (In BASH):\n>   for i in {1..3600} ; do echo \"echo \\\"select pg_sleep(1+random()::int*10); select $i\\\" | psql mydb\" ; done > 3600commands\n> \n> 3. Install Gnu Parallel and type:\n> \n> parallel < 3600commands\n> \n> 4. Parallel will automatically work out the appropriate number of cores/threads for your CPUs, or you can control it manually with -j.\n> It will also give you a live progress report if you use --progress.\n> e.g. this command balances 8 jobs at a time, prints a dynamic progress report and dumps stdout to /dev/null\n> \n> parallel -j 8 --progress  < 3600commands > /dev/null\n> \n> 5. If you want to make debugging easier use the parameter --tag to tag output for each command.\n> \n> Of course it would be much more elegant if someone implemented something like Gnu Parallel inside postgres or psql ... :-)\n> \n> Hope this helps & have a nice day,\n> \n> Graeme.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 7 Apr 2014 12:59:25 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "Excellent.\n\nMaybe one last sub-question:\n\nThose 3600 mat views do have *indexes*.\nI guess I will get better performance by *dropping the indexes* first, then\nrefreshing, then *re-creating the indexes*.\n\nAre there other ways to improve performance (like mat view storage\nparameters <http://www.postgresql.org/docs/9.3/static/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS>),\nbecause this routine will run at night and needs to finish quickly?\n\nThanks\n\nNicolas PARIS\n\n\n2014-04-07 14:59 GMT+02:00 Graeme B. Bell <[email protected]>:\n\n>\n> Hi again Nick.\n>\n> Glad it helped.\n>\n> Generally, I would expect that doing all the A's first, then all the B's,\n> and so on, would be fastest since you can re-use the data from cache.\n>\n> Concurrency when reading isn't generally a problem. Lots of things can\n> read at the same time and it will be nice and fast.\n> It's concurrent writes or concurrent read/write of the same data item that\n> causes problems with locking. That shouldn't be happening here, judging by\n> your description.\n>\n> If possible, try to make sure nothing is modifying those source tables\n> A/B/C/D/E/F when you are doing your view refresh.\n>\n> Graeme.\n>\n> On 07 Apr 2014, at 14:49, Nicolas Paris <[email protected]> wrote:\n>\n> > Hello,\n> > Thanks for this clear explanation !\n> >\n> > Then I have a sub-question :\n> > Supposed I have 3600 materialised views say 600 mat views from 6 main\n> table. (A,B,C,D,E,F are repetead 600 times with some differences)\n> > Is it faster to :\n> > 1) parallel refresh  600 time A, then 600 time B etc,\n> > OR\n> > 2) parallel refresh  600 time A,B,C,D,E,F\n> >\n> > I guess 1) is faster because they are 600 access to same table loaded in\n> memory ? But do parallel access to the same table implies concurency\n> > and bad performance ?\n> >\n> > Thanks\n> >\n> > Nicolas PARIS\n> >\n> >\n> > 2014-04-07 12:29 GMT+02:00 Graeme B. Bell <[email protected]>:\n> > On 04 Apr 2014, at 18:29, Nicolas Paris <[email protected]> wrote:\n> >\n> > > Hello,\n> > >\n> > > My question is about multiprocess and materialized View.\n> > >\n> http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html\n> > > I (will) have something like 3600 materialised views, and I would like\n> to know the way to refresh them in a multithread way\n> > > (anderstand 8 cpu cores -> 8 refresh process  in the same time)\n> >\n> > Hi Nick,\n> >\n> > out of DB solution:\n> >\n> > 1. Produce a text file which contains the 3600 refresh commands you want\n> to run in parallel. You can do that with select and format() if you don't\n> have a list already.\n> >\n> > 2. I'm going to simulate your 3600 'refresh' commands here with some\n> select and sleep statements that finish at unknown times.\n> >\n> > (In BASH):\n> >   for i in {1..3600} ; do echo \"echo \\\"select\n> pg_sleep(1+random()::int*10); select $i\\\" | psql mydb\" ; done > 3600commands\n> >\n> > 3. Install Gnu Parallel and type:\n> >\n> > parallel < 3600commands\n> >\n> > 4. Parallel will automatically work out the appropriate number of\n> cores/threads for your CPUs, or you can control it manually with -j.\n> > It will also give you a live progress report if you use --progress.\n> > e.g. this command balances 8 jobs at a time, prints a dynamic progress\n> report and dumps stdout to /dev/null\n> >\n> > parallel -j 8 --progress  < 3600commands > /dev/null\n> >\n> > 5. If you want to make debugging easier use the parameter --tag to tag\n> output for each command.\n> >\n> > Of course it would be much more elegant if someone implemented something\n> like Gnu Parallel inside postgres or psql ... :-)\n> >\n> > Hope this helps & have a nice day,\n> >\n> > Graeme.\n> >\n", "msg_date": "Mon, 7 Apr 2014 15:56:43 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" }, { "msg_contents": "\n- http://wiki.postgresql.org/wiki/Performance_Optimization\n- run it on the most powerful machine you can find\n- get some more memory\n- get a big (512-1TB) SSD drive\n- avoid recalculating the same things over and over. if your views have many similar elements, then calculate those first into a partial result, then build the final views from the partial result.\n- make sure your source tables are fully indexed and have good statistics\n- run all the views once with \\timing and keep track of how long they took.
Fix the slow ones.\n\nG\n\n\nOn 07 Apr 2014, at 15:56, Nicolas Paris <[email protected]> wrote:\n\n> Excellent.\n> \n> Maybe the last sub-question :\n> \n> Those 3600 mat views do have indexes. \n> I guess I will get better performances in dropping indexes first, then refresh, then re-creating indexes.\n> \n> Are there other way to improve performances (like mat views storage parameters), because this routines will be at night, and need to be finished quickly.\n> \n> Thanks\n> \n> Nicolas PARIS\n> \n> \n> 2014-04-07 14:59 GMT+02:00 Graeme B. Bell <[email protected]>:\n> \n> Hi again Nick.\n> \n> Glad it helped.\n> \n> Generally, I would expect that doing all the A's first, then all the B's, and so on, would be fastest since you can re-use the data from cache.\n> \n> Concurrency when reading isn't generally a problem. Lots of things can read at the same time and it will be nice and fast.\n> It's concurrent writes or concurrent read/write of the same data item that causes problems with locking. That shouldn't be happening here, judging by your description.\n> \n> If possible, try to make sure nothing is modifying those source tables A/B/C/D/E/F when you are doing your view refresh.\n> \n> Graeme.\n> \n> On 07 Apr 2014, at 14:49, Nicolas Paris <[email protected]> wrote:\n> \n> > Hello,\n> > Thanks for this clear explanation !\n> >\n> > Then I have a sub-question :\n> > Supposed I have 3600 materialised views say 600 mat views from 6 main table. (A,B,C,D,E,F are repetead 600 times with some differences)\n> > Is it faster to :\n> > 1) parallel refresh 600 time A, then 600 time B etc,\n> > OR\n> > 2) parallel refresh 600 time A,B,C,D,E,F\n> >\n> > I guess 1) is faster because they are 600 access to same table loaded in memory ? But do parallel access to the same table implies concurency\n> > and bad performance ?\n> >\n> > Thanks\n> >\n> > Nicolas PARIS\n> >\n> >\n> > 2014-04-07 12:29 GMT+02:00 Graeme B. Bell <[email protected]>:\n> > On 04 Apr 2014, at 18:29, Nicolas Paris <[email protected]> wrote:\n> >\n> > > Hello,\n> > >\n> > > My question is about multiprocess and materialized View.\n> > > http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html\n> > > I (will) have something like 3600 materialised views, and I would like to know the way to refresh them in a multithread way\n> > > (anderstand 8 cpu cores -> 8 refresh process in the same time)\n> >\n> > Hi Nick,\n> >\n> > out of DB solution:\n> >\n> > 1. Produce a text file which contains the 3600 refresh commands you want to run in parallel. You can do that with select and format() if you don't have a list already.\n> >\n> > 2. I'm going to simulate your 3600 'refresh' commands here with some select and sleep statements that finish at unknown times.\n> >\n> > (In BASH):\n> > for i in {1..3600} ; do echo \"echo \\\"select pg_sleep(1+random()::int*10); select $i\\\" | psql mydb\" ; done > 3600commands\n> >\n> > 3. Install Gnu Parallel and type:\n> >\n> > parallel < 3600commands\n> >\n> > 4. Parallel will automatically work out the appropriate number of cores/threads for your CPUs, or you can control it manually with -j.\n> > It will also give you a live progress report if you use --progress.\n> > e.g. this command balances 8 jobs at a time, prints a dynamic progress report and dumps stdout to /dev/null\n> >\n> > parallel -j 8 --progress < 3600commands > /dev/null\n> >\n> > 5. 
If you want to make debugging easier use the parameter --tag to tag output for each command.\n> >\n> > Of course it would be much more elegant if someone implemented something like Gnu Parallel inside postgres or psql ... :-)\n> >\n> > Hope this helps & have a nice day,\n> >\n> > Graeme.\n> >\n> >\n> >\n> >\n> >\n> >\n> \n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 7 Apr 2014 14:05:11 +0000", "msg_from": "\"Graeme B. Bell\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGSQL 9.3 - Materialized View - multithreading" } ]
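A minimal SQL sketch of step 1 above for the materialized-view case, rather than the pg_sleep simulation: it generates one shell command per materialized view from the pg_matviews catalog (present in 9.3) and orders them by name, so that with a naming scheme such as mv_a_001 ... mv_f_600 all views over table A are refreshed before those over B — the cache-friendly ordering recommended in the thread. The database name mydb and the public schema are illustrative assumptions, not details from the thread.

```sql
-- Run inside psql; writes one shell command per materialized view
-- to the file 3600commands, ready for GNU Parallel.
\t on
\o 3600commands
SELECT format('echo "REFRESH MATERIALIZED VIEW %I.%I;" | psql mydb',
              schemaname, matviewname)
FROM pg_matviews
WHERE schemaname = 'public'   -- adjust to your schemas
ORDER BY matviewname;         -- groups views on the same source table
\o
\t off
```

The resulting file feeds straight into `parallel -j 8 --progress < 3600commands` as described above.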
[ { "msg_contents": "Hi all,\n\nDisclaimer: this question probably belongs on the hackers list, but the \ninstructions say you have to try somewhere else first... toss-up between \nthis list and a bug report; list seemed more appropriate as a starting \npoint. Happy to file a bug if that's more appropriate, though.\n\nThis is with pgsql-9.3.4, x86_64-linux, home-built with `./configure \n--prefix=...' and gcc-4.7.\nTPC-C courtesy of oltpbenchmark.com. 12WH TPC-C, 24 clients.\n\nI get a strange behavior across repeated runs: each 100-second run is a \nbit slower than the one preceding it, when run with SSI (SERIALIZABLE). \nSwitching to SI (REPEATABLE_READ) removes the problem, so it's \napparently not due to the database growing. The database is completely \nshut down (pg_ctl stop) between runs, but the data lives in tmpfs, so \nthere's no I/O problem here. 64GB RAM, so no paging, either.\n\nNote that this slowdown is in addition to the 30% performance from using \nSSI on my 24-core machine. I understand that the latter is a known \nbottleneck; my question is why the bottleneck should get worse over time:\n\nWith SI, I get ~4.4ktps, consistently.\nWith SSI, I get 3.9, 3.8, 3.4. 3.3, 3.1, 2.9, ...\n\nSo the question: what should I look for to diagnose/triage this problem? \nI'm willing to do some legwork, but have no idea where to go next.\n\nI've tried linux perf, but all it says is that lots of time is going to \nLWLock (but callgraph tracing doesn't work in my not-bleeding-edge \nkernel). Looking through the logs, the abort rates due to SSI aren't \nchanging in any obvious way. I've been hacking on SSI for over a month \nnow as part of a research project, and am fairly familiar with \npredicate.c, but I don't see any obvious reason this behavior should \narise (in particular, SLRU storage seems to be re-initialized every time \nthe postmaster restarts, so there shouldn't be any particular memory \neffect due to SIREAD locks). 
I'm also familiar with both Cahill's and \nPorts/Grittner's published descriptions of SSI, but again, nothing \nobvious jumps out.\n\nIn my experience this sort of behavior indicates a type of bug where \nfixing it would have a large impact on performance (because the early \n\"damage\" is done so quickly that even the very first run doesn't live up \nto its true potential).\n\n$ cat pgsql.conf\nshared_buffers = 8GB\nsynchronous_commit = off\ncheckpoint_segments = 64\nmax_pred_locks_per_transaction = 2000\ndefault_statistics_target = 100\nmaintenance_work_mem = 2GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 40GB\nwork_mem = 1920MB\nwal_buffers = 16MB\n\nThanks,\nRyan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 05 Apr 2014 22:25:13 -0400", "msg_from": "Ryan Johnson <[email protected]>", "msg_from_op": true, "msg_subject": "SSI slows down over time" }, { "msg_contents": "On 04/06/2014 05:25 AM, Ryan Johnson wrote:\n> I've tried linux perf, but all it says is that lots of time is going to\n> LWLock (but callgraph tracing doesn't work in my not-bleeding-edge\n> kernel).\n\nMake sure you compile with the \"-fno-omit-frame-pointer\" flag.\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 06 Apr 2014 11:30:46 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSI slows down over time" }, { "msg_contents": "Ryan Johnson <[email protected]> writes:\n> I get a strange behavior across repeated runs: each 100-second run is a \n> bit slower than the one preceding it, when run with SSI (SERIALIZABLE). \n> ... So the question: what should I look for to diagnose/triage this problem? \n\nIn the past I've seen behaviors like this that traced to the range of\n\"interesting\" transaction IDs getting wider as time went on, so that\nmore pages of pg_clog were hot, leading to more I/O traffic in the\nclog SLRU buffers. Possibly this is some effect like that.\n\n> I've tried linux perf, but all it says is that lots of time is going to \n> LWLock (but callgraph tracing doesn't work in my not-bleeding-edge \n> kernel).\n\nYou could recompile with -DLWLOCK_STATS to enable gathering stats on\nwhich LWLocks are heavily contended. That would be a starting point\nfor localizing the cause.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 06 Apr 2014 10:55:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSI slows down over time" }, { "msg_contents": "On 06/04/2014 4:30 AM, Heikki Linnakangas wrote:\n> On 04/06/2014 05:25 AM, Ryan Johnson wrote:\n>> I've tried linux perf, but all it says is that lots of time is going to\n>> LWLock (but callgraph tracing doesn't work in my not-bleeding-edge\n>> kernel).\n>\n> Make sure you compile with the \"-fno-omit-frame-pointer\" flag.\nOh, right. Forgot about that. With -g in place, it shows 5% of time \ngoing to LWLockAcquire, virtually all of that in PredicateLockPage. 
\nHowever, top reports only 50% utilization, so I'd *really* like to have \nstack traces and \"counts\" for the stack traces the worker thread spends its \ntime blocked in. I don't know a way to get that out of perf, though. \nIt's near-trivial to do in dtrace, but unfortunately I don't have a \nSolaris or BSD machine handy.\n\nDoes pgsql give a way to trace latencies due to LWLock?\n\nThanks,\nRyan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 07 Apr 2014 06:52:54 -0400", "msg_from": "Ryan Johnson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSI slows down over time" }, { "msg_contents": "On 06/04/2014 10:55 AM, Tom Lane wrote:\n> Ryan Johnson <[email protected]> writes:\n>> I get a strange behavior across repeated runs: each 100-second run is a\n>> bit slower than the one preceding it, when run with SSI (SERIALIZABLE).\n>> ... So the question: what should I look for to diagnose/triage this problem?\n> In the past I've seen behaviors like this that traced to the range of\n> \"interesting\" transaction IDs getting wider as time went on, so that\n> more pages of pg_clog were hot, leading to more I/O traffic in the\n> clog SLRU buffers. Possibly this is some effect like that.\nThe effect disappears completely if I run under SI instead of SSI, \nthough. That makes me suspect strongly that the problem lurks in \nSSI-specific infrastructure.\n\nHowever, I did notice that the SLRU buffer that holds \"old\" SSI \ntransactions sometimes spikes from hundreds to millions of entries (by \nannotating the source to ereport a warning whenever the difference \nbetween buffer head and tail is at least 10% higher than the previous \nrecord). Not sure if that's related, though: I'm pretty sure SSI never \nscans SLRU, it's only used for random lookups.\n\n>> I've tried linux perf, but all it says is that lots of time is going to\n>> LWLock (but callgraph tracing doesn't work in my not-bleeding-edge\n>> kernel).\n> You could recompile with -DLWLOCK_STATS to enable gathering stats on\n> which LWLocks are heavily contended.
That would be a starting point\n> for localizing the cause.\n\nHere are the offenders (100-second run, 24 clients, ~2.2ktps):\n> lwlock 7 shacq 0 exacq 7002810 blk 896235 spindelay 213\n> lwlock 28 shacq 94984166 exacq 3938085 blk 572762 spindelay 163\n> lwlock 65 shacq 3347000 exacq 2933440 blk 255927 spindelay 90\n> lwlock 79 shacq 1742574 exacq 3009663 blk 216178 spindelay 41\n> lwlock 76 shacq 2293386 exacq 2892242 blk 205199 spindelay 70\n> lwlock 66 shacq 498909 exacq 2987485 blk 171508 spindelay 48\n> lwlock 80 shacq 512107 exacq 3181753 blk 165772 spindelay 43\n> lwlock 71 shacq 815733 exacq 3088157 blk 165579 spindelay 48\n> lwlock 74 shacq 603321 exacq 3065391 blk 159953 spindelay 56\n> lwlock 67 shacq 695465 exacq 2918970 blk 149339 spindelay 28\n> lwlock 69 shacq 411203 exacq 3044007 blk 148655 spindelay 34\n> lwlock 72 shacq 515260 exacq 2973321 blk 147533 spindelay 43\n> lwlock 30 shacq 41628636 exacq 8799 blk 143889 spindelay 186\n> lwlock 75 shacq 409472 exacq 2987227 blk 143196 spindelay 38\n> lwlock 77 shacq 409401 exacq 2946972 blk 139507 spindelay 34\n> lwlock 73 shacq 402544 exacq 2943467 blk 139380 spindelay 43\n> lwlock 78 shacq 404220 exacq 2912665 blk 137625 spindelay 21\n> lwlock 70 shacq 603643 exacq 2816730 blk 135851 spindelay 37\n> lwlock 68 shacq 403533 exacq 2862017 blk 131946 spindelay 30\n> lwlock 29 shacq 0 exacq 255302 blk 75838 spindelay 1\n> lwlock 0 shacq 0 exacq 561508 blk 12445 spindelay 3\n> lwlock 11 shacq 1245499 exacq 219717 blk 5501 spindelay 10\n> lwlock 4 shacq 381211 exacq 209146 blk 1273 spindelay 4\n> lwlock 3 shacq 16 exacq 209081 blk 522 spindelay 0\n> lwlock 8 shacq 0 exacq 137961 blk 50 spindelay 0\n> lwlock 2097366 shacq 0 exacq 384586 blk 1 spindelay 0\n> lwlock 2097365 shacq 0 exacq 370176 blk 1 spindelay 0\n> lwlock 2097359 shacq 0 exacq 363845 blk 1 spindelay 0\n\nThe above aggregates the per-lock stats from all processes, filters out \nlocks with fewer than 10000 accesses (shared+exclusive) or with zero \nblk, then sorts by highest blk first.\n\nAccording to [1], locks {28, 29, 30} are {SerializableXactHashLock, \nSerializableFinishedListLock, SerializablePredicateLockListLock}, all \nSSI-related; locks 65-80 are the sixteen PredicateLockMgrLocks that the \npost mentions. Looking in lwlock.h, lock 7 (which tops the list) is the \nWALInsertLock. 
That lock was *not* mentioned in the pgsql-hackers post.\n\nRe-running the same analysis for SI instead of SSI gives 4.6ktps and a \nmuch shorter list:\n> lwlock 7 shacq 0 exacq 14050121 blk 3429384 spindelay 347\n> lwlock 11 shacq 3133994 exacq 450325 blk 23456 spindelay 29\n> lwlock 0 shacq 0 exacq 684775 blk 19158 spindelay 3\n> lwlock 4 shacq 780846 exacq 428771 blk 4539 spindelay 6\n> lwlock 3 shacq 19 exacq 428705 blk 1147 spindelay 0\n> lwlock 59 shacq 0 exacq 125943 blk 203 spindelay 0\n> lwlock 8 shacq 0 exacq 287629 blk 116 spindelay 0\n> lwlock 2097358 shacq 0 exacq 752361 blk 1 spindelay 0\n> lwlock 2097355 shacq 0 exacq 755086 blk 1 spindelay 0\n> lwlock 2097352 shacq 0 exacq 760232 blk 1 spindelay 0\n\nHowever, all of this only confirms that SSI has a lock bottleneck; it \ndoesn't say why the bottleneck gets worse over time.\n\n[1] \nhttp://www.postgresql.org/message-id/CA+TgmoYAiSM2jWEndReY5PL0sKbhgg7dbDH6r=oXKYzi9B7KJA@mail.gmail.com\n\nThoughts?\nRyan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 07 Apr 2014 10:11:37 -0400", "msg_from": "Ryan Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSI slows down over time" }, { "msg_contents": "On 05/04/2014 10:25 PM, Ryan Johnson wrote:\n> Hi all,\n>\n> Disclaimer: this question probably belongs on the hackers list, but \n> the instructions say you have to try somewhere else first... toss-up \n> between this list and a bug report; list seemed more appropriate as a \n> starting point. Happy to file a bug if that's more appropriate, though.\n>\n> This is with pgsql-9.3.4, x86_64-linux, home-built with `./configure \n> --prefix=...' and gcc-4.7.\n> TPC-C courtesy of oltpbenchmark.com. 12WH TPC-C, 24 clients.\n>\n> I get a strange behavior across repeated runs: each 100-second run is \n> a bit slower than the one preceding it, when run with SSI \n> (SERIALIZABLE). Switching to SI (REPEATABLE_READ) removes the problem, \n> so it's apparently not due to the database growing. The database is \n> completely shut down (pg_ctl stop) between runs, but the data lives in \n> tmpfs, so there's no I/O problem here. 64GB RAM, so no paging, either.\n\nThe plot thickens...\n\nI just had a run die with an out of (tmpfs) disk space error; the \npg_serial directory occupies 16GB, or 64825 segments (just under the 65k \nlimit for SLRU). A bit of source diving confirms that this is the \nbacking store for the OldSerXid SLRU that SSI uses. I'm not sure what \nwould prevent SLRU space from being reclaimed, though, given that a \ncomplete, clean, database shut-down happens between every run. In \ntheory, all SSI info can be forgotten any time there are no serializable \ntransactions in the system.\n\nI nuked the pgsql data directory and started over, and started firing \noff 30-second runs (with pg_ctl start/stop in between each). On about \nthe sixth run, throughput dropped to ~200tps and the benchmark harness \nterminated with an assertion error.
I didn't see anything interesting in \nthe server logs (the database shut down normally), but the pg_serial \ndirectory had ballooned from ~100kB to 8GB.\n\nI tried to repro, and a series of 30-second runs gave the following \nthroughputs (tps):\n*4615\n3155 3149 3115 3206 3162 3069 3005 2978 2953 **308\n2871 2876 2838 2853 2817 2768 2736 2782 2732 2833\n2749 2675 2771 2700 2675 2682 2647 2572 2626 2567\n*4394\n\nThat ** entry was the 8GB blow-up again. All files in the directory had \nbeen created at the same time (= not during a previous run), and \npersisted through the runs that followed. There was also a run where \nabort rates jumped through the roof (~40k aborts rather than the usual \n2000 or so), with a huge number of \"out of shared memory\" errors; \napparently max_predicate_locks=2000 wasn't high enough.\n\nThe two * entries were produced by runs under SI, and confirm that the \nrest of the system has not been slowing down nearly as much as SSI. SI \nthroughput dropped by 5% as the database quadrupled in size. SSI \nthroughput dropped by 23% during the same interval. And this was \nactually one of the better sets of runs; I had a few last week that \ndipped below 1ktps.\n\nI'm not sure what to make of this, thoughts?\n\nRyan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 07 Apr 2014 10:38:52 -0400", "msg_from": "Ryan Johnson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSI slows down over time" }, { "msg_contents": "On Mon, Apr 7, 2014 at 10:38:52AM -0400, Ryan Johnson wrote:\n> The two * entries were produced by runs under SI, and confirm that\n> the rest of the system has not been slowing down nearly as much as\n> SSI. SI throughput dropped by 5% as the database quadrupled in size.\n> SSI throughput dropped by 23% during the same interval. And this was\n> actually one of the better sets of runs; I had a few last week that\n> dipped below 1ktps.\n> \n> I'm not sure what to make of this, thoughts?\n\nSeems it is time to ask on hackers.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 9 Apr 2014 17:21:05 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSI slows down over time" }, { "msg_contents": "On 09/04/2014 5:21 PM, Bruce Momjian wrote:\n> On Mon, Apr 7, 2014 at 10:38:52AM -0400, Ryan Johnson wrote:\n>> The two * entries were produced by runs under SI, and confirm that\n>> the rest of the system has not been slowing down nearly as much as\n>> SSI. SI throughput dropped by 5% as the database quadrupled in size.\n>> SSI throughput dropped by 23% during the same interval. And this was\n>> actually one of the better sets of runs; I had a few last week that\n>> dipped below 1ktps.\n>>\n>> I'm not sure what to make of this, thoughts?\n> Seems it is time to ask on hackers.\nAck.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Apr 2014 08:59:02 -0400", "msg_from": "Ryan Johnson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSI slows down over time" } ]
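For anyone trying to reproduce the pg_serial growth reported above, two hypothetical monitoring queries (assumptions, not taken from the thread): each SLRU segment file is 32 pages of 8 kB, i.e. 256 kB, which is consistent with the reported 64825 segments being just under 16 GB; pg_ls_dir requires superuser.

```sql
-- Approximate on-disk size of the SSI SLRU (the OldSerXid backing store),
-- counting segment files in the pg_serial directory.
SELECT count(*)                          AS segments,
       pg_size_pretty(count(*) * 262144) AS approx_size
FROM pg_ls_dir('pg_serial') AS t(segment);

-- SIREAD (predicate) locks currently held, broken down by granularity;
-- steady growth here would point at predicate locks not being released.
SELECT locktype, count(*)
FROM pg_locks
WHERE mode = 'SIReadLock'
GROUP BY locktype
ORDER BY count(*) DESC;
```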
[ { "msg_contents": "While waiting for a query to finish (activated through a web interface), I\nran the same query using psql through a ssh-connection with much different\nruntimes.\n\nI have configured the server to log queries taking more than five seconds\nand in the log the query for which I waited was logged as:\n\n2014-04-07 12:01:38 SAST LOG: duration: 466754.684 ms plan:\n Query Text: SELECT isi_alt_names.code FROM rresearch,\nisi_alt_names WHERE ((((UPPER(rresearch.ny) = 'GUANGZHOU') AND\\\n (UPPER(rresearch.nu) = 'PEOPLES R CHINA')) AND (isi_alt_names.rsc_id =\nrresearch.id)) AND (isi_alt_names.code IS NOT NULL)) \\\nORDER BY rresearch.id, isi_alt_names.id LIMIT 2 OFFSET 0;\n Limit (cost=384216.93..384216.94 rows=2 width=15)\n -> Sort (cost=384216.93..384244.77 rows=11137 width=15)\n Sort Key: rresearch.id, isi_alt_names.id\n -> Nested Loop (cost=138757.99..384105.56 rows=11137\nwidth=15)\n -> Bitmap Heap Scan on rresearch\n(cost=138757.99..161224.50 rows=11337 width=4)\n Recheck Cond: ((upper((ny)::text) =\n'GUANGZHOU'::text) AND (upper((nu)::text) = 'PEOPLES R CHINA'\\\n::text))\n -> BitmapAnd (cost=138757.99..138757.99\nrows=11337 width=0)\n -> Bitmap Index Scan on\nrresearch_ny_idx (cost=0.00..4930.62 rows=215233 width=0)\n Index Cond: (upper((ny)::text) =\n'GUANGZHOU'::text)\n -> Bitmap Index Scan on\nrresearch_nu_idx (cost=0.00..133821.46 rows=6229156 width=0)\n Index Cond: (upper((nu)::text) =\n'PEOPLES R CHINA'::text)\n -> Index Scan using isi_alt_countrynames_rsc_id_idx\non isi_alt_names (cost=0.00..19.65 rows=1 width=1\\\n5)\n Index Cond: (rsc_id = rresearch.id)\n Filter: (code IS NOT NULL)\n\n\nWhile this was going on, I only changed the query to include the schema\n(the web-based query used search_path) and ran it. Query Analyze said:\n\n\"Limit (cost=384288.35..384288.36 rows=2 width=15) (actual\ntime=2945.338..2945.340 rows=2 loops=1)\"\n\" Output: isi_alt_names.code, rresearch.id, isi_alt_names.id\"\n\" Buffers: shared hit=1408146\"\n\" -> Sort (cost=384288.35..384316.20 rows=11137 width=15) (actual\ntime=2945.338..2945.338 rows=2 loops=1)\"\n\" Output: isi_alt_names.code, rresearch.id, isi_alt_names.id\"\n\" Sort Key: rresearch.id, isi_alt_names.id\"\n\" Sort Method: top-N heapsort Memory: 25kB\"\n\" Buffers: shared hit=1408146\"\n\" -> Nested Loop (cost=138757.99..384176.98 rows=11137 width=15)\n(actual time=1530.875..2876.376 rows=241920 loops=1)\"\n\" Output: isi_alt_names.code, rresearch.id, isi_alt_names.id\"\n\" Buffers: shared hit=1408146\"\n\" -> Bitmap Heap Scan on isi.rresearch\n(cost=138757.99..161224.50 rows=11337 width=4) (actual\ntime=1530.848..1750.169 rows=241337 loops=1)\"\n\" Output: rresearch.id, rresearch.cn, rresearch.nf,\nrresearch.nc, rresearch.nd, rresearch.nn, rresearch.ny, rresearch.np,\nrresearch.nu, rresearch.nz, rresearch.uuid, rresearch.tsv\"\n\" Recheck Cond: ((upper((rresearch.ny)::text) =\n'GUANGZHOU'::text) AND (upper((rresearch.nu)::text) = 'PEOPLES R\nCHINA'::text))\"\n\" Buffers: shared hit=195242\"\n\" -> BitmapAnd (cost=138757.99..138757.99 rows=11337\nwidth=0) (actual time=1484.363..1484.363 rows=0 loops=1)\"\n\" Buffers: shared hit=31173\"\n\" -> Bitmap Index Scan on rresearch_ny_idx\n(cost=0.00..4930.62 rows=215233 width=0) (actual time=60.997..60.997\nrows=241354 loops=1)\"\n\" Index Cond: (upper((rresearch.ny)::text) =\n'GUANGZHOU'::text)\"\n\" Buffers: shared hit=1124\"\n\" -> Bitmap Index Scan on rresearch_nu_idx\n(cost=0.00..133821.46 rows=6229156 width=0) (actual time=1350.819..1350.819\nrows=6434248 loops=1)\"\n\" 
Index Cond: (upper((rresearch.nu)::text) =\n'PEOPLES R CHINA'::text)\"\n\"                                Buffers: shared hit=30049\"\n\"              ->  Index Scan using isi_alt_countrynames_rsc_id_idx on\nisi.isi_alt_names  (cost=0.00..19.65 rows=1 width=15) (actual\ntime=0.003..0.004 rows=1 loops=241337)\"\n\"                    Output: isi_alt_names.rsc_id, isi_alt_names.code,\nisi_alt_names.id, isi_alt_names.institution\"\n\"                    Index Cond: (isi_alt_names.rsc_id = rresearch.id)\"\n\"                    Filter: (isi_alt_names.code IS NOT NULL)\"\n\"                    Buffers: shared hit=1212904\"\n\"Total runtime: 2945.400 ms\"\n\nI then ran the query and the result was produced in about the same time\n(2945 ms).\n\nWhat can cause such a huge discrepancy?  I have checked and there was no\nother process blocking the query.\n\nRegards\nJohann\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you.  (Psalm 63:3)\n", "msg_date": "Mon, 7 Apr 2014 12:25:52 +0200", "msg_from": "Johann Spies <[email protected]>", "msg_from_op": true, "msg_subject": "The same query - much different runtimes" }, { "msg_contents": "On Mon, Apr 7, 2014 at 3:55 PM, Johann Spies <[email protected]> wrote:\n\n>\n>\n> I then ran the query and the result was produced in about the same time as\n> (2945 ms).\n>\n> What can cause such a huge discrepancy?\n>\n\nMaybe when you reran the query, most of the data blocks were cached either\nin the shared buffers or the OS cache. That could drastically improve the\nperformance. I can see a large number of shared buffer hits in the explain\nanalyze output of the query run through the psql session.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n", "msg_date": "Mon, 7 Apr 2014 16:28:45 +0530", "msg_from": "Pavan Deolasee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The same query - much different runtimes" } ]
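Pavan's cold-versus-warm-cache explanation can be checked directly with the pg_buffercache contrib module — an illustrative sketch, assuming a 9.1+ server (for CREATE EXTENSION) and using the table names from the plans above:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- How much of each table from the query is resident in shared_buffers.
SELECT c.relname,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
  AND c.relname IN ('rresearch', 'isi_alt_names')
GROUP BY c.relname;
```

Running the slow web query with EXPLAIN (ANALYZE, BUFFERS) and comparing the "shared read" (disk/OS cache) against "shared hit" (shared_buffers) counts across the two executions would show the same effect.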
[ { "msg_contents": "Hi,\n\nWe recently upgraded from pg 9.0.5 to 9.3.2 and we are observing much\nhigher load on our hot standbys (we have 3). As you can see from the query\nplans below, we have some queries that are running 4-5 times slower now,\nmany due to what looks like a bad plan in 9.3. Are there any known issues\nwith query plan regressions in 9.3? Any ideas about how I can get back the\nold planning behavior with 9.3.2?\n\nThanks in advance for any help!\n\nOn Production System\n----------------------*Postgres 9.3.2*\nIntel(R) Xeon(R) CPU E5649 (2.53 Ghz 6-core)\n12 GB RAM\nIntel 710 SSD\n\n---------------------\n\nexplain analyze select distinct on (t1.id) t1.id, t1.hostname as name,\nt1.active, t1.domain_id, t1.base, t1.port, t1.inter_domain_flag from\nlocation t1, host t2, container t3, resource_location t4 where t2.id =\n34725278 and t3.id = t2.container_id and t4.location_id = t1.id and\nt4.parent_id in (select * from parentContainers(t3.id)) and t1.license\nis not null and (t1.license_end_date is null or t1.license_end_date >=\ncurrent_date) and t1.active <> 0 and t3.active <> 0 and t4.active <> 0\nand t1.domain_id = t2.domain_id and t2.domain_id = t3.domain_id and\nt3.domain_id = t4.domain_id and (0 = 0 or t1.active <> 0);\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=313.44..313.45 rows=1 width=35) (actual\ntime=989.836..989.837 rows=1 loops=1)\n -> Sort (cost=313.44..313.44 rows=1 width=35) (actual\ntime=989.836..989.837 rows=1 loops=1)\n Sort Key: t1.id\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=1.27..313.43 rows=1 width=35) (actual\ntime=922.484..989.791 rows=1 loops=1)\n Join Filter: (SubPlan 1)\n Rows Removed by Join Filter: 742\n -> Nested Loop (cost=0.99..33.80 rows=1 width=53)\n(actual time=0.174..5.168 rows=934 loops=1)\n Join Filter: (t2.domain_id = t1.domain_id)\n -> Nested Loop (cost=0.71..11.23 rows=1\nwidth=18) (actual time=0.101..0.103 rows=1 loops=1)\n -> Index Scan using host_pkey on host t2\n(cost=0.29..5.29 rows=1 width=12) (actual time=0.041..0.042 rows=1\nloops=1)\n Index Cond: (id = 34725278::numeric)\n -> Index Scan using container_pkey on\ncontainer t3 (cost=0.42..5.43 rows=1 width=12) (actual\ntime=0.057..0.058 rows=1 loops=1)\n Index Cond: (id = t2.container_id)\n Filter: ((active <> 0::numeric) AND\n(t2.domain_id = domain_id))\n -> Index Scan using idx_location_domain_id on\nlocation t1 (cost=0.28..18.55 rows=8 width=35) (actual\ntime=0.065..3.768 rows=934 loops=1)\n Index Cond: (domain_id = t3.domain_id)\n Filter: ((license IS NOT NULL) AND (active\n<> 0::numeric) AND ((license_end_date IS NULL) OR (license_end_date >=\n('now'::cstring)::date)))\n Rows Removed by Filter: 297\n -> Index Scan using idx_resource_location_domain_id on\nresource_location t4 (cost=0.28..27.63 rows=1 width=21) (actual\ntime=0.532..0.849 rows=1 loops=934)\n Index Cond: (domain_id = t1.domain_id)\n Filter: ((active <> 0::numeric) AND (t1.id = location_id))\n Rows Removed by Filter: 1003\n SubPlan 1\n -> Function Scan on parentcontainers\n(cost=0.25..500.25 rows=1000 width=32) (actual time=0.253..0.253\nrows=2 loops=743)\n Total runtime: 990.045 ms\n(26 rows)\n\n\nOn test box:\n----------------------*Postgres 9.0.2*\nIntel(R) Xeon(R) CPU E5345 (2.33 Ghz 4-core)\n8 GB RAM\n6 x SAS 10K RAID 10\n\n----------------------\n\nexplain analyze select distinct on (t1.id) t1.id, t1.hostname as 
name,\nt1.active, t1.domain_id, t1.base, t1.port, t1.inter_domain_flag from\nlocation t1, host t2, container t3, resource_location t4 where t2.id =\n34725278 and t3.id = t2.container_id and t4.location_id = t1.id and\nt4.parent_id in (select * from parentContainers(t3.id)) and t1.license\nis not null and (t1.license_end_date is null or t1.license_end_date >=\ncurrent_date) and t1.active <> 0 and t3.active <> 0 and t4.active <> 0\nand t1.domain_id = t2.domain_id and t2.domain_id = t3.domain_id and\nt3.domain_id = t4.domain_id and (0 = 0 or t1.active <> 0);\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=389.96..389.97 rows=1 width=1192) (actual\ntime=217.479..217.480 rows=1 loops=1)\n -> Sort (cost=389.96..389.97 rows=1 width=1192) (actual\ntime=217.477..217.477 rows=1 loops=1)\n Sort Key: t1.id\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=9.28..389.95 rows=1 width=1192)\n(actual time=103.359..217.437 rows=1 loops=1)\n Join Filter: ((t1.domain_id = t3.domain_id) AND (SubPlan 1))\n -> Nested Loop (cost=9.28..130.66 rows=1 width=1320)\n(actual time=18.494..29.577 rows=744 loops=1)\n Join Filter: (t2.domain_id = t1.domain_id)\n -> Nested Loop (cost=9.28..49.44 rows=12\nwidth=160) (actual time=18.434..21.279 rows=1000 loops=1)\n -> Index Scan using host_pkey on host t2\n(cost=0.00..7.26 rows=1 width=64) (actual time=0.054..0.055 rows=1\nloops=1)\n Index Cond: (id = 34725278::numeric)\n -> Bitmap Heap Scan on resource_location\nt4 (cost=9.28..36.15 rows=12 width=96) (actual time=18.370..20.638\nrows=1000 loops=1)\n Recheck Cond: (t4.domain_id = t2.domain_id)\n Filter: (t4.active <> 0::numeric)\n -> Bitmap Index Scan on\nidx_resource_location_domain_id (cost=0.00..9.28 rows=12 width=0)\n(actual time=10.377..10.377 rows=1004 loops=1)\n Index Cond: (t4.domain_id = t2.domain_id)\n -> Index Scan using location_pkey on location t1\n (cost=0.00..6.26 rows=1 width=1192) (actual time=0.006..0.007 rows=1\nloops=1000)\n Index Cond: (t1.id = t4.location_id)\n Filter: ((t1.license IS NOT NULL) AND\n(t1.active <> 0::numeric) AND ((t1.license_end_date IS NULL) OR\n(t1.license_end_date >= ('now'::text)::date)))\n -> Index Scan using container_pkey on container t3\n(cost=0.00..7.29 rows=1 width=64) (actual time=0.005..0.006 rows=1\nloops=744)\n Index Cond: (t3.id = t2.container_id)\n Filter: (t3.active <> 0::numeric)\n SubPlan 1\n -> Function Scan on parentcontainers\n(cost=0.25..500.25 rows=1000 width=32) (actual time=0.243..0.243\nrows=2 loops=744)\n Total runtime: 217.735 ms\n\nHi,We recently upgraded from pg 9.0.5 to 9.3.2 and we are observing much higher load on our hot standbys (we have 3).  As you can see from the query plans below, we have some queries that are running 4-5 times slower now, many due to what looks like a bad plan in 9.3.  Are there any known issues with query plan regressions in 9.3?  
", "msg_date": "Mon, 7 Apr 2014 14:34:27 -0400", "msg_from": "uher dslij <[email protected]>", "msg_from_op": true, "msg_subject": "Performance regressions in PG 9.3 vs PG 9.0" }, { "msg_contents": "As a follow up to this issue on Graeme's suggestion in a private email,\n\nI checked the statistics in both databases, and they were the same (these\nvalues as seen from pg_stat_user_tables do not seem to propagate to slave\ndatabases however). I even ran a manual analyze on the master database in\nthe 9.3.2 cluster; it did not affect query performance in the least.\n\nWe've installed all versions of postgres and tested the same query on the\nsame data:\n\nPG 9.0.x : 196 ms\nPG 9.1.13 : 181 ms\nPG 9.2.8 : 861 ms\nPG 9.3.4 : 861 ms\n\nThe EXPLAINs all pretty much look like my original post. The planner in\n9.2 and above is simply not using bitmap heap scans or bitmap index scans?\n What could be the reason for this?\n\nThanks in advance,\n\n\nOn Mon, Apr 7, 2014 at 2:34 PM, uher dslij <[email protected]> wrote:\n\n> Hi,\n>\n> We recently upgraded from pg 9.0.5 to 9.3.2 and we are observing much\n> higher load on our hot standbys (we have 3). As you can see from the query\n> plans below, we have some queries that are running 4-5 times slower now,\n> many due to what looks like a bad plan in 9.3. 
Are there any known issues\n> with query plan regressions in 9.3? Any ideas about how I can get back the\n> old planning behavior with 9.3.2?\n>\n> Thanks in advance for any help!\n>\n> On Production System\n> ----------------------*Postgres 9.3.2*\n> Intel(R) Xeon(R) CPU E5649 (2.53 Ghz 6-core)\n> 12 GB RAM\n> Intel 710 SSD\n>\n> ---------------------\n>\n> explain analyze select distinct on (t1.id) t1.id, t1.hostname as name, t1.active, t1.domain_id, t1.base, t1.port, t1.inter_domain_flag from location t1, host t2, container t3, resource_location t4 where t2.id = 34725278 and t3.id = t2.container_id and t4.location_id = t1.id and t4.parent_id in (select * from parentContainers(t3.id)) and t1.license is not null and (t1.license_end_date is null or t1.license_end_date >= current_date) and t1.active <> 0 and t3.active <> 0 and t4.active <> 0 and t1.domain_id = t2.domain_id and t2.domain_id = t3.domain_id and t3.domain_id = t4.domain_id and (0 = 0 or t1.active <> 0);\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=313.44..313.45 rows=1 width=35) (actual time=989.836..989.837 rows=1 loops=1)\n> -> Sort (cost=313.44..313.44 rows=1 width=35) (actual time=989.836..989.837 rows=1 loops=1)\n> Sort Key: t1.id\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=1.27..313.43 rows=1 width=35) (actual time=922.484..989.791 rows=1 loops=1)\n> Join Filter: (SubPlan 1)\n> Rows Removed by Join Filter: 742\n> -> Nested Loop (cost=0.99..33.80 rows=1 width=53) (actual time=0.174..5.168 rows=934 loops=1)\n> Join Filter: (t2.domain_id = t1.domain_id)\n> -> Nested Loop (cost=0.71..11.23 rows=1 width=18) (actual time=0.101..0.103 rows=1 loops=1)\n> -> Index Scan using host_pkey on host t2 (cost=0.29..5.29 rows=1 width=12) (actual time=0.041..0.042 rows=1 loops=1)\n> Index Cond: (id = 34725278::numeric)\n> -> Index Scan using container_pkey on container t3 (cost=0.42..5.43 rows=1 width=12) (actual time=0.057..0.058 rows=1 loops=1)\n> Index Cond: (id = t2.container_id)\n> Filter: ((active <> 0::numeric) AND (t2.domain_id = domain_id))\n> -> Index Scan using idx_location_domain_id on location t1 (cost=0.28..18.55 rows=8 width=35) (actual time=0.065..3.768 rows=934 loops=1)\n> Index Cond: (domain_id = t3.domain_id)\n> Filter: ((license IS NOT NULL) AND (active <> 0::numeric) AND ((license_end_date IS NULL) OR (license_end_date >= ('now'::cstring)::date)))\n> Rows Removed by Filter: 297\n> -> Index Scan using idx_resource_location_domain_id on resource_location t4 (cost=0.28..27.63 rows=1 width=21) (actual time=0.532..0.849 rows=1 loops=934)\n> Index Cond: (domain_id = t1.domain_id)\n> Filter: ((active <> 0::numeric) AND (t1.id = location_id))\n> Rows Removed by Filter: 1003\n> SubPlan 1\n> -> Function Scan on parentcontainers (cost=0.25..500.25 rows=1000 width=32) (actual time=0.253..0.253 rows=2 loops=743)\n> Total runtime: 990.045 ms\n> (26 rows)\n>\n>\n> On test box:\n> ----------------------*Postgres 9.0.2*\n> Intel(R) Xeon(R) CPU E5345 (2.33 Ghz 4-core)\n> 8 GB RAM\n> 6 x SAS 10K RAID 10\n>\n> ----------------------\n>\n> explain analyze select distinct on (t1.id) t1.id, t1.hostname as name, t1.active, t1.domain_id, t1.base, t1.port, t1.inter_domain_flag from location t1, host t2, container t3, resource_location t4 where t2.id = 34725278 and t3.id = t2.container_id and t4.location_id = t1.id and t4.parent_id in (select * from 
parentContainers(t3.id)) and t1.license is not null and (t1.license_end_date is null or t1.license_end_date >= current_date) and t1.active <> 0 and t3.active <> 0 and t4.active <> 0 and t1.domain_id = t2.domain_id and t2.domain_id = t3.domain_id and t3.domain_id = t4.domain_id and (0 = 0 or t1.active <> 0);\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=389.96..389.97 rows=1 width=1192) (actual time=217.479..217.480 rows=1 loops=1)\n> -> Sort (cost=389.96..389.97 rows=1 width=1192) (actual time=217.477..217.477 rows=1 loops=1)\n> Sort Key: t1.id\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=9.28..389.95 rows=1 width=1192) (actual time=103.359..217.437 rows=1 loops=1)\n> Join Filter: ((t1.domain_id = t3.domain_id) AND (SubPlan 1))\n> -> Nested Loop (cost=9.28..130.66 rows=1 width=1320) (actual time=18.494..29.577 rows=744 loops=1)\n> Join Filter: (t2.domain_id = t1.domain_id)\n> -> Nested Loop (cost=9.28..49.44 rows=12 width=160) (actual time=18.434..21.279 rows=1000 loops=1)\n> -> Index Scan using host_pkey on host t2 (cost=0.00..7.26 rows=1 width=64) (actual time=0.054..0.055 rows=1 loops=1)\n> Index Cond: (id = 34725278::numeric)\n> -> Bitmap Heap Scan on resource_location t4 (cost=9.28..36.15 rows=12 width=96) (actual time=18.370..20.638 rows=1000 loops=1)\n> Recheck Cond: (t4.domain_id = t2.domain_id)\n> Filter: (t4.active <> 0::numeric)\n> -> Bitmap Index Scan on idx_resource_location_domain_id (cost=0.00..9.28 rows=12 width=0) (actual time=10.377..10.377 rows=1004 loops=1)\n> Index Cond: (t4.domain_id = t2.domain_id)\n> -> Index Scan using location_pkey on location t1 (cost=0.00..6.26 rows=1 width=1192) (actual time=0.006..0.007 rows=1 loops=1000)\n> Index Cond: (t1.id = t4.location_id)\n> Filter: ((t1.license IS NOT NULL) AND (t1.active <> 0::numeric) AND ((t1.license_end_date IS NULL) OR (t1.license_end_date >= ('now'::text)::date)))\n> -> Index Scan using container_pkey on container t3 (cost=0.00..7.29 rows=1 width=64) (actual time=0.005..0.006 rows=1 loops=744)\n> Index Cond: (t3.id = t2.container_id)\n> Filter: (t3.active <> 0::numeric)\n> SubPlan 1\n> -> Function Scan on parentcontainers (cost=0.25..500.25 rows=1000 width=32) (actual time=0.243..0.243 rows=2 loops=744)\n> Total runtime: 217.735 ms\n>\n
", "msg_date": "Tue, 8 Apr 2014 15:58:52 -0400", "msg_from": "uher dslij <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance regressions in PG 9.3 vs PG 9.0" }, { "msg_contents": "uher dslij <[email protected]> writes:\n> The EXPLAINs all pretty much look like my original post. The planner in\n> 9.2 and above is simply not using bitmap heap scans or bitmap index scans?\n> What could be the reason for this?\n\nI don't see any reason to think this is a planner regression. The\nrowcount estimates are pretty far off in both versions; so it's just a\nmatter of luck that 9.0 is choosing a better join order than 9.3.\n\nI'd try cranking up the statistics targets for the join columns\n(particularly domain_id) and see if that leads to better estimates.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Apr 2014 17:26:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regressions in PG 9.3 vs PG 9.0" }, { "msg_contents": "Thanks for your reply Tom. 
I've found that the culprit is the function\nparentContainers(), which recurses up a folder structure and looks like\nthis:\n\ncreate function parentContainers(numeric) returns setof numeric\nas '\n select parentContainers( (select container_id from container where id =\n$1 ) )\nunion\nselect id from container where id = $1\n ' language sql stable returns null on null input;\n\n\nIt is declared stable, but I know that is just a planner hint, so it doesn't\nguarantee that it will only get called once. If I replace the function\ncall with the two values this function returns,\n\n\nOn Tue, Apr 8, 2014 at 5:26 PM, Tom Lane <[email protected]> wrote:\n\n> uher dslij <[email protected]> writes:\n> > The EXPLAINs all pretty much look like my original post. The planner in\n> > 9.2 and above is simply not using bitmap heap scans or bitmap index\n> scans?\n> > What could be the reason for this?\n>\n> I don't see any reason to think this is a planner regression. The\n> rowcount estimates are pretty far off in both versions; so it's just a\n> matter of luck that 9.0 is choosing a better join order than 9.3.\n>\n> I'd try cranking up the statistics targets for the join columns\n> (particularly domain_id) and see if that leads to better estimates.\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 8 Apr 2014 20:16:30 -0400", "msg_from": "uher dslij <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance regressions in PG 9.3 vs PG 9.0" }, { "msg_contents": "Sorry for the premature send on that last email. Here is the full one:\n\nThanks for your reply Tom. I've found that the culprit is the function\nparentContainers(), which recurses up a folder structure and looks like\nthis:\n\ncreate function parentContainers(numeric) returns setof numeric\nas '\n select parentContainers( (select container_id from container where id =\n$1 ) )\nunion\n select id from container where id = $1\n' language sql stable returns null on null input;\n\nIt is declared stable, but I know that STABLE is just a planner hint, so it\ndoesn't guarantee that it will only get called once. If I replace the\nfunction call with the two values this function returns, I get < 1 ms\nruntime on all versions of pg. 
So there is data to support the statement\nthat we were relying on planner luck before and that luck has run out.\n\nWhat is the best practice to ensure a stable function only gets called\nonce? Should I use a CTE to cache the result? Is there a better way?\n\nThanks in advance,\n\n\nOn Tue, Apr 8, 2014 at 5:26 PM, Tom Lane <[email protected]> wrote:\n\n> uher dslij <[email protected]> writes:\n> > The EXPLAINs all pretty much look like my original post. The planner in\n> > 9.2 and above is simply not using bitmap heap scans or bitmap index\n> scans?\n> > What could be the reason for this?\n>\n> I don't see any reason to think this is a planner regression. The\n> rowcount estimates are pretty far off in both versions; so it's just a\n> matter of luck that 9.0 is choosing a better join order than 9.3.\n>\n> I'd try cranking up the statistics targets for the join columns\n> (particularly domain_id) and see if that leads to better estimates.\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 8 Apr 2014 20:23:21 -0400", "msg_from": "uher dslij <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance regressions in PG 9.3 vs PG 9.0" }, { "msg_contents": "uher dslij <[email protected]> writes:\n> Thanks for your reply Tom. I've found that the culprit is the function\n> parentContainers(), which recurses up a folder structure and looks like\n> this:\n\nHmm ... I had noticed the execution of that in a subplan, but it appeared\nthat the subplan was being done the same number of times and took about\nthe same amount of time in both 9.0 and 9.3, so I'd discounted it as the\nsource of trouble. 
Still, it's hard to argue with experimental evidence.\n\n> create function parentContainers(numeric) returns setof numeric\n> as '\n> select parentContainers( (select container_id from container where id =\n> $1 ) )\n> union\n> select id from container where id = $1\n> ' language sql stable returns null on null input;\n\nYeah, that looks like performance trouble waiting to happen --- it's not\nclear what would bound the recursion, for one thing. Have you considered\nreplacing this with a RECURSIVE UNION construct? Wasn't there in 9.0\nof course, but 9.3 can do that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Apr 2014 21:39:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regressions in PG 9.3 vs PG 9.0" } ]
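
Spelled out, the RECURSIVE UNION rewrite Tom suggests might look roughly like the sketch below. It is untested and assumes container(id, container_id) with container_id pointing at the parent row, as the original function body implies; the UNION's deduplication plus the IS NOT NULL guard are what bound the recursion at the root.

create or replace function parentContainers(numeric) returns setof numeric
as $$
    with recursive parents(id) as (
        -- the container itself
        select id from container where id = $1
      union
        -- plus, repeatedly, the parent of anything already found
        select c.container_id
        from container c
        join parents p on p.id = c.id
        where c.container_id is not null
    )
    select id from parents
$$ language sql stable returns null on null input;

Inlined as a WITH RECURSIVE CTE directly in the calling query, the same logic would also spare the planner the opaque default cost estimate for the function scan (the rows=1000 guess visible in the plans above).
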
[ { "msg_contents": "Hi All,\n\nI have been looking for a solution to a problem where my query is executing for a long time because it is running into a nested loop problem.\n\nI have done explain analyze and it shows the query taking a very long time due to nested loops.\n\nOn the DB side, there are indices in place for all the required columns. By setting enable_nestloop to off there is a drastic increase in performance (from 40,000 ms to 600 ms), but I know this is not good practice.\n\nMy Postgres version is 9.3.2 on Linux.\n\nPlease find the link for the query plan below:\n\nhttp://explain.depesz.com/s/l9o\n\n\nAlso, find below the query that is being executed.\n\nSELECT DISTINCT\n \"Sektion/Fachbereich\".\"parent\",\n \"Studienfach\".\"ltxt\",\n SUM(CASE\n WHEN \"Studiengang\".\"faktor\" IS NOT NULL\n AND \"Studiengang\".\"faktor\" >= 0 THEN \"Studiengang\".\"faktor\" * \"Studierende\".\"summe\"\n ELSE \"Studierende\".\"summe\"\n END)\n\nFROM (\n SELECT sos_stg_aggr.tid_stg, sos_stg_aggr.ca12_staat, sos_stg_aggr.geschlecht, sos_stg_aggr.alter, sos_stg_aggr.hzbart, sos_stg_aggr.hmkfzkz, sos_stg_aggr.hmkfz, sos_stg_aggr.semkfzkz, sos_stg_aggr.semkfz, sos_stg_aggr.hzbkfzkz, sos_stg_aggr.hzbkfz, sos_stg_aggr.hrst, sos_stg_aggr.studiengang_nr, sos_stg_aggr.fach_nr, sos_stg_aggr.fach_sem_zahl, sos_stg_aggr.sem_rueck_beur_ein, sos_stg_aggr.kz_rueck_beur_ein, sos_stg_aggr.klinsem, sos_stg_aggr.hssem, sos_stg_aggr.stuart, sos_stg_aggr.stutyp, sos_stg_aggr.stufrm, sos_stg_aggr.stichtag, sos_stg_aggr.summe, sos_stg_aggr.hzbart_int, sos_stg_aggr.matrikel_nr, sos_stg_aggr.ch27_grund_beurl, sos_stg_aggr.ch62_grund_exmatr, sos_stg_aggr.hzbnote, textcat(sos_stg_aggr.studiengang_nr::text, sos_stg_aggr.fach_nr::text) AS koepfe_faelle\n FROM sos_stg_aggr\n\n union all\n\n SELECT sos_stg_aggr.tid_stg, sos_stg_aggr.ca12_staat, sos_stg_aggr.geschlecht, sos_stg_aggr.alter, sos_stg_aggr.hzbart, sos_stg_aggr.hmkfzkz, sos_stg_aggr.hmkfz, sos_stg_aggr.semkfzkz, sos_stg_aggr.semkfz, sos_stg_aggr.hzbkfzkz, sos_stg_aggr.hzbkfz, sos_stg_aggr.hrst, sos_stg_aggr.studiengang_nr, sos_stg_aggr.fach_nr, sos_stg_aggr.fach_sem_zahl, sos_stg_aggr.sem_rueck_beur_ein, sos_stg_aggr.kz_rueck_beur_ein, sos_stg_aggr.klinsem, sos_stg_aggr.hssem, sos_stg_aggr.stuart, sos_stg_aggr.stutyp, sos_stg_aggr.stufrm, sos_stg_aggr.stichtag, sos_stg_aggr.summe, sos_stg_aggr.hzbart_int, sos_stg_aggr.matrikel_nr, sos_stg_aggr.ch27_grund_beurl, sos_stg_aggr.ch62_grund_exmatr, sos_stg_aggr.hzbnote, '21' AS koepfe_faelle\n FROM sos_stg_aggr\n where sos_stg_aggr.tid_stg in (select distinct lehr_stg_ab_tid from lehr_stg_ab2fb)\n\n) AS \"Studierende\"\nINNER JOIN (\n select astat::integer, trim(druck) as druck from sos_k_status\n\n) AS \"Rückmeldestatus\"\nON (\n \"Studierende\".\"kz_rueck_beur_ein\" = \"Rückmeldestatus\".\"astat\"\n)\nINNER JOIN (\n select tid, trim(name) as name from sos_stichtag\n\n) AS \"Stichtag\"\nON (\n \"Studierende\".\"stichtag\" = \"Stichtag\".\"tid\"\n)\nINNER JOIN (\n select abschluss, kz_fach, stg, pversion, regel, trim(text) as text, fb, lehr, anteil, tid,null as faktor from lehr_stg_ab\nwhere lehr_stg_ab.tid not in (select lehr_stg_ab_tid from lehr_stg_ab2fb)\n\nunion\nselect abschluss, kz_fach, stg, pversion, regel, trim(text) as text, lehr_stg_ab2fb.fb, lehr, anteil, tid,faktor from lehr_stg_ab\ninner join lehr_stg_ab2fb\non lehr_stg_ab2fb.lehr_stg_ab_tid = lehr_stg_ab.tid\n\n) AS \"Studiengang\"\nON (\n \"Studierende\".\"tid_stg\" = \"Studiengang\".\"tid\"\n)\nINNER JOIN (\n select astat, astfr, astgrp, fb, trim(ltxt) 
as ltxt, stg from k_stg\n\n) AS \"Studienfach\"\nON (\n \"Studiengang\".\"stg\" = \"Studienfach\".\"stg\"\n)\nAND (\n \"Studienfach\".\"ltxt\" IS NOT NULL\n)\nINNER JOIN (\n select instnr, ch110_institut, btrim(druck) as druck, btrim(parent) as parent from unikn_k_fb\n\n) AS \"Sektion/Fachbereich\"\nON (\n \"Studiengang\".\"fb\" = \"Sektion/Fachbereich\".\"instnr\"\n)\nINNER JOIN (\n select apnr, trim(druck) as druck from cifx where key=613\n\n) AS \"Hörerstatus\"\nON (\n \"Studierende\".\"hrst\" = \"Hörerstatus\".\"apnr\"\n)\nWHERE\n(\n \"Sektion/Fachbereich\".\"druck\" = 'FB Biologie'\n)\nAND\n (\n (\n \"Hörerstatus\".\"druck\" = 'Haupthörer/in'\n AND \"Stichtag\".\"name\" = 'Amtl. Statistik Land'\n AND \"Rückmeldestatus\".\"druck\" IN ('Beurlaubung', 'Ersteinschreibung', 'Neueinschreibung', 'Rückmeldung')\n AND \"Studierende\".\"sem_rueck_beur_ein\" = 20132\n )\n)\nGROUP BY\n \"Sektion/Fachbereich\".\"parent\",\n \"Studienfach\".\"ltxt\"\n\n\nAccording to my analysis, the where clause after the Union All is taking a lot of time to execute.\n\nAny help with an alternative way to represent the query, or with identifying the cause of the issue, would be very helpful.\n\n\nThanks in advance,\nManoj\n", "msg_date": "Tue, 08 Apr 2014 11:35:39 +0200", "msg_from": "\"Manoj Gadi\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested loop issue" }, { "msg_contents": "REPLACE -- where sos_stg_aggr.tid_stg in (select distinct lehr_stg_ab_tid from lehr_stg_ab2fb)\n\nWITH -- where EXISTS (select 1 from lehr_stg_ab2fb where lehr_stg_ab2fb.lehr_stg_ab_tid = sos_stg_aggr.tid_stg)\n\nSimilarly for the others, e.g. -- lehr_stg_ab.tid not in (select lehr_stg_ab_tid from lehr_stg_ab2fb) -- with NOT EXISTS\n\n\nThis should surely improve performance, depending on the results from the inner query.\n\nRegards\nDhananjay\nOpenSCG\n\n\nOn Tuesday, 8 April 2014 3:06 PM, Manoj Gadi <[email protected]> wrote:\n \nHi All,\n\nI have been looking for a solution to a problem where my query is executing for a long time because it is running into a nested loop problem.\n\nI have done explain analyze and it shows the query taking a very long time due to nested loops.\n\nOn the DB side, there are indices in place for all the required columns. 
By setting nested loop off there is a drastic increase in performance (from 40,000 ms to 600 ms) but I know this is not a right practice.\n\nMy postgres version is 9.3.2 on linux.\n\nPlease find the link for the query plan below :\n\nhttp://explain.depesz.com/s/l9o\n\n\nAlso, find below the query that is being executed.\n\nSELECT DISTINCT\n  \"Sektion/Fachbereich\".\"parent\",\n  \"Studienfach\".\"ltxt\",\n  SUM(CASE\n      WHEN \"Studiengang\".\"faktor\" IS NOT NULL\n      AND \"Studiengang\".\"faktor\" >= 0 THEN \"Studiengang\".\"faktor\" * \"Studierende\".\"summe\"\n      ELSE \"Studierende\".\"summe\"\n  END)\n\nFROM (\n  SELECT sos_stg_aggr.tid_stg, sos_stg_aggr.ca12_staat, sos_stg_aggr.geschlecht, sos_stg_aggr.alter, sos_stg_aggr.hzbart, sos_stg_aggr.hmkfzkz, sos_stg_aggr.hmkfz, sos_stg_aggr.semkfzkz, sos_stg_aggr.semkfz, sos_stg_aggr.hzbkfzkz, sos_stg_aggr.hzbkfz, sos_stg_aggr.hrst, sos_stg_aggr.studiengang_nr, sos_stg_aggr.fach_nr, sos_stg_aggr.fach_sem_zahl, sos_stg_aggr.sem_rueck_beur_ein, sos_stg_aggr.kz_rueck_beur_ein, sos_stg_aggr.klinsem, sos_stg_aggr.hssem, sos_stg_aggr.stuart, sos_stg_aggr.stutyp, sos_stg_aggr.stufrm, sos_stg_aggr.stichtag, sos_stg_aggr.summe, sos_stg_aggr.hzbart_int, sos_stg_aggr.matrikel_nr, sos_stg_aggr.ch27_grund_beurl, sos_stg_aggr.ch62_grund_exmatr, sos_stg_aggr.hzbnote, textcat(sos_stg_aggr.studiengang_nr::text, sos_stg_aggr.fach_nr::text) AS koepfe_faelle\n  FROM sos_stg_aggr\n\n  union all\n\n  SELECT sos_stg_aggr.tid_stg, sos_stg_aggr.ca12_staat, sos_stg_aggr.geschlecht, sos_stg_aggr.alter, sos_stg_aggr.hzbart, sos_stg_aggr.hmkfzkz, sos_stg_aggr.hmkfz, sos_stg_aggr.semkfzkz, sos_stg_aggr.semkfz, sos_stg_aggr.hzbkfzkz, sos_stg_aggr.hzbkfz, sos_stg_aggr.hrst, sos_stg_aggr.studiengang_nr, sos_stg_aggr.fach_nr, sos_stg_aggr.fach_sem_zahl, sos_stg_aggr.sem_rueck_beur_ein, sos_stg_aggr.kz_rueck_beur_ein, sos_stg_aggr.klinsem, sos_stg_aggr.hssem, sos_stg_aggr.stuart, sos_stg_aggr.stutyp, sos_stg_aggr.stufrm, sos_stg_aggr.stichtag, sos_stg_aggr.summe, sos_stg_aggr.hzbart_int, sos_stg_aggr.matrikel_nr, sos_stg_aggr.ch27_grund_beurl, sos_stg_aggr.ch62_grund_exmatr, sos_stg_aggr.hzbnote, '21' AS koepfe_faelle\n  FROM sos_stg_aggr\n  where sos_stg_aggr.tid_stg in (select distinct lehr_stg_ab_tid from lehr_stg_ab2fb)\n\n) AS \"Studierende\"\nINNER JOIN (\n  select astat::integer, trim(druck) as druck from sos_k_status\n\n) AS \"Rückmeldestatus\"\nON (\n  \"Studierende\".\"kz_rueck_beur_ein\" = \"Rückmeldestatus\".\"astat\"\n)\nINNER JOIN (\n  select tid, trim(name) as name from sos_stichtag\n\n) AS \"Stichtag\"\nON (\n  \"Studierende\".\"stichtag\" = \"Stichtag\".\"tid\"\n)\nINNER JOIN (\n  select abschluss, kz_fach, stg, pversion, regel, trim(text) as text, fb, lehr, anteil, tid,null as faktor from lehr_stg_ab\nwhere lehr_stg_ab.tid not in (select lehr_stg_ab_tid from lehr_stg_ab2fb)\n\nunion\nselect abschluss, kz_fach, stg, pversion, regel, trim(text) as text, lehr_stg_ab2fb.fb, lehr, anteil, tid,faktor from lehr_stg_ab\ninner join lehr_stg_ab2fb\non lehr_stg_ab2fb.lehr_stg_ab_tid = lehr_stg_ab.tid\n\n) AS \"Studiengang\"\nON (\n  \"Studierende\".\"tid_stg\" = \"Studiengang\".\"tid\"\n)\nINNER JOIN (\n  select astat, astfr, astgrp, fb, trim(ltxt) as ltxt, stg from k_stg\n\n) AS \"Studienfach\"\nON (\n  \"Studiengang\".\"stg\" = \"Studienfach\".\"stg\"\n)\nAND (\n  \"Studienfach\".\"ltxt\" IS NOT NULL\n)\nINNER JOIN (\n  select instnr, ch110_institut, btrim(druck) as druck, btrim(parent) as parent from unikn_k_fb\n\n) AS \"Sektion/Fachbereich\"\nON (\n  
\"Studiengang\".\"fb\" = \"Sektion/Fachbereich\".\"instnr\"\n)\nINNER JOIN (\n  select apnr, trim(druck) as druck from cifx where key=613\n\n) AS \"Hörerstatus\"\nON (\n  \"Studierende\".\"hrst\" = \"Hörerstatus\".\"apnr\"\n)\nWHERE\n(\n  \"Sektion/Fachbereich\".\"druck\" = 'FB Biologie'\n)\nAND\n(\n  (\n      \"Hörerstatus\".\"druck\" = 'Haupthörer/in'\n      AND \"Stichtag\".\"name\" = 'Amtl. Statistik Land'\n      AND \"Rückmeldestatus\".\"druck\" IN ('Beurlaubung', 'Ersteinschreibung', 'Neueinschreibung', 'Rückmeldung')\n      AND \"Studierende\".\"sem_rueck_beur_ein\" = 20132\n  )\n)\nGROUP BY\n  \"Sektion/Fachbereich\".\"parent\",\n  \"Studienfach\".\"ltxt\"\n\n\nAccording to my analysis, the where clause after the Union All is taking a lot of time for execution.\n\nAny help with an alternative way to represent the query or what the cause of issue would be very helpful.\n\n\nThanks in advance,\nManoj\n
", "msg_date": "Wed, 9 Apr 2014 04:40:09 +0800 (SGT)", "msg_from": "Dhananjay Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop issue" } ]
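
Spelled out, the IN-to-EXISTS rewrite suggested above would look something like the sketch below (table and column names taken from Manoj's query; untested). EXISTS lets the planner use a semi-join rather than re-evaluating the subquery, and note that NOT IN is only equivalent to NOT EXISTS here if lehr_stg_ab_tid can never be NULL.

-- instead of:
--   where sos_stg_aggr.tid_stg in (select distinct lehr_stg_ab_tid from lehr_stg_ab2fb)
where exists (
    select 1 from lehr_stg_ab2fb fb
    where fb.lehr_stg_ab_tid = sos_stg_aggr.tid_stg
)

-- and instead of:
--   where lehr_stg_ab.tid not in (select lehr_stg_ab_tid from lehr_stg_ab2fb)
where not exists (
    select 1 from lehr_stg_ab2fb fb
    where fb.lehr_stg_ab_tid = lehr_stg_ab.tid
)
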
[ { "msg_contents": "Hi all. I have a function that uses a \"simple\" select between 3 tables. There is a function argument to help choose how a WHERE clause applies. This is the code section:\n\nselect * from....\n[...]\nwhere case $3 \n when 'I' then [filter 1]\n when 'E' then [filter 2]\n when 'P' then [filter 3]\nelse true end\n\nWhen the function is called with, say, parameter $3 = 'I', the function runs in 250ms,\nbut when there is no case involved, and I call directly \"with [filter 1]\", the function runs in 70ms.\n\nLooks like the CASE is doing something nasty.\nAny hints about this?\n\nThanks!\nGerardo\n", "msg_date": "Tue, 8 Apr 2014 08:53:41 -0300 (ART)", "msg_from": "Gerardo Herzig <[email protected]>", "msg_from_op": true, "msg_subject": "performance drop when function argument is evaluated in WHERE clause" }, { "msg_contents": "Gerardo Herzig <[email protected]> writes:\n> Hi all. I have a function that uses a \"simple\" select between 3 tables. There is a function argument to help choose how a WHERE clause applies. This is the code section:\n> select * from....\n> [...]\n> where case $3 \n> when 'I' then [filter 1]\n> when 'E' then [filter 2]\n> when 'P' then [filter 3]\n> else true end\n\n> When the function is called with, say, parameter $3 = 'I', the function runs in 250ms,\n> but when there is no case involved, and I call directly \"with [filter 1]\", the function runs in 70ms.\n\n> Looks like the CASE is doing something nasty.\n> Any hints about this?\n\nDon't do it like that. You're preventing the optimizer from understanding\nwhich filter applies. Better to write three separate SQL commands\nsurrounded by an if/then/else construct.\n\n(BTW, what PG version is that? I would think recent versions would\nrealize that dynamically generating a plan each time would work around\nthis. Of course, that approach isn't all that cheap either. You'd\nprobably still be better off splitting it up manually.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Apr 2014 09:50:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] performance drop when function argument is evaluated in\n WHERE clause" }, { "msg_contents": "Tom, thanks (as always) for your answer. This is a 9.1.12. I have to say, I'm not very happy about if-elif-else'ing at all. \nThe \"conditional filter\" is a pretty common pattern in our functions; I would have to add (and maintain) a substantial amount of extra code.\n\nAnd I don't really understand why the optimizer has issues, since the arguments are immutable \"strings\", and should (or could at least) be evaluated only once.\n\nThanks again for your time!\n\nGerardo\n\n\n\n----- Original message -----\n> From: \"Tom Lane\" <[email protected]>\n> To: \"Gerardo Herzig\" <[email protected]>\n> CC: [email protected], \"pgsql-sql\" <[email protected]>\n> Sent: Tuesday, 8 April 2014 10:50:01\n> Subject: Re: [PERFORM] performance drop when function argument is evaluated in WHERE clause\n> \n> Gerardo Herzig <[email protected]> writes:\n> > Hi all. I have a function that uses a \"simple\" select between 3\n> > tables. There is a function argument to help choose how a WHERE\n> > clause applies. 
This is the code section:\n> > select * from....\n> > [...]\n> > where case $3\n> > when 'I' then [filter 1]\n> > when 'E' then [filter 2]\n> > when 'P' then [filter 3]\n> > else true end\n> \n> > When the function is called with, say, parameter $3 = 'I', the\n> > function runs in 250ms,\n> > but when there is no case involved, and I call directly \"with\n> > [filter 1]\", the function runs in 70ms.\n> \n> > Looks like the CASE is doing something nasty.\n> > Any hints about this?\n> \n> Don't do it like that. You're preventing the optimizer from\n> understanding\n> which filter applies. Better to write three separate SQL commands\n> surrounded by an if/then/else construct.\n> \n> (BTW, what PG version is that? I would think recent versions would\n> realize that dynamically generating a plan each time would work\n> around\n> this. Of course, that approach isn't all that cheap either. You'd\n> probably still be better off splitting it up manually.)\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Tue, 8 Apr 2014 14:33:27 -0300 (ART)", "msg_from": "Gerardo Herzig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] performance drop when function argument is evaluated\n in WHERE clause" }, { "msg_contents": "Gerardo Herzig <[email protected]> writes:\n> Tom, thanks (as always) for your answer. This is a 9.1.12. I have to say, I'm not very happy about if-elif-else'ing at all. \n> The \"conditional filter\" is a pretty common pattern in our functions; I would have to add (and maintain) a substantial amount of extra code.\n\nIn that case consider moving to 9.2 or later. I believe it'd handle\nthis scenario better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Apr 2014 13:41:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [PERFORM] performance drop when function argument is\n evaluated in WHERE clause" } ]
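
A minimal sketch of the if/then/else split Tom recommends, assuming a hypothetical table t in which three boolean columns stand in for the original's [filter 1..3] predicates (the real tables are not shown in the thread, so only the shape of the fix is real here). Each branch is a plain statement, so the planner can build a plan tailored to its specific filter instead of one generic plan for the CASE.

-- hypothetical names: pick_rows, t, and the flag columns are placeholders
create or replace function pick_rows(p_mode text) returns setof t
language plpgsql stable as $$
begin
    if p_mode = 'I' then
        return query select * from t where included;   -- [filter 1]
    elsif p_mode = 'E' then
        return query select * from t where excluded;   -- [filter 2]
    elsif p_mode = 'P' then
        return query select * from t where pending;    -- [filter 3]
    else
        return query select * from t;
    end if;
end;
$$;
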
[ { "msg_contents": "I have a fairly large table (~100M rows), let's call it \"events\", and\namong other things it has a couple of columns on it, columns that\nwe'll call entity_type_id (an integer) and published_at (a\ntimestamp). It has, among others, indices on (published_at) and\n(entity_type_id, published_at).\n\nA very common query against this table is of the form...\n\nSELECT * FROM events WHERE entity_type_id = XXX ORDER BY published_at DESC LIMIT 25;\n\n... to get the most recent 25 events from the table for a given type\nof entity, and generally the query planner does the expected thing of\nusing the two-part index on (entity_type_id, published_at). Every now\nand again, though, I have found the query planner deciding that it\nought to use the single column (published_at) index. This can,\nunsurprisingly, result in horrendous performance if events for a given\nentity type are rare, as we end up with a very long walk of an index.\n\nI had this happen again yesterday and I noticed something of\nparticular interest pertaining to the event. Specifically, the query\nwas for an entity type that the system had only seen for the first\ntime one day prior, and furthermore the events table had not been\nanalyzed by the statistics collector for a couple of weeks.\n\nMy intuition is that the query planner, when working with an enormous\ntable, and furthermore encountering an entity type that the statistics\ncollector had never previously seen, would assume that the number of\nrows in the events table of that entity type would be very small, and\ntherefore the two-part index on (entity_type_id, published_at) would\nbe the right choice. Nonetheless, an EXPLAIN was showing usage of the\n(published_at) index, and since there were only ~20 rows in the entire\nevents table for that entity type the queries were getting the worst\npossible execution imaginable, i.e. reading in the whole table to find\nthe rows that hit, but doing it with the random I/O of an index walk.\n\nAs an experiment, I ran a VACUUM ANALYZE on the events table, and then\nre-ran the EXPLAIN of the query, and... Same query plan again...\nMaybe for whatever issue I am having the random sampling nature of the\nstatistics collector made it unhelpful, i.e. in its sampling of the\n~100M rows it never hit a single row that had the new entity type\nspecified?\n\nOther possibly relevant pieces of information... The entity type\ncolumn has a cardinality in the neighborhood of a couple dozen.\nMeanwhile, for some of the entity types there is a large and ongoing\nnumber of events, and for other entity types there is a smaller and\nmore sporadic number of events. Every now and again a new entity type\nshows up.\n\nI can't understand why the query planner would make this choice.\nMaybe it has gotten ideas into its head about the distribution of\ndata? Or maybe there is a subtle bug that my data set is triggering?\nOr maybe I need to turn some knobs on statistics collection? Or maybe\nit's all of these things together? I worry that even if there is a\nknob-turning exercise that helps, we're still going to get burned\nwhenever a new entity type shows up until we re-run ANALYZE, assuming\nthat I can find a fix that involves tweaking statistics collection. I\njust can't fathom how it would ever be the case that Postgres's choice\nof index usage in this case would make sense. It doesn't even slot\ncleanly into the problem space of \"why did Postgres do a sequential\nscan instead of an index scan?\". 
If you're doing a query of the\ndescribed form and the entity type is specified, wouldn't the two-part\nindex theoretically _always_ yield better performance than the\none-part index? Maybe I have a flawed understanding of the cost of\nusing various indexes? Maybe there is something analogous between\nsequential-versus-index-scan and one-part-versus-two-part-index scan\nchoices?\n\nFWIW, we're running on 8.4.X and using the out-of-the-box\ndefault_statistics_target setting and haven't dabbled with setting\ntable level statistics configurations.\n\nThoughts? Recommended reading?\n\n -- AWG\n", "msg_date": "Tue, 8 Apr 2014 08:48:35 -0400", "msg_from": "\"Andrew W. Gibbs\" <[email protected]>", "msg_from_op": true, "msg_subject": "query against large table not using sensible index to find very small\n amount of data" }, { "msg_contents": "\n> Other possibly relevant pieces of information... The entity type\n> column has a cardinality in the neighborhood of a couple dozen.\n> Meanwhile, for some of the entity types there is a large and ongoing\n> number of events, and for other entity types there is a smaller and\n> more sporadic number of events. Every now and again a new entity\n> type shows up.\n\nWith that as the case, I have two questions for you:\n\n1. Why do you have a low cardinality column as the first column in an index?\n2. Do you have any queries at all that use the entity type as the only where clause?\n\nI agree that the planner is probably wrong here, but these choices aren't helping. The low cardinality of the first column causes very large buckets that don't limit results very well at all. Combined with the order-by clause, the planner really wants to walk the date index backwards to find results instead. I would do a couple of things.\n\nFirst, remove the type/date index. Next, do a count of each type in the table with something like this:\n\nSELECT type_id, count(1)\n FROM my_table\n GROUP BY 1\n\nAny type that is more than 20% of the table will probably never be useful in an index. At this point, you have a choice. You can create a new index with date and type *in that order* or create a new partial index on date and type that also ignores the top matches. For instance, if you had a type that was 90% of the values, this would be my suggestion:\n\nCREATE INDEX idx_foo_table_date_event_type_part ON foo_table (event_date, event_type)\n WHERE event_type != 14;\n\nOr whatever. If the IDs are basically evenly distributed, it won't really matter.\n\nIn any case, index order matters. The planner wants to restrict data as quickly as possible. If you provide an order clause, it wants to read the index in that order. Specifying type as the first column disrupts that, so it has to fetch the values first, which is usually more expensive. Even if that's wrong in your particular case, planner stats are not precise enough to know that.\n\nEither way, try moving the indexes around. I can't think of many indexes in our database where I have the low cardinality value as the first column. Databases have an easier time managing many shallow buckets of values than a few deep ones.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. 
Jackson Blvd | Suite 400 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Apr 2014 13:39:41 +0000", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query against large table not using sensible index to\n find very small amount of data" }, { "msg_contents": "\"Andrew W. Gibbs\" <[email protected]> writes:\n> A very common query against this table is of the form...\n\n> SELECT * FROM events WHERE entity_type_id = XXX ORDER BY published_at DESC LIMIT 25;\n\n> ... to get the most recent 25 events from the table for a given type\n> of entity, and generally the query planner does the expected thing of\n> using the two-part index on (entity_type_id, published_at). Every now\n> and again, though, I have found the query planner deciding that it\n> ought use the single column (published_at) index.\n\nWhat is the estimated rows count according to EXPLAIN when it does that,\nversus when it chooses the better plan?\n\n> FWIW, we're running on 8.4.X and using the out-of-the-box\n> default_statistics_target setting and haven't dabbled with setting\n> table level statistics configurations.\n\n8.4.X is due to reach EOL in July, so you really ought to be thinking\nabout an upgrade. It's not clear from the given info whether this issue\nis fixable with stats configuration adjustments, is a bug already fixed\nin later versions, or neither, but we're unlikely to make any significant\nchanges in the 8.4 planner code at this point...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Apr 2014 09:55:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query against large table not using sensible index to find very\n small amount of data" }, { "msg_contents": "On Tue, Apr 8, 2014 at 6:39 AM, Shaun Thomas <[email protected]> wrote:\n\n>\n> > Other possibly relevant pieces of information... The entity type\n> > column has a cardinality in the neighborhood of a couple dozen.\n> > Meanwhile, for some of the entity types there is a large and ongoing\n> > number of events, and for other entity types there is a smaller and\n> > more sporadic number of events. Every now and again a new entity\n> > type shows up.\n>\n> With that as the case, I have two questions for you:\n>\n> 1. Why do you have a low cardinality column as the first column in an\n> index?\n>\n\nBecause if he didn't have it, the planner would never be able to use it.\n Remember, the problem is when the planner chooses NOT to use that index.\n\nCheers,\n\nJeff\n
", "msg_date": "Tue, 8 Apr 2014 15:51:46 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query against large table not using sensible index to\n find very small amount of data" }, { "msg_contents": "Your understanding of the utility of multi-part indices does not jibe\nwith my own.\n\nWhile I agree that a partial index might be in order here, that ought\njust be a performance optimization that lowers the footprint of the\nindex from an index size and index maintenance standpoint, not\nsomething that governs when the index is used for an item whose entity\ntype rarely comes up in the table. If a couple of the entity types\nwere to constitute 80% of the events, then using a partial index would\nreduce the performance strain of maintaining the index by 80%, but\nthis ought not govern the query planner's behavior when doing queries\non entity types that were not among those.\n\nMy general understanding of the utility of multi-part indices is that\nthey will come into play when some number of the leading columns\nappear in the query as fixed values and furthermore if a subsequent\ncolumn appears as part of a ranging operation.\n\nI know that a b-tree structure isn't exactly the same as a\nbinary-tree, but it is roughly equivalent for the purposes of our\nconversation... I believe you can think of multi-part indices as\n(roughly) equivalent either to nested binary trees, or as equivalent\nto a binary tree whose keys are the concatenation of the various\ncolumns. In the former case, doing a range scan would be a matter of\nhopping through the nested trees until you got to the terminal range\nscan operation, and in the latter case doing a range scan would be a\nmatter of finding the first node in the tree that fell within the\nvalues for your concatenation and then walking through the tree. Yes,\nthat's not exactly what happens with a b-tree, but it's pretty\nsimilar, the main differences being performance operations, I believe.\n\nGiven that, I don't understand how a multi-part index in which the\ncolumn over which I intend to range comes _earlier_ than the column(s)\nthat I intend to have be fixed would be helpful. This is especially\ntrue given that the timestamp columns are at the granularity of\n_milliseconds_ and my data set sees a constant stream of inputs with\nbursts up to ~100 events per second. I think what you are describing\ncould only make sense if the date column were at a large granularity,\ne.g. hours or days.\n\nOr maybe I have missed something...\n\n -- AWG\n\nOn Tue, Apr 08, 2014 at 01:39:41PM +0000, Shaun Thomas wrote:\n> \n> > Other possibly relevant pieces of information... The entity type\n> > column has a cardinality in the neighborhood of a couple dozen.\n> > Meanwhile, for some of the entity types there is a large and ongoing\n> > number of events, and for other entity types there is a smaller and\n> > more sporadic number of events. Every now and again a new entity\n> > type shows up.\n> \n> With that as the case, I have two questions for you:\n> \n> 1. Why do you have a low cardinality column as the first column in an index?\n> 2. 
Do you have any queries at all that only use the entity type as the only where clause?\n> \n> I agree that the planner is probably wrong here, but these choices aren't helping. The low cardinality of the first column causes very large buckets that don't limit results very well at all. Combined with the order-by clause, the planner really wants to walk the date index backwards to find results instead. I would do a couple of things.\n> \n> First, remove the type/date index. Next, do a count of each type in the table with something like this:\n> \n> SELECT type_id, count(1)\n> FROM my_table\n> GROUP BY 1\n> \n> Any type that is more than 20% of the table will probably never be useful in an index. At this point, you have a choice. You can create a new index with date and type *in that order* or create a new partial index on date and type that also ignores the top matches. For instance, if you had a type that was 90% of the values, this would be my suggestion:\n> \n> CREATE INDEX idx_foo_table_date_event_type_part ON foo_table (event_date, event_type)\n> WHERE event_type != 14;\n> \n> Or whatever. If the IDs are basically evenly distributed, it won't really matter.\n> \n> In any case, index order matters. The planner wants to restrict data as quickly as possible. If you provide an order clause, it wants to read the index in that order. Your specified type as the first column disrupts that, so it has to fetch the values first, which is usually more expensive. Even if that's wrong in your particular case, planner stats are not precise enough to know that.\n> \n> Either way, try moving the indexes around. I can't think of many indexes in our database where I have the low cardinality value as the first column. Databases have an easier time managing many shallow buckets of values than a few deep ones.\n> \n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd | Suite 400 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Apr 2014 20:57:18 -0400", "msg_from": "\"'Andrew W. Gibbs'\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query against large table not using sensible index to find very\n small amount of data" }, { "msg_contents": "Tom,\n\nWe have continued to explore the issue and one of my teammates,\ncopied, has made some interesting additional discoveries.\n\nI apparently glossed over a subtle distinction about the query being\nissued. My original reported query structure was of the form...\n\nSELECT * FROM events WHERE entity_type_id = XXX ORDER BY published_at DESC LIMIT 25;\n\n... but in reality it was more like...\n\nSELECT * FROM events WHERE entity_type_id = (SELECT id FROM entity_types WHERE name = ?) ORDER BY published_at DESC LIMIT 25;\n\n... 
and these queries in fact yield dramatically different cost\nanalyses, so much so that if you switch to using the former one by\nvirtue of doing a query yourself for the entity_type_id instead of\nusing a subquery, then the system uses the two-part index as hoped.\n\nI suspect that this stems in part from the non-even distribution of\nthe entity_type_id value in the events table for which there are 20-30\nvalues but two or three of them account for a very large share of the\ntable (and Postgres only seems to track the fraction taken by each of\nthe top ten or so values). For the query that originated my\nconsternation, I presume the planner said \"I don't know which value of\nentity_type_id the subquery will yield, so I'll assume an average\ndensity based on everything I've seen in the table\" (which really\nprobably means only the top ten values it has seen), whereas when we\nhard-code the entity_type_id by doing the sub-query ourselves\nbeforehand the query planner says \"that value must be either really\nrare or non-existent because I haven't even seen it since my last\nANALYZE of the table and this table is huge\".\n\nMaybe this is an inherent limitation of the query planner because it\ndoes not want to explore parts of the plan by actually executing\nsubqueries so that it can make more informed choices about the larger\nquery? We restored a back-up of the system onto another machine, ran\nthe conversion to Postgres 9, cranked up the stats collection\nconfigurations all the way, ran ANALYZE, and still got the same\nresults, which leads me to believe that there is an issue with the\nquery planner regarding its ability to do statistical analysis\npertaining to columns in a WHERE clause being specified by a sub-query\n(our entity_types table is extremely small, and presumably thus always\nin memory, thus a subquery would be insanely cheap, but I appreciate\nthat we're way down in the weeds of query planning by this point, and\nthat there may be fundamental problems with issuing actual queries so\nas to do exploratory query planning).\n\nWe (Scott, really) continued to explore this (using the original\nquery, not the tweaked one) by doing a mix of alternately dropping\nindexes, tuning execution cost configuration parameters, and clearing\nthe OS cache between queries. One of the outcomes from this was the\nrealization that random_page_cost is the dominant factor for the query\nplan involving the two-part index, such that when we slash it from the\ndefault 4 to specifying 2 that it slashes the cost almost exactly in\nhalf for using the two-part index and causes it to be used even though\nthe query planner is over-estimating the prevalence of the column\nvalue due (presumably) to not knowing how the subquery was going to\nplay out.\n\nThis brings me back to my musings about Postgres b-tree index\nimplementation... Why should using a two-part index with the WHERE\nclause fixing the first column's value yield a query with more random\nI/O than walking the single column index and filtering out the\nnon-matching rows? Given my understanding of index implementation, it\nseems like using the two-part index in even the degenerate case of a\ntable with only one entity_type_id would yield almost exactly the same\nI/O load as using the one-part index, and so a statistical\ndistribution of the table that was at all better than that degenerate\ncase would cause selection of the two-part index. 
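(A session level version of the random_page_cost experiment, for anyone\nwanting to reproduce it, would look something like the following, where\nthe entity type name is a stand-in for a real one and the exact cost\nvalue is merely illustrative:\n\nSET random_page_cost = 2;\nEXPLAIN SELECT * FROM events\n  WHERE entity_type_id = (SELECT id FROM entity_types WHERE name = 'foo')\n  ORDER BY published_at DESC LIMIT 25;\nRESET random_page_cost;\n\n...watching for the point at which the plan flips over to using the\ntwo-part index.)\n\n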
All of this makes me think\nthat either this illustrates a second query planner issue or that my\nunderstanding of the implementation of b-tree indexes in Postgres is\nflawed.\n\nIt seems obvious to me that we need to tweak the cost configuration\nparameters in our Postgres installation, at the least lowering\nrandom_page_cost to something more in-line with the hardware we have,\nbut even that feels like we would just be skirting issues with\nthe query planner when either there is a subtle flaw in the planner or\na major flaw in my understanding of b-tree index implementation.\n\nMind you, I raise these issues as someone who profoundly loves\nPostgres, though perhaps is loving it too hard these days. I would\nreally like to get a fuller understanding of what is happening here so\nas to craft a permanent solution. I am worried that even if we tweak\none or more of the cost configuration parameters that it might still\nbe prudent to issue the subquery's look-up prior to the main query and\nthen embed its results so that the query planner can act with better\nknowledge of the specified entity_type_id value's prevalence in the\nevents table, even though this would feel a little bit like a hack.\n\nAny insights would be greatly appreciated.\n\n -- AWG\n\nOn Tue, Apr 08, 2014 at 09:55:38AM -0400, Tom Lane wrote:\n> \"Andrew W. Gibbs\" <[email protected]> writes:\n> > A very common query against this table is of the form...\n> \n> > SELECT * FROM events WHERE entity_type_id = XXX ORDER BY published_at DESC LIMIT 25;\n> \n> > ... to get the most recent 25 events from the table for a given type\n> > of entity, and generally the query planner does the expected thing of\n> > using the two-part index on (entity_type_id, published_at). Every now\n> > and again, though, I have found the query planner deciding that it\n> > ought use the single column (published_at) index.\n> \n> What is the estimated rows count according to EXPLAIN when it does that,\n> versus when it chooses the better plan?\n> \n> > FWIW, we're running on 8.4.X and using the out-of-the-box\n> > default_statistics_target setting and haven't dabbled with setting\n> > table level statistics configurations.\n> \n> 8.4.X is due to reach EOL in July, so you really ought to be thinking\n> about an upgrade. It's not clear from the given info whether this issue\n> is fixable with stats configuration adjustments, is a bug already fixed\n> in later versions, or neither, but we're unlikely to make any significant\n> changes in the 8.4 planner code at this point...\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 12 Apr 2014 20:12:18 -0400", "msg_from": "\"Andrew W. Gibbs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query against large table not using sensible index to find very\n small amount of data" } ]
[ { "msg_contents": "Dear Jeff, Albe and Heikki,\n\nLet me start by thanking you for your time. It is really nice to have a\nreal supportive community. Thank you.\n\nAfter reading the answers, we decided to do an experiment with a\nfillfactor of 40% and dropping the index on the is_grc_002 field (but\nretaining the other indexes.) The experiment showed a reduction in\nrun-time to ~125 seconds. That is almost 25 times faster than it was. We\nare now doing more tests to verify this fix. We will send a SOLVED\nmessage when the fix is verified (unless you state to not bother...)\n\nWe think we understand why the improvement works. Let me state our\nunderstanding here. Please comment if we got it wrong.\n\nIndex entries point to record pages. An update on a row results in a new\nrow instance. If the new instance can be written in the same page as the\nold instance, then no indexes need to be updated because the index still\npoints to the correct page. (Unless the update itself modifies an\nindexed value). By specifying a fillfactor of 40%, there will be room\nfor an updated version of each row in the page.\nWe assume (sorry) that vacuuming the table will release the space of the\nold rows, so that we can again do an update query and reuse the freed up\nspace in the pages.\n\n\nJeff, answering your question: The update is done after each cycle. It\nwill actually also update rows that were already updated before. We\nrealize this is actually wasteful.\n\n\nSo we might change\n\nupdate t67cdi_nl_cmp_descr set is_grc_002='Y'\n\nto\n\nupdate t67cdi_nl_cmp_descr set is_grc_002='Y' where is_grc_002 is null\n\nThis will avoid creating new records for records that were already\nchanged before. This might give us additional speed improvement.\n\n\nAlbe, answering your question: Yes, the update was indeed finished in ~3\nminutes when all indexes were dropped. Fifteen indexes is indeed a big\nnumber. These indexes are configured by consultants on a\nproject-by-project basis. They are not hard coded in the software. I\nwill however advise the consultant to have a critical look at the big\nnumber of indexes used in this case.\n\nHeikki, Thank you for the nifty techniques. I especially like the\npossible solution with partitioning the table and using a view. We don't\nthink we can do this at this point. Let me elaborate a bit to explain\nwhy.\n\nThe application works in cycles. Each cycle adds more records to the\ntable. New records have a NULL value in field is_grc_002. At the end of\nthe cycle, the value of all existing records is changed to 'Y'. During\nthe cycle, some records have is_grc_002 NULL and others have value 'Y'.\nThis is used for the processing. A view can only work if all rows have a\nsingle value. We could use two tables (one with records from old cycles,\none with records from the new cycle) and a view. But that means copying\nall records from the \"new\" to the \"older\" table in each cycle.\n\nKind regards,\n\nHans Drexler\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Apr 2014 13:21:25 +0000", "msg_from": "Hans Drexler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: batch update query performance" }, { "msg_contents": "On Tue, Apr 8, 2014 at 6:21 AM, Hans Drexler <\[email protected]> wrote:\n\n> Dear Jeff, Albe and Heikki,\n>\n> Let me start by thanking you for your time. It is really nice to have a\n> real supportive community. 
Thank you.\n>\n> After reading the answers, we decided to do an experiment with a\n> fillfactor of 40% and dropping the index on the is_grc_002 field (but\n> retaining the other indexes.) The experiment showed a reduction in\n> run-time to ~125 seconds. That is almost 25 times faster than it was. We\n> are now doing more tests to verify this fix. We will send a SOLVED\n> message when the fix is verified (unless you state to not bother...)\n>\n> We think we understand why the improvement works. Let me state our\n> understanding here. Please comment if we got it wrong.\n>\n> Index entries point to record pages. An update on a row results in a new\n> row instance. If the new instance can be written in the same page as the\n> old instance, then no indexes need to be updated because the index still\n> points to the correct page. (Unless the update itself modifies an\n> indexed value). By specifying a fillfactor of 40%, there will be room\n> for an updated version of each row in the page.\n>\n\nThis is mostly correct. The index entry does not have *just* a page, it\nalso has an offset to a slot on that page. However, once it gets to the\npage there is a mechanism for chaining slots together, so you can still\nfind the new version given the slot of an older version on the same page.\n (If there were a way to have the index store *just* the page, then it\nwould be even more useful for HOT, as then only the indexes for the values\nactually changed would need to get updates, as opposed to now where every\nindex needs to be updated if any index needs to be updated. But that would\nhave other trade-offs)\n\n\n\n> We assume (sorry) that vacuuming the table will release the space of the\n> old rows, so that we can again do an update query and reuse the freed up\n> space in the pages.\n>\n\nOnce no transaction can possibly be interested in the old version, then a\nvacuum can free it up for reuse. In the special case of old HOT-updated\ntuples, any other process that happens to visit the page can also clean\nthem up once they are old enough, not just vacuums.\n\nBut if someone has a long running transaction open, even if that\ntransaction never has and never will touch the table being vacuumed, it\nwill still prevent the space from being reused.\n\n\n>\n>\n> Jeff, answering your question: The update is done after each cycle. It\n> will actually also update rows that were already updated before. We\n> realize this is actually wasteful.\n>\n>\n> So we might change\n>\n> update t67cdi_nl_cmp_descr set is_grc_002='Y'\n>\n> to\n>\n> update t67cdi_nl_cmp_descr set is_grc_002='Y' where is_grc_002 is null\n>\n\n> This will avoid creating new records for records that were already\n> changed before. This might give us additional speed improvement.\n>\n\n\nThat will probably help a lot. The HOT update code is smart enough to\nrealize that changing from 'Y' to 'Y' does not prevent the HOT update from\nworking, but it still needs to find room on the same page for a new copy of\nthe tuple or else it cannot use the HOT mechanism anyway. Once you add\nthis restriction to the where clause, you might find that it is better to\nleave the index in place and put up with the index updates for those rows\nwhich actually do need to be updated, rather than keep dropping and\nrebuilding the index. It would depend on what proportion of the table is\ngetting updated each time.\n\nCheers,\n\nJeff
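\n\nP.S. In case it helps to make the fillfactor and vacuum side of this\nconcrete, an untested sketch of the statements involved would be (40 is\njust the value from your experiment, not a recommendation):\n\nALTER TABLE t67cdi_nl_cmp_descr SET (fillfactor = 40);\nVACUUM t67cdi_nl_cmp_descr;\n\nYou can also verify whether the updates are actually taking the HOT path\nby comparing the counters in pg_stat_user_tables before and after a cycle:\n\nSELECT n_tup_upd, n_tup_hot_upd\n  FROM pg_stat_user_tables\n WHERE relname = 't67cdi_nl_cmp_descr';\n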
", "msg_date": "Sun, 13 Apr 2014 17:02:04 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: batch update query performance" } ]
[ { "msg_contents": "I am looking for advice on dealing with large tables of environmental model\ndata and looking for alternatives to my current optimization approaches.\n Basically, I have about 1 Billion records stored in a table which I access\nin groups of roughly 23 Million at a time. Which means that I have\nsomewhere in the neighborhood of 400-500 sets of 23Mil points.\n\nThe 23Mil that I pull at a time are keyed on 3 different columns, it's all\nindexed, and retrieval happens in say, 2-3 minutes (my hardware is so-so).\n So, my thought is to use some kind of caching and wonder if I can get\nadvice - here are my thoughts on options, would love to hear others:\n\n* use cached tables for this - since my # of actual data groups is small,\nwhy not just retrieve them once, then keep them around in a specially named\ntable (I do this with some other stuff, using a 30 day cache expiration)\n* Use some sort of stored procedure? I don't even know if such a thing\nreally exists in PG and how it works.\n* Use table partitioning?\n\nThanks,\n/r/b\n\n-- \n--\nRobert W. Burgholzer\n 'Making the simple complicated is commonplace; making the complicated\nsimple, awesomely simple, that's creativity.' - Charles Mingus\nAthletics: http://athleticalgorithm.wordpress.com/\nScience: http://robertwb.wordpress.com/\nWine: http://reesvineyard.wordpress.com/\n", "msg_date": "Tue, 8 Apr 2014 17:20:19 -0400", "msg_from": "Robert Burgholzer <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing Time Series Access" }, { "msg_contents": "Dave,\nThanks for asking about the structure. I can say that it appears to me to\nbe fairly moderately structured, and I will list those aspects that I think\nmake it defined (STRUCTURED), and those which are more variable\n(Moderately...).\n\nSTRUCTURED:\nLocation - Values are keyed according to a location, and there are only\nabout 500 locations in my data set, so theoretically the data is able to be\nstructured by these locations\ndataval - Always numerical data\n\nMODERATELY STRUCTURED:\ntimestamp - A set of values will be sequential in time, but be on a\nvariable scale (15 minutes to 1 hour to 1 day are general temporal scale).\nscenarioid - there may be several copies of each piece of data representing\ndifferent model \"scenarios\".\nparam_group, param_block & param_name - descriptor of a piece of data -\nthere may be an infinite number of these depending upon what our models are\ndoing, but most of them have between 3-10 parameters.\n
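\nTo make the partitioning option concrete for myself, what I imagine is\nthe old-style inheritance approach, roughly as below. This is an untested\nsketch, and the table and column definitions are guesses based on my\ndescription above, so treat it as illustration only:\n\nCREATE TABLE model_data (\n    location_id integer,\n    scenarioid  integer,\n    param_name  varchar(64),\n    tstime      timestamp,\n    dataval     numeric\n);\n\n-- one child table per scenario, with a CHECK constraint so that\n-- constraint_exclusion can skip the irrelevant children\nCREATE TABLE model_data_scen_1 (\n    CHECK (scenarioid = 1)\n) INHERITS (model_data);\n\nCREATE INDEX model_data_scen_1_idx\n    ON model_data_scen_1 (location_id, param_name, tstime);\n\n-- with constraint_exclusion = partition, pulling one 23Mil group\n-- should then only touch the matching child table:\nSELECT * FROM model_data\n WHERE scenarioid = 1 AND location_id = 42 AND param_name = 'flow';\n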
\n\nOn Tue, Apr 8, 2014 at 7:30 PM, Dave Duke <[email protected]> wrote:\n\n> Could you be more specific about whether the data is random or structured\n> in some way?\n>\n>\n>\n>\n> On 8 Apr 2014, at 22:20, Robert Burgholzer <[email protected]> wrote:\n>\n> I am looking for advice on dealing with large tables of environmental\n> model data and looking for alternatives to my current optimization\n> approaches. Basically, I have about 1 Billion records stored in a table\n> which I access in groups of roughly 23 Million at a time. Which means\n> that I have somewhere in the neighborhood of 400-500 sets of 23Mil points.\n>\n> The 23Mil that I pull at a time are keyed on 3 different columns, it's all\n> indexed, and retrieval happens in say, 2-3 minutes (my hardware is so-so).\n> So, my thought is to use some kind of caching and wonder if I can get\n> advice - here are my thoughts on options, would love to hear others:\n>\n> * use cached tables for this - since my # of actual data groups is small,\n> why not just retrieve them once, then keep them around in a specially named\n> table (I do this with some other stuff, using a 30 day cache expiration)\n> * Use some sort of stored procedure? I don't even know if such a thing\n> really exists in PG and how it works.\n> * Use table partitioning?\n>\n> Thanks,\n> /r/b\n>\n> --\n> --\n> Robert W. Burgholzer\n> 'Making the simple complicated is commonplace; making the complicated\n> simple, awesomely simple, that's creativity.' - Charles Mingus\n> Athletics: http://athleticalgorithm.wordpress.com/\n> Science: http://robertwb.wordpress.com/\n> Wine: http://reesvineyard.wordpress.com/\n>\n\n\n-- \n--\nRobert W. Burgholzer\n 'Making the simple complicated is commonplace; making the complicated\nsimple, awesomely simple, that's creativity.' - Charles Mingus\nAthletics: http://athleticalgorithm.wordpress.com/\nScience: http://robertwb.wordpress.com/\nWine: http://reesvineyard.wordpress.com/", "msg_date": "Wed, 9 Apr 2014 15:48:57 -0400", "msg_from": "Robert Burgholzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing Time Series Access" } ]
[ { "msg_contents": "I have this table that is quite big (several gig).\n\nI was looking for a row manually (because a query would take too long)\n- I know there is correlation between id and date, so I was doing a\nmanual binary search for the id range that holds a certain date, and I\nfound an interesting case where the planner makes a significant snafu:\n\nselect created from non_bid_logs where id >= 788991892 order by id limit 100;\n\n> Limit (cost=0.00..185.15 rows=100 width=16)\n> -> Index Scan using non_bid_logs_pkey on non_bid_logs (cost=0.00..33973433.99 rows=18349427 width=16)\n> Index Cond: (id >= 788991892)\n\n\nThat uses the pk over id to get the first 100 rows above that. Quite\nstraightforward and correct - and fast.\n\nNow... I originally tried:\n\nselect created from non_bid_logs where id >= 788991892 limit 100;\n\nThe same plan should work, and still be fast. But I get:\n\n> Limit (cost=0.00..12.30 rows=100 width=8)\n> -> Seq Scan on non_bid_logs (cost=0.00..2257215.96 rows=18350037 width=8)\n> Filter: (id >= 788991892)\n\nThis seems like a snafu of cost estimation. The planner should know\nabout the spatial correlation of \"id\", it's not clustered manually,\nbut quite naturally clustered, and yet it estimates the limit will\nfind the rows so fast?\n\nIf I do:\n\nselect correlation from pg_stats where tablename = 'non_bid_logs' and\nattname = 'id';\n\nI get:\n\n0.272682\n\nI don't know if that's realistic, I don't really know how to interpret\nthat number. But, experimentally, the seqscan performs horribly.\n\nIf I set enable_seqscan=off, and retry, I get:\n\n> Limit (cost=0.00..185.16 rows=100 width=8)\n> -> Index Scan using non_bid_logs_pkey on non_bid_logs (cost=0.00..33978925.99 rows=18351396 width=8)\n> Index Cond: (id >= 788991892)\n\nSo the planner knows about the index, it's just that it believes\n(somehow foolishly) that the seqscan will be faster.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 9 Apr 2014 15:56:23 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Interesting case of index un-usage" } ]